Bell's theorem proves that quantum physics is incompatible with certain types of local hidden-variable theories. It was introduced by physicist John Stewart Bell in a 1964 paper titled "On the Einstein Podolsky Rosen Paradox", referring to a 1935 thought experiment that Albert Einstein, Boris Podolsky and Nathan Rosen used to argue that quantum physics is an "incomplete" theory.^{[1]}^{[2]} By 1935, it was already recognized that the predictions of quantum physics are probabilistic. Einstein, Podolsky and Rosen presented a scenario that, in their view, indicated that quantum particles, like electrons and photons, must carry physical properties or attributes not included in quantum theory, and that the uncertainties in quantum theory's predictions were due to ignorance of these properties, later termed "hidden variables". Their scenario involves a pair of widely separated physical objects, prepared in such a way that the quantum state of the pair is entangled.
Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated halves of a pair, then the assumption that the outcomes depend upon hidden variables within each half implies a constraint on how the outcomes on the two halves are correlated. This constraint would later be named the Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they violate one of the assumptions of the theorem or are "nonlocal", somehow associated with both halves of the pair and able to carry influences instantly between them no matter how widely the two halves are separated.^{[3]}^{[4]} As Bell wrote later, "If [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local."^{[5]}
Multiple variations on Bell's theorem were proved in the following years, introducing other closely related conditions generally known as Bell (or "Bell-type") inequalities. These have been tested experimentally in physics laboratories many times since 1972. Often, these experiments have had the goal of ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. This is known as "closing loopholes in Bell tests". To date, Bell tests have found that the hypothesis of local hidden variables is inconsistent with the way that physical systems do, in fact, behave.^{[6]}^{[7]}
The exact nature of the assumptions required to prove a Bell-type constraint on correlations has been debated by physicists and by philosophers. While the significance of Bell's theorem is not in doubt, its full implications for the interpretation of quantum mechanics remain unresolved.
In the early 1930s, the philosophical implications of the then-current interpretations of quantum theory troubled many prominent physicists of the day, including Albert Einstein. In a well-known 1935 paper, Einstein and co-authors Boris Podolsky and Nathan Rosen (collectively "EPR") sought to demonstrate by the EPR paradox that quantum mechanics was incomplete. This provided hope that a more complete (and less troubling) theory might one day be discovered. But that conclusion rested on the seemingly reasonable assumptions of locality and realism (together called "local realism" or "local hidden variables", terms often used interchangeably). In the vernacular of Einstein: locality meant no instantaneous ("spooky") action at a distance; realism meant the moon is there even when not being observed. These assumptions were hotly debated in the physics community, notably between Einstein and Niels Bohr.
In his groundbreaking 1964 paper, "On the Einstein Podolsky Rosen paradox",^{[2]}^{[8]} physicist John Stewart Bell presented a further development, based on spin measurements on pairs of entangled electrons, of EPR's hypothetical paradox. Using their reasoning, he said, a choice of measurement setting nearby should not affect the outcome of a measurement far away (and vice versa). After providing a mathematical formulation of locality and realism based on this, he showed specific cases where this would be inconsistent with the predictions of quantum mechanics.
In experimental tests following Bell's example, now using quantum entanglement of photons instead of electrons, John Clauser and Stuart Freedman (1972) and Alain Aspect et al. (1981) demonstrated that the predictions of quantum mechanics are correct in this regard, although these experiments relied on additional unverifiable assumptions that left loopholes open for local hidden variables. Later experiments worked to close these loopholes.^{[9]}^{[10]}
The theorem is usually proved by considering a quantum system of two entangled qubits, though the original tests described above were performed on photons. The most common examples concern systems of particles that are entangled in spin or polarization. Quantum mechanics allows predictions of the correlations that would be observed if these two particles had their spin or polarization measured in different directions. Bell showed that if a local hidden-variable theory holds, then these correlations would have to satisfy certain constraints, called Bell inequalities.
Following the argument in the Einstein–Podolsky–Rosen (EPR) paradox paper (but using the example of spin, as in David Bohm's version of the EPR argument^{[11]}), Bell considered a thought experiment in which there are "a pair of spin one-half particles formed somehow in the singlet spin state and moving freely in opposite directions."^{[2]} The two particles travel away from each other to two distant locations, at which measurements of spin are performed, along axes that are independently chosen. Each measurement yields a result of either spin-up (+) or spin-down (−); that is, spin in the positive or negative direction of the chosen axis.
The probability of the same result being obtained at the two locations depends on the relative angles at which the two spin measurements are made, and is strictly between zero and one for all relative angles other than perfectly parallel or antiparallel alignments (0° or 180°). Since total angular momentum is conserved, and since the total spin is zero in the singlet state, the probability of the same result with parallel or antiparallel alignment is, respectively, 0 or 1. This last prediction is true classically as well as quantum mechanically.
Bell's theorem is concerned with correlations defined in terms of averages taken over very many trials of the experiment. The correlation of two binary variables is usually defined in quantum physics as the average of the products of the pairs of measurements. Note that this can be different from the usual definition of correlation in statistics, but when each variable individually has a 50% chance of each outcome (as for the examples in this article), it agrees with the usual definition of correlation. In particular, if the outcomes for the two variables are always the same, the correlation is +1; if the outcomes are always opposite, the correlation is −1; and if the outcomes agree 50% of the time, then the correlation is 0. The correlation is related in a simple way to the probability p of equal outcomes; the correlation is 2p − 1.
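The agreement between the product-average definition and 2p − 1 can be seen in a few lines of code (an illustrative sketch; the outcome lists are hypothetical):

```python
# Correlation of two ±1-valued outcome records, computed two ways:
# as the average of products, and as 2p − 1 with p the fraction of
# equal outcomes. The outcome lists are hypothetical examples.
alice = [+1, -1, +1, +1, -1, +1, -1, -1]
bob   = [+1, -1, -1, +1, -1, +1, +1, -1]

n = len(alice)
correlation = sum(a * b for a, b in zip(alice, bob)) / n
p_equal = sum(a == b for a, b in zip(alice, bob)) / n

print(correlation)          # average of products
print(2 * p_equal - 1)      # same value via 2p − 1
```

The two expressions are identical for ±1-valued outcomes, since each product is +1 exactly when the outcomes are equal.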
When the spins of these entangled particles are measured along anti-parallel directions (i.e., facing in precisely opposite directions, perhaps offset by some arbitrary distance), the set of all results is perfectly correlated. On the other hand, if measurements are performed along parallel directions (i.e., facing in precisely the same direction, perhaps offset by some arbitrary distance), they always yield opposite results, and the set of measurements shows perfect anti-correlation. This is in accord with the above stated probabilities of measuring the same result in these two cases. Finally, measurement at perpendicular directions has a 50% chance of matching, and the total set of measurements is uncorrelated. These basic cases are illustrated in the table below. Columns should be read as examples of pairs of values that could be recorded by Alice and Bob with time increasing going to the right.
| Anti-parallel | 1 | 2 | 3 | 4 | … | n |
|---|---|---|---|---|---|---|
| Alice, 0° | + | − | + | + | … | − |
| Bob, 180° | + | − | + | + | … | − |

Correlation: ( +1 +1 +1 +1 … +1 ) / n = +1 (100% identical)

| Parallel | 1 | 2 | 3 | 4 | … | n |
|---|---|---|---|---|---|---|
| Alice, 0° | + | − | − | + | … | + |
| Bob, 0° or 360° | − | + | + | − | … | − |

Correlation: ( −1 −1 −1 −1 … −1 ) / n = −1 (100% opposite)

| Orthogonal | 1 | 2 | 3 | 4 | … | n |
|---|---|---|---|---|---|---|
| Alice, 0° | + | − | + | − | … | − |
| Bob, 90° or 270° | − | − | + | + | … | − |

Correlation: ( −1 +1 +1 −1 … +1 ) / n = 0 (50% identical, 50% opposite)
With the measurements oriented at intermediate angles between these basic cases, the existence of local hidden variables would be consistent with a linear dependence of the correlation on the angle but, according to Bell's inequality (see below), could not agree with the dependence predicted by quantum mechanical theory, namely, that the correlation is the negative cosine of the angle. Experimental results contradict the classical curves and match the curve predicted by quantum mechanics as long as experimental shortcomings are accounted for.^{[3]}^{[12]}
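The contrast between the two dependences can be checked numerically. The sketch below (an illustration, not from the original text) evaluates the Bell-type inequality |C(a, b) − C(a, c)| ≤ 1 + C(b, c) for both a linear correlation and the quantum negative-cosine correlation, at the illustrative settings a = 0°, b = 45°, c = 90°:

```python
import math

# Evaluate |C(a,b) - C(a,c)| <= 1 + C(b,c) for a linear correlation
# (compatible with local hidden variables) and for the quantum
# prediction -cos(theta), at settings a = 0°, b = 45°, c = 90°.

def correlation_qm(theta_deg):
    # quantum prediction for the singlet: negative cosine of the angle
    return -math.cos(math.radians(theta_deg))

def correlation_linear(theta_deg):
    # straight line from -1 at 0° to +1 at 180°
    return -1 + 2 * theta_deg / 180

for C in (correlation_linear, correlation_qm):
    lhs = abs(C(45) - C(90))   # |C(a,b) - C(a,c)|
    rhs = 1 + C(45)            # 1 + C(b,c); the angle between b and c is 45°
    print(C.__name__, "satisfied" if lhs <= rhs + 1e-12 else "violated")
```

The linear model satisfies the inequality (0.5 ≤ 0.5), while the quantum prediction violates it (≈0.707 > ≈0.293).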
Over the years, Bell's theorem has undergone a wide variety of experimental tests. However, various common deficiencies in the testing of the theorem have been identified, including the detection loophole^{[12]} and the communication loophole.^{[12]} Experiments have been gradually improved to better address these loopholes. In 2015, the first experiment to simultaneously address all of them was performed.^{[9]}
To date, Bell's theorem is generally regarded as supported by a substantial body of evidence and there are few supporters of local hidden variables, though the theorem is continually the subject of study, criticism, and refinement.^{[13]}^{[14]}
Bell's theorem, derived in his seminal 1964 paper titled "On the Einstein Podolsky Rosen paradox",^{[2]} has been called, on the assumption that the theory is correct, "the most profound in science".^{[15]} Perhaps of equal importance is Bell's deliberate effort to encourage and bring legitimacy to work on the completeness issues, which had fallen into disrepute.^{[16]} Later in his life, Bell expressed his hope that such work would "continue to inspire those who suspect that what is proved by the impossibility proofs is lack of imagination."^{[16]} N. David Mermin has described the appraisals of the importance of Bell's theorem in the physics community as ranging from "indifference" to "wild extravagance".^{[17]}
The title of Bell's seminal article refers to the 1935 paper by Einstein, Podolsky and Rosen^{[18]} that challenged the completeness of quantum mechanics. In his paper, Bell started from the same two assumptions as did EPR, namely (i) reality (that microscopic objects have real properties determining the outcomes of quantum mechanical measurements), and (ii) locality (that reality in one location is not influenced by measurements performed simultaneously at a distant location). Bell was able to derive from those two assumptions an important result, namely Bell's inequality. The theoretical (and later experimental) violation of this inequality implies that at least one of the two assumptions must be false.
In two respects Bell's 1964 paper was a step forward compared to the EPR paper: firstly, it considered more general hidden variables than merely the element of physical reality in the EPR paper; secondly, Bell's inequality was, in part, experimentally testable, thus raising the possibility of testing the local realism hypothesis. Limitations on such tests to date are noted below. Whereas Bell's paper deals only with deterministic hidden-variable theories, Bell's theorem was later generalized to stochastic theories^{[19]} as well, and it was also realised^{[20]} that the theorem is not so much about hidden variables as about the outcomes of measurements that could have been taken instead of the one actually taken. The existence of these variables is called the assumption of realism, or the assumption of counterfactual definiteness.
After the EPR paper, quantum mechanics was in an unsatisfactory position: either it was incomplete, in the sense that it failed to account for some elements of physical reality, or it violated the principle of a finite propagation speed of physical effects. In a modified version of the EPR thought experiment, two hypothetical observers, now commonly referred to as Alice and Bob, perform independent measurements of spin on a pair of electrons, prepared at a source in a special state called a spin singlet state. It is the conclusion of EPR that once Alice measures spin in one direction (e.g. on the x axis), Bob's measurement in that direction is determined with certainty, as being the opposite outcome to that of Alice, whereas immediately before Alice's measurement Bob's outcome was only statistically determined (i.e., was only a probability, not a certainty); thus, either the spin in each direction is an element of physical reality, or the effects travel from Alice to Bob instantly.
In QM, predictions are formulated in terms of probabilities — for example, the probability that an electron will be detected in a particular place, or the probability that its spin is up or down. The idea persisted, however, that the electron in fact has a definite position and spin, and that QM's weakness is its inability to predict those values precisely. The possibility existed that some unknown theory, such as a hidden variables theory, might be able to predict those quantities exactly, while at the same time also being in complete agreement with the probabilities predicted by QM. If such a hidden variables theory exists, then because the hidden variables are not described by QM the latter would be an incomplete theory.
The concept of local realism is formalized to state, and prove, Bell's theorem and generalizations. A common approach is the following:

1. There is a probability space Λ, and the observed outcomes by both Alice and Bob result from random sampling of the (unknown, "hidden") parameter λ ∈ Λ.
2. The values observed by Alice or Bob are functions of the local detector settings and the hidden parameter only. Thus the value observed by Alice with detector setting a is A(a, λ), and the value observed by Bob with detector setting b is B(b, λ); both take the values +1 or −1.
Perfect anti-correlation would require B(c, λ) = −A(c, λ), c ∈ S^{2}. Implicit in assumption 1) above, the hidden parameter space Λ has a probability measure μ, and the expectation of a random variable X on Λ with respect to μ is written

E(X) = ∫_Λ X(λ) p(λ) dλ,
where for accessibility of notation we assume that the probability measure has a probability density p that therefore is nonnegative and integrates to 1. The hidden parameter is often thought of as being associated with the source but it can just as well also contain components associated with the two measurement devices.
Besides local realism, Bell's theorem requires the assumption of statistical independence, also sometimes referred to as measurement independence, free choice, or free will.^{[21]} The assumption of statistical independence can be expressed as

p(λ | a, b) = p(λ),

where a and b are the measurement settings and p(λ) is the probability distribution of the hidden variables. Statistical independence means, in words, that the hidden variables are not correlated with the measurement settings. Theories that violate this assumption are called superdeterministic (see Superdeterminism). A superdeterministic hidden-variable theory does not fulfill the assumptions of Bell's theorem; it can therefore be locally causal and still violate Bell's inequality.
Bell inequalities concern measurements made by observers on pairs of particles that have interacted and then separated. Assuming local realism and no superdeterminism, certain constraints must hold on the relationships between the correlations between subsequent measurements of the particles under various possible measurement settings. Let A and B be as above. Define for the present purposes three correlation functions:

- Let C_e(a, b) denote the experimentally measured correlation: the limit, over many pairs, of the average of the products of the two outcomes obtained with settings a and b.
- Let C_q(a, b) = −a · b denote the correlation predicted by quantum mechanics for the singlet state.
- Let C_h(a, b) = ∫_Λ A(a, λ) B(b, λ) p(λ) dλ denote the correlation predicted by a local hidden-variable theory.
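As an illustration of a correlation of the hidden-variable form, the following sketch (a hypothetical toy model, not from the original text) Monte Carlo-estimates the correlation for the local model A(a, λ) = sign(a · λ), B(b, λ) = −sign(b · λ), with the hidden variable λ uniform on the unit sphere; this model gives a correlation of −1 + 2θ/π, linear in the angle, rather than the quantum value −cos θ:

```python
import math
import random

# Monte Carlo estimate of C_h(a, b) = ∫ A(a, λ) B(b, λ) p(λ) dλ for the
# hypothetical local model A(a, λ) = sign(a · λ), B(b, λ) = -sign(b · λ),
# with λ uniform on the unit sphere. The exact value is -1 + 2θ/π,
# linear in the angle θ between the settings a and b.
random.seed(1)

def random_unit_vector():
    # uniform direction on the sphere
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

def sign(x):
    return 1 if x >= 0 else -1

def hidden_variable_correlation(theta, n=100_000):
    a = (0.0, 0.0, 1.0)
    b = (math.sin(theta), 0.0, math.cos(theta))
    total = 0
    for _ in range(n):
        lam = random_unit_vector()
        A = sign(a[0] * lam[0] + a[1] * lam[1] + a[2] * lam[2])
        B = -sign(b[0] * lam[0] + b[1] * lam[1] + b[2] * lam[2])
        total += A * B
    return total / n

theta = math.radians(60)
print(hidden_variable_correlation(theta))   # ≈ -1/3; quantum predicts -cos 60° = -0.5
```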
The two-particle spin space is the tensor product of the two-dimensional spin Hilbert spaces of the individual particles. Each individual space is an irreducible representation space of the rotation group SO(3). The product space decomposes as a direct sum of irreducible representations with definite total spins 0 and 1, of dimensions 1 and 3 respectively. Full details may be found in the Clebsch–Gordan decomposition. The total spin zero subspace is spanned by the singlet state in the product space, a vector explicitly given by

|A⟩ = (1/√2)(|+⟩ ⊗ |−⟩ − |−⟩ ⊗ |+⟩),

with adjoint in this representation

⟨A| = (1/√2)(⟨+| ⊗ ⟨−| − ⟨−| ⊗ ⟨+|).
The way single-particle operators act on the product space is exemplified below by the example at hand; one defines the tensor product of operators, where the factors are single-particle operators. Thus if Π, Ω are single-particle operators,

Π^{(1)} ≡ Π ⊗ I

and

Ω^{(2)} ≡ I ⊗ Ω,

etc., where the superscript in parentheses indicates on which Hilbert space in the tensor product space the action is intended, and the action is defined by the right-hand side. The singlet state has total spin 0, as may be verified by application of the operator of total spin J · J = (J_{1} + J_{2}) ⋅ (J_{1} + J_{2}) by a calculation similar to that presented below.
The expectation value of the operator

(σ · a)^{(1)} (σ · b)^{(2)} = (σ · a) ⊗ (σ · b)

in the singlet state can be calculated straightforwardly. One has, by definition of the Pauli matrices,

σ · a = a_x σ_x + a_y σ_y + a_z σ_z, so that (σ · a)|+⟩ = a_z|+⟩ + (a_x + i a_y)|−⟩ and (σ · a)|−⟩ = (a_x − i a_y)|+⟩ − a_z|−⟩.

Upon left application of this on |A⟩ one obtains

(σ · a)^{(1)} |A⟩ = (1/√2)[(a_z|+⟩ + (a_x + i a_y)|−⟩) ⊗ |−⟩ − ((a_x − i a_y)|+⟩ − a_z|−⟩) ⊗ |+⟩].

Likewise, application (to the left) of the operator corresponding to b on ⟨A| yields

⟨A| (σ · b)^{(2)} = (1/√2)[⟨+| ⊗ ((b_x + i b_y)⟨+| − b_z⟨−|) − ⟨−| ⊗ (b_z⟨+| + (b_x − i b_y)⟨−|)].

The inner product on the tensor product space is defined by

(φ₁ ⊗ φ₂, ψ₁ ⊗ ψ₂) = (φ₁, ψ₁)(φ₂, ψ₂).

Given this, the expectation value reduces to

⟨A| (σ · a) ⊗ (σ · b) |A⟩ = −(a_x b_x + a_y b_y + a_z b_z) = −a · b.
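The singlet expectation value −a · b can be verified numerically; the sketch below (assuming NumPy is available) builds the singlet state and two random unit vectors:

```python
import numpy as np

# Check <A| (σ·a) ⊗ (σ·b) |A> = -a·b for the singlet state |A>.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

plus = np.array([1, 0], dtype=complex)    # |+>
minus = np.array([0, 1], dtype=complex)   # |->
singlet = (np.kron(plus, minus) - np.kron(minus, plus)) / np.sqrt(2)

def sigma_dot(v):
    return v[0] * sx + v[1] * sy + v[2] * sz

rng = np.random.default_rng(0)
a = rng.normal(size=3); a /= np.linalg.norm(a)
b = rng.normal(size=3); b /= np.linalg.norm(b)

expectation = np.real(singlet.conj() @ np.kron(sigma_dot(a), sigma_dot(b)) @ singlet)
print(expectation, -np.dot(a, b))   # the two numbers agree
```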
With this notation, a concise summary of what follows can be made.
The inequality that Bell derived can be written as:^{[2]}

1 + C(b, c) ≥ |C(a, b) − C(a, c)|,
where a, b and c refer to three arbitrary settings of the two analysers. This inequality is however restricted in its application to the rather special case in which the outcomes on both sides of the experiment are always exactly anticorrelated whenever the analysers are parallel. The advantage of restricting attention to this special case is the resulting simplicity of the derivation. In experimental work, the inequality is not very useful because it is hard, if not impossible, to create perfect anti-correlation.
This simple form has an intuitive explanation, however. It is equivalent to the following elementary result from probability theory. Consider three (highly correlated, and possibly biased) coin-flips X, Y, and Z, with the property that:

1. X and Y give the same outcome at least 99% of the time, and
2. Y and Z give the same outcome at least 99% of the time;

then X and Z must also yield the same outcome at least 98% of the time. The number of mismatches between X and Y (at most 1/100) plus the number of mismatches between Y and Z (at most 1/100) together bound from above the number of mismatches between X and Z (a simple Boole–Fréchet inequality).
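This mismatch bound can be confirmed by brute force over all eight possible outcome triples (a small illustrative check):

```python
from itertools import product

# Brute-force check of the mismatch bound: for any three binary outcomes,
# mismatches(X, Z) <= mismatches(X, Y) + mismatches(Y, Z).
for x, y, z in product([0, 1], repeat=3):
    assert int(x != z) <= int(x != y) + int(y != z)
print("bound holds for all 8 triples")
```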
Imagine a pair of particles that can be measured at distant locations. Suppose that the measurement devices have settings, which are angles (e.g., the devices measure something called spin in some direction). The experimenter chooses the directions, one for each particle, separately. Suppose the measurement outcome is binary (e.g., spin up, spin down). Suppose the two particles are perfectly anti-correlated, in the sense that whenever both are measured in the same direction, they give opposite outcomes, and whenever both are measured in opposite directions, they always give the same outcome. The only way to imagine how this works is that both particles leave their common source with, somehow, the outcomes they will deliver when measured in any possible direction. (How else could particle 1 know how to deliver the opposite answer to particle 2 when measured in the same direction? They don't know in advance how they are going to be measured...) The measurement on particle 2 (after switching its sign) can be thought of as telling us what the same measurement on particle 1 would have given.
Start with one setting exactly opposite to the other. All the pairs of particles give the same outcome (each pair is either both spin up or both spin down). Now shift Alice's setting by one degree relative to Bob's. They are now one degree off being exactly opposite to one another. A small fraction of the pairs, say f, now give different outcomes. If instead we had left Alice's setting unchanged but shifted Bob's by one degree (in the opposite direction), then again a fraction f of the pairs of particles turns out to give different outcomes. Finally consider what happens when both shifts are implemented at the same time: the two settings are now exactly two degrees away from being opposite to one another. By the mismatch argument, the chance of a mismatch at two degrees can't be more than twice the chance of a mismatch at one degree: it cannot be more than 2f.^{[citation needed]}
Compare this with the predictions from quantum mechanics for the singlet state. For a small angle θ, measured in radians, the chance of a different outcome is sin²(θ/2) ≈ θ²/4, as explained by the small-angle approximation. At two times this small angle, the chance of a mismatch is therefore about 4 times larger, since (2θ)²/4 = 4 · (θ²/4). But we just argued that it cannot be more than 2 times as large.
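Numerically, with the singlet mismatch probability sin²(θ/2), where θ is measured from the perfectly anti-correlated alignment (an illustrative check):

```python
import math

# Singlet mismatch probability when the settings are an angle theta away
# from the perfectly anti-correlated alignment: sin²(theta/2).
def mismatch(theta_deg):
    return math.sin(math.radians(theta_deg) / 2) ** 2

f = mismatch(1)      # shift one setting by 1 degree
g = mismatch(2)      # shift both settings (2 degrees in total)
print(g / f)         # ≈ 4: about twice the local bound of 2f
```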
This intuitive formulation is due to David Mermin. The small-angle limit is discussed in Bell's original article, and therefore goes right back to the origin of the Bell inequalities.^{[citation needed]}
Generalizing Bell's original inequality,^{[2]} John Clauser, Michael Horne, Abner Shimony and R. A. Holt introduced the CHSH inequality,^{[22]} which puts classical limits on the set of four correlations in Alice and Bob's experiment, without any assumption of perfect correlations (or anti-correlations) at equal settings:

|E(a, b) − E(a, b′)| + |E(a′, b) + E(a′, b′)| ≤ 2.     (1)
Making the special choice a′ = b, denoting b′ = c, and assuming perfect anti-correlation at equal settings and perfect correlation at opposite settings, therefore E(b, b) = −1 and E(b, −b) = +1, the CHSH inequality reduces to the original Bell inequality. Nowadays, (1) is also often simply called "the Bell inequality", but sometimes more completely "the Bell–CHSH inequality".
With the abbreviated notation

A = A(a, λ), A′ = A(a′, λ), B = B(b, λ), B′ = B(b′, λ),

the CHSH inequality can be derived as follows. Each of the four quantities is ±1 and each depends on λ. It follows that for any λ, one of B + B′ and B − B′ is zero, and the other is ±2. From this it follows that

A B + A B′ + A′ B − A′ B′ = A(B + B′) + A′(B − B′) = ±2,

and therefore

−2 ≤ ⟨A B⟩ + ⟨A B′⟩ + ⟨A′ B⟩ − ⟨A′ B′⟩ ≤ 2.

At the heart of this derivation is a simple algebraic inequality concerning four variables, A, A′, B, B′, which take the values ±1 only:

|A B + A B′ + A′ B − A′ B′| ≤ 2.
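This algebraic fact can be verified exhaustively (a small illustrative check):

```python
from itertools import product

# Exhaustive check that AB + AB' + A'B - A'B' can only equal ±2
# when A, A', B, B' each take the values ±1.
values = {A * B + A * Bp + Ap * B - Ap * Bp
          for A, Ap, B, Bp in product([-1, 1], repeat=4)}
print(sorted(values))   # [-2, 2]
```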
The CHSH inequality is seen to depend on only the following three key features of a local hidden-variables theory: (1) realism: alongside the outcomes of actually performed measurements, the outcomes of potentially performed measurements also exist at the same time; (2) locality: the outcomes of measurements on Alice's particle do not depend on which measurement Bob chooses to perform on the other particle; (3) freedom: Alice and Bob can indeed choose freely which measurements to perform.
The realism assumption is actually somewhat idealistic, and Bell's theorem only proves non-locality with respect to variables that only exist for metaphysical reasons^{[citation needed]}. However, before the discovery of quantum mechanics, both realism and locality were completely uncontroversial features of physical theories.
The measurements performed by Alice and Bob are spin measurements on electrons. Alice can choose between two detector settings labeled a and a′; these settings correspond to measurement of spin along the z or the x axis. Bob can choose between two detector settings labeled b and b′; these correspond to measurement of spin along the z′ or x′ axis, where the x′–z′ coordinate system is rotated 135° relative to the x–z coordinate system. The spin observables are represented by the 2 × 2 self-adjoint matrices

S_x = ( 0 1 ; 1 0 ),  S_z = ( 1 0 ; 0 −1 ).

These are the Pauli spin matrices, which are known to have eigenvalues equal to ±1. As is customary, we will use bra–ket notation to denote the eigenvectors of S_z as |+⟩ and |−⟩, where

S_z |+⟩ = +|+⟩  and  S_z |−⟩ = −|−⟩.
According to quantum mechanics, the choice of measurements is encoded into the choice of Hermitian operators applied to this state. In particular, consider the following operators:

A₀ = S_z,  A₁ = S_x,  B₀ = −(S_z + S_x)/√2,  B₁ = (S_z − S_x)/√2,

where A₀, A₁ represent the two measurement choices of Alice, and B₀, B₁ the two measurement choices of Bob.
To obtain the expectation value determined by a given measurement choice of Alice and Bob, one has to compute the expectation value of the corresponding pair of operators (for example, if the inputs are chosen to be A₀ and B₀, the relevant operator is A₀ ⊗ B₀) over the shared entangled state.
For example, the expectation value corresponding to Alice choosing the measurement setting A₁ and Bob choosing the measurement setting B₁ is computed as

⟨A₁ ⊗ B₁⟩ = √2/2,

and similarly for the other three pairs of settings; the combination |⟨A₀ ⊗ B₀⟩ − ⟨A₀ ⊗ B₁⟩| + |⟨A₁ ⊗ B₀⟩ + ⟨A₁ ⊗ B₁⟩| then takes the value 2√2.
Bell's Theorem: If the quantum mechanical formalism is correct, then the system consisting of a pair of entangled electrons cannot satisfy the principle of local realism. Note that 2√2 ≈ 2.83 is indeed the upper bound for quantum mechanics, called Tsirelson's bound. The operators giving this maximal value are always isomorphic to the Pauli matrices.^{[23]}
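These operator choices can be checked directly; the sketch below (assuming NumPy) computes the four expectation values in the singlet state and combines them into the CHSH quantity:

```python
import numpy as np

# CHSH value in the singlet state for the operators in the text:
# A0 = S_z, A1 = S_x, B0 = -(S_z + S_x)/√2, B1 = (S_z - S_x)/√2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

A0, A1 = sz, sx
B0 = -(sz + sx) / np.sqrt(2)
B1 = (sz - sx) / np.sqrt(2)

# singlet state (|+->-|-+>)/√2 as a vector in the product basis
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(A, B):
    # expectation value of A ⊗ B in the singlet state
    return np.real(singlet.conj() @ np.kron(A, B) @ singlet)

S = abs(E(A0, B0) - E(A0, B1)) + abs(E(A1, B0) + E(A1, B1))
print(S)   # 2√2 ≈ 2.828: Tsirelson's bound, exceeding the classical limit 2
```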
Experimental tests can determine whether the Bell inequalities required by local realism hold up to the empirical evidence.
Actually, most experiments have been performed using polarization of photons rather than spin of electrons (or other spin-half particles). The quantum state of the pair of entangled photons is not the singlet state, and the correspondence between angles and outcomes is different from that in the spin-half set-up. The polarization of a photon is measured in a pair of perpendicular directions. Relative to a given orientation, polarization is either vertical (denoted by V or by +) or horizontal (denoted by H or by −). The photon pairs are generated in the quantum state

|ψ⟩ = (1/√2)(|V⟩ ⊗ |V⟩ + |H⟩ ⊗ |H⟩),

where |V⟩ and |H⟩ denote the state of a single vertically or horizontally polarized photon, respectively (relative to a fixed and common reference direction for both particles).
When the polarization of both photons is measured in the same direction, both give the same outcome: perfect correlation. When measured at directions making an angle 45° with one another, the outcomes are completely random (uncorrelated). Measuring at directions at 90° to one another, the two are perfectly anti-correlated. In general, when the polarizers are at an angle θ to one another, the correlation is cos(2θ). So relative to the correlation function for the singlet state of spin half particles, we have a positive rather than a negative cosine function, and angles are halved: the correlation is periodic with period π instead of 2π.
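The cos(2θ) dependence can be reproduced numerically; the sketch below (assuming NumPy; the analyser angles are arbitrary illustrative values) represents each analyser as a ±1-valued polarization observable:

```python
import numpy as np

# Correlation for the photon state (|V>|V> + |H>|H>)/√2 with linear
# polarization analysers at angles alpha and beta (radians).
H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)
state = (np.kron(V, V) + np.kron(H, H)) / np.sqrt(2)

def analyser(angle):
    # ±1-valued observable: +1 along the analyser axis, -1 perpendicular
    c, s = np.cos(angle), np.sin(angle)
    axis = c * H + s * V
    perp = -s * H + c * V
    return np.outer(axis, axis.conj()) - np.outer(perp, perp.conj())

alpha, beta = 0.3, 1.1   # arbitrary analyser angles
corr = np.real(state.conj() @ np.kron(analyser(alpha), analyser(beta)) @ state)
print(corr, np.cos(2 * (alpha - beta)))   # the correlation equals cos(2θ)
```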
Bell's inequalities are tested by "coincidence counts" from a Bell test experiment such as the optical one shown in the diagram. Pairs of particles are emitted as a result of a quantum process, analysed with respect to some key property such as polarisation direction, then detected. The settings (orientations) of the analysers are selected by the experimenter.
Bell test experiments to date overwhelmingly violate Bell's inequality.
The fair sampling problem was faced openly in the 1970s. In early designs of their 1973 experiment, Freedman and Clauser^{[24]} used fair sampling in the form of the Clauser–Horne–Shimony–Holt (CHSH^{[22]}) hypothesis. However, shortly afterwards Clauser and Horne^{[19]} made the important distinction between inhomogeneous (IBI) and homogeneous (HBI) Bell inequalities. Testing an IBI requires that we compare certain coincidence rates in two separated detectors with the singles rates of the two detectors. Nobody needed to perform the experiment, because the singles rates of all detectors in the 1970s were at least ten times all the coincidence rates. So, taking into account this low detector efficiency, the QM prediction actually satisfied the IBI. To arrive at an experimental design in which the QM prediction violates the IBI, we require detectors whose efficiency exceeds 82.8% for singlet states,^{[25]} while also having very low dark rates and short dead and resolving times. However, Eberhard discovered that with a variant of the Clauser–Horne inequality, and using less than maximally entangled states, a detection efficiency of only 66.67% was required.^{[26]} This was achieved in 2015 by two successful "loophole-free" Bell-type experiments, in Vienna^{[27]} and at NIST in Boulder, Colorado.^{[28]}
Because, at that time, even the best detectors didn't detect a large fraction of all photons, Clauser and Horne^{[19]} recognized that testing Bell's inequality required some extra assumptions. They introduced the No Enhancement Hypothesis (NEH):
A light signal, originating in an atomic cascade for example, has a certain probability of activating a detector. Then, if a polarizer is interposed between the cascade and the detector, the detection probability cannot increase.
Given this assumption, there is a Bell inequality between the coincidence rates with polarizers and coincidence rates without polarizers.
The experiment was performed by Freedman and Clauser,^{[24]} who found that Bell's inequality was violated. So the no-enhancement hypothesis cannot be true in a local hidden-variables model.
While early experiments used atomic cascades, later experiments have used parametric down-conversion, following a suggestion by Reid and Walls,^{[29]} giving improved generation and detection properties. As a result, recent experiments with photons no longer have to suffer from the detection loophole. This made the photon the first experimental system for which all main experimental loopholes were surmounted, although at first only in separate experiments. From 2015, experimentalists were able to surmount all the main experimental loopholes except for the free will loophole simultaneously; see Bell test experiments. The free will loophole cannot be closed with Bell-type tests.
The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics principally attributed to Niels Bohr and Werner Heisenberg. It is one of the oldest of numerous proposed interpretations of quantum mechanics, as features of it date to the development of quantum mechanics during 1925–1927, and it remains one of the most commonly taught.^{[30]} There is no definitive historical statement of what is the Copenhagen interpretation. In particular, there were fundamental disagreements between the views of Bohr and Heisenberg.^{[31]}^{[32]}^{[33]} Some basic principles generally accepted as part of the Copenhagen collection include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule,^{[34]} and the complementarity principle: certain properties cannot be jointly defined for the same system at the same time. In order to talk about a specific property of a system, that system must be considered within the context of a specific laboratory arrangement. Observable quantities corresponding to mutually exclusive laboratory arrangements cannot be predicted together, but considering multiple such mutually exclusive experiments is necessary to characterize a system.^{[31]}

Bohr himself used complementarity to argue that the EPR "paradox" was fallacious. Because measurements of position and of momentum are complementary, making the choice to measure one excludes the possibility of measuring the other. Consequently, he argued, a fact deduced regarding one arrangement of laboratory apparatus could not be combined with a fact deduced by means of the other, and so, the inference of predetermined position and momentum values for the second particle was not valid.^{[35]}^{: 194–197 } Bohr concluded that EPR's "arguments do not justify their conclusion that the quantum description turns out to be essentially incomplete."^{[36]}
Copenhagen-type interpretations generally take the violation of Bell inequalities as grounds to reject what Bell called "realism", which is not necessarily the same as abandoning realism in a broader philosophical sense.^{[37]}^{[38]} For example, Roland Omnès argues for the rejection of hidden variables and concludes that "quantum mechanics is probably as realistic as any theory of its scope and maturity ever will be."^{[39]}^{: 531 } This is also the route taken by interpretations that descend from the Copenhagen tradition, such as consistent histories (often advertised as "Copenhagen done right"),^{[40]} as well as QBism.^{[41]}
The Many-Worlds interpretation is local and deterministic, as it consists of the unitary part of quantum mechanics without collapse. It can generate correlations that violate a Bell inequality because it does not satisfy Bell's implicit assumption that measurements have a single outcome. In fact, Bell's theorem can be proven in the Many-Worlds framework from the assumption that a measurement has a single outcome. Therefore, a violation of a Bell inequality can be interpreted as a demonstration that measurements have multiple outcomes.^{[42]}
The explanation it provides for the Bell correlations is that when Alice and Bob make their measurements, they split into local branches. From the point of view of each copy of Alice, there are multiple copies of Bob experiencing different results, so Bob cannot have a definite result, and the same is true from the point of view of each copy of Bob. They will obtain a mutually well-defined result only when their future light cones overlap. At that point the Bell correlation can be said to start existing, but it was produced by a purely local mechanism. Therefore, the violation of a Bell inequality cannot be interpreted as a proof of non-locality.^{[43]}
Most advocates of the hidden-variables idea believe that experiments have ruled out local hidden variables. They are ready to give up locality, explaining the violation of Bell's inequality by means of a non-local hidden-variable theory, in which the particles exchange information about their states. This is the basis of the Bohm interpretation of quantum mechanics, which requires that all particles in the universe be able to instantaneously exchange information with all others. A 2007 experiment ruled out a large class of non-Bohmian non-local hidden-variable theories, though not Bohmian mechanics itself.^{[44]}
The transactional interpretation, which postulates waves traveling both backwards and forwards in time, is likewise non-local.^{[45]}
Bell himself summarized one of the possible ways to address the theorem, superdeterminism, in a 1985 BBC Radio interview:
There is a way to escape the inference of superluminal speeds and spooky action at a distance. But it involves absolute determinism in the universe, the complete absence of free will. Suppose the world is super-deterministic, with not just inanimate nature running on behind-the-scenes clockwork, but with our behavior, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined, including the 'decision' by the experimenter to carry out one set of measurements rather than another, the difficulty disappears. There is no need for a faster-than-light signal to tell particle A what measurement has been carried out on particle B, because the universe, including particle A, already 'knows' what that measurement, and its outcome, will be.^{[46]}
A few advocates of deterministic models have not given up on local hidden variables. For example, Gerard 't Hooft has argued that the aforementioned superdeterminism loophole cannot be dismissed.^{[47]} If Bell's conditions are correct, then for a hidden-variable theory to reproduce the predictions of quantum mechanics it must either (a) involve superluminal (faster-than-light) effects, in contradiction with relativistic physics, or (b) invoke superdeterminism (or, technically, a combination of both, though no known example of this exists).
There have also been repeated claims that Bell's arguments are irrelevant because they depend on hidden assumptions that are, in fact, questionable. For example, E. T. Jaynes^{[48]} argued in 1989 that Bell's theorem rests on two hidden assumptions that limit its generality.
Richard D. Gill claimed that Jaynes misunderstood Bell's analysis. Gill pointed out that in the same conference volume in which Jaynes argued against Bell, Jaynes confessed to being extremely impressed by a short proof, presented at the same conference by Steve Gull, that the singlet correlations could not be reproduced by a computer simulation of a local hidden-variable theory.^{[49]} Jaynes, writing nearly 30 years after Bell's landmark contributions, predicted that it would probably take another 30 years to fully appreciate Gull's stunning result.
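The flavor of such a simulation can be conveyed with a minimal sketch (the particular model below is an illustrative assumption, not Gull's proof): a simple deterministic local hidden-variable model, in which both particles share a random hidden angle and each detector responds with a sign function, reproduces perfect anticorrelation at equal settings but yields a correlation that is linear in the angle difference, rather than the −cos(θ) dependence predicted by quantum mechanics for the singlet state.

```python
import math
import random

random.seed(0)

def lhv_correlation(theta, n=200_000):
    """Monte Carlo estimate of E(a, b) for a toy deterministic local
    hidden-variable model: both particles share a hidden angle lam drawn
    uniformly, and each detector outputs sign(cos(setting - lam)), with
    Bob's sign flipped to mimic singlet anticorrelation at equal settings."""
    total = 0
    for _ in range(n):
        lam = random.uniform(0, 2 * math.pi)
        a_out = 1 if math.cos(0 - lam) >= 0 else -1          # Alice at angle 0
        b_out = -(1 if math.cos(theta - lam) >= 0 else -1)   # Bob at angle theta
        total += a_out * b_out
    return total / n

theta = math.pi / 4
print(lhv_correlation(theta))  # ~ 2*theta/pi - 1 = -0.5 (linear in theta)
print(-math.cos(theta))        # quantum singlet prediction, about -0.707
```

At θ = 0 the model matches quantum mechanics exactly (perfect anticorrelation), but at intermediate angles the linear law falls short of the cosine, which is precisely the discrepancy the Bell inequalities quantify.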
In 2006 a flurry of activity about implications for determinism arose with John Horton Conway and Simon B. Kochen's free will theorem,^{[50]} which stated "the response of a spin 1 particle to a triple experiment is free—that is to say, is not a function of properties of that part of the universe that is earlier than this response with respect to any given inertial frame."^{[51]} This theorem raised awareness of a tension between determinism fully governing an experiment (on the one hand) and Alice and Bob being free to choose any settings they like for their observations (on the other).^{[52]}^{[53]} The philosopher David Hodgson supports this theorem as showing that determinism is unscientific, thereby leaving the door open for our own free will.^{[54]}
The violations of Bell's inequalities, due to quantum entanglement, provide near-definitive demonstrations of something that was already strongly suspected: that quantum physics cannot be represented by any version of the classical picture of physics.^{[55]} Some earlier elements that had seemed incompatible with classical pictures included complementarity and wavefunction collapse. The Bell violations show that no resolution of such issues can avoid the ultimate strangeness of quantum behavior.^{[56]}
The EPR paper "pinpointed" the unusual properties of the entangled states, e.g. the above-mentioned singlet state, which is the foundation for present-day applications of quantum physics, such as quantum cryptography; one application involves the measurement of quantum entanglement as a physical source of bits for Rabin's oblivious transfer protocol. This non-locality was originally supposed to be illusory, because the standard interpretation could easily do away with action-at-a-distance by simply assigning to each particle definite spin states for all possible spin directions. The EPR argument was that such definite states must therefore exist, and that quantum theory is incomplete in the EPR sense, since these states do not appear in the theory. Bell's theorem showed that the "entangledness" prediction of quantum mechanics has a degree of non-locality that cannot be explained away by any classical theory of local hidden variables.
What is powerful about Bell's theorem is that it does not refer to any particular theory of local hidden variables. It shows that nature violates the most general assumptions behind classical pictures, not just details of some particular models. No combination of local deterministic and local random hidden variables can reproduce the phenomena predicted by quantum mechanics and repeatedly observed in experiments.^{[57]}
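The generality of the bound can be checked numerically in the CHSH form of Bell's inequality (a minimal sketch; the angle choices below are the standard ones that maximize the quantum violation): any local hidden-variable model obeys |S| ≤ 2, while the singlet-state correlation E(a, b) = −cos(a − b) reaches 2√2.

```python
import math

# Quantum prediction for the singlet state: E(a, b) = -cos(a - b),
# where a and b are the analyzer angles chosen by Alice and Bob.
def quantum_correlation(a, b):
    return -math.cos(a - b)

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Every local hidden-variable theory satisfies |S| <= 2.
def chsh(E, a, a_prime, b, b_prime):
    return E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)

# Standard angle choices that maximize the quantum violation.
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = chsh(quantum_correlation, a, a_prime, b, b_prime)
print(abs(S))  # 2*sqrt(2), about 2.828, exceeding the classical bound of 2
```

The quantum value 2√2 (the Tsirelson bound) exceeds the classical limit of 2 no matter which local hidden-variable model is plugged in for E, which is the sense in which the theorem rules out whole classes of models at once.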