A local hidden-variable theory in the interpretation of quantum mechanics is a hidden-variable theory that has the added requirement of being consistent with local realism. It refers to all types of theory that attempt to account for the probabilistic features of quantum mechanics by the mechanism of underlying, inaccessible variables, with the additional requirement from local realism that distant events be independent, ruling out instantaneous (i.e. faster-than-light) interactions between separate events.
The mathematical implications of a local hidden-variable theory with regard to the phenomenon of quantum entanglement were explored by physicist John Stewart Bell, who in 1964 proved that broad classes of local hidden-variable theories cannot reproduce the correlations between measurement outcomes that quantum mechanics predicts. The most notable exception is superdeterminism. Superdeterministic hidden-variable theories can be local and yet compatible with observations.
Bell's theorem starts with the implication of the principle of local realism, that separated measurement processes are independent. Based on this premise, the probability of a coincidence between separated measurements of particles with correlated (e.g. identical or opposite) orientation properties can be written:

(1)    P(a, b) = ∫ dλ ρ(λ) p_A(a, λ) p_B(b, λ)

where p_A(a, λ) is the probability of detection of particle A with hidden variable λ by detector A, set in direction a, and similarly p_B(b, λ) is the probability at detector B, set in direction b, for particle B, sharing the same value of λ. The source is assumed to produce particles in the state λ with probability ρ(λ).
Using (1), various Bell inequalities can be derived, which provide limits on the possible behaviour of local hiddenvariable models.
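Because any model of the form (1) is a probabilistic mixture of deterministic local assignments, the local bound of one such inequality (CHSH) can be found by enumerating those assignments. A minimal sketch (illustrative code, not from the original; function and variable names are ours):

```python
from itertools import product

# Each deterministic local strategy assigns an outcome +/-1 to each of
# two measurement settings per side: A = (A_a, A_a'), B = (B_b, B_b').
# A model of the form (1) is a probabilistic mixture of such strategies,
# so the CHSH value of any local hidden-variable model is bounded by the
# deterministic maximum.
def chsh(A, B):
    # S = E(a,b) + E(a,b') + E(a',b) - E(a',b') for a deterministic strategy
    return A[0] * B[0] + A[0] * B[1] + A[1] * B[0] - A[1] * B[1]

values = [chsh(A, B)
          for A in product([+1, -1], repeat=2)
          for B in product([+1, -1], repeat=2)]

print(max(values))       # local bound: 2
print(2 * 2 ** 0.5)      # quantum (Tsirelson) bound: about 2.828
```

The quantum-mechanical maximum of 2√2 exceeds the local bound of 2, which is the content of the CHSH inequality violation.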
When John Stewart Bell originally derived his inequality, it was in relation to pairs of entangled spin-1/2 particles, every one of those emitted being detected. Bell showed that when detectors are rotated with respect to each other, local realist models must yield a correlation curve that is bounded by a straight line between maxima (detectors aligned), whereas the quantum correlation curve is a cosine relationship. The first Bell tests were performed not with spin-1/2 particles but with photons, which have spin 1. A classical local hidden-variable prediction for photons, based on Maxwell's equations, yields a cosine curve, but of reduced amplitude, such that the curve still lies within the straight-line limits specified in the original Bell inequality.
Bell's theorem assumes that measurement settings are completely independent, and not in principle determined by the universe at large. If this assumption were incorrect, as proposed in superdeterminism, the conclusions drawn from Bell's theorem could be invalidated. The theorem also relies on highly efficient and space-like-separated measurements. Such flaws are generally called loopholes. A loophole-free experimental verification of a Bell inequality violation was performed in 2015.^{[1]}
Consider, for example, David Bohm's thought experiment, in which a molecule breaks into two atoms with opposite spins.^{[2]} Assume that this spin can be represented by a real vector, pointing in any direction. It will be the "hidden variable" in our model. Taking it to be a unit vector, all possible values of the hidden variable are represented by all points on the surface of a unit sphere.
Suppose that the spin is to be measured in the direction a. Then the natural assumption, given that all atoms are detected, is that all atoms the projection of whose spin in the direction a is positive will be detected as spinup (coded as +1), while all whose projection is negative will be detected as spindown (coded as −1). The surface of the sphere will be divided into two regions, one for +1, one for −1, separated by a great circle in the plane perpendicular to a. Assuming for convenience that a is horizontal, corresponding to the angle a with respect to some suitable reference direction, the dividing circle will be in a vertical plane. So far we have modelled side A of our experiment.
Now to model side B. Assume that b too is horizontal, corresponding to the angle b. There will be a second great circle drawn on the same sphere, on one side of which the outcome for particle B is +1 and on the other −1. The circle will again be in a vertical plane.
The two circles divide the surface of the sphere into four regions. The type of "coincidence" (++, −−, +− or −+) observed for any given pair of particles is determined by the region within which their hidden variable falls. Assuming the source to be "rotationally invariant" (to produce all possible states λ with equal probability), the probability of a given type of coincidence will clearly be proportional to the corresponding area, and these areas will vary linearly with the angle between a and b. (To see this, think of an orange and its segments. The area of peel corresponding to a number n of segments is roughly proportional to n. More accurately, it is proportional to the angle subtended at the centre.)
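A Monte Carlo sketch of this deterministic model (hypothetical code, not from the original): sample the hidden variable λ as a random unit vector, give each side the sign of the projection onto its detector axis (with opposite spins for the two atoms), and observe that the resulting correlation varies linearly with the angle between the settings, unlike the quantum prediction −cos θ.

```python
import math
import random

def correlation(theta, n=200_000, seed=0):
    """Correlation E(a, b) of the deterministic model for detector
    axes separated by angle theta (both axes taken in the x-y plane)."""
    rng = random.Random(seed)
    a = (1.0, 0.0, 0.0)
    b = (math.cos(theta), math.sin(theta), 0.0)
    total = 0
    for _ in range(n):
        # Hidden variable: uniformly random unit vector (the spin direction).
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        lam = (s * math.cos(phi), s * math.sin(phi), z)
        A = 1 if sum(x * y for x, y in zip(a, lam)) > 0 else -1
        # The second atom carries the opposite spin.
        B = 1 if sum(x * y for x, y in zip(b, lam)) < 0 else -1
        total += A * B
    return total / n

# Local-realist prediction is linear in theta: E = -1 + 2*theta/pi,
# whereas quantum mechanics predicts E = -cos(theta).
for theta in (0.0, math.pi / 4, math.pi / 2):
    print(correlation(theta), -1 + 2 * theta / math.pi)
```

At aligned settings both models give −1; the discrepancy is largest at intermediate angles, where the straight line and the cosine differ most.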
The formula (1) above has not been used explicitly – it is hardly relevant when, as here, the situation is fully deterministic. The problem could be reformulated in terms of the functions in the formula, with ρ constant and the probability functions step functions. The principle behind (1) has in fact been used, but purely intuitively.
Thus the local hidden-variable prediction for the probability of coincidence is proportional to the angle (b − a) between the detector settings. The quantum correlation is defined to be the expectation value of the product of the individual outcomes, and this is

(2)    E = P_{++} + P_{−−} − P_{+−} − P_{−+}

where P_{++} is the probability of a "+" outcome on both sides, P_{+−} that of a "+" on side A and a "−" on side B, etc.
Since each individual term varies linearly with the difference (b − a), so does their sum.
The result is shown in the figure.
In almost all real applications of Bell's inequalities, the particles used have been photons. It is not necessarily assumed that the photons are particle-like; they may be just short pulses of classical light.^{[3]} Nor is it assumed that every single one is detected. Instead, the hidden variable set at the source is taken to determine only the probability of a given outcome, the actual individual outcomes being partly determined by other hidden variables local to the analyser and detector. It is assumed that these other hidden variables are independent on the two sides of the experiment.^{[4]}^{[5]}
In this stochastic model, in contrast to the above deterministic case, we do need equation (1) to find the local-realist prediction for coincidences. It is necessary first to make some assumption regarding the functions p_A(a, λ) and p_B(b, λ), the usual one being that these are both cosine-squared functions, in line with Malus' law. Assuming the hidden variable to be the polarisation direction (parallel on the two sides in real applications, not orthogonal), equation (1) becomes

(3)    P(a, b) = ∫ dλ ρ(λ) cos²(a − λ) cos²(b − λ) = 1/4 + (1/8) cos 2φ

where φ = b − a and the polarisation direction λ is assumed uniformly distributed, ρ(λ) = 1/π.
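This coincidence probability can be checked numerically under the stated assumptions (uniform ρ(λ) = 1/π and Malus-law cosine-squared detection probabilities). A small sketch comparing a midpoint-rule integral with the closed form 1/4 + (1/8) cos 2φ:

```python
import math

def coincidence(a, b, n=100_000):
    """Numerically integrate (1/pi) * cos^2(a - lam) * cos^2(b - lam)
    over lam in [0, pi): the local-realist coincidence probability."""
    total = 0.0
    for k in range(n):
        lam = (k + 0.5) * math.pi / n   # midpoint rule over one period
        total += math.cos(a - lam) ** 2 * math.cos(b - lam) ** 2
    return total / n

phi = 0.3  # phi = b - a, in radians
closed_form = 0.25 + 0.125 * math.cos(2 * phi)
print(coincidence(0.0, phi), closed_form)
```

Because the integrand is smooth and periodic, the midpoint rule converges very quickly here, and the two printed values agree to high precision.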
The predicted quantum correlation can be derived from this and is shown in the figure.
In optical tests, incidentally, it is not certain that the quantum correlation is well-defined. Under a classical model of light, a single photon can go partly into the "+" channel and partly into the "−" one, resulting in the possibility of simultaneous detections in both. Though experiments such as that of Grangier et al. have shown that this probability is very low,^{[6]} it is not logical to assume that it is actually zero. The definition of quantum correlation is adapted to the idea that outcomes will always be +1, −1 or 0. There is no obvious way of including any other possibility, which is one of the reasons why the Clauser and Horne 1974 Bell test, using single-channel polarisers, should be used instead of the CHSH Bell test. The CH74 inequality concerns just probabilities of detection, not quantum correlations.
For separable states of two particles, there is a simple hidden-variable model for any measurements on the two parties. Surprisingly, there are also entangled states for which all von Neumann measurements can be described by a hidden-variable model.^{[7]} Such states are entangled, but do not violate any Bell inequality. The so-called Werner states are a single-parameter family of states that are invariant under any transformation of the type (U ⊗ U) ϱ (U† ⊗ U†), where U is a unitary matrix. For two qubits, they are noisy singlets given as

(4)    ϱ(p) = p |ψ⁻⟩⟨ψ⁻| + (1 − p) I/4

where the singlet is defined as |ψ⁻⟩ = (1/√2)(|01⟩ − |10⟩).
R. F. Werner showed that such states allow for a hidden-variable model for p ≤ 1/2, while they are entangled if p > 1/3. The bound for hidden-variable models was later improved up to p ≈ 0.66.^{[8]} Hidden-variable models have been constructed for Werner states even if POVM measurements are allowed, not only von Neumann measurements.^{[9]} Hidden-variable models were also constructed for noisy maximally entangled states, and even extended to arbitrary pure states mixed with white noise.^{[10]} Besides bipartite systems, there are also results for the multipartite case. A hidden-variable model for any von Neumann measurements at the parties has been presented for a three-qubit quantum state.^{[11]}
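The entanglement threshold of the noisy singlet can be verified with the Peres–Horodecki (positive partial transpose) criterion, which is conclusive for two qubits. A small sketch (hypothetical code, using the parametrisation ϱ(p) = p|ψ⁻⟩⟨ψ⁻| + (1 − p)I/4):

```python
import numpy as np

def werner(p):
    """Two-qubit Werner state: p * singlet + (1 - p) * I/4."""
    psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # |01> - |10>, normalised
    return p * np.outer(psi, psi) + (1 - p) * np.eye(4) / 4

def min_pt_eigenvalue(rho):
    """Smallest eigenvalue of the partial transpose on the second qubit.
    Negative iff the two-qubit state is entangled (Peres-Horodecki)."""
    r = rho.reshape(2, 2, 2, 2)                  # indices (i, k, j, l) of |ik><jl|
    pt = r.transpose(0, 3, 2, 1).reshape(4, 4)   # swap k and l: transpose system B
    return np.linalg.eigvalsh(pt).min()

# For this parametrisation the minimum eigenvalue works out to (1 - 3p)/4,
# so the state is entangled exactly when p > 1/3.
print(min_pt_eigenvalue(werner(0.3)))   # non-negative: separable
print(min_pt_eigenvalue(werner(0.5)))   # negative: entangled, yet admits an LHV model
```

The case p = 0.5 illustrates the point of the paragraph above: the state is entangled by the PPT criterion, yet all von Neumann measurements on it admit a local hidden-variable description.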
By varying the assumed probability and density functions in equation (1), we can arrive at a considerable variety of localrealist predictions.
Some hypotheses have also been put forward concerning the role of time in constructing hidden-variable theories. One approach, suggested by K. Hess and W. Philipp, relies upon possible consequences of time dependencies of hidden variables; it has been criticized by R. D. Gill, G. Weihs, A. Zeilinger and M. Żukowski.^{[12]}^{[13]}
If we make realistic (wavebased) assumptions regarding the behaviour of light on encountering polarisers and photodetectors, we find that we are not compelled to accept that the probability of detection will reflect Malus' law exactly.
We might perhaps suppose the polarisers to be perfect, with output intensity of polariser A proportional to cos^{2}(a − λ), but reject the quantummechanical assumption that the function relating this intensity to the probability of detection is a straight line through the origin. Real detectors, after all, have "dark counts" that are there even when the input intensity is zero, and become saturated when the intensity is very high. It is not possible for them to produce outputs in exact proportion to input intensity for all intensities.
By varying our assumptions, it seems possible that the realist prediction could approach the quantum-mechanical one within the limits of experimental error,^{[14]} though clearly a compromise must be reached: we have to match both the behaviour of the individual light beam on passage through a polariser and the observed coincidence curves. The former would be expected to follow Malus' law fairly closely, though experimental evidence here is not so easy to obtain, since we are interested in the behaviour of very weak light, for which the law may differ slightly from that of stronger light.
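As an illustration (hypothetical code; the particular saturating response function is our assumption, not taken from the literature), one can replace the strict proportionality between intensity and detection probability with a response that has a dark-count floor and saturates at high intensity, and compare the resulting coincidence curve with the pure Malus-law model:

```python
import math

def response(intensity, dark=0.01, gain=3.0):
    """Assumed detector response: dark counts at zero input and
    saturation at high input, instead of strict proportionality."""
    return dark + (1 - dark) * (1 - math.exp(-gain * intensity))

def coincidence(phi, linear=True, n=20_000):
    """Coincidence probability for polariser settings differing by phi,
    averaging over a uniformly distributed polarisation direction lam."""
    total = 0.0
    for k in range(n):
        lam = (k + 0.5) * math.pi / n
        ia = math.cos(lam) ** 2          # Malus-law intensity, side A
        ib = math.cos(phi - lam) ** 2    # Malus-law intensity, side B
        pa = ia if linear else response(ia)
        pb = ib if linear else response(ib)
        total += pa * pb
    return total / n

phi = math.pi / 8
print(coincidence(phi, linear=True))    # Malus-law model, as in equation (3)
print(coincidence(phi, linear=False))   # nonlinear detectors shift the curve
```

The nonlinear response changes both the level and the shape of the coincidence curve, which is the freedom such models exploit when trying to approach the quantum-mechanical prediction.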