Logical pre- and post-selection paradoxes are proofs of contextuality
If a quantum system is prepared and later post-selected in certain states,
"paradoxical" predictions for intermediate measurements can be obtained. This
is the case both when the intermediate measurement is strong, i.e. a projective
measurement with the Lüders-von Neumann update rule, and when it is weak, in
which case the paradoxes show up as anomalous weak values. Leifer and Spekkens
[quant-ph/0412178] identified a striking class of such paradoxes, known as
logical pre- and post-selection paradoxes, and showed that they are indirectly
connected with contextuality. By analysing the measurement disturbance required
in models of these phenomena, we find that the strong-measurement version of
logical pre- and post-selection paradoxes actually constitutes a direct
manifestation of quantum contextuality. The proof hinges on under-appreciated
features of the paradoxes. In particular, we show by example that it is not
possible to prove contextuality without Lüders-von Neumann updates for the
intermediate measurements, nonorthogonal pre- and post-selection, and 0/1
probabilities for the intermediate measurements. Since one of us has recently
shown that anomalous weak values are also a direct manifestation of
contextuality [arXiv:1409.1535], we now know that this is true for both
realizations of logical pre- and post-selection paradoxes.
Comment: In Proceedings QPL 2015, arXiv:1511.0118
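(A minimal numerical sketch of the weak-measurement half of the claim, with states and observable chosen purely for illustration rather than taken from the paper: the standard weak value <phi|A|psi>/<phi|psi> lies outside the eigenvalue range of A for suitable nonorthogonal pre- and post-selection.)

import numpy as np

# Pre-selected state |psi> and post-selected state |phi>, nonorthogonal as
# the paradoxes require; the angle is an arbitrary illustrative choice.
theta = 0.4
psi = np.array([np.cos(theta), np.sin(theta)])   # pre-selection
phi = np.array([np.cos(theta), -np.sin(theta)])  # post-selection

# Observable: Pauli Z, whose eigenvalues are +1 and -1.
Z = np.diag([1.0, -1.0])

# Standard weak value A_w = <phi|A|psi> / <phi|psi>.
weak_value = (phi.conj() @ Z @ psi) / (phi.conj() @ psi)
print(weak_value)  # ~1.435 here: outside [-1, 1], hence "anomalous"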
Is a Time Symmetric Interpretation of Quantum Theory Possible Without Retrocausality?
Huw Price has proposed an argument that suggests a time-symmetric ontology for quantum theory must necessarily be retrocausal, i.e. it must involve influences that travel backwards in time. One of Price's assumptions is that the quantum state is a state of reality. However, one of the reasons for exploring retrocausality is that it offers the potential for evading the consequences of no-go theorems, including recent proofs of the reality of the quantum state. Here, we show that this assumption can be replaced by a different assumption, called λ-mediation, that plausibly holds independently of the status of the quantum state. We also reformulate the other assumptions behind the argument to place them in a more general framework and pin down the notion of time symmetry involved more precisely. We show that our assumptions imply a timelike analogue of Bell's local causality criterion and, in doing so, give a new interpretation of timelike violations of Bell inequalities. Namely, they show the impossibility of a (non-retrocausal) time-symmetric ontology.
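(For reference, Bell's local causality criterion is the standard factorization below, written in LaTeX; the paper's timelike analogue replaces the two spacelike-separated wings with measurements at two different times, and its precise formulation is more careful than this schematic.)

% Bell's local causality: conditioned on the ontic state \lambda, outcomes
% A and B of spacelike-separated measurements with settings X and Y factorize:
P(A, B \mid X, Y, \lambda) = P(A \mid X, \lambda)\, P(B \mid Y, \lambda)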
Quantum lost property: a possible operational meaning for the Hilbert-Schmidt product
Minimum error state discrimination between two mixed states \rho and \sigma
can be aided by the receipt of "classical side information" specifying which
states from some convex decompositions of \rho and \sigma apply in each run. We
quantify this phenomenon by the average trace distance, and give lower and upper
bounds on this quantity as functions of \rho and \sigma. The lower bound is
simply the trace distance between \rho and \sigma, trivially seen to be tight.
The upper bound is \sqrt{1 - tr(\rho\sigma)}, and we conjecture that this is
also tight. We reformulate this conjecture in terms of the existence of a pair
of "unbiased decompositions", which may be of independent interest, and prove
it for a few special cases. Finally, we point towards a link with a notion of
non-classicality known as preparation contextuality.
Comment: 3 pages, 1 figure. v2: Fewer typos in text and less punctuation in title
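(A quick numerical illustration of the two bounds, assuming only their standard definitions; the random-state sampler and the sample size are illustrative choices, not the paper's.)

import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(d=2):
    """Random d x d density matrix rho = G G† / tr(G G†), with G Ginibre."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T
    return m / np.trace(m)

def trace_distance(rho, sigma):
    """T(rho, sigma) = (1/2) * sum of |eigenvalues| of rho - sigma."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

for _ in range(1000):
    rho, sigma = random_density_matrix(), random_density_matrix()
    lower = trace_distance(rho, sigma)               # the paper's lower bound
    upper = np.sqrt(1 - np.trace(rho @ sigma).real)  # the paper's upper bound
    assert lower <= upper + 1e-12  # the two bounds bracket the average trace distance
print("trace distance <= sqrt(1 - tr(rho sigma)) on all samples")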
Classifying Causal Structures: Ascertaining when Classical Correlations are Constrained by Inequalities
The classical causal relations between a set of variables, some observed and
some latent, can induce both equality constraints (typically conditional
independences) and inequality constraints (Instrumental and Bell
inequalities being prototypical examples) on the compatible distributions over
the observed variables. Enumerating a causal structure's implied inequality
constraints is generally far more difficult than enumerating its equalities.
Furthermore, only inequality constraints ever admit violation by quantum
correlations. For both of these reasons, it is important to classify causal
scenarios into those which impose inequality constraints versus those which do
not. Here we develop methods for detecting such scenarios by appealing to
d-separation, e-separation, and incompatible supports. Many (perhaps all?)
scenarios with exclusively equality constraints can be detected via a condition
articulated by Henson, Lal and Pusey (HLP). Considering all scenarios with up
to 4 observed variables, which number in the thousands, we are able to resolve
all but three causal scenarios, providing evidence that the HLP condition is,
in fact, exhaustive.
Comment: 37+12 pages, 13 figures, 4 tables
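(To make the d-separation criterion concrete, here is a textbook check via ancestral moralization, in Python; the instrumental-scenario example at the end is standard, and none of this is the paper's actual machinery.)

from collections import deque

def d_separated(parents, xs, ys, zs):
    """Return True iff node sets xs and ys are d-separated given zs in the
    DAG described by `parents` (node -> set of parents), via the classic
    moralization criterion: restrict to ancestors of xs | ys | zs, marry
    co-parents, drop edge directions, delete zs, and test connectivity."""
    relevant = set(xs) | set(ys) | set(zs)
    ancestors, stack = set(), list(relevant)
    while stack:                        # ancestral closure of relevant nodes
        node = stack.pop()
        if node not in ancestors:
            ancestors.add(node)
            stack.extend(parents.get(node, ()))
    adj = {n: set() for n in ancestors}
    for child in ancestors:             # moralize the ancestral subgraph
        ps = [p for p in parents.get(child, ()) if p in ancestors]
        for p in ps:
            adj[p].add(child)
            adj[child].add(p)
        for i, p in enumerate(ps):      # marry co-parents
            for q in ps[i + 1:]:
                adj[p].add(q)
                adj[q].add(p)
    blocked = set(zs)                   # delete conditioning set, then BFS
    seen, queue = set(), deque(x for x in xs if x not in blocked)
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        if node in ys:
            return False                # path found => not d-separated
        queue.extend(n for n in adj[node] if n not in blocked)
    return True

# Instrumental scenario: I -> A, latent L -> A, L -> B, and A -> B.
parents = {"A": {"I", "L"}, "B": {"A", "L"}, "I": set(), "L": set()}
print(d_separated(parents, {"I"}, {"B"}, {"A"}))  # False: A is a collider
print(d_separated(parents, {"I"}, {"L"}, set()))  # True: I, L independent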
A structure theorem for generalized-noncontextual ontological models
It is useful to have a criterion for when the predictions of an operational
theory should be considered classically explainable. Here we take the criterion
to be that the theory admits of a generalized-noncontextual ontological model.
Existing works on generalized noncontextuality have focused on experimental
scenarios having a simple structure, typically prepare-measure scenarios.
Here, we formally extend the framework of ontological models as well as the
principle of generalized noncontextuality to arbitrary compositional scenarios.
We leverage this process-theoretic framework to prove that, under some
reasonable assumptions, every generalized-noncontextual ontological model of a
tomographically local operational theory has a surprisingly rigid and simple
mathematical structure; in short, it corresponds to a frame representation
which is not overcomplete. One consequence of this theorem is that the largest
number of ontic states possible in any such model is given by the dimension of
the associated generalized probabilistic theory. This constraint is useful for
generating noncontextuality no-go theorems as well as techniques for
experimentally certifying contextuality. Along the way, we extend known results
concerning the equivalence of different notions of classicality from
prepare-measure scenarios to arbitrary compositional scenarios. Specifically,
we prove a correspondence between the following three notions of classical
explainability of an operational theory: (i) admitting a noncontextual
ontological model, (ii) admitting of a positive quasiprobability
representation, and (iii) being simplex-embeddable.
Comment: lots of diagrams
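(For orientation, the standard textbook definition of the quasiprobability, i.e. frame, representation referred to in (ii), written in LaTeX; the notation is generic rather than the paper's.)

% A frame representation over a measure space \Lambda assigns
\rho \mapsto \mu_\rho : \Lambda \to \mathbb{R}, \qquad
E \mapsto \xi_E : \Lambda \to \mathbb{R},
% such that the Born rule is reproduced as a classical-looking average:
\operatorname{tr}(E\rho) = \int_\Lambda \xi_E(\lambda)\, \mu_\rho(\lambda)\, \mathrm{d}\lambda .
% The representation is positive when \mu_\rho(\lambda) \ge 0 and
% 0 \le \xi_E(\lambda) \le 1 for all states \rho, effects E, and \lambda.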
Negativity and steering: a stronger Peres conjecture
The violation of a Bell inequality certifies the presence of entanglement even if neither party trusts their measurement devices. Recently, Moroder et al. [T. Moroder, J.-D. Bancal, Y.-C. Liang, M. Hofmann, and O. Gühne, Phys. Rev. Lett. 111, 030501 (2013)] showed how to make this statement quantitative, using semidefinite programming to calculate how much entanglement is certified by a given violation. Here I adapt their techniques to the case in which Bob's measurement devices are in fact trusted, the setting for Einstein-Podolsky-Rosen steering inequalities. Interestingly, all of the steering inequalities studied turn out to require negativity for their violation. This supports a significant strengthening of Peres's conjecture that negativity is required to violate a bipartite Bell inequality.
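(A minimal sketch of the quantity in question, using the standard definition of negativity, N(rho) = (||rho^{T_B}||_1 - 1)/2, on an illustrative two-qubit Werner state; not code from the paper.)

import numpy as np

def partial_transpose_B(rho):
    """Partial transpose on the second qubit of a 4x4 two-qubit state."""
    r = rho.reshape(2, 2, 2, 2)                   # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)  # swap b <-> b'

def negativity(rho):
    """N(rho) = (||rho^{T_B}||_1 - 1) / 2, i.e. |sum of negative eigenvalues|."""
    eigs = np.linalg.eigvalsh(partial_transpose_B(rho))
    return (np.sum(np.abs(eigs)) - 1) / 2

# Werner state p |Psi-><Psi-| + (1 - p) I/4; it has negativity iff p > 1/3.
psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
for p in (0.2, 0.5, 0.9):
    rho = p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4
    print(p, round(negativity(rho), 4))  # -> 0.0, 0.125, 0.425 (up to float noise)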