    Intermediate determinism in general probabilistic theories

    Quantum theory is indeterministic, but not completely so. When a system is in a pure state there are properties it possesses with certainty, known as actual properties. The actual properties of a quantum system in a pure state fully determine the probability of finding the system to have any other property. We call this feature intermediate determinism. In dimensions of at least three, the intermediate determinism of quantum theory is guaranteed by the structure of its lattice of properties. This observation follows from Gleason's theorem, which is also why it fails to hold in dimension two. In this work we extend the idea of intermediate determinism from properties to measurements. Under this extension, intermediate determinism follows from the structure of quantum effects for separable Hilbert spaces of any dimension, including dimension two. We then find necessary and sufficient conditions for a general probabilistic theory to obey intermediate determinism, and show that, although related, neither the no-restriction hypothesis nor a Gleason-type theorem is necessary or sufficient for intermediate determinism.
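
    For orientation, the result this abstract leans on can be stated compactly. The following is the textbook form of Gleason's theorem, not notation taken from the paper itself:

        % Gleason's theorem (standard statement, dim H >= 3).
        % Any map mu assigning mu(P) in [0,1] to projections, with
        %   sum_i mu(P_i) = 1
        % for every family {P_i} of mutually orthogonal projections
        % summing to the identity, arises from a density operator rho:
        \[
          \mu(P) = \operatorname{tr}(\rho P)
          \qquad \text{for every projection } P \text{ on } \mathcal{H},
          \quad \dim \mathcal{H} \ge 3.
        \]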

    Gleason-type theorems and general probabilistic theories

    The postulates of quantum theory are rather abstract in comparison with those of other physical theories such as special relativity. This thesis considers two tools for investigating this discrepancy and makes a connection between them. The first of these tools, Gleason-type theorems, illustrates the interplay between postulates concerning observables, states and probabilities of measurement outcomes, demonstrating that they need not be entirely independent. Gleason's original and remarkable result applied to observables described by projection-valued measures; however, the theorem does not hold in dimension two. Busch generalised the idea to observables described by positive operator measures, proving a result which holds for all separable Hilbert spaces. We show that Busch's assumptions may be weakened without affecting the result, in a manner that brings them closer to Gleason's original treatment of projection-valued measures. We then demonstrate a connection between Gleason-type theorems and Cauchy's functional equation, which yields an alternative proof of Busch's result. The second tool we consider is the family of general probabilistic theories, which offers a means of comparing quantum theory with reasonable alternatives. We identify a general probabilistic theory which reproduces the set of non-local correlations achievable in quantum theory, a property often thought to be particular to quantum theory. Finally, we connect these two tools by determining the class of general probabilistic theories which admit a Gleason-type theorem.
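
    For reference, Busch's generalisation admits a compact standard statement (conventional notation, not necessarily the thesis's own):

        % Busch's Gleason-type theorem for effects (any separable H).
        % If f assigns a probability f(E) in [0,1] to every effect
        % 0 <= E <= I, additively over measurements, i.e.
        %   f(E_1) + f(E_2) + ... = 1  whenever  E_1 + E_2 + ... = I,
        % then there is a unique density operator rho with
        \[
          f(E) = \operatorname{tr}(\rho E)
          \qquad \text{for every effect } 0 \le E \le I.
        \]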

    A Gleason-type theorem for qubits based on mixtures of projective measurements

    We derive Born's rule and the density-operator formalism for quantum systems with Hilbert spaces of dimension two or larger. Our extension of Gleason's theorem relies only upon the consistent assignment of probabilities to the outcomes of projective measurements and their classical mixtures. This assumption is significantly weaker than those required for existing Gleason-type theorems valid in dimension two.
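
    The assumption in play is easy to see numerically. Below is a small NumPy sketch (our illustration, not the paper's construction) checking that Born-rule probabilities are consistently assigned on a classical mixture of two projective qubit measurements:

        import numpy as np

        def random_qubit_projector(rng):
            """Rank-1 projector |psi><psi| onto a random qubit state."""
            psi = rng.normal(size=2) + 1j * rng.normal(size=2)
            psi /= np.linalg.norm(psi)
            return np.outer(psi, psi.conj())

        rng = np.random.default_rng(0)
        rho = np.diag([0.7, 0.3])   # example density operator

        # Two projective measurements {P, I-P} and {Q, I-Q}, mixed
        # classically with weights w and 1-w.
        P, Q = random_qubit_projector(rng), random_qubit_projector(rng)
        w = 0.4
        effects = [w * P, w * (np.eye(2) - P),
                   (1 - w) * Q, (1 - w) * (np.eye(2) - Q)]

        probs = [np.trace(rho @ E).real for E in effects]
        assert all(p >= -1e-12 for p in probs)   # non-negative
        assert np.isclose(sum(probs), 1.0)       # sums to one
        print(probs)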

    Contextuality and inductive bias in quantum machine learning

    Generalisation in machine learning often relies on the ability to encode structures present in data into an inductive bias of the model class. To understand the power of quantum machine learning, it is therefore crucial to identify the types of data structures that lend themselves naturally to quantum models. In this work we look to quantum contextuality -- a form of nonclassicality with links to computational advantage -- for answers to this question. We introduce a framework for studying contextuality in machine learning, which leads us to a definition of what it means for a learning model to be contextual. From this, we connect a central concept of contextuality, called operational equivalence, to the ability of a model to encode a linearly conserved quantity in its label space. A consequence of this connection is that contextuality is tied to expressivity: contextual model classes that encode the inductive bias are generally more expressive than their noncontextual counterparts. To demonstrate this, we construct an explicit toy learning problem -- based on learning the payoff behaviour of a zero-sum game -- for which this is the case. By leveraging tools from geometric quantum machine learning, we then describe how to construct quantum learning models with the associated inductive bias, and show through our toy problem that they outperform their corresponding classical surrogate models. This suggests that understanding learning problems of this form may lead to useful insights about the power of quantum machine learning.
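
    As a toy illustration of a "linearly conserved quantity in label space" (our example, not the paper's construction): in a two-player zero-sum game the payoff labels of every strategy profile sum to zero, i.e. they satisfy one fixed linear constraint.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy two-player zero-sum game: player 2's payoff is minus player 1's.
        payoff_p1 = rng.normal(size=(3, 3))   # rows: P1 actions, cols: P2 actions
        payoff_p2 = -payoff_p1                # zero-sum condition

        # Treat each strategy profile (i, j) as a data point with label
        # (u1, u2). The conserved quantity is the linear functional
        # c . label with c = (1, 1), equal to zero for every point.
        c = np.array([1.0, 1.0])
        for i in range(3):
            for j in range(3):
                label = np.array([payoff_p1[i, j], payoff_p2[i, j]])
                assert np.isclose(c @ label, 0.0)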

    Gleason-Type Theorems from Cauchy’s Functional Equation

    Gleason-type theorems derive the density operator and the Born rule formalism of quantum theory from the measurement postulate, by considering additive functions which assign probabilities to measurement outcomes. Additivity is also the defining property of solutions to Cauchy's functional equation. This observation suggests an alternative proof of the strongest known Gleason-type theorem, based on techniques used to solve functional equations.
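
    For context, Cauchy's functional equation and the regularity fact that makes the connection work (standard material, stated in our notation):

        % Cauchy's functional equation: additivity alone,
        \[
          f(x + y) = f(x) + f(y) \qquad \text{for all } x, y \in \mathbb{R},
        \]
        % already forces f(qx) = q f(x) for all rational q. If f is in
        % addition bounded on some interval -- as probability assignments
        % are, being confined to [0,1] -- then f is linear:
        \[
          f(x) = c\,x \qquad \text{for some constant } c \in \mathbb{R}.
        \]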

    Contextuality in composite systems: the role of entanglement in the Kochen-Specker theorem

    The Kochen–Specker (KS) theorem reveals the nonclassicality of single quantum systems. In contrast, Bell's theorem and entanglement concern the nonclassicality of composite quantum systems. Accordingly, unlike incompatibility, entanglement and Bell non-locality are not necessary to demonstrate KS-contextuality. However, here we find that for multiqubit systems, entanglement and non-locality are both essential to proofs of the Kochen–Specker theorem. Firstly, we show that unentangled measurements (a strict superset of local measurements) can never yield a logical (state-independent) proof of the KS theorem for multiqubit systems. In particular, unentangled but nonlocal measurements, whose eigenstates exhibit "nonlocality without entanglement", are insufficient for such proofs. This also implies that proving Gleason's theorem on a multiqubit system necessarily requires entangled projections, as shown by Wallach [Contemp. Math., 305: 291-298 (2002)]. Secondly, we show that a multiqubit state admits a statistical (state-dependent) proof of the KS theorem if and only if it can violate a Bell inequality with projective measurements. We also establish the relationship between entanglement and the theorems of Kochen–Specker and Gleason more generally in multiqudit systems by constructing new examples of KS sets. Finally, we discuss how our results shed new light on the role of multiqubit contextuality as a resource within the paradigm of quantum computation with state injection.
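
    The canonical two-qubit example behind these claims is the Mermin–Peres magic square, quoted here for orientation (a standard construction, not one of the paper's new KS sets):

        % Mermin-Peres magic square: nine two-qubit Pauli observables.
        \[
          \begin{array}{ccc}
            X \otimes I & I \otimes X & X \otimes X \\
            I \otimes Y & Y \otimes I & Y \otimes Y \\
            X \otimes Y & Y \otimes X & Z \otimes Z
          \end{array}
        \]
        % The observables in each row and each column mutually commute.
        % Every row and the first two columns multiply to +I, while the
        % third column multiplies to -I, so no noncontextual assignment
        % of values +/-1 to the nine observables exists. Note that the
        % joint eigenstates of the third row and of the third column are
        % entangled, in line with the abstract's first result.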

    General Probabilistic Theories with a Gleason-type Theorem

    Gleason-type theorems for quantum theory allow one to recover the quantum state space by assuming that (i) states consistently assign probabilities to measurement outcomes and that (ii) there is a unique state for every such assignment. We identify the class of general probabilistic theories which also admit Gleason-type theorems. It contains theories satisfying the no-restriction hypothesis, as well as others which can simulate such an unrestricted theory arbitrarily well when allowing for post-selection on measurement outcomes. Our result also implies that the standard no-restriction hypothesis applied to effects is not equivalent to the dual no-restriction hypothesis applied to states, which is found to be less restrictive.
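
    For reference, the no-restriction hypothesis in standard GPT language (conventional definitions; the paper's own notation may differ):

        % No-restriction hypothesis for effects. Given a state space
        % Omega (a compact convex set), the allowed effects exhaust all
        % mathematically consistent ones:
        \[
          \mathcal{E} = \{\, e : \Omega \to [0,1] \mid e \ \text{affine} \,\},
        \]
        % i.e. every affine functional sending states to probabilities is
        % a physical effect. The dual hypothesis, applied to states,
        % instead demands that every normalised, consistent probability
        % assignment on the effects is an allowed state.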

    Maximal intrinsic randomness of a quantum state

    One of the most counterintuitive aspects of quantum theory is its claim that there is 'intrinsic' randomness in the physical world. Quantum information science has greatly progressed in the study of intrinsic, or secret, quantum randomness in the past decade. With much emphasis on device-independent and semi-device-independent bounds, one of the most basic questions has escaped attention: how much intrinsic randomness can be extracted from a given state $\rho$, and what measurements achieve this bound? We answer this question for two different randomness quantifiers: the conditional min-entropy and the conditional von Neumann entropy. For the former, we solve the min-max problem of finding the measurement that minimises the maximal guessing probability of an eavesdropper. The result is that one can guarantee an amount of conditional min-entropy $H^{*}_{\mathrm{min}} = -\log_{2} P^{*}_{\mathrm{guess}}(\rho)$ with $P^{*}_{\mathrm{guess}}(\rho) = \frac{1}{d}\,(\mathrm{tr}\sqrt{\rho})^{2}$ by performing suitable projective measurements. For the latter, we find that its maximal value is $H^{*} = \log_{2} d - S(\rho)$, with $S(\rho)$ the von Neumann entropy of $\rho$. Optimal values for $H^{*}_{\mathrm{min}}$ and $H^{*}$ are achieved by measuring in any basis that is unbiased to the eigenbasis of $\rho$, as well as by other less intuitive measurements.
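
    Both closed-form bounds above are straightforward to evaluate; a short NumPy sketch (our illustration, using the formulas quoted in the abstract):

        import numpy as np

        def intrinsic_randomness(rho):
            """Evaluate the two randomness bounds for a density matrix rho."""
            d = rho.shape[0]
            evals = np.clip(np.linalg.eigvalsh(rho), 0.0, 1.0)
            p_guess = np.sum(np.sqrt(evals)) ** 2 / d    # (tr sqrt(rho))^2 / d
            h_min = -np.log2(p_guess)                    # conditional min-entropy
            nz = evals[evals > 0]
            s = -np.sum(nz * np.log2(nz))                # von Neumann entropy S(rho)
            return h_min, np.log2(d) - s                 # (H*_min, H*)

        # A pure state attains the maximum log2(d); the maximally mixed
        # state carries no intrinsic randomness at all.
        print(intrinsic_randomness(np.diag([1.0, 0.0])))   # ~ (1.0, 1.0)
        print(intrinsic_randomness(np.diag([0.5, 0.5])))   # ~ (0.0, 0.0)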