    Decoherent Histories and Measurement of Temporal Correlation Functions for Leggett-Garg Inequalities

    We consider two protocols for the measurement of the temporal correlation functions of a dichotomic variable Q appearing in Leggett-Garg type inequalities. The protocols measure solely whether Q has the same or different sign at the ends of a given time interval. They are inspired, in part, by a decoherent histories analysis of the two-time histories of Q, although the protocols are ultimately expressed in macrorealistic form independent of quantum theory. The first type involves an ancilla coupled to the system with two sequential CNOT gates, and the two-time histories of the system are determined in a single final-time measurement of the ancilla. It is non-invasive for special choices of initial system states and partially invasive for more general choices. Modified Leggett-Garg type inequalities which accommodate the partial invasiveness are discussed. The quantum picture of the protocol shows that for certain choices of primary system initial state the protocol is undetectable with respect to final system state measurements, although it is still invasive at intermediate times. This invasiveness can be reduced with different choices of ancilla states, and the protocol is then similar in flavour to a weak measurement. The second type of protocol is based on the fact that the behaviour of Q over a time interval can be determined from knowledge of the dynamics together with a measurement of certain initial (or final) data. Its quantum version corresponds to the known fact that when sets of histories are decoherent, their probabilities may be expressed in terms of a record projector, hence the two-time histories in which Q has the same or different sign can be determined by a single projective measurement. The resulting protocol resembles the decay-type protocol proposed by Huelga and collaborators (which is non-invasive but requires a stationarity assumption).

    Comment: 33 pages. Revised appendix on LG inequalities for partially invasive measurements. Accepted for publication in Physical Review
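
    The first protocol lends itself to a direct simulation. Below is a minimal sketch, not the paper's own code: it assumes a single-qubit system with Q = sigma_z, a toy Hamiltonian H = (omega/2) sigma_x, and an ancilla prepared in |0>. The two sequential CNOTs leave the ancilla holding the XOR of the signs of Q at the two ends of the interval, so p(same) - p(different) is read off as the expectation of sigma_z on the ancilla.

        import numpy as np

        # Single-qubit operators (system and ancilla are each one qubit)
        I2 = np.eye(2, dtype=complex)
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.diag([1, -1]).astype(complex)

        def cnot_sys_to_anc():
            # CNOT with the system as control and the ancilla as target:
            # flips the ancilla when the system is in |1> (Q = -1)
            P0 = np.diag([1, 0]).astype(complex)
            P1 = np.diag([0, 1]).astype(complex)
            return np.kron(P0, I2) + np.kron(P1, X)

        def evolve(t, omega=1.0):
            # U = exp(-i H t) for the assumed toy Hamiltonian
            # H = (omega/2) sigma_x, acting trivially on the ancilla
            U = np.cos(omega * t / 2) * I2 - 1j * np.sin(omega * t / 2) * X
            return np.kron(U, I2)

        def correlator(psi_sys, t):
            # Ancilla starts in |0>; after the CNOT-evolve-CNOT sequence
            # it holds 0 if Q had the same sign at both times, 1 otherwise
            psi = np.kron(psi_sys, np.array([1, 0], dtype=complex))
            CN = cnot_sys_to_anc()
            psi = CN @ evolve(t) @ CN @ psi
            # p(same) - p(different) = <sigma_z> on the ancilla
            return np.real(psi.conj() @ np.kron(I2, Z) @ psi)

        psi0 = np.array([1, 0], dtype=complex)  # system prepared in |0>
        print(correlator(psi0, t=np.pi / 3))    # 0.5 = cos(omega t)

    For a system prepared in a sigma_z eigenstate this reproduces cos(omega t), the standard two-time correlation function of this toy model; the choices of Hamiltonian and states here are illustrative only.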

    Arrival Times in Quantum Theory from an Irreversible Detector Model

    We investigate a detector scheme designed to measure the arrival of a particle at x=0 during a finite time interval. The detector consists of a two-state system which undergoes a transition from one state to the other when the particle crosses x=0, and possesses the realistic feature that it is effectively irreversible as a result of being coupled to a large environment. The probabilities for crossing or not crossing x=0 thereby derived coincide with earlier phenomenologically proposed expressions involving a complex potential. The probabilities are compared with similar previously proposed expressions involving sums over paths, and a connection with time operator approaches is also indicated.

    Comment: 19 pages, plain TeX (Fourth revision). To appear in Prog.Th.Phys. Vol. 102, No.
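
    For orientation, the complex-potential expressions referred to above take the following schematic form (a sketch only: sign conventions and the placement of the absorbing region differ between treatments, and the coupling strength V_0 here is an assumed constant):

        i\hbar\,\frac{\partial\psi}{\partial t}
          = \Big( -\frac{\hbar^2}{2m}\,\frac{\partial^2}{\partial x^2}
          - i V_0\,\theta(-x) \Big)\,\psi,
        \qquad
        P_{\rm cross}(\tau) = 1 - \|\psi(\tau)\|^2 .

    The non-Hermitian term depletes the norm at a rate set by the amplitude in the detection region, so the lost norm is interpreted as the probability that the particle crossed x=0 during the interval.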

    The Life-and-Death Journey of the Soul: Interpreting the Myth of Er

    The Leggett-Garg Inequalities and No-Signalling in Time: A Quasi-Probability Approach

    The Leggett-Garg (LG) inequalities were proposed in order to assess whether sets of pairs of sequential measurements on a single quantum system can be consistent with an underlying notion of macrorealism. Here, the LG inequalities are explored using a simple quasi-probability linear in the projection operators to describe the properties of the system at two times. We show that this quasi-probability is measurable, has the same correlation function as the usual two-time measurement probability (for the bivalent variables considered here) and has the key property that the probabilities for the later time are independent of whether an earlier measurement was made, a generalization of the no-signalling in time condition of Kofler and Brukner. We argue that this quasi-probability, appropriately measured, provides a non-invasive measure of macrorealism per se at the two-time level. This measure, when combined with the LG inequalities, provides a characterization of macrorealism more detailed than that provided by the LG inequalities alone. When the quasi-probability is non-negative, the LG system has a natural parallel with the EPRB system and Fine's theorem. A simple spin model illustrating key features of the approach is exhibited.

    Comment: 23 pages. Significant revisions. Change of title
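
    A quasi-probability with the stated properties can be written down explicitly. As a sketch consistent with the abstract (the paper's precise conventions may differ), let P_s(t) denote the Heisenberg-picture projector onto Q = s = +/-1; then

        q(s_1, s_2) = \mathrm{Re}\,\mathrm{Tr}\big[ P_{s_2}(t_2)\, P_{s_1}(t_1)\, \rho \big],
        \qquad
        P_s(t) = \tfrac{1}{2}\big( 1 + s\,\hat{Q}(t) \big).

    This expression is linear in each projector, sums to one, and gives the correlation function \sum_{s_1,s_2} s_1 s_2\, q(s_1,s_2) = \tfrac{1}{2}\,\langle \{\hat{Q}(t_1), \hat{Q}(t_2)\} \rangle. Since no projection is actually performed at t_1, summing over s_1 leaves the t_2 probabilities unchanged, which is the no-signalling in time property described above.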

    An Operator Derivation of the Path Decomposition Expansion

    The path decomposition expansion is a path integral technique for decomposing sums over paths in configuration space into sums over paths in different spatial regions. It leads to a decomposition of the configuration space propagator across arbitrary surfaces in configuration space. It may be used, for example, in calculations of the distribution of first crossing times. The original proof relied heavily on the position representation and in particular on the properties of path integrals. In this paper, an elementary proof of the path decomposition expansion is given using projection operators. This leads to a version of the path decomposition expansion more general than the configuration space form previously given. The path decomposition expansion in momentum space is given as an example.

    Comment: 9 pages, plain TeX
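
    In its original configuration-space form the expansion reads, for propagation from a point x_0 inside a region bounded by a surface Sigma to a point x_1 outside it (a sketch; normalization and sign conventions vary between references):

        g(x_1, t_1 | x_0, t_0)
          = \frac{i\hbar}{2m} \int_{t_0}^{t_1} dt \int_{\Sigma} d\sigma\;
            g(x_1, t_1 | x_\sigma, t)\,
            \frac{\partial g_r(x, t | x_0, t_0)}{\partial n}\Big|_{x = x_\sigma},

    where g_r is the restricted propagator built from paths that never leave the region and \partial/\partial n is the normal derivative at the surface. The operator derivation replaces this path-integral construction with projection operators onto the region, which is what frees the result from the position representation.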

    Incompatible Multiple Consistent Sets of Histories and Measures of Quantumness

    In the consistent histories (CH) approach to quantum theory, probabilities are assigned to histories subject to a consistency condition of negligible interference. The approach has the feature that a given physical situation admits multiple sets of consistent histories that cannot in general be united into a single consistent set, leading to a number of counter-intuitive or contrary properties if propositions from different consistent sets are combined indiscriminately. An alternative viewpoint is proposed in which multiple consistent sets are classified according to whether or not there exists any unifying probability for combinations of incompatible sets which replicates the consistent histories result when restricted to a single consistent set. A number of examples are exhibited in which this classification can be made, in some cases with the assistance of the Bell, CHSH or Leggett-Garg inequalities together with Fine's theorem. When a unifying probability exists, logical deductions in different consistent sets can in fact be combined, an extension of the "single framework rule". It is argued that this classification coincides with intuitive notions of the boundary between classical and quantum regimes; in particular, the absence of a unifying probability for certain combinations of consistent sets is regarded as a measure of the "quantumness" of the system. The proposed approach and results are closely related to recent work on the classification of quasi-probabilities, and this connection is discussed.

    Comment: 29 pages. Second revised version with discussion of the sample space and non-uniqueness of the unifying probability and small errors corrected
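
    The existence of a unifying probability is a linear feasibility question, and in small cases it can be checked directly. Below is a minimal sketch for the three-time Leggett-Garg setting, not the paper's own construction: given the three two-time marginals (assumed consistent on their single-time marginals), it asks whether any non-negative joint distribution p(s_1, s_2, s_3) reproduces them, which by Fine's theorem corresponds to satisfaction of the relevant LG inequalities. The helper name and the uniform-marginal example are illustrative.

        import numpy as np
        from scipy.optimize import linprog

        def unifying_probability_exists(pair_marginals):
            # pair_marginals maps an index pair (i, j) to a function
            # m(si, sj) giving the measured probability of Q_i = si,
            # Q_j = sj, with si, sj = +/-1
            signs = (-1, 1)
            joints = [(a, b, c) for a in signs for b in signs for c in signs]
            A_eq, b_eq = [], []
            for (i, j), m in pair_marginals.items():
                for si in signs:
                    for sj in signs:
                        # Joint probabilities consistent with (si, sj)
                        # must sum to the measured two-time marginal
                        A_eq.append([1.0 if (s[i] == si and s[j] == sj)
                                     else 0.0 for s in joints])
                        b_eq.append(m(si, sj))
            A_eq.append([1.0] * len(joints))  # normalization
            b_eq.append(1.0)
            # Zero objective: we only ask whether the constraints are feasible
            res = linprog(c=np.zeros(len(joints)), A_eq=np.array(A_eq),
                          b_eq=np.array(b_eq),
                          bounds=[(0.0, 1.0)] * len(joints))
            return res.success

        # Uncorrelated classical marginals are always unifiable
        uniform = lambda si, sj: 0.25
        print(unifying_probability_exists({(0, 1): uniform,
                                           (1, 2): uniform,
                                           (0, 2): uniform}))  # True

    Marginals produced by any macrorealistic model make this program feasible; two-time data violating the LG inequalities render it infeasible, in line with the classification proposed above.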