9 research outputs found

    Quantum non-locality co-exists with locality

    Quantum non-locality is normally defined via violations of Bell's inequalities, which exclude certain classical hidden-variable theories from explaining quantum correlations. Another definition of non-locality refers to the wave-function collapse, whereby one can prepare a quantum state from arbitrarily far away. In both cases one can debate whether non-locality is a real physical phenomenon: e.g. one can employ formulations of quantum mechanics that do not use collapse, or one can simply refrain from explaining quantum correlations via classical hidden variables. Here we point out that there is a non-local effect within quantum mechanics, i.e. without involving hidden variables or collapse. This effect is seen via the imprecise (i.e. interval-valued) joint probability of two observables, which replaces the ill-defined notion of a precise joint probability for non-commuting observables. It is consistent with all requirements for the joint probability, e.g. those for commuting observables. The non-locality amounts to the fact that (in a two-particle system) the joint imprecise probability of non-commuting two-particle observables (i.e. tensor products of single-particle observables) does not factorize into single-particle contributions, even for uncorrelated states of the two-particle system. The factorization is recovered for a less precise (i.e. wider-interval) joint probability. This approach to non-locality reconciles it with locality, since the latter emerges as a less precise description. (Comment: 6 pages, no figure)
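    For orientation, the baseline the paper contrasts against can be written down explicitly; the notation below is my own illustration (projectors, interval endpoints), not taken from the paper.

```latex
% For commuting single-particle projectors \Pi_a, \Pi_b and an uncorrelated
% (product) state \rho_1 \otimes \rho_2, the precise joint probability factorizes:
\[
  P(a,b) \;=\; \mathrm{Tr}\!\left[(\Pi_a \otimes \Pi_b)\,(\rho_1 \otimes \rho_2)\right]
        \;=\; \mathrm{Tr}[\Pi_a \rho_1]\;\mathrm{Tr}[\Pi_b \rho_2].
\]
% For non-commuting observables the paper replaces P(a,b) by an interval
% [\underline{P}(a,b), \overline{P}(a,b)]. The claimed non-local effect is that
% this interval-valued joint probability need not factorize even for product
% states \rho_1 \otimes \rho_2, while a wider (less precise) interval does,
% which is how locality re-emerges as the less precise description.
```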

    Recovery With Incomplete Knowledge: Fundamental Bounds on Real-Time Quantum Memories

    The recovery of fragile quantum states from decoherence is the basis of building a quantum memory, with applications ranging from quantum communications to quantum computing. Many recovery techniques, such as quantum error correction, rely on a priori knowledge of the environment noise parameters to achieve their best performance. However, such parameters are likely to drift in time in the context of implementing long-time quantum memories. This necessitates using a "spectator" system, which estimates the noise parameter in real time and feed-forwards the outcome to the recovery protocol as classical side-information. The memory qubits and the spectator system hence comprise the building blocks of a real-time (i.e. drift-adapting) quantum memory. In this article, I consider spectator-based (incomplete-knowledge) recovery protocols as a real-time parameter estimation problem (generally with nuisance parameters present), followed by the application of the "best-guess" recovery map to the memory qubits, as informed by the estimation outcome. I present information-theoretic and metrological bounds on the performance of this protocol, quantified by the diamond distance between the "best-guess" and optimal recovery outcomes, thereby identifying the cost of adaptation in real-time quantum memories. Finally, I provide fundamental bounds for multi-cycle recovery in the form of recurrence inequalities. The latter suggest that incomplete knowledge of the noise could be an advantage, as errors from various cycles can cohere. These results are illustrated for the approximate [4,1] code of the amplitude-damping channel, and relations to various fields are discussed.
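    A minimal sketch of the spectator workflow described above, using amplitude damping as the drifting noise: estimate the decay rate from spectator-qubit outcomes, then hand the best guess to the recovery stage as classical side-information. The function names and the simple error proxy are my own illustration, not the paper's bounds or code.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectator_estimate(gamma_true, n_shots):
    """Estimate gamma from excited-state decay of n_shots spectator qubits.

    A spectator prepared in |1> relaxes to |0> with probability gamma under
    amplitude damping, so the observed decay frequency estimates gamma.
    """
    decays = rng.binomial(n_shots, gamma_true)
    return decays / n_shots

def recovery_mismatch(gamma_hat, gamma_true):
    """Crude proxy for the cost of applying the best-guess recovery R(gamma_hat)
    instead of the optimal R(gamma_true); the paper quantifies this cost by the
    diamond distance between the two recovery outcomes."""
    return abs(gamma_hat - gamma_true)

gamma_true = 0.12  # current (drifted) noise parameter, unknown to the protocol
for n_shots in (10, 100, 1000):
    gamma_hat = spectator_estimate(gamma_true, n_shots)
    print(n_shots, gamma_hat, recovery_mismatch(gamma_hat, gamma_true))
```

    More spectator shots shrink the estimation error, but they also consume time and physical resources in each memory cycle; that tension is what the article's "cost of adaptation" bounds make precise.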

    Adaptive mitigation of time-varying quantum noise

    Current quantum computers suffer from non-stationary noise channels with high error rates, which undermines their reliability and reproducibility. We propose a Bayesian inference-based adaptive algorithm that can learn and mitigate quantum noise in response to changing channel conditions. Our study emphasizes the need for dynamic inference of critical channel parameters to improve program accuracy. We use the Dirichlet distribution to model the stochasticity of the Pauli channel. This allows us to perform Bayesian inference, which can improve the performance of probabilistic error cancellation (PEC) under time-varying noise. Our work demonstrates the importance of characterizing and mitigating temporal variations in quantum noise, which is crucial for developing more accurate and reliable quantum technologies. Our results show that Bayesian PEC can outperform non-adaptive approaches by a factor of 4.5, as measured by the Hellinger distance from the ideal distribution. (Comment: To appear in IEEE QCE 202)
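    The Bayesian ingredient can be sketched for a single-qubit Pauli channel: a Dirichlet prior over the Pauli error probabilities is conjugate to multinomial error counts, and the posterior-mean channel fixes the quasi-probabilities that PEC samples from. The counts and prior below are hypothetical placeholders, not the paper's data.

```python
import numpy as np

# Commutation table C[i, j] for Paulis ordered (I, X, Y, Z):
# +1 if sigma_i and sigma_j commute, -1 if they anticommute.
C = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1]])

alpha = np.ones(4)                     # Dirichlet prior over (p_I, p_X, p_Y, p_Z)
counts = np.array([880, 60, 20, 40])   # hypothetical Pauli-twirled error counts
alpha_post = alpha + counts            # conjugate update as the channel drifts
p_hat = alpha_post / alpha_post.sum()  # posterior-mean channel estimate

lam = C.T @ p_hat            # Pauli-basis eigenvalues of the estimated channel
q = 0.25 * C @ (1.0 / lam)   # quasi-probabilities of the inverse (mitigating) map
print("quasi-probabilities:", q)          # sum to 1, but some entries are negative
print("PEC sampling overhead:", np.abs(q).sum())
```

    Re-running the update with fresh counts each calibration cycle is what makes the cancellation adaptive: stale quasi-probabilities computed from an outdated channel are exactly the failure mode the paper measures against.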

    Thermodynamic Constraints on Quantum Information Gain and Error Correction: A Triple Trade-Off

    Quantum error correction (QEC) is a procedure by which the quantum state of a system is protected against a known type of noise by preemptively adding redundancy to that state. Such a procedure is commonly used in quantum computing when thermal noise is present. Interestingly, thermal noise has also been known to play a central role in quantum thermodynamics (QTD). This fact hints at the applicability of certain QTD statements to the QEC of thermal noise, which has been discussed previously in the context of Maxwell's demon. In this article, we view QEC as a quantum heat engine with a feedback controller (i.e., a demon). We derive an upper bound on the measurement heat dissipated during the error-identification stage in terms of the Groenewold information gain, thereby providing the latter with a physical meaning also when it is negative. Further, we derive the second law of thermodynamics in the context of this QEC engine, operating with general quantum measurements. Finally, we show that, under a set of physically motivated assumptions, this leads to a fundamental triple trade-off relation, which implies a trade-off between the maximum achievable fidelity of QEC and the super-Carnot efficiency that heat engines with feedback controllers have been known to possess. A similar trade-off relation holds for the thermodynamic efficiency of the QEC engine and the efficacy of the quantum measurement used for error identification.
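    For reference, the quantity the heat bound is expressed through is the standard Groenewold information gain; the definition below is the textbook one, while the precise form and conventions of the bound itself are as in the paper.

```latex
\[
  I_G \;=\; S(\rho) \;-\; \sum_k p_k\, S(\rho_k),
  \qquad
  p_k = \mathrm{Tr}\,\Phi_k(\rho),
  \quad
  \rho_k = \frac{\Phi_k(\rho)}{p_k},
\]
% where S is the von Neumann entropy and {\Phi_k} is the quantum instrument
% describing the error-identification measurement. For general (non-efficient)
% measurements I_G can be negative, which is the regime in which the paper's
% bound on the dissipated measurement heat supplies it with a physical meaning.
```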

    Noisy Coherent Population Trapping: Applications to Noise Estimation and Qubit State Preparation

    Coherent population trapping is a well-known quantum phenomenon in a driven Λ system, with many applications across quantum optics. However, when a stochastic bath is present in addition to vacuum noise, the observed trapping is no longer perfect. Here we derive a time-convolutionless master equation describing the equilibration of the Λ system in the presence of additional temporally correlated classical noise with an unknown decay parameter. Our simulations show a one-to-one correspondence between the decay parameter and the depth of the characteristic dip in the photoluminescence spectrum, thereby enabling the unknown parameter to be estimated from the observed spectra. We apply our analysis to the problem of qubit state initialization in a Λ system via dark states and show how the stochastic bath affects the fidelity of such initialization as a function of the desired dark-state amplitudes. We show that an optimal choice of Rabi frequencies is possible.
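    A small QuTiP sketch of the underlying physics, assuming a resonantly driven Λ system: an extra Lindblad dephasing term stands in for the classical stochastic bath, so this is only a Markovian stand-in for the paper's time-convolutionless treatment of temporally correlated noise, and all rates below are illustrative.

```python
import numpy as np
import qutip as qt

g1, g2, e = (qt.basis(3, i) for i in range(3))

Omega1, Omega2 = 1.0, 0.5   # Rabi frequencies of the two drives (on resonance)
Gamma = 1.0                 # spontaneous emission rate out of |e>
gamma_cl = 0.05             # strength of the extra classical-noise dephasing

# Rotating-frame Hamiltonian at two-photon resonance
H = 0.5 * Omega1 * (e * g1.dag() + g1 * e.dag()) \
  + 0.5 * Omega2 * (e * g2.dag() + g2 * e.dag())

c_ops = [
    np.sqrt(Gamma / 2) * g1 * e.dag(),                     # decay |e> -> |g1>
    np.sqrt(Gamma / 2) * g2 * e.dag(),                     # decay |e> -> |g2>
    np.sqrt(gamma_cl) * (g1 * g1.dag() - g2 * g2.dag()),   # ground-state dephasing
]

rho_ss = qt.steadystate(H, c_ops)

# Dark state |D> ~ Omega2|g1> - Omega1|g2>: perfectly trapped only without the
# extra noise; its steady-state population is the initialization fidelity.
dark = (Omega2 * g1 - Omega1 * g2).unit()
print("dark-state fidelity:", qt.expect(dark * dark.dag(), rho_ss))
```

    With gamma_cl = 0 the steady state is the dark state and the fidelity approaches one; increasing gamma_cl or changing the Rabi-frequency ratio degrades it, which is the dependence on the desired dark-state amplitudes the abstract refers to.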

    Adaptive Quantum Information Processing In Non-Equilibrium Environments

    Solid-state and condensed-matter systems, such as diamond impurities, superconductors, quantum dots, and ion traps, constitute important physical platforms for various applications in quantum information processing (QIP). However, it has consistently been shown that all such modern platforms suffer from non-equilibrium behavior on timescales that are relevant for many important QIP tasks. The causes range from intrinsic non-equilibrium dynamics (e.g. in diamond) to the presence of various impurities with their own internal dynamics (e.g. in superconductors and quantum dots) or variations in the control fields used to stabilize the quantum matter (e.g. in ion traps). When reserving degrees of freedom for QIP in any physical medium, it is therefore important to track and adapt to the non-equilibrium behavior of the remaining relevant degrees of freedom (a.k.a. the environment). In particular, if the environment noise is parameterized by some underlying physical model, then the non-equilibrium behavior will generally yield temporal variations in the noise parameters relevant for QIP. This dissertation proposes a general adaptive refinement to standard QIP (specifically for quantum error correction) by allocating appropriate environment degrees of freedom for use as real-time quantum sensors (a.k.a. spectator systems). These sensors provide classical side-information about the environment noise parameters in real time, which is then used to stabilize the performance of the particular QIP task at hand. The main results of this work can be divided into two parts: (1) proposing a realistic physical model for the joint implementation of spectator systems with QIP systems, and (2) studying the fundamental physical limitations of the spectator-based approach, particularly for quantum error correction. This dissertation provides a multi-disciplinary approach to QIP by attempting to bridge the current gap between the mainly mathematical approach to QIP and related physical questions, which ultimately need to be addressed for its realization.