
    Efficient learning of the structure and parameters of local Pauli noise channels

    The unavoidable presence of noise is a crucial roadblock for the development of large-scale quantum computers, and the ability to characterize quantum noise reliably, efficiently, and with high precision is essential to scale quantum technologies further. Although estimating an arbitrary quantum channel requires exponential resources, it is expected that physically relevant noise has some underlying local structure, for instance that errors across different qubits have a conditional independence structure. Previous works showed how it is possible to estimate Pauli noise channels with an efficient number of samples in a way that is robust to state preparation and measurement errors, albeit starting from a known conditional independence structure. We present a novel approach for learning Pauli noise channels over n qubits that addresses this shortcoming. Unlike previous works that focused on learning coefficients with a known conditional independence structure, our method learns both the coefficients and the underlying structure. We achieve our results by leveraging a groundbreaking result by Bresler for efficiently learning Gibbs measures and obtain an optimal sample complexity of O(log(n)) to learn the unknown structure of the noise acting on n qubits. This information can then be leveraged to obtain a description of the channel that is close in diamond distance from O(poly(n)) samples. Furthermore, our method is efficient in both the number of samples and the postprocessing, without giving up on other desirable features such as SPAM-robustness, and it only requires the implementation of single-qubit Cliffords. In light of this, our approach enables the large-scale characterization of Pauli noise in quantum devices under minimal experimental requirements and assumptions. Comment: 8 pages, 1 figure
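
    As a rough illustration of the objects involved (our notation, not spelled out in the abstract): an n-qubit Pauli channel is parametrized by a probability distribution p over Pauli strings,
    \[
      \Lambda(\rho) \;=\; \sum_{a \in \{I,X,Y,Z\}^{n}} p(a)\, P_a\, \rho\, P_a^{\dagger},
    \]
    and the locality assumption is that p is a Gibbs measure, p(a) \propto \exp(-\sum_{C} h_C(a_C)), with each term h_C supported on a small set of qubits, so that the error variables obey a conditional independence (Markov) structure of the kind that Bresler-type algorithms can learn from samples.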

    Learning quantum many-body systems from a few copies

    Estimating physical properties of quantum states from measurements is one of the most fundamental tasks in quantum science. In this work, we identify conditions on states under which it is possible to infer the expectation values of all quasi-local observables of a given locality, up to a relative error, from a number of samples that grows polylogarithmically with the system size and polynomially with the locality of the target observables. This constitutes an exponential improvement over known tomography methods in some regimes. We achieve our results by combining one of the most well-established techniques for learning quantum states, the maximum entropy method, with techniques from the emerging fields of quantum optimal transport and classical shadows. We conjecture that our condition holds for all states exhibiting some form of decay of correlations and establish it for several subsets thereof. These include widely studied classes of states such as one-dimensional thermal and gapped ground states, and high-temperature Gibbs states of local commuting Hamiltonians on arbitrary hypergraphs. Moreover, we show improvements of the maximum entropy method, beyond the sample complexity, that are of independent interest. These include identifying regimes in which it is possible to perform the postprocessing efficiently, as well as novel bounds on the condition number of covariance matrices of many-body states. Comment: 37 pages, 3 figures
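
    For orientation, the maximum entropy ansatz mentioned above has the standard form (our notation, assuming a fixed family of local observables E_j):
    \[
      \rho_{\lambda} \;=\; \frac{\exp\!\big(\textstyle\sum_j \lambda_j E_j\big)}{\mathrm{Tr}\exp\!\big(\textstyle\sum_j \lambda_j E_j\big)},
    \]
    where the parameters \lambda_j are fitted so that \mathrm{Tr}[\rho_{\lambda} E_j] matches the expectation values estimated from the measurement data, e.g. via classical shadows; \rho_{\lambda} is then the maximum entropy state consistent with those local expectation values.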

    A game of quantum advantage: linking verification and simulation

    We present a formalism that captures the process of proving quantum superiority to skeptics as an interactive game between two agents, supervised by a referee. One player, Bob, samples from a classical distribution on a quantum device that is supposed to demonstrate a quantum advantage. The other player, the skeptical Alice, is then allowed to propose mock distributions supposed to reproduce Bob's device's statistics. Bob then needs to provide witness functions to prove that Alice's proposed mock distributions cannot properly approximate his device. Within this framework, we establish three results. First, for random quantum circuits, Bob being able to efficiently distinguish his distribution from Alice's implies efficient approximate simulation of the distribution. Secondly, a polynomial-time function that distinguishes the output of random circuits from the uniform distribution can also be used to spoof the heavy output generation problem in polynomial time. This indicates that exponential resources may be unavoidable for even the most basic verification tasks in the setting of random quantum circuits. Beyond this setting, by employing strong data processing inequalities, our framework allows us to analyse the effect of noise on the classical simulability and verification of more general near-term quantum advantage proposals. Comment: 44 pages, to be published in Quantum. The new version is substantially extended and contains new connections between previous results and the linear cross entropy.
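
    Schematically, and in our own notation rather than the paper's, a witness in this game is a function f of the sampled outcomes that certifies a gap between the device's distribution p and a mock distribution q,
    \[
      \Big|\, \mathbb{E}_{x \sim p}[f(x)] - \mathbb{E}_{x \sim q}[f(x)] \,\Big| \;\geq\; \delta,
    \]
    with both expectations estimated from polynomially many samples; the linear cross-entropy benchmark can be viewed as the special case in which f(x) is proportional to the ideal output probability of x.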

    Lower Bounds on Learning Pauli Channels

    Understanding the noise affecting a quantum device is of fundamental importance for scaling quantum technologies. A particularly important class of noise models is that of Pauli channels, as randomized compiling techniques can effectively bring any quantum channel to this form, and Pauli channels are significantly more structured than general quantum channels. In this paper, we show fundamental lower bounds on the sample complexity for learning Pauli channels in diamond norm with unentangled measurements. We consider both adaptive and non-adaptive strategies. In the non-adaptive setting, we show a lower bound of $\Omega(2^{3n}\epsilon^{-2})$ to learn an $n$-qubit Pauli channel. In particular, this shows that the recently introduced learning procedure by Flammia and Wallman is essentially optimal. In the adaptive setting, we show a lower bound of $\Omega(2^{2.5n}\epsilon^{-2})$ for $\epsilon = \mathcal{O}(2^{-n})$, and a lower bound of $\Omega(2^{2n}\epsilon^{-2})$ for any $\epsilon > 0$. This last lower bound even applies for arbitrarily many sequential uses of the channel, as long as they are only interspersed with other unital operations.
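
    For context, a standard fact (stated here in our own notation) explains why such bounds are phrased in diamond norm: for two Pauli channels with error distributions p and q,
    \[
      \big\| \Lambda_p - \Lambda_q \big\|_{\diamond} \;=\; \sum_{a} \big| p(a) - q(a) \big|,
    \]
    so learning an n-qubit Pauli channel to precision \epsilon in diamond norm is equivalent to learning its 4^n-outcome error distribution to total variation distance \epsilon/2.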

    Efficient classical simulation and benchmarking of quantum processes in the Weyl basis

    One of the crucial steps in building a scalable quantum computer is to identify the noise sources which lead to errors in the process of quantum evolution. Different implementations come with multiple hardware-dependent sources of noise and decoherence, making the problem of their detection considerably more complex. We develop a randomized benchmarking algorithm which uses Weyl unitaries to efficiently identify and learn a mixture of error models which occur during the computation. We provide an efficiently computable estimate of the overhead required to compute expectation values on outputs of the noisy circuit, relying only on the locality of the interactions and no further assumptions on the circuit structure. The overhead decreases with the noise rate, and this enables us to compute analytic noise bounds that imply efficient classical simulability. We apply our methods to ansatz circuits that appear in the Variational Quantum Eigensolver and establish an upper bound on the classical simulation complexity as a function of noise, identifying regimes in which they become classically efficiently simulable.
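
    As a hedged illustration of one mechanism behind such overheads (our simplification, not necessarily the construction used in the paper): if every gate is followed by single-qubit depolarizing noise of rate p, then in the Heisenberg picture each Pauli/Weyl string P_a of weight |a| is damped as
    \[
      \mathcal{D}_p^{\dagger}(P_a) \;=\; (1-p)^{|a|}\, P_a ,
    \]
    so high-weight contributions to an output expectation value are exponentially suppressed, and a sampling estimate over Weyl strings carries an overhead that shrinks as the noise rate grows; this is the sense in which sufficiently noisy circuits become classically simulable.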

    Group transference techniques for the estimation of the decoherence times and capacities of quantum Markov semigroups

    Capacities of quantum channels and decoherence times both quantify the extent to which quantum information can withstand degradation by interactions with its environment. However, calculating capacities directly is known to be intractable in general. Much recent work has focused on upper bounding certain capacities in terms of more tractable quantities, such as specific norms from operator theory. In the meantime, there has also been substantial recent progress on estimating decoherence times with techniques from analysis and geometry, even though many hard questions remain open. In this article, we introduce a class of continuous-time quantum channels that we call transferred channels, which are built through representation theory from a classical Markov kernel defined on a compact group. We study two subclasses of such kernels: Hörmander systems on compact Lie groups and Markov chains on finite groups. Examples of transferred channels include the depolarizing channel, the dephasing channel, and collective decoherence channels acting on $d$ qubits. Some of the estimates presented are new, such as those for channels that randomly swap subsystems. We then extend tools developed in earlier work by Gao, Junge and LaRacuente to transfer estimates of the classical Markov kernel to the transferred channels, and we study in this way different non-commutative functional inequalities. The main contribution of this article is the application of this transference principle to the estimation of various capacities, as well as the estimation of entanglement breaking times, defined as the first time for which the channel becomes entanglement breaking. Moreover, our estimates hold for non-ergodic channels such as the collective decoherence channels, an important scenario that has been overlooked so far because of a lack of techniques. Comment: 35 pages, 2 figures. Close to published version.
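
    To fix ideas, here is our paraphrase of the transference construction (details hedged; see the article for the precise assumptions): given a classical Markov semigroup on a compact group G with kernel densities k_t with respect to the Haar measure \mu, and a projective unitary representation u of G, the transferred channel acts as
    \[
      \Phi_t(\rho) \;=\; \int_G k_t(g)\, u(g)\, \rho\, u(g)^{*}\, \mathrm{d}\mu(g),
    \]
    so that, for instance, a random walk on the n-qubit Pauli group transfers to a Pauli channel such as the depolarizing channel, and functional inequalities for the classical kernel can be carried over to decoherence-time and capacity estimates for \Phi_t.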

    On contraction coefficients, partial orders and approximation of capacities for quantum channels

    The data processing inequality is the most basic requirement for any meaningful measure of information. It essentially states that distinguishability measures between states decrease if we apply a quantum channel to both. It is the centerpiece of many results in information theory and justifies the operational interpretation of most entropic quantities. In this work, we revisit the notion of contraction coefficients of quantum channels, which provide sharper and specialized versions of the data processing inequality. A concept closely related to data processing is that of partial orders on quantum channels. We discuss several quantum extensions of the well-known less noisy ordering and then relate them to contraction coefficients. We further define approximate versions of the partial orders and show how they can give strengthened and conceptually simple proofs of several results on approximating capacities. Moreover, we investigate the relation to other partial orders in the literature and their properties, particularly with regard to tensorization. We then investigate further properties of contraction coefficients and their relation to other properties of quantum channels, such as hypercontractivity. Next, we extend the framework of contraction coefficients to general f-divergences and prove several structural results. Finally, we consider two important classes of quantum channels, namely Weyl-covariant and bosonic Gaussian channels. For those, we determine new contraction coefficients and relations for various partial orders. Comment: 47 pages, 2 figures
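
    Concretely, and in standard notation rather than anything specific to this paper, the contraction coefficient of a channel \mathcal{N} with respect to an f-divergence D_f is
    \[
      \eta_f(\mathcal{N}) \;=\; \sup_{\substack{\rho,\sigma \\ 0 < D_f(\rho\|\sigma) < \infty}} \frac{D_f\big(\mathcal{N}(\rho)\,\big\|\,\mathcal{N}(\sigma)\big)}{D_f(\rho\|\sigma)} \;\leq\; 1,
    \]
    so that D_f(\mathcal{N}(\rho)\|\mathcal{N}(\sigma)) \leq \eta_f(\mathcal{N})\, D_f(\rho\|\sigma) strengthens the plain data processing inequality whenever \eta_f(\mathcal{N}) < 1.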

    Provably Efficient Learning of Phases of Matter via Dissipative Evolutions

    The combination of quantum many-body and machine learning techniques has recently proved to be a fertile ground for new developments in quantum computing. Several works have shown that it is possible to classically and efficiently predict the expectation values of local observables on all states within a phase of matter using a machine learning algorithm, after learning from data obtained from other states in the same phase. However, existing results are restricted to phases of matter such as ground states of gapped Hamiltonians and Gibbs states that exhibit exponential decay of correlations. In this work, we drop this requirement and show how it is possible to learn local expectation values for all states in a phase, where we adopt the Lindbladian phase definition of Coser & Pérez-García [Quantum 3, 174 (2019)], which defines states to be in the same phase if we can drive one to the other rapidly with a local Lindbladian. This definition encompasses the better-known Hamiltonian definition of phases of matter for gapped ground state phases, and further applies to any family of states connected by short unitary circuits, as well as to non-equilibrium phases of matter and those stable under external dissipative interactions. Under this definition, we show that $N = O(\log(n/\delta)\,2^{\mathrm{polylog}(1/\epsilon)})$ samples suffice to learn local expectation values within a phase for a system with $n$ qubits, to error $\epsilon$ and with failure probability $\delta$. This sample complexity is comparable to previous results on learning gapped and thermal phases, and it encompasses previous results of this nature in a unified way. Furthermore, we also show that we can learn families of states which go beyond the Lindbladian definition of phase, and we derive bounds on the sample complexity that depend on the mixing time between states under a Lindbladian evolution. Comment: 19 pages, 3 figures, 21-page appendix.
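
    In our paraphrase of the phase definition used above (the precise locality and time scaling are in the cited work): two states \rho_0 and \rho_1 on n qubits belong to the same phase if there are (quasi-)local Lindbladians driving each one close to the other in a short time, e.g.
    \[
      \big\| e^{t\mathcal{L}_{0\to 1}}(\rho_0) - \rho_1 \big\|_{1} \;\leq\; \epsilon \quad \text{for some } t = \mathrm{polylog}(n/\epsilon),
    \]
    and symmetrically from \rho_1 to \rho_0; the sample complexity N quoted above then suffices to predict local expectation values throughout such a phase.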