
    Sample-optimal classical shadows for pure states

    We consider the classical shadows task for pure states in the setting of both joint and independent measurements. The task is to measure few copies of an unknown pure state $\rho$ in order to learn a classical description which suffices to later estimate expectation values of observables. Specifically, the goal is to approximate $\mathrm{Tr}(O\rho)$ for any Hermitian observable $O$ to within additive error $\epsilon$, provided $\mathrm{Tr}(O^2) \leq B$ and $\lVert O \rVert = 1$. Our main result applies to the joint measurement setting, where we show that $\tilde{\Theta}(\sqrt{B}\,\epsilon^{-1} + \epsilon^{-2})$ samples of $\rho$ are necessary and sufficient to succeed with high probability. The upper bound is a quadratic improvement on the previous best sample complexity known for this problem. For the lower bound, we see that the bottleneck is not how fast we can learn the state but rather how much any classical description of $\rho$ can be compressed for observable estimation. In the independent measurement setting, we show that $\mathcal{O}(\sqrt{Bd}\,\epsilon^{-1} + \epsilon^{-2})$ samples suffice. Notably, this implies that the random Clifford measurements algorithm of Huang, Kueng, and Preskill, which is sample-optimal for mixed states, is not optimal for pure states. Interestingly, our result also uses the same random Clifford measurements but employs a different estimator. (Comment: 28 pages)
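    The snapshot-averaging idea behind classical shadows can be illustrated with a minimal single-qubit sketch. The abstract concerns joint and independent Clifford measurements; the toy below instead uses the simpler random-Pauli variant of Huang, Kueng, and Preskill shadows, purely to show how inverted snapshots $3\,U|b\rangle\langle b|U^\dagger - I$ average to an unbiased estimate of $\mathrm{Tr}(O\rho)$. All parameters and the sample count are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

# Columns of each matrix are the eigenvectors of X, Y, Z respectively
Ux = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Uy = np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2)
Uz = np.eye(2, dtype=complex)
bases = [Ux, Uy, Uz]

def shadow_estimate(rho, O, n_samples):
    """Estimate Tr(O rho) from random single-qubit Pauli-basis measurements."""
    total = 0.0
    for _ in range(n_samples):
        U = bases[rng.integers(3)]                      # random Pauli basis
        probs = np.clip(np.real(np.diag(U.conj().T @ rho @ U)), 0, None)
        b = rng.choice(2, p=probs / probs.sum())        # simulated outcome
        ket = U[:, b].reshape(2, 1)
        snapshot = 3 * (ket @ ket.conj().T) - I2        # inverted Pauli channel
        total += np.real(np.trace(O @ snapshot))
    return total / n_samples

# The |+> state; the exact expectation value <X> is 1
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
est = shadow_estimate(plus, X, 20000)
```

Each snapshot is a wildly biased matrix on its own (it is not even positive), but its expectation over random bases and outcomes reproduces $\rho$, which is what makes the averaged observable estimate unbiased.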

    Non-Exponential Behaviour in Logical Randomized Benchmarking

    We construct a gate- and time-independent noise model under which the output of a logical randomized benchmarking protocol oscillates rather than decaying exponentially. To illustrate the idea, we first construct an example in standard randomized benchmarking where we assume the existence of "hidden" qubits, permitting a choice of representation of the Clifford group that contains multiplicities. With each gate application, we use the multiplicities to update a hidden memory of the gate history, which lets us circumvent theorems that guarantee exponential decay of the output. In our focal setting of logical randomized benchmarking, we show that the machinery associated with the implementation of quantum error correction can facilitate non-exponential decay: the role of the hidden qubits is played by the syndrome qubits used in error correction, which are strongly coupled to the logical qubits via a decoder. (Comment: 8 pages + 3 pages of appendices, 7 figures)
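    The contrast the abstract describes can be seen numerically in the functional forms involved. Under standard Markovian, gate-independent noise the survival probability is $f(m) = A p^m + B$ with a real decay rate $p$; a hidden-memory construction can instead produce complex eigenvalues in the effective map, giving damped oscillations $f(m) = A r^m \cos(m\phi) + B$. The parameters below are illustrative only and are not taken from the paper's noise model.

```python
import numpy as np

m = np.arange(0, 40)

# Standard RB: a single real decay eigenvalue p gives a monotone exponential decay
p, A, B = 0.95, 0.5, 0.5
standard = A * p**m + B

# Hidden-memory model: a complex eigenvalue r*exp(i*phi) gives a damped oscillation
r, phi = 0.95, 0.3
oscillating = A * (r**m) * np.cos(phi * m) + B

# The standard curve decays monotonically toward its asymptote B ...
assert np.all(np.diff(standard) < 0)
# ... while the oscillating curve dips below B and recovers, so no
# exponential fit A*p^m + B with 0 < p < 1 can reproduce it.
assert oscillating.min() < B < oscillating.max()
```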

    Fast estimation of outcome probabilities for quantum circuits

    We present two classical algorithms for the simulation of universal quantum circuits on $n$ qubits constructed from $c$ instances of Clifford gates and $t$ arbitrary-angle $Z$-rotation gates such as $T$ gates. Our algorithms complement each other by performing best in different parameter regimes. The Estimate algorithm produces an additive precision estimate of the Born rule probability of a chosen measurement outcome, with the only source of run-time inefficiency being a linear dependence on the stabilizer extent (which scales like $\approx 1.17^t$ for $T$ gates). Our algorithm is state-of-the-art for this task: as an example, in approximately 13 hours (on a standard desktop computer), we estimated the Born rule probability to within an additive error of 0.03 for a 50-qubit, 60 non-Clifford gate quantum circuit with more than 2000 Clifford gates. Our second algorithm, Compute, calculates the probability of a chosen measurement outcome to machine precision with run-time $O(2^{t-r} t)$, where $r$ is an efficiently computable, circuit-specific quantity. With high probability, $r$ is very close to $\min\{t, n-w\}$ for random circuits with many Clifford gates, where $w$ is the number of measured qubits. Compute can be effective in surprisingly challenging parameter regimes; e.g., we can randomly sample Clifford+$T$ circuits with $n=55$, $w=5$, $c=10^5$ and $t=80$ $T$ gates, and then compute the Born rule probability with a run-time consistently less than 10 minutes using a single core of a standard desktop computer. We provide a C+Python implementation of our algorithms. (Comment: 25+14 pages, 6 figures. Version 2 contains a minor correction to the scaling of Theorem 3, a small improvement to the scaling of Theorem 2 and various other improvements. Comments welcome.)
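    For context, the naive baseline that both Estimate and Compute improve on is full statevector simulation, whose cost scales as $2^n$ regardless of how few non-Clifford gates the circuit contains. A minimal sketch of that baseline (the three-qubit circuit here is chosen arbitrarily for illustration and is not from the paper):

```python
import numpy as np
from functools import reduce

# Elementary gates
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])          # the non-Clifford gate
I2 = np.eye(2, dtype=complex)

def kron_all(ops):
    return reduce(np.kron, ops)

def cnot(n, c, t):
    """Dense CNOT on n qubits with control c and target t (qubit 0 = MSB)."""
    dim = 2**n
    U = np.zeros((dim, dim), dtype=complex)
    for x in range(dim):
        bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]
        if bits[c]:
            bits[t] ^= 1
        y = sum(b << (n - 1 - i) for i, b in enumerate(bits))
        U[y, x] = 1
    return U

n = 3
psi = np.zeros(2**n, dtype=complex)
psi[0] = 1                                         # start in |000>
circuit = [kron_all([H, I2, I2]), kron_all([T, I2, I2]),
           cnot(n, 0, 1), kron_all([I2, H, I2]), cnot(n, 1, 2)]
for U in circuit:
    psi = U @ psi                                  # O(2^n) memory and time

# Born rule probability that measuring qubit 0 yields outcome 0
proj0 = kron_all([np.diag([1.0, 0.0]).astype(complex), I2, I2])
p0 = float(np.real(psi.conj() @ (proj0 @ psi)))    # exactly 1/2 here
```

The point of the abstract's algorithms is that, by exploiting the Clifford structure, the exponential cost can be shifted from $n$ onto $t$ (via the stabilizer extent for Estimate, or the exponent $t-r$ for Compute), which is dramatically cheaper when non-Clifford gates are scarce.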

    Shallow shadows: Expectation estimation using low-depth random Clifford circuits

    We provide practical and powerful schemes for learning many properties of an unknown $n$-qubit quantum state using a small number of copies of the state. Specifically, we present a depth-modulated randomized measurement scheme that interpolates between two known classical shadows schemes based on random Pauli measurements and random Clifford measurements; these can be seen within our scheme as the special cases of zero and infinite depth, respectively. We focus on the regime where depth scales logarithmically in $n$ and provide evidence that this retains the desirable properties of both extremal schemes whilst, in contrast to the random Clifford scheme, also being experimentally feasible. We present methods for two key tasks: estimating expectation values of certain observables from generated classical shadows, and computing upper bounds on the depth-modulated shadow norm, thus providing rigorous guarantees on the accuracy of the output estimates. We consider observables that can be written as a linear combination of poly($n$) Paulis and observables that can be written as a low bond dimension matrix product operator. For the former class of observables both tasks are solved efficiently in $n$. For the latter class, we do not guarantee efficiency but present a method that works in practice: we variationally compute a heralded approximate inverse of a tensor network, which can then be used to execute both tasks efficiently. (Comment: 22 pages, 12 figures. Version 2: new MPS variational inversion algorithm and new numerics.)

    Quantifying quantum speedups: improved classical simulation from tighter magic monotones

    Consumption of magic states promotes the stabilizer model of computation to universal quantum computation. Here, we propose three different classical algorithms for simulating such universal quantum circuits, and characterize them by establishing precise connections with a family of magic monotones. Our first simulator introduces a new class of quasiprobability distributions and connects its runtime to a generalized notion of negativity. We prove that this algorithm has significantly improved exponential scaling compared to all prior quasiprobability simulators for qubits. Our second simulator is a new variant of the stabilizer-rank simulation algorithm, extended to work with mixed states and with significantly improved runtime bounds. Our third simulator trades precision for speed by discarding negative quasiprobabilities. We connect each algorithm's performance to a corresponding magic monotone and, by comprehensively characterizing the monotones, we obtain a precise understanding of the simulation runtime and error bounds. Our analysis reveals a deep connection between all three seemingly unrelated simulation techniques and their associated monotones. For tensor products of single-qubit states, we prove that our monotones are all equal to each other, multiplicative and efficiently computable, allowing us to make clear-cut comparisons of the simulators' performance scaling. Furthermore, our monotones establish several asymptotic and non-asymptotic bounds on state interconversion and distillation rates. Beyond the theory of magic states, our classical simulators can be adapted to other resource theories under certain axioms, which we demonstrate through an explicit application to the theory of quantum coherence. (Comment: 24+13 pages, 8 figures; final author copy. Since v1: restructured with additional discussion, proof sketches and examples. Since v3: minor revisions to improve clarity, additional acknowledgments.)
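    The basic mechanics of a quasiprobability simulator, and why its runtime is governed by a negativity-like monotone, can be shown in a textbook single-qubit sketch (this is not the paper's improved simulator; the decomposition below is one hand-picked example among many). A magic state is written as a signed combination of stabilizer states, terms are sampled in proportion to $|q_i|$, and the 1-norm $\mathcal{M} = \sum_i |q_i|$ both rescales the estimator and sets the Monte Carlo sample cost, which grows like $\mathcal{M}^2$.

```python
import numpy as np

rng = np.random.default_rng(2)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Single-qubit magic state T|+>, Bloch vector (1, 1, 0)/sqrt(2)
rho = 0.5 * (I2 + (X + Y) / np.sqrt(2))

# One quasiprobability decomposition over stabilizer states (chosen by hand):
# rho = sum_i q_i * stab_i with sum_i q_i = 1 and some q_i < 0
stab = [0.5 * (I2 + X), 0.5 * (I2 + Y), 0.5 * (I2 + Z), 0.5 * (I2 - Z)]
q = np.array([1 / np.sqrt(2), 1 / np.sqrt(2),
              (1 - np.sqrt(2)) / 2, (1 - np.sqrt(2)) / 2])
assert np.allclose(sum(qi * s for qi, s in zip(q, stab)), rho)

M = np.abs(q).sum()        # negativity-style 1-norm, here 2*sqrt(2) - 1

def mc_estimate(O, n_samples):
    """Estimate Tr(O rho) by sampling stabilizer terms with sign weights."""
    probs = np.abs(q) / M
    idx = rng.choice(len(q), size=n_samples, p=probs)
    vals = np.array([M * np.sign(q[i]) * np.real(np.trace(O @ stab[i]))
                     for i in idx])
    return vals.mean()

est = mc_estimate(X, 50000)      # exact value is Tr(X rho) = 1/sqrt(2)
```

Tightening the monotone, as the abstract describes, directly shrinks $\mathcal{M}$ for the same state and hence the number of samples needed for a given precision.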

    On the classical simulability of quantum circuits

    Whether a class of quantum circuits can be efficiently simulated with a classical computer, or is provably hard to simulate, depends quite critically on the precise notion of "classical simulation". We focus on two important notions of simulator, which we refer to as poly-boxes and epsilon-simulators, and discuss how other notions of simulation relate to these. A poly-box is a classical algorithm that outputs additive 1/poly precision estimates of Born probabilities and marginals. We present a general framework used to construct poly-boxes; this framework generalizes a number of recent works on simulation. As an application, we use the general framework to construct a classical additive 1/poly precision Born rule probability estimation algorithm for Clifford plus T circuits. Our algorithm scales exponentially in the number of T gates but polynomially in all other parameters, and is intended to be state of the art for this estimation task. We expect this result to be particularly useful in the characterization and verification of near-term quantum devices. We argue that the notion of classical simulation we call epsilon-simulation captures the essence of possessing "equivalent computational power" to the quantum system it simulates: it is statistically impossible to distinguish an agent with access to an epsilon-simulator from one possessing the simulated quantum system. We relate epsilon-simulation to various alternative notions of simulation, predominantly focusing on its relation to poly-boxes. Accepting some plausible computational theoretic assumptions, we show that epsilon-simulation is strictly stronger than a poly-box by showing that IQP circuits and unconditioned magic-state injected Clifford circuits are both hard to epsilon-simulate and yet admit a poly-box. In contrast, we also show that these two notions are equivalent under an additional assumption on the sparsity of the output distribution (poly-sparsity).

    From estimation of quantum probabilities to simulation of quantum circuits

    Investigating the classical simulability of quantum circuits provides a promising avenue towards understanding the computational power of quantum systems. Whether a class of quantum circuits can be efficiently simulated with a probabilistic classical computer, or is provably hard to simulate, depends quite critically on the precise notion of classical simulation and in particular on the required accuracy. We argue that a notion of classical simulation, which we call epsilon-simulation ($\epsilon$-simulation for short), captures the essence of possessing equivalent computational power to the quantum system it simulates: it is statistically impossible to distinguish an agent with access to an epsilon-simulator from one possessing the simulated quantum system. We relate epsilon-simulation to various alternative notions of simulation, predominantly focusing on a simulator we call a poly-box. A poly-box outputs 1/poly precision additive estimates of Born probabilities and marginals. This notion of simulation has gained prominence through a number of recent simulability results. Accepting some plausible computational theoretic assumptions, we show that epsilon-simulation is strictly stronger than a poly-box by showing that IQP circuits and unconditioned magic-state injected Clifford circuits are both hard to epsilon-simulate and yet admit a poly-box. In contrast, we also show that these two notions are equivalent under an additional assumption on the sparsity of the output distribution (poly-sparsity).
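    The connection from sampling to additive-precision estimation, one direction of the poly-box versus epsilon-simulation comparison, is a standard Hoeffding argument: given the ability to draw outcome samples, the empirical frequency of $N = O(\epsilon^{-2}\log\delta^{-1})$ samples is an additive-$\epsilon$ estimate of the Born probability with probability $1-\delta$. A minimal sketch, with an illustrative probability standing in for a real circuit's output:

```python
import numpy as np

rng = np.random.default_rng(1)

p_true = 0.37          # hypothetical Born probability of some fixed outcome
eps, delta = 0.02, 1e-3

# Hoeffding bound: N >= ln(2/delta) / (2 * eps^2) samples give
# |p_hat - p_true| <= eps with probability at least 1 - delta
N = int(np.ceil(np.log(2 / delta) / (2 * eps**2)))

samples = rng.random(N) < p_true   # stand-in for an epsilon-simulator's samples
p_hat = samples.mean()             # additive-precision poly-box style estimate
```

The hardness results in the abstract show the converse fails in general: an additive-precision estimator does not let one reproduce the sampling behaviour, so a poly-box is strictly weaker than epsilon-simulation under the stated assumptions.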