
    Lower Bounds on the Bounded Coefficient Complexity of Bilinear Maps

    We prove lower bounds of order $n\log n$ both for the problem of multiplying polynomials of degree $n$ and for the problem of dividing polynomials with remainder, in the model of bounded coefficient arithmetic circuits over the complex numbers. These lower bounds are optimal up to order of magnitude. The proof uses a recent idea of R. Raz [Proc. 34th STOC 2002] proposed for matrix multiplication. It reduces the linear problem of multiplying a random circulant matrix with a vector to the bilinear problem of cyclic convolution. We treat the arising linear problem by extending J. Morgenstern's bound [J. ACM 20, pp. 305-306, 1973] in a unitarily invariant way. This establishes a new lower bound on the bounded coefficient complexity of linear forms in terms of the singular values of the corresponding matrix. In addition, we extend these lower bounds for linear and bilinear maps to a model of circuits that allows a restricted number of unbounded scalar multiplications. Comment: 19 pages
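
    The reduction hinges on the identity between a circulant matrix-vector product and cyclic convolution. The following standalone NumPy sketch (not code from the paper; all names are chosen here for illustration) checks that identity numerically, using the FFT diagonalization of circulant matrices.

```python
import numpy as np

def circulant(c):
    """Build the circulant matrix whose first column is c."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

def cyclic_convolution(a, b):
    """Cyclic convolution of a and b via the FFT (O(n log n) arithmetic operations)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

rng = np.random.default_rng(0)
n = 8
a = rng.standard_normal(n)
x = rng.standard_normal(n)

# Multiplying the circulant matrix Circ(a) by x computes the same bilinear map as
# the cyclic convolution of a with x -- the link exploited by the reduction.
assert np.allclose(circulant(a) @ x, cyclic_convolution(a, x))
```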

    Approximate unitary t-designs by short random quantum circuits using nearest-neighbor and long-range gates

    We prove that $\mathrm{poly}(t) \cdot n^{1/D}$-depth local random quantum circuits with two-qudit nearest-neighbor gates on a $D$-dimensional lattice with $n$ qudits are approximate $t$-designs in various measures. These include the "monomial" measure, meaning that the monomials of a random circuit from this family have expectation close to the value that would result from the Haar measure. Previously, the best bound was $\mathrm{poly}(t)\cdot n$ due to Brandao-Harrow-Horodecki (BHH) for $D=1$. We also improve the "scrambling" and "decoupling" bounds for spatially local random circuits due to Brown and Fawzi. One consequence of our result is that, assuming the polynomial hierarchy (PH) is infinite and that certain counting problems are $\#P$-hard on average, sampling within total variation distance from these circuits is hard for classical computers. Previously, exact sampling from the outputs of even constant-depth quantum circuits was known to be hard for classical computers under the assumption that PH is infinite. However, showing the hardness of approximate sampling with this strategy requires that the quantum circuits have a property called "anti-concentration", meaning roughly that the output has near-maximal entropy. Unitary 2-designs have the desired anti-concentration property. Thus our result improves the depth required for this level of anti-concentration from linear to a sub-linear value, depending on the geometry of the interactions. This is relevant to a recent proposal by the Google Quantum AI group to perform such a sampling task with 49 qubits on a two-dimensional lattice, and it confirms their conjecture that $O(\sqrt{n})$ depth suffices for anti-concentration. We also prove that anti-concentration is possible in depth $O(\log(n)\log\log(n))$ using a different model.
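
    To make the anti-concentration property concrete, here is a self-contained toy sketch (not code from the paper; all function names and parameters are our own choices) that builds a 1D brickwork circuit of Haar-random two-qubit gates in NumPy and estimates the collision probability $\sum_x p(x)^2$ of the output distribution, which for an anti-concentrated circuit approaches the Haar-random-state value $2/(2^n+1)$.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(dim):
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

def apply_two_qubit_gate(state, gate, i, n):
    """Apply a 4x4 gate to qubits (i, i+1) of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, [i, i + 1], [0, 1]).reshape(4, -1)
    psi = gate @ psi
    psi = np.moveaxis(psi.reshape([2, 2] + [2] * (n - 2)), [0, 1], [i, i + 1])
    return psi.reshape(-1)

def brickwork_circuit(n, depth):
    """Alternating layers of nearest-neighbor Haar-random gates acting on |0...0>."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    for layer in range(depth):
        start = layer % 2                      # alternate even/odd bonds
        for i in range(start, n - 1, 2):
            state = apply_two_qubit_gate(state, haar_unitary(4), i, n)
    return state

n, depth = 8, 16
probs = np.abs(brickwork_circuit(n, depth)) ** 2
print("collision probability:", np.sum(probs ** 2))
print("Haar value 2/(2^n+1) :", 2 / (2 ** n + 1))
```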

    General guarantees for randomized benchmarking with random quantum circuits

    In its many variants, randomized benchmarking (RB) is a broadly used technique for assessing the quality of gate implementations on quantum computers. A detailed theoretical understanding and general guarantees exist for the functioning and interpretation of RB protocols if the gates under scrutiny are drawn uniformly at random from a compact group. In contrast, many practically attractive and scalable RB protocols implement random quantum circuits with local gates randomly drawn from some gate-set. Despite their abundance in practice, general guarantees under experimentally plausible assumptions are missing for these non-uniform RB protocols. In this work, we derive such guarantees for a large class of RB protocols for random circuits that we refer to as filtered RB. Prominent examples include linear cross-entropy benchmarking, character benchmarking, Pauli-noise tomography and variants of simultaneous RB. Building upon recent results for random circuits, we show that many relevant filtered RB schemes can be realized with random quantum circuits in linear depth, and we provide explicit small constants for common instances. We further derive general sample complexity bounds for filtered RB. We show filtered RB to be sample-efficient for several relevant groups, including protocols addressing higher-order cross-talk. Our theory for non-uniform filtered RB is, in principle, flexible enough to design new protocols for non-universal and analog quantum simulators. Comment: 77 pages, 3 figures. Accepted for a talk at QIP 202
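
    Linear cross-entropy benchmarking (XEB), one of the filtered RB instances named above, admits a particularly compact estimator. The sketch below is illustrative only (the function names and the use of ideal simulated probabilities are our assumptions, not the paper's protocol): it evaluates one common form of the estimator, $F_{XEB} = 2^n \langle p_{ideal}(x)\rangle_{x \sim device} - 1$, from device samples and the noiseless output probabilities.

```python
import numpy as np

def linear_xeb_fidelity(samples, ideal_probs, n_qubits):
    """Linear cross-entropy fidelity estimate.

    samples     : iterable of measured bitstrings, encoded as integers in [0, 2**n_qubits)
    ideal_probs : array of the noiseless circuit's output probabilities p_ideal(x)
    """
    mean_p = np.mean([ideal_probs[x] for x in samples])
    return (2 ** n_qubits) * mean_p - 1.0

# Toy usage: a fully depolarized device (uniform samples) gives fidelity ~ 0,
# while sampling from the ideal chaotic-circuit distribution gives ~ 1.
rng = np.random.default_rng(2)
n = 10
ideal = rng.exponential(size=2 ** n)
ideal /= ideal.sum()                                 # stand-in for a chaotic circuit's p_ideal

depolarized = rng.integers(0, 2 ** n, size=20000)    # uniform samples
faithful = rng.choice(2 ** n, size=20000, p=ideal)   # samples drawn from p_ideal
print(linear_xeb_fidelity(depolarized, ideal, n))    # ~ 0
print(linear_xeb_fidelity(faithful, ideal, n))       # ~ 1
```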

    Approximate F_2-Sketching of Valuation Functions

    We study the problem of constructing a linear sketch of minimum dimension that allows approximation of a given real-valued function f : F_2^n -> R with small expected squared error. We develop a general theory of linear sketching for such functions, through which we analyze their dimension for the most commonly studied types of valuation functions: additive, budget-additive, coverage, alpha-Lipschitz submodular and matroid rank functions. This gives a characterization of how many bits of information have to be stored about the input x so that one can compute f under additive updates to its coordinates. Our results are tight in most cases, and we also give extensions to the distributional version of the problem where the input x in F_2^n is generated uniformly at random. Using known connections with dynamic streaming algorithms, both the upper and lower bounds on dimension obtained in our work extend to the space complexity of algorithms evaluating f(x) under long sequences of additive updates to the input x presented as a stream. Similar results hold for simultaneous communication in a distributed setting.
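
    To make the model concrete, here is a minimal toy sketch (our own illustration, not the paper's construction) of an F_2 linear sketch: a random k x n matrix A over GF(2) is fixed up front, only the k-bit value Ax is stored, and each additive (bit-flip) update to x is folded into the stored value by XOR-ing in the corresponding column of A.

```python
import numpy as np

class F2LinearSketch:
    """Store only A @ x over GF(2) for a fixed random k x n matrix A."""

    def __init__(self, n, k, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.integers(0, 2, size=(k, n), dtype=np.uint8)
        self.sketch = np.zeros(k, dtype=np.uint8)   # A @ x for x = 0

    def update(self, i):
        """Additive update x <- x + e_i over F_2 (i.e. flip coordinate i)."""
        self.sketch ^= self.A[:, i]

    def value(self):
        return self.sketch.copy()

# Consistency check: processing updates one by one matches sketching x directly.
n, k = 16, 5
sk = F2LinearSketch(n, k)
x = np.zeros(n, dtype=np.uint8)
for i in [3, 7, 3, 11]:          # coordinate 3 is flipped twice and cancels out
    sk.update(i)
    x[i] ^= 1
assert np.array_equal(sk.value(), (sk.A @ x) % 2)
```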

    Computational Distinguishability of Quantum Channels

    The computational problem of distinguishing two quantum channels is central to quantum computing. It is a generalization of the well-known satisfiability problem from classical to quantum computation. This problem is shown to be surprisingly hard: it is complete for the class QIP of problems that have quantum interactive proof systems, which implies that it is hard for the class PSPACE of problems solvable by a classical computation in polynomial space. Several restrictions of distinguishability are also shown to be hard. It is no easier when restricted to quantum computations of logarithmic depth, to mixed-unitary channels, to degradable channels, or to antidegradable channels. These hardness results are demonstrated by finding reductions between these classes of quantum channels. These techniques have applications outside the distinguishability problem, as the construction for mixed-unitary channels is used to prove that the additivity problem for the classical capacity of quantum channels can be equivalently restricted to mixed-unitary channels. Comment: Ph.D. Thesis, 178 pages, 35 figures
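
    For readers who want to experiment with the objects involved, the snippet below (a toy illustration under our own naming, not the thesis's reductions) builds two mixed-unitary channels, forms their Choi matrices, and computes the trace distance between them. The operationally meaningful measure in the hardness results is the diamond norm; this quantity is only a crude, easily computed stand-in.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def choi(kraus_ops, dim=2):
    """(Unnormalized) Choi matrix J(Phi) = sum_{ij} |i><j| (x) Phi(|i><j|)."""
    J = np.zeros((dim * dim, dim * dim), dtype=complex)
    for i in range(dim):
        for j in range(dim):
            E = np.zeros((dim, dim), dtype=complex)
            E[i, j] = 1.0
            out = sum(K @ E @ K.conj().T for K in kraus_ops)
            J += np.kron(E, out)
    return J

def mixed_unitary_kraus(unitaries, probs):
    """Kraus operators sqrt(p_k) U_k of a mixed-unitary channel."""
    return [np.sqrt(p) * U for p, U in zip(probs, unitaries)]

# Two mixed-unitary channels: a bit-flip channel and a phase-flip channel.
phi = mixed_unitary_kraus([I, X], [0.7, 0.3])
psi = mixed_unitary_kraus([I, Z], [0.7, 0.3])

delta = choi(phi) - choi(psi)                              # Hermitian difference
trace_distance = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(delta)))
print("trace distance of Choi matrices:", trace_distance)
```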

    Approximate degree in classical and quantum computing

    In this book, the authors survey what is known about a particularly natural notion of approximation by polynomials, capturing pointwise approximation over the real numbers. (Funding: FG-2022-18482, Alfred P. Sloan Foundation; CNS-2046425 and CCF-1947889, National Science Foundation.) Accepted manuscript.
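
    The central quantity surveyed, stated here for orientation (this is the standard definition, not a quotation from the book): the $\epsilon$-approximate degree of a Boolean function $f:\{0,1\}^n\to\{0,1\}$ is
    \[
      \widetilde{\deg}_\epsilon(f) = \min\bigl\{ \deg p : p \in \mathbb{R}[x_1,\dots,x_n],\ |p(x) - f(x)| \le \epsilon \text{ for all } x \in \{0,1\}^n \bigr\},
    \]
    with the conventional choice $\epsilon = 1/3$ written $\widetilde{\deg}(f)$; a classical example is $\widetilde{\deg}(\mathrm{OR}_n) = \Theta(\sqrt{n})$, the upper bound coming from Chebyshev polynomials.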

    Concrete resource analysis of the quantum linear system algorithm used to compute the electromagnetic scattering cross section of a 2D target

    We provide a detailed estimate of the logical resource requirements of the quantum linear system algorithm (QLSA) [Phys. Rev. Lett. 103, 150502 (2009)], including the recently described elaborations [Phys. Rev. Lett. 110, 250504 (2013)]. Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width, circuit depth, the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations, as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. To perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the example problem size N = 332,020,680, beyond which, according to a crude big-O complexity comparison, QLSA is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy of 0.01 requires an approximate circuit width of 340 and a circuit depth of order $10^{25}$ if oracle costs are excluded, and a circuit width and depth of order $10^8$ and $10^{29}$, respectively, if oracle costs are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient advanced quantum-computation techniques are developed, they nevertheless provide a valid baseline for research targeting a reduction of the resource requirements, implying that a reduction by many orders of magnitude is necessary for the algorithm to become practical. Comment: 37 pages, 40 figures
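
    The kind of bookkeeping behind such estimates can be illustrated with a toy gate-count tally (our own minimal Python sketch, not the Quipper-based tooling used in the paper): given a circuit as a flat list of gates, it reports the width, the per-gate counts from the fault-tolerant set, and a crude depth obtained by greedy layering.

```python
from collections import Counter, defaultdict

def tally_resources(circuit):
    """circuit: list of (gate_name, qubit_indices) tuples, e.g. ("T", (3,)) or ("CNOT", (0, 1)).

    Returns width, counts per gate type, and a greedy-layering estimate of depth:
    each qubit's next free layer is tracked, and a gate starts once all its qubits are free.
    """
    counts = Counter(name for name, _ in circuit)
    qubits = {q for _, qs in circuit for q in qs}
    next_free = defaultdict(int)
    depth = 0
    for _, qs in circuit:
        layer = max(next_free[q] for q in qs)
        for q in qs:
            next_free[q] = layer + 1
        depth = max(depth, layer + 1)
    return {"width": len(qubits), "depth": depth, "gate_counts": dict(counts)}

# Toy usage on a 3-qubit fragment drawn from the standard set {X, Y, Z, H, S, T, CNOT}.
fragment = [("H", (0,)), ("CNOT", (0, 1)), ("T", (1,)), ("CNOT", (1, 2)), ("S", (2,)), ("H", (0,))]
print(tally_resources(fragment))
```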

    Group-theoretic error mitigation enabled by classical shadows and symmetries

    Estimating expectation values is a key subroutine in many quantum algorithms. However, near-term implementations face two major challenges: a limited number of samples to learn a large collection of observables, and the accumulation of errors in devices without quantum error correction. To address these challenges simultaneously, we develop a quantum error-mitigation strategy which unifies the group-theoretic structure of classical-shadow tomography with symmetries in quantum systems of interest. We refer to our protocol as "symmetry-adjusted classical shadows," as it mitigates errors by adjusting estimators according to how known symmetries are corrupted under those errors. As a concrete example, we highlight global $\mathrm{U}(1)$ symmetry, which manifests in fermions as particle number and in spins as total magnetization, and illustrate their unification with respective classical-shadow protocols. One of our main results establishes rigorous error and sampling bounds under readout errors obeying minimal assumptions. Furthermore, to probe mitigation capabilities against a more comprehensive class of gate-level errors, we perform numerical experiments with a noise model derived from existing quantum processors. Our analytical and numerical results reveal symmetry-adjusted classical shadows as a flexible and low-cost strategy to mitigate errors from noisy quantum experiments in the ubiquitous presence of symmetry. Comment: 45 pages, 13 figures. Typos corrected and references updated. Open-source code available at https://github.com/zhao-andrew/symmetry-adjusted-classical-shadow
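
    As background for the classical-shadow machinery the protocol builds on, here is a minimal single-qubit random-Pauli-basis shadow estimator (a generic textbook-style sketch under our own naming, without the paper's symmetry adjustment): each shot measures the qubit in a random X, Y, or Z eigenbasis and forms the inverse-channel snapshot 3|b><b| - I.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
PAULI_EIGENBASES = {
    "Z": np.eye(2, dtype=complex),                                   # columns are the Z eigenvectors
    "X": np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2),
    "Y": np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2),
}

def shadow_snapshot(rho, rng):
    """One classical-shadow snapshot of a single-qubit state rho.

    A random Pauli basis is chosen, an outcome b is sampled with the Born rule, and
    the inverse of the measurement channel is applied: rho_hat = 3 |b><b| - I.
    """
    basis = PAULI_EIGENBASES[rng.choice(["X", "Y", "Z"])]
    probs = np.real([basis[:, b].conj() @ rho @ basis[:, b] for b in (0, 1)])
    b = rng.choice(2, p=probs / probs.sum())
    ket = basis[:, b].reshape(2, 1)
    return 3.0 * (ket @ ket.conj().T) - I2

# Toy usage: estimate <Z> of |+><+| (true value 0) and of |0><0| (true value 1).
rng = np.random.default_rng(3)
Z = np.diag([1.0, -1.0]).astype(complex)
plus = 0.5 * np.ones((2, 2), dtype=complex)
zero = np.diag([1.0, 0.0]).astype(complex)
for rho, label in [(plus, "|+>"), (zero, "|0>")]:
    est = np.mean([np.real(np.trace(Z @ shadow_snapshot(rho, rng))) for _ in range(20000)])
    print(label, "estimated <Z> ~=", round(est, 3))
```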