    How many qubits are needed for quantum computational supremacy?

    Quantum computational supremacy arguments, which describe a way for a quantum computer to perform a task that cannot also be done by a classical computer, typically require some sort of computational assumption related to the limitations of classical computation. One common assumption is that the polynomial hierarchy (PH) does not collapse, a stronger version of the statement that $P \neq NP$, which leads to the conclusion that any classical simulation of certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuits. However, the asymptotic nature of this conclusion prevents us from calculating exactly how many qubits these quantum circuits must have for their classical simulation to be intractable on modern classical supercomputers. We refine these quantum computational supremacy arguments and perform such a calculation by imposing fine-grained versions of the non-collapse assumption. Each version is parameterized by a constant $a$ and asserts that certain specific computational problems with input size $n$ require $2^{an}$ time steps to be solved by a non-deterministic algorithm. Then, we choose a specific value of $a$ for each version that we argue makes the assumption plausible, and based on these conjectures we conclude that Instantaneous Quantum Polynomial-Time (IQP) circuits with 208 qubits, Quantum Approximate Optimization Algorithm (QAOA) circuits with 420 qubits, and boson sampling circuits (i.e. linear optical networks) with 98 photons are large enough for the task of producing samples from their output distributions up to constant multiplicative error to be intractable on current technology. In the first two cases, we extend this to constant additive error by introducing an average-case fine-grained conjecture.
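
    To see how a fine-grained assumption of this shape pins down a concrete circuit size, here is a back-of-the-envelope sketch in Python; the values of the constant $a$ and the operations budget below are illustrative stand-ins, not the paper's calibrated conjectures:

        import math

        def min_qubits(a: float, ops_budget: float) -> int:
            # Smallest n with 2**(a*n) > ops_budget, i.e. n > log2(ops_budget) / a.
            return math.floor(math.log2(ops_budget) / a) + 1

        # Hypothetical budget: one year of a machine doing ~10^18 ops/second.
        budget = 1e18 * 365 * 24 * 3600
        for a in (0.25, 0.5, 1.0):  # illustrative constants, not the paper's
            print(f"a = {a}: simulation intractable above ~{min_qubits(a, budget)} qubits")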

    Quantum supremacy using a programmable superconducting processor

    The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state-space of dimension $2^{53}$ (about $10^{16}$). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times; our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.
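
    The headline numbers can be sanity-checked with a few lines of arithmetic (a back-of-the-envelope check, not the paper's benchmarking methodology):

        state_space = 2 ** 53
        print(f"state-space dimension: {state_space:.2e}")  # ~9.01e15, about 10^16

        classical_secs = 10_000 * 365 * 24 * 3600  # claimed classical runtime
        quantum_secs = 200                         # reported Sycamore runtime
        print(f"implied speedup: {classical_secs / quantum_secs:.1e}x")  # ~1.6e9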

    Methods for parallel quantum circuit synthesis, fault-tolerant quantum RAM, and quantum state tomography

    The pace of innovation in quantum information science has recently exploded due to the hope that a quantum computer will be able to solve a multitude of problems that are intractable using classical hardware. Current quantum devices are in what has been termed the "noisy intermediate-scale quantum" (NISQ) stage. Quantum hardware available today with 50-100 physical qubits may be among the first to demonstrate a quantum advantage. However, there are many challenges to overcome, such as dealing with noise, lowering error rates, improving coherence times, and achieving scalability. We are at a time in the field where minimization of resources is critical so that we can run our algorithms sooner rather than later. Running quantum algorithms "at scale" incurs a massive amount of resources, from the number of qubits required to the circuit depth. A large amount of this is due to the need to implement operations fault-tolerantly using error-correcting codes. For one, to run an algorithm we must be able to efficiently read in and output data. Fault-tolerantly implementing quantum memories may become an input bottleneck for quantum algorithms, including many which would otherwise yield massive improvements in algorithm complexity. We will also need efficient methods for tomography to characterize and verify our processes and outputs. Researchers will require tools to automate the design of large quantum algorithms, to compile, optimize, and verify their circuits, and to do so in a way that minimizes operations that are expensive in a fault-tolerant setting. Finally, we will also need overarching frameworks to characterize the resource requirements themselves. Such tools must be easily adaptable to new developments in the field, and allow users to explore tradeoffs between their parameters of interest. This thesis contains three contributions to this effort: improving circuit synthesis using large-scale parallelization; designing circuits for quantum random-access memories and analyzing various time/space tradeoffs; using the mathematical structure of discrete phase space to select subsets of tomographic measurements. For each topic the theoretical work is supplemented by a software package intended to allow other researchers to easily verify, use, and expand upon the techniques herein.
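
    To make the kind of time/space tradeoff analysed for quantum memories concrete, here is a minimal sketch in Python using assumed, textbook-style cost scalings (bucket-brigade queries are shallow but qubit-hungry; sequential lookup is qubit-lean but deep); the formulas are illustrative placeholders, not the thesis's resource counts:

        import math

        def qram_cost(num_cells: int, strategy: str) -> dict:
            # Toy resource model for a memory of num_cells entries;
            # the address register has n = ceil(log2(num_cells)) qubits.
            n = max(1, math.ceil(math.log2(num_cells)))
            if strategy == "bucket_brigade":
                # Many ancilla qubits, polylogarithmic query depth.
                return {"qubits": 2 * num_cells, "depth": n ** 2}
            if strategy == "sequential_lookup":
                # Few qubits, but depth linear in the number of cells.
                return {"qubits": 2 * n, "depth": num_cells}
            raise ValueError(f"unknown strategy: {strategy}")

        for s in ("bucket_brigade", "sequential_lookup"):
            print(s, qram_cost(2 ** 20, s))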

    Classical Computation in the Quantum World

    Quantum computation is by far the most powerful computational model allowed by the laws of physics. By carefully manipulating microscopic systems governed by quantum mechanics, one can efficiently solve computational problems that may be classically intractable; at the same time, such speed-ups are rarely possible without the help of classical computation, since most quantum algorithms heavily rely on subroutines that are purely classical. A better understanding of the relationship between classical and quantum computation is indispensable, in particular in an era where the first quantum device exceeding classical computational power is within reach. In the first part of the thesis, we study some differences between classical and quantum computation. We first show that quantum cryptographic hashing is maximally resilient against classical leakage, a property beyond reach for any classical hash function. Next, we consider the limitation of strong (amplitude-wise) simulation of quantum computation. We prove an unconditional and explicit complexity lower bound for a category of simulations called monotone strong simulation and further prove conditional complexity lower bounds for general strong simulation techniques. Both results indicate that strong simulation is fundamentally unscalable. In the second part of the thesis, we propose classical algorithms that facilitate quantum computing. We propose a new classical algorithm for the synthesis of a quantum algorithm paradigm called quantum signal processing. Empirically, our algorithm demonstrates numerical stability and a speed-up of more than an order of magnitude compared to state-of-the-art algorithms. Finally, we propose a randomized algorithm for transversally switching between arbitrary stabilizer quantum error-correcting codes. It has the property of preserving the code distance and thus might prove useful for designing fault-tolerant code-switching schemes.
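
    Strong (amplitude-wise) simulation asks for individual output amplitudes rather than samples; a brute-force NumPy sketch makes the exponential cost explicit (illustrative of the scaling only, not of the monotone methods the thesis analyses):

        import numpy as np

        def amplitude(n: int, x: int) -> complex:
            # <x| H^{tensor n} |0...0> by dense linear algebra: time and
            # memory grow as 2**n, the scaling strong simulation cannot escape.
            H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
            U = np.array([[1.0]])
            for _ in range(n):
                U = np.kron(U, H)   # 2**n x 2**n after n factors
            return U[:, 0][x]       # column 0 is the evolved input state |0...0>

        print(amplitude(3, 0b101))  # 1/sqrt(8) ~ 0.354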

    The complexity of simulating quantum physics: dynamics and equilibrium

    Quantum computing is the offspring of quantum mechanics and computer science, two great scientific fields founded in the 20th century. Quantum computing is a relatively young field and is recognized as having the potential to revolutionize science and technology in the coming century. The primary question in this field is essentially which problems are feasible on potential quantum computers and which are not. In this dissertation, we study this question with a physical bent of mind. We apply tools from computer science and mathematical physics to study the complexity of simulating quantum systems. In general, our goal is to identify parameter regimes under which simulating quantum systems is easy (efficiently solvable) or hard (not efficiently solvable). This study leads to an understanding of the features that make certain problems easy or hard to solve. We also gain physical insight into the behavior of the system being simulated. In the first part of this dissertation, we study the classical complexity of simulating quantum dynamics. In general, the systems we study transition from being easy to simulate at short times to being harder to simulate at later times. We argue that the transition timescale is a useful measure for various Hamiltonians and is indicative of the physics behind the change in complexity. We illustrate this idea for a specific bosonic system, obtaining a complexity phase diagram that delineates the system into easy or hard for simulation. We also prove that the phase diagram is robust, supporting our statement that the phase diagram is indicative of the underlying physics. In the next part, we study open quantum systems from the point of view of their potential to encode hard computational problems. We study a class of fermionic Hamiltonians subject to Markovian noise described by Lindblad jump operators and illustrate how, sometimes, certain Lindblad operators can induce computational complexity into the problem. Specifically, we show that these operators can implement entangling gates, which can be used for universal quantum computation. We also study a system of bosons with Gaussian initial states subject to photon loss and detected using photon-number-resolving measurements. We show that such systems can remain hard to simulate exactly and retain a relic of the "quantumness" present in the lossless system. Finally, in the last part of this dissertation, we study the complexity of simulating a class of equilibrium states, namely ground states. We give complexity-theoretic evidence to identify two structural properties that can make ground states easier to simulate. These are the existence of a spectral gap and the existence of a classical description of the ground state. Our findings complement and guide efforts in the search for efficient algorithms.
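
    For reference, the Markovian noise mentioned above is governed by the Lindblad master equation, quoted here in its standard textbook form (with Hamiltonian $H$ and jump operators $L_k$), not in any form specific to this dissertation:

        \frac{d\rho}{dt} = -i[H, \rho] + \sum_k \left( L_k \rho L_k^\dagger - \tfrac{1}{2}\left\{ L_k^\dagger L_k, \rho \right\} \right)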

    Benchmarking, verifying and utilising near term quantum technology

    Quantum computers can, in theory, impressively reduce the time required to solve many pertinent problems. Such problems are found in applications as diverse as cryptography, machine learning and chemistry, to name a few. However, in practice the set of problems which can be solved depends on the amount and quality of the quantum resources available. With the addition of more qubits, improvements in noise levels, the development of quantum networks, and so on, comes more computing power. Motivated by the desire to measure the power of these devices as their capabilities change, this thesis explores the verification, characterisation and benchmarking techniques that are appropriate at each stage of development. We study the techniques that become available with each advance, and the ways that such techniques can be used to guide further development of quantum devices and their control software. Our focus is on advancements towards the first example of practical certifiable quantum computational supremacy: when a quantum computer demonstrably outperforms all classical computers at a task of practical concern. Doing so allows us to look a little beyond recent demonstrations of quantum computational supremacy for its own sake. Systems consisting of only a few noisy qubits can be simulated by a classical computer. While this reduces the applicability of quantum technology of this size, we first provide a methodology for using classical simulations to guide progress towards demonstrations of quantum computational supremacy. Using measurements of the noise levels present in the NQIT Q20:20 device, an ion-trap based quantum computer, we use classical simulations to predict and prepare for the performance of larger devices with similar characteristics. We identify the noise sources that are the most impactful, and simulate the effectiveness of approaches to mitigating them. As quantum technology advances, classically simulating it becomes increasingly resource intensive. However, simulations remain useful as a point of comparison against which to benchmark the performance of quantum devices. For so-called ‘random quantum circuits’, such benchmarking techniques have been developed to support claims of demonstrations of quantum computational supremacy. To give better indications of the device’s performance in practice, instances of computations derived for practical applications have been used to benchmark devices. Our second contribution is to introduce a suite of circuits derived from structures that are common to many instances of computations derived for practical applications, contrasting with the aforementioned approach of using a collection of particular instances. This allows us to make broadly applicable predictions of performance, which are indicative of the device’s behaviour when investigating applications of concern. We use this suite to benchmark all layers of the quantum computing stack, exploring the interplay between the compilation strategy, device, and the computation itself. The circuit structures in the suite are sufficiently diverse to provide insights into the noise channels present in several real devices, and into the applications for which each quantum computing stack is best suited. We consider several figures of merit by which to assess performance when implementing these circuits, taking care to minimise the required number of uses of the quantum device.
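
    A standard figure of merit in random-circuit benchmarking of this kind is the linear cross-entropy fidelity, quoted here in its usual form as a common reference point (the thesis weighs several such metrics), estimated from measured bitstrings $x_1, \dots, x_m$ and the ideal output probabilities $P$ of an $n$-qubit circuit:

        F_{\mathrm{XEB}} = \frac{2^n}{m} \sum_{i=1}^{m} P(x_i) - 1
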
As our third contribution, we consider benchmarking devices performing Instantaneous Quantum Polynomial time (IQP) computations: a subset of all the computations quantum computers are capable of performing in polynomial time. By using only a commuting gate set, IQP circuits do not require the development of a universal quantum computer, but are still thought impossible to simulate efficiently on a classical computer. Utilising a small quantum network, which allows for the transmission of single qubits, we introduce an approach to benchmarking the performance of devices capable of implementing IQP computations. As the resource consumption of our benchmarking technique grows reasonably as the size of the device grows, it enables us to benchmark IQP-capable devices when they are of sufficient size to demonstrate quantum computational supremacy, and indeed to certify demonstrations of quantum computational supremacy. The approach we introduce is constructed by concealing some secret structure within an IQP computation. This structure can be taken advantage of by a quantum computer, but not by a classical one, in order to prove it is capable of accurately implementing IQP circuits. To achieve this we derive an implementation of IQP circuits which keeps the computation, and as a result the structure introduced, hidden from the device being tested. We prove this implementation to be information-theoretically and composably secure. In the work described above we explore verification, characterisation and benchmarking of quantum technology both as it advances to demonstrations of quantum computational supremacy, and when it is applied to real world problems. Finally, we consider demonstrations of quantum computational supremacy with an instance of these real world problems. We consider quantum machine learning, and generative modelling in particular. Generative modelling is the task of producing new samples from a distribution, given a collection of samples from that distribution. We introduce and define ‘quantum learning supremacy’, which captures our intuitive notion of a demonstration of quantum computational supremacy in this setting, and allows us to speak formally about generative modelling tasks that can be completed by quantum, but not classical, computers. We introduce the Quantum Circuit Ising Born Machine (QCIBM), which consists of a parametrised quantum circuit and a classical optimisation loop to train the parameters, as a route to demonstrating quantum learning supremacy. We adapt results that exist for IQP circuits in order to argue that the QCIBM might indeed be used to demonstrate quantum learning supremacy. We discuss training procedures for the QCIBM, and Quantum Circuit Born Machines generally, and their implications for demonstrations of quantum learning supremacy.
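
    As a minimal sketch of the circuit family involved (an illustrative toy instance in NumPy, not the hiding construction or the QCIBM training loop): an IQP circuit is a layer of commuting Z/ZZ phase gates sandwiched between Hadamard layers, and at small sizes its output distribution can be computed directly:

        import numpy as np
        from itertools import combinations, product

        rng = np.random.default_rng(0)
        n = 4
        theta_z = rng.uniform(0, 2 * np.pi, n)     # single-qubit Z angles
        theta_zz = {p: rng.uniform(0, 2 * np.pi)   # two-qubit ZZ angles
                    for p in combinations(range(n), 2)}

        # Diagonal of the commuting phase layer, one entry per bitstring.
        diag = np.empty(2 ** n, dtype=complex)
        for idx, bits in enumerate(product((0, 1), repeat=n)):
            z = [(-1) ** b for b in bits]          # Z eigenvalues of each qubit
            phase = np.dot(theta_z, z) + sum(
                t * z[j] * z[k] for (j, k), t in theta_zz.items())
            diag[idx] = np.exp(1j * phase)

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        Hn = np.array([[1.0]])
        for _ in range(n):
            Hn = np.kron(Hn, H)                    # n-fold tensor power of H

        psi = Hn @ (diag * Hn[:, 0])               # H D H acting on |0...0>
        probs = np.abs(psi) ** 2
        print(probs.sum())                         # ~1.0: a valid distribution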