3,447 research outputs found

    Quantum computers that can be simulated classically in polynomial time

    A model of quantum computation based on unitary matrix operations was introduced by Feynman and Deutsch. It has been asked whether the power of this model exceeds that of classical Turing machines. We show here that a significant class of these quantum computations can be simulated classically in polynomial time. In particular we show that two-bit operations characterized by 4×4 matrices in which the sixteen entries obey a set of five polynomial relations can be composed according to certain rules to yield a class of circuits that can be simulated classically in polynomial time. This contrasts with the known universality of two-bit operations, and demonstrates that efficient quantum computation of restricted classes is reconcilable with the Polynomial Time Turing Hypothesis. In other words it is possible that quantum phenomena can be used in a scalable fashion to make computers but that they do not have superpolynomial speedups compared to Turing machines for any problem. The techniques introduced bring the quantum computational model within the realm of algebraic complexity theory. In a manner consistent with one view of quantum physics, the wave function is simulated deterministically, and randomization arises only in the course of making measurements. The results generalize the quantum model in that they do not require the matrices to be unitary. In a different direction these techniques also yield deterministic polynomial time algorithms for the decision and parity problems for certain classes of read-twice Boolean formulae. All our results are based on the use of gates that are defined in terms of their graph matching properties.
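    As an illustration of the gate class this abstract describes, here is a minimal sketch. It does not check Valiant's original five polynomial relations; instead it uses the parity-preserving block test from the later matchgate literature, which is commonly used to characterize the same simulable class on nearest-neighbour qubits: the gate must act as a 2x2 block A on span{|00>, |11>} and a 2x2 block B on span{|01>, |10>}, with det(A) = det(B). The function name and tolerance are illustrative.

```python
import numpy as np

def is_matchgate(U, tol=1e-9):
    """Test the standard two-qubit matchgate form: U acts as a 2x2 block A
    on the even-parity subspace span{|00>,|11>} and as a 2x2 block B on the
    odd-parity subspace span{|01>,|10>}, with det(A) == det(B)."""
    U = np.asarray(U, dtype=complex)
    even = [0, 3]  # basis indices of |00>, |11>
    odd = [1, 2]   # basis indices of |01>, |10>
    # Off-block entries must vanish: the gate has to preserve parity.
    mask = np.ones((4, 4), dtype=bool)
    mask[np.ix_(even, even)] = False
    mask[np.ix_(odd, odd)] = False
    if np.max(np.abs(U[mask])) > tol:
        return False
    A = U[np.ix_(even, even)]
    B = U[np.ix_(odd, odd)]
    return abs(np.linalg.det(A) - np.linalg.det(B)) < tol

# Example: the rotation exp(-i theta (XX + YY)/2) passes the test ...
theta = 0.7
c, s = np.cos(theta), np.sin(theta)
G = np.array([[1, 0,       0,       0],
              [0, c,       -1j * s, 0],
              [0, -1j * s, c,       0],
              [0, 0,       0,       1]])
print(is_matchgate(G))     # True: blocks are I and exp(-i theta X), det 1 each
# ... while CNOT fails: it maps |10> -> |11>, mixing the parity blocks.
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
print(is_matchgate(CNOT))  # False
```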

    Quantum computing and the entanglement frontier - Rapporteur talk at the 25th Solvay Conference

    Quantum information science explores the frontier of highly complex quantum states, the "entanglement frontier". This study is motivated by the observation (widely believed but unproven) that classical systems cannot simulate highly entangled quantum systems efficiently, and we hope to hasten the day when well controlled quantum systems can perform tasks surpassing what can be done in the classical world. One way to achieve such "quantum supremacy" would be to run an algorithm on a quantum computer which solves a problem with a super-polynomial speedup relative to classical computers, but there may be other ways to achieve it sooner, such as simulating exotic quantum states of strongly correlated matter. To operate a large scale quantum computer reliably we will need to overcome the debilitating effects of decoherence, which might be done using "standard" quantum hardware protected by quantum error-correcting codes, or by exploiting the nonabelian quantum statistics of anyons realized in solid state systems, or by combining both methods. Only by challenging the entanglement frontier will we learn whether Nature provides extravagant resources far beyond what the classical world would allow.

    Quantum Sampling Problems, BosonSampling and Quantum Supremacy

    There is a large body of evidence for the potential of greater computational power using information carriers that are quantum mechanical over those governed by the laws of classical mechanics. But the question of the exact nature of the power contributed by quantum mechanics remains only partially answered. Furthermore, there exists doubt over the practicality of achieving a large enough quantum computation that definitively demonstrates quantum supremacy. Recently the study of computational problems that produce samples from probability distributions has both added to our understanding of the power of quantum algorithms and lowered the requirements for demonstrating fast quantum algorithms. The proposed quantum sampling problems do not require a quantum computer capable of universal operations and also permit physically realistic errors in their operation. This is an encouraging step towards an experimental demonstration of quantum algorithmic supremacy. In this paper, we will review sampling problems and the arguments that have been used to deduce when sampling problems are hard for classical computers to simulate. Two classes of quantum sampling problems that demonstrate the supremacy of quantum algorithms are BosonSampling and IQP Sampling. We will present the details of these classes and recent experimental progress towards demonstrating quantum supremacy in BosonSampling. Comment: survey paper first submitted for publication in October 2016; 10 pages, 4 figures, 1 table.
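    For concreteness, the hardness of BosonSampling rests on output probabilities being matrix permanents: a collision-free outcome occurs with probability |Perm(U_S)|^2, where U_S is the n x n submatrix of the interferometer unitary picked out by the occupied input and output modes. A minimal sketch using Ryser's formula for the permanent (the helper names and the random-unitary construction are illustrative, not from the paper):

```python
import itertools
import numpy as np

def permanent(A):
    """Permanent via Ryser's formula, O(2^n * n^2); fine for small n."""
    n = A.shape[0]
    total = 0.0 + 0.0j
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            row_sums = A[:, list(cols)].sum(axis=1)
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

def boson_sampling_prob(U, inputs, outputs):
    """Probability of a collision-free outcome: |Perm(U_S)|^2, where U_S
    keeps the columns of the occupied input modes and the rows of the
    occupied output modes."""
    A = U[np.ix_(outputs, inputs)]
    return abs(permanent(A)) ** 2

# Example: 2 photons in modes (0, 1) of a 4-mode interferometer drawn as
# a random unitary (QR of a complex Gaussian matrix).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(X)
print(boson_sampling_prob(U, inputs=(0, 1), outputs=(2, 3)))
```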

    What limits the simulation of quantum computers?

    It is imperative that useful quantum computers be very difficult to simulate classically; otherwise classical computers could be used for the applications envisioned for the quantum ones. Perfect quantum computers are unarguably exponentially difficult to simulate: the classical resources required grow exponentially with the number of qubits $N$ or the depth $D$ of the circuit. Real quantum computing devices, however, are characterized by an exponentially decaying fidelity $\mathcal{F} \sim (1-\epsilon)^{ND}$ with an error rate $\epsilon$ per operation as small as $\approx 1\%$ for current devices. In this work, we demonstrate that real quantum computers can be simulated at a tiny fraction of the cost that would be needed for a perfect quantum computer. Our algorithms compress the representations of quantum wavefunctions using matrix product states (MPS), which capture states with low to moderate entanglement very accurately. This compression introduces a finite error rate $\epsilon$ so that the algorithms closely mimic the behavior of real quantum computing devices. The computing time of our algorithm increases only linearly with $N$ and $D$. We illustrate our algorithms with simulations of random circuits for qubits connected in both one and two dimensional lattices. We find that $\epsilon$ can be decreased at a polynomial cost in computing power down to a minimum error $\epsilon_\infty$. Getting below $\epsilon_\infty$ requires computing resources that increase exponentially with $\epsilon_\infty/\epsilon$. For a two dimensional array of $N=54$ qubits and a circuit with Control-Z gates, error rates better than state-of-the-art devices can be obtained on a laptop in a few hours. For more complex gates such as a swap gate followed by a controlled rotation, the error rate increases by a factor three for similar computing time. Comment: new data added; 14 figures.
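    The core primitive behind the MPS compression described above is a truncated singular value decomposition: after each two-qubit gate, the bond is cut back to a fixed dimension chi, and the discarded singular weight plays the role of the finite error rate epsilon. A minimal sketch of that single step (not the authors' full algorithm; the function name and the chi value are illustrative):

```python
import numpy as np

def truncate_bond(theta, chi):
    """Split a two-site wavefunction theta (shape: left x right) by SVD,
    keeping at most chi singular values. Returns the truncated factors and
    the discarded weight, which acts as the per-gate error epsilon."""
    U, S, Vh = np.linalg.svd(theta, full_matrices=False)
    keep = min(chi, len(S))
    discarded = float(np.sum(S[keep:] ** 2))  # weight of dropped Schmidt values
    S_t = S[:keep] / np.linalg.norm(S[:keep])  # renormalize the kept state
    return U[:, :keep], S_t, Vh[:keep, :], discarded

# Example: a random normalized bipartite state with 4 qubits on each side
# (16 x 16), truncated to bond dimension chi = 4.
rng = np.random.default_rng(1)
psi = rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16))
psi /= np.linalg.norm(psi)
U, S, Vh, eps = truncate_bond(psi, chi=4)
print("discarded weight:", eps)  # fidelity loss of this single truncation
```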

    Classical simulations of Abelian-group normalizer circuits with intermediate measurements

    Quantum normalizer circuits were recently introduced as generalizations of Clifford circuits [arXiv:1201.4867]: a normalizer circuit over a finite Abelian group $G$ is composed of the quantum Fourier transform (QFT) over $G$, together with gates which compute quadratic functions and automorphisms. In [arXiv:1201.4867] it was shown that every normalizer circuit can be simulated efficiently classically. This result provides a nontrivial example of a family of quantum circuits that cannot yield exponential speed-ups in spite of usage of the QFT, the latter being a central quantum algorithmic primitive. Here we extend the aforementioned result in several ways. Most importantly, we show that normalizer circuits supplemented with intermediate measurements can also be simulated efficiently classically, even when the computation proceeds adaptively. This yields a generalization of the Gottesman-Knill theorem (valid for n-qubit Clifford operations [quant-ph/9705052, quant-ph/9807006]) to quantum circuits described by arbitrary finite Abelian groups. Moreover, our simulations are twofold: we present efficient classical algorithms to sample the measurement probability distribution of any adaptive normalizer computation, as well as to compute the amplitudes of the state vector at every step of it. Finally we develop a generalization of the stabilizer formalism [quant-ph/9705052, quant-ph/9807006] relative to arbitrary finite Abelian groups: for example, we characterize how to update stabilizers under generalized Pauli measurements and provide a normal form for the amplitudes of generalized stabilizer states using quadratic functions and subgroup cosets. Comment: 26 pages + appendices; title changed in this second version. To appear in Quantum Information and Computation, Vol. 14, No. 3&4, 2014.
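    The qubit special case that this paper generalizes is the Gottesman-Knill stabilizer simulation, in which each generator is tracked as a binary symplectic row and updated under Clifford gates in linear time. A minimal sketch of that special case using the standard CHP-style update rules (the class name and example are illustrative; this is not the paper's Abelian-group formalism):

```python
import numpy as np

class Stabilizer:
    """Tiny qubit stabilizer simulator. Each generator is a row (x | z | r),
    representing the Pauli X^x Z^z with sign (-1)^r."""
    def __init__(self, n):
        self.n = n
        self.x = np.zeros((n, n), dtype=np.uint8)
        self.z = np.eye(n, dtype=np.uint8)   # start in |0...0>: generators Z_i
        self.r = np.zeros(n, dtype=np.uint8)

    def h(self, a):    # Hadamard: X <-> Z, Y -> -Y
        self.r ^= self.x[:, a] & self.z[:, a]
        self.x[:, a], self.z[:, a] = self.z[:, a].copy(), self.x[:, a].copy()

    def s(self, a):    # Phase gate: X -> Y, Y -> -X
        self.r ^= self.x[:, a] & self.z[:, a]
        self.z[:, a] ^= self.x[:, a]

    def cnot(self, c, t):  # X_c -> X_c X_t, Z_t -> Z_c Z_t
        self.r ^= self.x[:, c] & self.z[:, t] & (self.x[:, t] ^ self.z[:, c] ^ 1)
        self.x[:, t] ^= self.x[:, c]
        self.z[:, c] ^= self.z[:, t]

    def __str__(self):
        labels = {(0, 0): 'I', (1, 0): 'X', (0, 1): 'Z', (1, 1): 'Y'}
        return '\n'.join(
            ('-' if self.r[i] else '+')
            + ''.join(labels[(int(self.x[i, q]), int(self.z[i, q]))]
                      for q in range(self.n))
            for i in range(self.n))

# Example: prepare a 3-qubit GHZ state; stabilizers become +XXX, +ZZI, +IZZ.
st = Stabilizer(3)
st.h(0); st.cnot(0, 1); st.cnot(1, 2)
print(st)
```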

    Complexity classification of two-qubit commuting hamiltonians

    We classify two-qubit commuting Hamiltonians in terms of their computational complexity. Suppose one has a two-qubit commuting Hamiltonian H which one can apply to any pair of qubits, starting in a computational basis state. We prove a dichotomy theorem: either this model is efficiently classically simulable or it allows one to sample from probability distributions which cannot be sampled from classically unless the polynomial hierarchy collapses. Furthermore, the only simulable Hamiltonians are those which fail to generate entanglement. This shows that generic two-qubit commuting Hamiltonians can be used to perform computational tasks which are intractable for classical computers under plausible assumptions. Our proof makes use of new postselection gadgets and Lie theory. Comment: 34 pages.
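    One way to make the dichotomy concrete: under a straightforward reading of the entanglement criterion, a Hamiltonian H lands on the simulable side only if exp(-iH) maps computational basis states to product states. A minimal sketch of that check (an illustrative proxy only; the helper names and example Hamiltonians are not from the paper):

```python
import numpy as np

def unitary(H):
    """exp(-iH) for Hermitian H, via eigendecomposition (numpy only)."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w)) @ V.conj().T

def entanglement_entropy(psi):
    """Von Neumann entropy of one qubit of a two-qubit pure state."""
    s = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
    p = s[s ** 2 > 1e-12] ** 2
    return float(-np.sum(p * np.log2(p)))

def generates_entanglement(H, tol=1e-9):
    """Does exp(-iH) map some two-qubit computational basis state (the
    columns of U) to an entangled state? Illustrative proxy for the
    paper's criterion."""
    U = unitary(H)
    return any(entanglement_entropy(U[:, k]) > tol for k in range(4))

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# ZZ is diagonal: exp(-i ZZ) only phases basis states -> simulable side.
print(generates_entanglement(np.kron(Z, Z)))  # False
# XX: applications to overlapping pairs still commute, yet exp(-i XX) maps
# |00> to cos(1)|00> - i sin(1)|11>, which is entangled -> hard side.
print(generates_entanglement(np.kron(X, X)))  # True
```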