Quantum computers that can be simulated classically in polynomial time
A model of quantum computation based on unitary matrix operations was introduced by Feynman and Deutsch. It has been asked whether the power of this model exceeds that of classical Turing machines. We show here that a significant class of these quantum computations can be simulated classically in polynomial time. In particular we show that two-bit operations characterized by 4 × 4 matrices in which the sixteen entries obey a set of five polynomial relations can be composed according to certain rules to yield a class of circuits that can be simulated classically in polynomial time. This contrasts with the known universality of two-bit operations, and demonstrates that efficient quantum computation of restricted classes is reconcilable with the Polynomial Time Turing Hypothesis. In other words it is possible that quantum phenomena can be used in a scalable fashion to make computers but that they do not have superpolynomial speedups compared to Turing machines for any problem. The techniques introduced bring the quantum computational model within the realm of algebraic complexity theory. In a manner consistent with one view of quantum physics, the wave function is simulated deterministically, and randomization arises only in the course of making measurements. The results generalize the quantum model in that they do not require the matrices to be unitary. In a different direction these techniques also yield deterministic polynomial time algorithms for the decision and parity problems for certain classes of read-twice Boolean formulae. All our results are based on the use of gates that are defined in terms of their graph matching properties.
The power of restricted quantum computational models
Restricted models of quantum computation are ones that have less power than a universal quantum computer. We studied the consequences of removing particular properties from a universal quantum computer to discover whether those resources were important. In the first part of the thesis we studied universal quantum computers which are implemented using Clifford gates, adaptive measurements, and magic states. The Gottesman–Knill theorem shows that circuits in this form which do not use magic states can be simulated by a classical computer. We extended this result to show that all circuits in this form can be partially simulated; the same computation can be implemented using a smaller quantum computer with the assistance of some polynomial-time classical computation. We also identified a subclass of these computations that can be shown not to be entirely classically simulable by any method, given that certain complexity-theoretic assumptions are true. In the next part of the thesis we examine the role of entanglement in noisy quantum computations. Entanglement is necessary for noiseless quantum computers to have any quantum advantage, but it is not known whether the same is true for mixed-state quantum computers. We show that entanglement, unexpectedly, does play a crucial role in the best-known mixed-state computer: the one clean qubit model. Finally, we investigate how closely classical simulation is related to another notion of classicality, which captures how easily the final state of a computation can be learnt, given samples of measurements from it. We find an extra condition under which a circuit that is classically simulable is also efficiently learnable.
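The Gottesman–Knill bookkeeping behind these simulation results can be sketched in a few lines: a stabilizer state on n qubits is stored as n Pauli generators, and each Clifford gate updates those generators in O(n) time instead of touching 2^n amplitudes. A minimal phase-free sketch (a full simulator also tracks signs and handles measurements):

```python
class Stabilizer:
    """Phase-free stabilizer tableau: each generator is a pair of
    length-n bit rows (x, z) encoding a Pauli string, where
    (x, z) = (1, 0) is X, (0, 1) is Z, and (1, 1) is Y.
    Signs are dropped for brevity."""

    def __init__(self, n):
        self.n = n
        # |0...0> is stabilized by Z_1, ..., Z_n
        self.x = [[0] * n for _ in range(n)]
        self.z = [[int(i == j) for j in range(n)] for i in range(n)]

    def h(self, q):
        # Hadamard exchanges X and Z on qubit q
        for r in range(self.n):
            self.x[r][q], self.z[r][q] = self.z[r][q], self.x[r][q]

    def cnot(self, c, t):
        # X propagates control -> target, Z propagates target -> control
        for r in range(self.n):
            self.x[r][t] ^= self.x[r][c]
            self.z[r][c] ^= self.z[r][t]

    def paulis(self):
        letter = {(0, 0): 'I', (1, 0): 'X', (0, 1): 'Z', (1, 1): 'Y'}
        return [''.join(letter[(xr[q], zr[q])] for q in range(self.n))
                for xr, zr in zip(self.x, self.z)]

# Preparing a Bell pair: H on qubit 0, then CNOT 0 -> 1
s = Stabilizer(2)
s.h(0)
s.cnot(0, 1)
print(s.paulis())   # ['XX', 'ZZ'] -- the Bell-state stabilizers
```

Adding any non-Clifford ingredient (a magic state, for instance) breaks this representation, which is where the thesis's partial-simulation question begins.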
Quantum computing and the entanglement frontier - Rapporteur talk at the 25th Solvay Conference
Quantum information science explores the frontier of highly complex quantum states,
the "entanglement frontier". This study is motivated by the observation (widely believed
but unproven) that classical systems cannot simulate highly entangled quantum systems
efficiently, and we hope to hasten the day when well controlled quantum systems can
perform tasks surpassing what can be done in the classical world. One way to achieve
such "quantum supremacy" would be to run an algorithm on a quantum computer which
solves a problem with a super-polynomial speedup relative to classical computers, but
there may be ways to achieve it sooner, such as simulating exotic quantum
states of strongly correlated matter. To operate a large scale quantum computer reliably
we will need to overcome the debilitating effects of decoherence, which might be done
using "standard" quantum hardware protected by quantum error-correcting codes, or by
exploiting the nonabelian quantum statistics of anyons realized in solid state systems,
or by combining both methods. Only by challenging the entanglement frontier will we
learn whether Nature provides extravagant resources far beyond what the classical world
would allow.
Quantum Sampling Problems, BosonSampling and Quantum Supremacy
There is a large body of evidence for the potential of greater computational
power using information carriers that are quantum mechanical over those
governed by the laws of classical mechanics. But the question of the exact
nature of the power contributed by quantum mechanics remains only partially
answered. Furthermore, there exists doubt over the practicality of achieving a
large enough quantum computation that definitively demonstrates quantum
supremacy. Recently the study of computational problems that produce samples
from probability distributions has added to both our understanding of the power
of quantum algorithms and lowered the requirements for demonstration of fast
quantum algorithms. The proposed quantum sampling problems do not require a
quantum computer capable of universal operations and also permit physically
realistic errors in their operation. This is an encouraging step towards an
experimental demonstration of quantum algorithmic supremacy. In this paper, we
will review sampling problems and the arguments that have been used to deduce
when sampling problems are hard for classical computers to simulate. Two
classes of quantum sampling problems that demonstrate the supremacy of quantum
algorithms are BosonSampling and IQP Sampling. We will present the details of
these classes and recent experimental progress towards demonstrating quantum
supremacy in BosonSampling.
Comment: Survey paper first submitted for publication in October 2016. 10 pages, 4 figures, 1 table.
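One concrete handle on why BosonSampling is believed to be classically hard: its output amplitudes are given by permanents of submatrices of the interferometer's unitary, and computing the permanent exactly is #P-hard. Ryser's inclusion-exclusion formula, the classic exact algorithm, already takes exponential time:

```python
from itertools import combinations

def permanent(m):
    """Permanent via Ryser's inclusion-exclusion formula.
    Exponential time (roughly 2**n subsets); the #P-hardness of this
    quantity underlies the hardness arguments for BosonSampling."""
    n = len(m)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in m:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - k) * prod
    return total

# The permanent of the all-ones 3x3 matrix is 3! = 6
print(permanent([[1, 1, 1]] * 3))   # 6.0
```

Unlike the determinant, no row-reduction trick applies: the missing signs in the permanent's defining sum are exactly what pushes it from P into #P.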
What limits the simulation of quantum computers?
It is imperative that useful quantum computers be very difficult to simulate
classically; otherwise classical computers could be used for the applications
envisioned for the quantum ones. Perfect quantum computers are unarguably
exponentially difficult to simulate: the classical resources required grow
exponentially with the number of qubits or the depth of the circuit.
Real quantum computing devices, however, are characterized by an exponentially decaying fidelity with a small but nonzero error rate per operation. In this work, we demonstrate that real quantum computers can be simulated at a tiny fraction of the cost that would be needed for a perfect quantum computer. Our algorithms compress the representations of quantum wavefunctions using matrix product states (MPS), which capture states with low to moderate entanglement very accurately. This compression introduces a finite error rate ε, so that the algorithms closely mimic the behavior of real quantum computing devices. The computing time of our algorithm increases only linearly with the number of qubits N and the depth D of the circuit. We illustrate our algorithms with simulations of random circuits for qubits connected in both one- and two-dimensional lattices. We find that ε can be decreased at a polynomial cost in computing power down to a minimum error ε∞; getting below ε∞ requires computing resources that increase exponentially. For a two-dimensional array of qubits and a circuit with Control-Z gates, error rates better than state-of-the-art devices can be obtained on a laptop in a few hours. For more complex gates, such as a swap gate followed by a controlled rotation, the error rate increases by a factor of three for similar computing time.
Comment: New data added, 14 figures.
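The MPS compression at the heart of this approach can be sketched with a sweep of truncated SVDs. This toy version (illustrative only, not the paper's optimized algorithm) converts a dense state vector into an MPS with bond dimension at most chi, and contracts it back so the truncation fidelity can be measured:

```python
import numpy as np

def to_mps(psi, n, chi):
    """Turn a 2**n state vector into an MPS via a sweep of SVDs,
    keeping at most chi singular values at each bond (toy version)."""
    tensors = []
    m = np.asarray(psi, dtype=complex).reshape(1, -1)
    for _ in range(n - 1):
        m = m.reshape(m.shape[0] * 2, -1)           # split off one qubit
        u, s, vh = np.linalg.svd(m, full_matrices=False)
        keep = min(chi, len(s))                      # truncate the bond
        tensors.append(u[:, :keep].reshape(-1, 2, keep))
        m = s[:keep, None] * vh[:keep]               # carry the rest onward
    tensors.append(m.reshape(-1, 2, 1))
    return tensors

def mps_to_vec(tensors):
    """Contract the MPS back into a dense vector (for checking fidelity)."""
    v = np.ones((1, 1), dtype=complex)
    for t in tensors:
        v = np.tensordot(v, t, axes=([1], [0])).reshape(-1, t.shape[2])
    return v.reshape(-1)

# A product state has bond dimension 1, so chi = 1 loses nothing:
n = 3
psi = np.full(2 ** n, 1 / np.sqrt(2 ** n))
approx = mps_to_vec(to_mps(psi, n, chi=1))
print(round(abs(np.vdot(psi, approx)) ** 2, 6))   # 1.0
```

For highly entangled states the discarded singular weight is nonzero at every gate, and that accumulated loss plays the role of the effective error rate ε the abstract describes.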
Classical simulations of Abelian-group normalizer circuits with intermediate measurements
Quantum normalizer circuits were recently introduced as generalizations of
Clifford circuits [arXiv:1201.4867]: a normalizer circuit over a finite Abelian
group G is composed of the quantum Fourier transform (QFT) over G, together
with gates which compute quadratic functions and automorphisms. In
[arXiv:1201.4867] it was shown that every normalizer circuit can be simulated
efficiently classically. This result provides a nontrivial example of a family
of quantum circuits that cannot yield exponential speed-ups in spite of usage
of the QFT, the latter being a central quantum algorithmic primitive. Here we
extend the aforementioned result in several ways. Most importantly, we show
that normalizer circuits supplemented with intermediate measurements can also
be simulated efficiently classically, even when the computation proceeds
adaptively. This yields a generalization of the Gottesman-Knill theorem (valid
for n-qubit Clifford operations [quant-ph/9705052, quant-ph/9807006]) to quantum
circuits described by arbitrary finite Abelian groups. Moreover, our
simulations are twofold: we present efficient classical algorithms to sample
the measurement probability distribution of any adaptive-normalizer
computation, as well as to compute the amplitudes of the state vector in every
step of it. Finally we develop a generalization of the stabilizer formalism
[quant-ph/9705052, quant-ph/9807006] relative to arbitrary finite Abelian
groups: for example we characterize how to update stabilizers under generalized
Pauli measurements and provide a normal form of the amplitudes of generalized
stabilizer states using quadratic functions and subgroup cosets.
Comment: 26 pages + appendices. Title has changed in this second version. To appear in Quantum Information and Computation, Vol. 14, No. 3&4, 2014.
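For the cyclic group Z_N the normalizer gate set is easy to write down concretely. The following sketch (illustrative, with one simple choice of quadratic phase) builds the three gate types as explicit matrices, with the n-qubit QFT recovered as the case N = 2**n:

```python
import numpy as np

def qft(N):
    """Fourier transform over the cyclic group Z_N:
    F[j, k] = omega**(j*k) / sqrt(N) with omega = exp(2*pi*i/N)."""
    w = np.exp(2j * np.pi / N)
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
    return w ** (j * k) / np.sqrt(N)

def quadratic_phase(N):
    """Diagonal gate applying a quadratic phase, here xi**(j**2)
    with xi a 2N-th root of unity (one simple instance)."""
    xi = np.exp(1j * np.pi / N)
    return np.diag(xi ** (np.arange(N) ** 2))

def automorphism(N, a):
    """Permutation gate |j> -> |a*j mod N| for gcd(a, N) = 1."""
    P = np.zeros((N, N))
    for j in range(N):
        P[(a * j) % N, j] = 1.0
    return P

F = qft(5)
print(np.allclose(F @ F.conj().T, np.eye(5)))   # True: the QFT is unitary
```

Composing automorphisms multiplies their multipliers mod N, e.g. automorphism(5, 2) followed by automorphism(5, 3) gives multiplication by 6 ≡ 1 (mod 5), i.e. the identity; it is this rigid group structure that the classical simulation exploits.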
Complexity classification of two-qubit commuting hamiltonians
We classify two-qubit commuting Hamiltonians in terms of their computational
complexity. Suppose one has a two-qubit commuting Hamiltonian H which one can
apply to any pair of qubits, starting in a computational basis state. We prove
a dichotomy theorem: either this model is efficiently classically simulable or
it allows one to sample from probability distributions which cannot be sampled
from classically unless the polynomial hierarchy collapses. Furthermore, the
only simulable Hamiltonians are those which fail to generate entanglement. This
shows that generic two-qubit commuting Hamiltonians can be used to perform
computational tasks which are intractable for classical computers under
plausible assumptions. Our proof makes use of new postselection gadgets and Lie
theory.
Comment: 34 pages.
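The simulable side of the dichotomy, commuting Hamiltonians that never generate entanglement, can be probed numerically. A small sketch (the Hamiltonians below are illustrative examples, not taken from the paper) evolves a product state and measures the resulting entanglement entropy:

```python
import numpy as np

def evolve(H, psi, t=1.0):
    """Apply exp(-iHt) to |psi> via eigendecomposition (H Hermitian)."""
    w, v = np.linalg.eigh(H)
    return v @ (np.exp(-1j * w * t) * (v.conj().T @ psi))

def entanglement_entropy(psi):
    """Von Neumann entropy (in bits) of one qubit of a two-qubit pure state,
    from the singular values of the 2x2 amplitude matrix."""
    s = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

plus = np.ones(2) / np.sqrt(2)
prod = np.kron(plus, plus)                      # product input state |++>
H_cz = np.diag([0.0, 0.0, 0.0, np.pi])          # diagonal (commuting), CZ-like
H_free = np.diag([0.0, 1.0, 1.0, 2.0])          # Z1 + Z2 type: local phases only
print(entanglement_entropy(evolve(H_cz, prod)))    # ~1.0: maximally entangling
print(entanglement_entropy(evolve(H_free, prod)))  # ~0.0: never entangles
```

In the paper's dichotomy, a Hamiltonian behaving like H_free falls on the efficiently simulable side, while anything that entangles some product input, like H_cz, yields sampling tasks that are classically hard under the stated assumptions.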