Commuting Quantum Circuits with Few Outputs are Unlikely to be Classically Simulatable
We study the classical simulatability of commuting quantum circuits with n
input qubits and O(log n) output qubits, where a quantum circuit is classically
simulatable if its output probability distribution can be sampled up to an
exponentially small additive error in classical polynomial time. First, we show
that there exists a commuting quantum circuit that is not classically
simulatable unless the polynomial hierarchy collapses to the third level. This
is the first formal evidence that a commuting quantum circuit is not
classically simulatable even when the number of output qubits is exponentially
small. Then, we consider a generalized version of the circuit and clarify the
condition under which it is classically simulatable. Lastly, we apply the
argument for the above evidence to Clifford circuits in a similar setting and
provide evidence that such a circuit augmented by a depth-1 non-Clifford layer
is not classically simulatable. These results reveal subtle differences between
quantum and classical computation.
Comment: 19 pages, 6 figures; v2: Theorems 1 and 3 improved, proofs modified
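The hardness claim concerns circuits whose internal gates all commute, as in IQP-style circuits: a layer of diagonal gates between two Hadamard layers, with only a few qubits measured. The following toy sketch (not the specific circuit family from the paper; the pairwise phases are an illustrative stand-in for CZ/T-type diagonal gates) brute-forces the exact output distribution on m = O(log n) output qubits of such a commuting circuit.

```python
import numpy as np

# Toy IQP-style commuting circuit on n qubits: H layers around a Z-diagonal
# part, so all internal gates pairwise commute. We compute the exact output
# distribution on a small number m of "output" qubits by brute force.

n = 4
rng = np.random.default_rng(0)

# Z-diagonal part: a random phase for each pair of qubits (illustrative
# stand-in for CZ/T-type commuting gates).
thetas = {(i, j): rng.uniform(0, 2 * np.pi)
          for i in range(n) for j in range(i + 1, n)}

def diagonal_phase(x):
    """Phase picked up by basis state x under the commuting diagonal gates."""
    bits = [(x >> k) & 1 for k in range(n)]
    return sum(thetas[(i, j)] * bits[i] * bits[j] for (i, j) in thetas)

# Amplitudes of H^n . D . H^n |0...0>:
#   amp(z) = (1/2^n) sum_x exp(i phi(x)) (-1)^{x.z}
dim = 2 ** n
amps = np.zeros(dim, dtype=complex)
for z in range(dim):
    for x in range(dim):
        sign = (-1) ** bin(x & z).count("1")
        amps[z] += np.exp(1j * diagonal_phase(x)) * sign
amps /= dim  # each Hadamard layer contributes a factor 1/sqrt(2^n)

# Marginal distribution on the first m output qubits (m = O(log n)).
m = 2
marginal = np.zeros(2 ** m)
for z in range(dim):
    marginal[z & (2 ** m - 1)] += abs(amps[z]) ** 2
```

The brute-force loop takes time 4^n, which is exactly the exponential cost the paper's hardness evidence says cannot in general be avoided classically, even though only the m-qubit marginal is needed.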
Classical simulation complexity of extended Clifford circuits
Clifford gates are a winsome class of quantum operations combining
mathematical elegance with physical significance. The Gottesman-Knill theorem
asserts that Clifford computations can be classically efficiently simulated but
this is true only in a suitably restricted setting. Here we consider Clifford
computations with a variety of additional ingredients: (a) strong vs. weak
simulation, (b) inputs being computational basis states vs. general product
states, (c) adaptive vs. non-adaptive choices of gates for circuits involving
intermediate measurements, (d) single line outputs vs. multi-line outputs. We
consider the classical simulation complexity of all combinations of these
ingredients and show that many are not classically efficiently simulatable
(subject to common complexity assumptions such as P not equal to NP). Our
results reveal a surprising proximity of classical to quantum computing power,
namely a class of classically simulatable quantum circuits that yields universal
quantum computation when extended by a purely classical additional ingredient,
one that does not enlarge the class of quantum processes occurring.
Comment: 17 pages, 1 figure
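The Gottesman-Knill theorem underlying this classification works by tracking n Pauli stabilizer generators instead of 2^n amplitudes. The following minimal sketch (illustrative only; measurements and the destabilizer half of the full Aaronson-Gottesman tableau are omitted) updates the generators under H, S, and CNOT and recovers the stabilizers of a Bell state.

```python
import numpy as np

# Minimal stabilizer-tableau sketch of Gottesman-Knill simulation: a Clifford
# circuit on a basis-state input is tracked by updating n Pauli stabilizer
# generators, each stored as (x bits, z bits, sign), in poly(n) time.

class Stabilizers:
    def __init__(self, n):
        self.n = n
        self.x = np.zeros((n, n), dtype=np.uint8)  # X part of each generator
        self.z = np.eye(n, dtype=np.uint8)         # |0...0> is stabilized by Z_i
        self.sign = np.zeros(n, dtype=np.uint8)    # 0 -> +1, 1 -> -1

    def h(self, q):  # H: X <-> Z, Y -> -Y
        self.sign ^= self.x[:, q] & self.z[:, q]
        self.x[:, q], self.z[:, q] = self.z[:, q].copy(), self.x[:, q].copy()

    def s(self, q):  # S: X -> Y, Y -> -X, Z -> Z
        self.sign ^= self.x[:, q] & self.z[:, q]
        self.z[:, q] ^= self.x[:, q]

    def cnot(self, c, t):  # CNOT: X_c -> X_c X_t, Z_t -> Z_c Z_t
        self.sign ^= self.x[:, c] & self.z[:, t] & (self.x[:, t] ^ self.z[:, c] ^ 1)
        self.x[:, t] ^= self.x[:, c]
        self.z[:, c] ^= self.z[:, t]

    def generator(self, i):
        pauli = {(0, 0): "I", (1, 0): "X", (0, 1): "Z", (1, 1): "Y"}
        label = "".join(pauli[(self.x[i, q], self.z[i, q])] for q in range(self.n))
        return ("-" if self.sign[i] else "+") + label

# Bell-state preparation: the stabilizers ZI, IZ of |00> evolve to XX, ZZ.
st = Stabilizers(2)
st.h(0)
st.cnot(0, 1)
```

The abstract's point is that this efficiency is fragile: each of the listed ingredients (strong simulation, product-state inputs, adaptivity, multi-line outputs) can push the same gate set out of the classically simulatable regime.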
Power of Quantum Computation with Few Clean Qubits
This paper investigates the power of polynomial-time quantum computation in
which only a very limited number of qubits are initially clean in the |0>
state, and all the remaining qubits are initially in the totally mixed state.
No initializations of qubits are allowed during the computation, nor
intermediate measurements. The main results of this paper are unexpectedly
strong error-reducible properties of such quantum computations. It is proved
that any problem solvable by a polynomial-time quantum computation with
one-sided bounded error that uses logarithmically many clean qubits can also be
solved with exponentially small one-sided error using just two clean qubits,
and with polynomially small one-sided error using just one clean qubit. It is
further proved, in the case of two-sided bounded error, that any problem
solvable by such a computation with a constant gap between completeness and
soundness using logarithmically many clean qubits can also be solved with
exponentially small two-sided error using just two clean qubits. If only one
clean qubit is available, the problem is again still solvable with
exponentially small error in one of completeness and soundness and
polynomially small error in the
other. As an immediate consequence of the above result for the two-sided-error
case, it follows that the TRACE ESTIMATION problem defined with fixed constant
threshold parameters is complete for the classes of problems solvable by
polynomial-time quantum computations with completeness 2/3 and soundness 1/3
using logarithmically many clean qubits and just one clean qubit. The
techniques used for proving the error-reduction results may be of independent
interest in themselves, and one of the technical tools can also be used to show
the hardness of weak classical simulations of one-clean-qubit computations
(i.e., DQC1 computations).
Comment: 44 pages + cover page; the results in Section 8 overlap with the main
results in arXiv:1409.677
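The completeness of trace estimation for these classes rests on a standard identity for the one-clean-qubit (DQC1) circuit: with the clean qubit put through H, a controlled-U, and H again, the probability of measuring it as 0 is p0 = (1 + Re tr(U)/2^n)/2, so sampling the clean qubit estimates the normalized trace of U. The following sketch checks this formula against a direct density-matrix simulation for a small illustrative U (the choice U = T tensor S is an assumption, not from the paper).

```python
import numpy as np

# DQC1 trace-estimation identity: one clean qubit |0>, n maximally mixed
# qubits, circuit (H x I) . controlled-U . (H x I); then
#   P(clean qubit = 0) = (1 + Re tr(U) / 2^n) / 2.
# We verify this against an exact density-matrix simulation.

n = 2
d = 2 ** n
T = np.diag([1, np.exp(1j * np.pi / 4)])   # example diagonal gates
S = np.diag([1, 1j])
U = np.kron(T, S)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(d)

# Initial state: clean qubit |0><0| tensored with n maximally mixed qubits.
rho = np.kron(np.array([[1, 0], [0, 0]]), I / d)

CU = np.block([[I, np.zeros((d, d))],
               [np.zeros((d, d)), U]])     # controlled-U, clean qubit first
V = np.kron(H, I) @ CU @ np.kron(H, I)
rho_out = V @ rho @ V.conj().T

P0 = np.kron(np.array([[1, 0], [0, 0]]), I)  # project clean qubit onto |0>
p0_sim = np.real(np.trace(P0 @ rho_out))
p0_formula = 0.5 * (1 + np.real(np.trace(U)) / d)
```

Because the bias of the clean qubit is only Re tr(U)/2^n, naive repetition cannot amplify it, which is why the error-reduction results in the abstract are nontrivial.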
Complexity classification of two-qubit commuting hamiltonians
We classify two-qubit commuting Hamiltonians in terms of their computational
complexity. Suppose one has a two-qubit commuting Hamiltonian H which one can
apply to any pair of qubits, starting in a computational basis state. We prove
a dichotomy theorem: either this model is efficiently classically simulable or
it allows one to sample from probability distributions which cannot be sampled
from classically unless the polynomial hierarchy collapses. Furthermore, the
only simulable Hamiltonians are those which fail to generate entanglement. This
shows that generic two-qubit commuting Hamiltonians can be used to perform
computational tasks which are intractable for classical computers under
plausible assumptions. Our proof makes use of new postselection gadgets and Lie
theory.
Comment: 34 pages
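The entanglement criterion in the dichotomy can be tested directly on small examples: evolve a computational basis state under exp(-itH) and check whether the reduced one-qubit state stays pure. The sketch below (illustrative cases only, not the paper's classification) shows that X tensor X entangles |00>, while the diagonal Z tensor Z merely phases basis states and so falls on the classically simulable side.

```python
import numpy as np

# Entanglement check for two-qubit commuting Hamiltonians acting on a
# computational basis state: purity of the reduced state after exp(-i t H).

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def evolve(H, psi, t):
    """exp(-i t H) |psi> via the eigendecomposition of the Hermitian H."""
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * t * w) * (V.conj().T @ psi))

def purity_after(H, t=np.pi / 4):
    psi = np.zeros(4, dtype=complex)
    psi[0] = 1.0                                    # |00>
    psi = evolve(H, psi, t)
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_A = np.trace(rho, axis1=1, axis2=3)         # trace out the second qubit
    return float(np.real(np.trace(rho_A @ rho_A)))  # 1 iff no entanglement

p_xx = purity_after(np.kron(X, X))  # entangling: purity drops to 1/2
p_zz = purity_after(np.kron(Z, Z))  # diagonal: only a phase, purity stays 1
```

By the dichotomy, a Hamiltonian behaving like the first case already suffices for sampling tasks that are classically hard unless the polynomial hierarchy collapses.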
Merlin-Arthur with efficient quantum Merlin and quantum supremacy for the second level of the Fourier hierarchy
We introduce a simple sub-universal quantum computing model, which we call
the Hadamard-classical circuit with one-qubit (HC1Q) model. It consists of a
classical reversible circuit sandwiched by two layers of Hadamard gates, and
therefore it is in the second level of the Fourier hierarchy. We show that
output probability distributions of the HC1Q model cannot be classically
efficiently sampled within a multiplicative error unless the polynomial-time
hierarchy collapses to the second level. The proof technique is different from
those used for previous sub-universal models, such as IQP, Boson Sampling, and
DQC1, and therefore the technique itself might be useful for finding other
sub-universal models that are hard to classically simulate. We also study the
classical verification of quantum computing in the second level of the Fourier
hierarchy. To this end, we define a promise problem, which we call the
probability distribution distinguishability with maximum norm (PDD-Max). It is
a promise problem to decide whether output probability distributions of two
quantum circuits are far apart or close. We show that PDD-Max is BQP-complete,
but if the two circuits are restricted to some types in the second level of the
Fourier hierarchy, such as the HC1Q model or the IQP model, PDD-Max has a
Merlin-Arthur system with quantum polynomial-time Merlin and classical
probabilistic polynomial-time Arthur.
Comment: 30 pages, 4 figures
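The structural idea of the HC1Q model, a classical reversible circuit sandwiched between Hadamard layers, is easy to simulate exactly at small sizes. The sketch below (a toy instance without the model's extra ancilla qubit; the choice of a single CNOT as the reversible circuit is an assumption for illustration) computes the output distribution of H-layer . permutation . H-layer on |00>.

```python
import numpy as np

# Toy HC1Q-style circuit: a classical reversible circuit C (a permutation of
# basis states, here one CNOT) sandwiched between two layers of Hadamards,
# which places the circuit in the second level of the Fourier hierarchy.

n = 2
dim = 2 ** n
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H_layer = np.kron(H1, H1)

# Permutation matrix of the reversible map (a, b) -> (a, a XOR b).
P = np.zeros((dim, dim))
for x in range(dim):
    a, b = (x >> 1) & 1, x & 1
    y = (a << 1) | (a ^ b)
    P[y, x] = 1.0

circuit = H_layer @ P @ H_layer
out = circuit @ np.eye(dim)[:, 0]   # act on |00>
probs = np.abs(out) ** 2
```

For this instance the conjugation identity H^2 . CNOT . H^2 = CNOT-with-roles-reversed makes the output deterministic (all weight on |00>); the hardness result in the abstract concerns general reversible circuits in this sandwich, whose output distributions cannot be sampled classically within multiplicative error unless the polynomial hierarchy collapses to the second level.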
The Power of One Clean Qubit in Communication Complexity
We study quantum communication protocols, in which the players' storage starts out in a state where one qubit is in a pure state, and all other qubits are totally mixed (i.e. in a random state), and no other storage is available (for messages or internal computations). This restriction on the available quantum memory has been studied extensively in the model of quantum circuits, and it is known that classically simulating quantum circuits operating on such memory is hard when the additive error of the simulation is exponentially small (in the input length), under the assumption that the polynomial hierarchy does not collapse.
We study this setting in communication complexity. The goal is to consider larger additive error for simulation-hardness results, and to not use unproven assumptions.
We define a complexity measure for this model that takes into account that standard error reduction techniques do not work here. We define a clocked and a semi-unclocked model, and describe efficient simulations between those.
We characterize a one-way communication version of the model in terms of weakly unbounded error communication complexity.
Our main result is that there is a quantum protocol using one clean qubit only and using O(log n) qubits of communication, such that any classical protocol simulating the acceptance behaviour of the quantum protocol within additive error 1/poly(n) needs communication Ω(n).
We also describe a candidate problem, for which an exponential gap between the one-clean-qubit communication complexity and the randomized communication complexity is likely to hold, and hence a classical simulation of the one-clean-qubit model within constant additive error might be hard in communication complexity. We describe a geometrical conjecture that implies the lower bound.