
    The Power of One Clean Qubit in Communication Complexity

    We study quantum communication protocols in which the players' storage starts out in a state where one qubit is in a pure state and all other qubits are totally mixed (i.e., in a random state), and no other storage is available (for messages or internal computations). This restriction on the available quantum memory has been studied extensively in the model of quantum circuits, and it is known that classically simulating quantum circuits operating on such memory is hard when the additive error of the simulation is exponentially small (in the input length), under the assumption that the polynomial hierarchy does not collapse. We study this setting in communication complexity. The goal is to obtain simulation-hardness results for larger additive error, and to avoid unproven assumptions. We define a complexity measure for this model that takes into account that standard error-reduction techniques do not work here. We define a clocked and a semi-unclocked model, and describe efficient simulations between them. We characterize a one-way communication version of the model in terms of weakly unbounded-error communication complexity. Our main result is that there is a quantum protocol using only one clean qubit and O(log n) qubits of communication, such that any classical protocol simulating the acceptance behaviour of the quantum protocol within additive error 1/poly(n) needs communication Ω(n). We also describe a candidate problem for which an exponential gap between the one-clean-qubit communication complexity and the randomized communication complexity is likely to hold, and hence a classical simulation of the one-clean-qubit model within constant additive error might be hard in communication complexity. We describe a geometrical conjecture that implies the lower bound.
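
    As a minimal illustration of the memory model in this abstract (our own sketch, not taken from the paper), the following NumPy snippet builds the one-clean-qubit initial state: a single qubit in the pure state |0><0| tensored with a totally mixed register.

        import numpy as np

        def one_clean_qubit_memory(k):
            # One qubit in the pure state |0><0|, tensored with k qubits
            # in the totally mixed state I / 2^k.
            clean = np.array([[1.0, 0.0], [0.0, 0.0]])
            mixed = np.eye(2 ** k) / 2 ** k
            return np.kron(clean, mixed)

        rho = one_clean_qubit_memory(2)
        print(np.trace(rho))        # 1.0: a valid density matrix
        print(np.trace(rho @ rho))  # 0.25: purity far below 1, a highly mixed memory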

    Complexity classification of two-qubit commuting hamiltonians

    We classify two-qubit commuting Hamiltonians in terms of their computational complexity. Suppose one has a two-qubit commuting Hamiltonian H which one can apply to any pair of qubits, starting in a computational basis state. We prove a dichotomy theorem: either this model is efficiently classically simulable, or it allows one to sample from probability distributions which cannot be sampled classically unless the polynomial hierarchy collapses. Furthermore, the only simulable Hamiltonians are those which fail to generate entanglement. This shows that generic two-qubit commuting Hamiltonians can be used to perform computational tasks which are intractable for classical computers under plausible assumptions. Our proof makes use of new postselection gadgets and Lie theory.
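
    To illustrate the dichotomy's entangling criterion (an example of ours, not drawn from the paper), the sketch below evolves the product state |+>|+> under the commuting Hamiltonian Z (x) Z and checks that maximal entanglement is generated, which by the theorem places such a Hamiltonian on the hard-to-simulate side.

        import numpy as np
        from scipy.linalg import expm

        Z = np.diag([1.0, -1.0])
        H = np.kron(Z, Z)                        # a commuting two-qubit Hamiltonian: Z (x) Z
        plus = np.array([1.0, 1.0]) / np.sqrt(2)
        psi0 = np.kron(plus, plus)               # product (unentangled) input |+>|+>

        psi = expm(-1j * (np.pi / 4) * H) @ psi0  # evolve for time t = pi/4

        # Entanglement entropy of qubit 1 from its reduced density matrix.
        rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
        rho1 = np.trace(rho, axis1=1, axis2=3)    # trace out qubit 2
        evals = np.linalg.eigvalsh(rho1)
        entropy = -sum(p * np.log2(p) for p in evals if p > 1e-12)
        print(entropy)  # 1.0: Z (x) Z creates maximal entanglement from a product state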

    Merlin-Arthur with efficient quantum Merlin and quantum supremacy for the second level of the Fourier hierarchy

    We introduce a simple sub-universal quantum computing model, which we call the Hadamard-classical circuit with one-qubit (HC1Q) model. It consists of a classical reversible circuit sandwiched between two layers of Hadamard gates, and therefore it is in the second level of the Fourier hierarchy. We show that output probability distributions of the HC1Q model cannot be classically efficiently sampled within a multiplicative error unless the polynomial-time hierarchy collapses to the second level. The proof technique is different from those used for previous sub-universal models, such as IQP, Boson Sampling, and DQC1, and therefore the technique itself might be useful for finding other sub-universal models that are hard to classically simulate. We also study the classical verification of quantum computing in the second level of the Fourier hierarchy. To this end, we define a promise problem, which we call probability distribution distinguishability with maximum norm (PDD-Max): decide whether the output probability distributions of two quantum circuits are far apart or close. We show that PDD-Max is BQP-complete, but if the two circuits are restricted to certain types in the second level of the Fourier hierarchy, such as the HC1Q model or the IQP model, PDD-Max has a Merlin-Arthur system with quantum polynomial-time Merlin and classical probabilistic polynomial-time Arthur.
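
    The sandwich structure can be sketched schematically as follows; the precise wire layout of the HC1Q model follows the paper and is not reproduced here, and the reversible function below is a hypothetical stand-in chosen only for illustration.

        import numpy as np

        def hadamard_layer(n):
            # H^{(x) n} as a dense 2^n x 2^n matrix.
            H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
            M = np.array([[1.0]])
            for _ in range(n):
                M = np.kron(M, H)
            return M

        def reversible_circuit_unitary(f, n):
            # Permutation matrix of a reversible classical circuit x -> f(x) on n bits.
            U = np.zeros((2 ** n, 2 ** n))
            for x in range(2 ** n):
                U[f(x), x] = 1.0
            return U

        n = 3
        # Hypothetical reversible function, for illustration only:
        # flip the last bit when the first bit is 1 (a CNOT, hence a bijection).
        f = lambda x: x ^ 1 if (x >> (n - 1)) & 1 else x

        Hn = hadamard_layer(n)
        C = reversible_circuit_unitary(f, n)
        psi = Hn @ C @ Hn @ np.eye(2 ** n)[:, 0]  # Hadamards, classical circuit, Hadamards on |0...0>
        print(np.abs(psi) ** 2)                   # the output distribution whose sampling is at issue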

    Power of Quantum Computation with Few Clean Qubits

    This paper investigates the power of polynomial-time quantum computation in which only a very limited number of qubits are initially clean in the |0> state, and all the remaining qubits are initially in the totally mixed state. No initializations of qubits are allowed during the computation, nor are intermediate measurements. The main results of this paper are unexpectedly strong error-reduction properties of such quantum computations. It is proved that any problem solvable by a polynomial-time quantum computation with one-sided bounded error that uses logarithmically many clean qubits can also be solved with exponentially small one-sided error using just two clean qubits, and with polynomially small one-sided error using just one clean qubit. It is further proved, in the case of two-sided bounded error, that any problem solvable by such a computation with a constant gap between completeness and soundness using logarithmically many clean qubits can also be solved with exponentially small two-sided error using just two clean qubits. If only one clean qubit is available, the problem is still solvable with exponentially small error in one of completeness and soundness and polynomially small error in the other. As an immediate consequence of the above result for the two-sided-error case, it follows that the TRACE ESTIMATION problem defined with fixed constant threshold parameters is complete for the classes of problems solvable by polynomial-time quantum computations with completeness 2/3 and soundness 1/3 using logarithmically many clean qubits and just one clean qubit. The techniques used for proving the error-reduction results may be of independent interest in themselves, and one of the technical tools can also be used to show the hardness of weak classical simulations of one-clean-qubit computations (i.e., DQC1 computations).
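
    The connection to TRACE ESTIMATION can be made concrete with the standard one-clean-qubit (DQC1) circuit, whose acceptance probability encodes the real part of tr(U); this is a textbook sketch, not the paper's construction.

        import numpy as np

        def dqc1_acceptance(U):
            # One-clean-qubit trace-estimation circuit: Hadamard on the clean
            # qubit, controlled-U on the totally mixed register, Hadamard,
            # then measure the clean qubit.
            # Pr[outcome 0] = 1/2 + Re tr(U) / (2 d), with d = 2^n.
            d = U.shape[0]
            rho = np.kron(np.array([[1.0, 0.0], [0.0, 0.0]]), np.eye(d) / d)
            H = np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2), np.eye(d))
            cU = np.block([[np.eye(d), np.zeros((d, d))],
                           [np.zeros((d, d)), U]])
            V = H @ cU @ H
            rho = V @ rho @ V.conj().T
            P0 = np.kron(np.diag([1.0, 0.0]), np.eye(d))
            return float(np.real(np.trace(P0 @ rho)))

        print(dqc1_acceptance(np.eye(4)))  # 1.0, since tr(I)/d = 1

    Estimating this acceptance probability to inverse-polynomial precision estimates tr(U)/2^n, which is the trace-estimation task referred to above.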

    Quantum computing and the entanglement frontier - Rapporteur talk at the 25th Solvay Conference

    Quantum information science explores the frontier of highly complex quantum states, the "entanglement frontier". This study is motivated by the observation (widely believed but unproven) that classical systems cannot simulate highly entangled quantum systems efficiently, and we hope to hasten the day when well-controlled quantum systems can perform tasks surpassing what can be done in the classical world. One way to achieve such "quantum supremacy" would be to run an algorithm on a quantum computer which solves a problem with a super-polynomial speedup relative to classical computers, but there may be other ways to achieve it sooner, such as simulating exotic quantum states of strongly correlated matter. To operate a large-scale quantum computer reliably we will need to overcome the debilitating effects of decoherence, which might be done using "standard" quantum hardware protected by quantum error-correcting codes, by exploiting the nonabelian quantum statistics of anyons realized in solid state systems, or by combining both methods. Only by challenging the entanglement frontier will we learn whether Nature provides extravagant resources far beyond what the classical world would allow.