
    New Hardness Results for the Permanent Using Linear Optics

    In 2011, Aaronson gave a striking proof, based on quantum linear optics, that the problem of computing the permanent of a matrix is #P-hard. Aaronson's proof led naturally to hardness of approximation results for the permanent, and it was arguably simpler than Valiant's seminal proof of the same fact in 1979. Nevertheless, it did not establish #P-hardness of the permanent for any class of matrices for which this was not already known. In this paper, we present a collection of new results about matrix permanents that are derived primarily via these linear optical techniques. First, we show that the problem of computing the permanent of a real orthogonal matrix is #P-hard. Much like Aaronson's original proof, this implies that even a multiplicative approximation remains #P-hard to compute. The hardness result even translates to permanents of orthogonal matrices over the finite field F_{p^4} for p != 2, 3. Interestingly, this characterization is tight: in fields of characteristic 2, the permanent coincides with the determinant; in fields of characteristic 3, one can efficiently compute the permanent of an orthogonal matrix by a nontrivial result of Kogan. Finally, we use more elementary arguments to prove #P-hardness for the permanent of a positive semidefinite matrix. This result shows that certain probabilities of boson sampling experiments with thermal states are hard to compute exactly, despite the fact that they can be efficiently sampled by a classical computer.
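
    For readers less familiar with the object at the center of these results: the permanent of an n x n matrix A is per(A) = sum over all permutations sigma of the product A[1,sigma(1)] * ... * A[n,sigma(n)]. The short Python sketch below is only an illustration (not code from the paper); it computes the permanent both directly from this definition and via Ryser's inclusion-exclusion formula, and checks the two on a small real orthogonal (rotation) matrix of the kind the first result concerns.

        import itertools
        import numpy as np

        def permanent_naive(A):
            # Permanent from the definition: sum over all n! permutations (O(n! * n)).
            n = A.shape[0]
            return sum(np.prod([A[i, s[i]] for i in range(n)])
                       for s in itertools.permutations(range(n)))

        def permanent_ryser(A):
            # Ryser's inclusion-exclusion formula: O(2^n * n^2) as written here.
            n = A.shape[0]
            total = 0.0
            for subset in range(1, 1 << n):               # nonempty sets of columns
                cols = [j for j in range(n) if subset >> j & 1]
                total += (-1) ** len(cols) * np.prod(A[:, cols].sum(axis=1))
            return (-1) ** n * total

        # Example: a small real orthogonal (rotation) matrix, as in the first result above.
        theta = 0.3
        Q = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        assert np.isclose(permanent_naive(Q), permanent_ryser(Q))

    Ryser's formula brings the cost down from factorial to exponential time, which is the best known regime for exact permanent computation and part of why #P-hardness results for restricted matrix classes are interesting.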

    A Linear-Optical Proof that the Permanent is #P-Hard

    One of the crown jewels of complexity theory is Valiant's 1979 theorem that computing the permanent of an n x n matrix is #P-hard. Here we show that, by using the model of linear-optical quantum computing---and in particular, a universality theorem due to Knill, Laflamme, and Milburn---one can give a different and arguably more intuitive proof of this theorem.
    Comment: 12 pages, 2 figures, to appear in Proceedings of the Royal Society A. doi: 10.1098/rspa.2011.023

    The Computational Complexity of Linear Optics

    We give new evidence that quantum computers -- moreover, rudimentary quantum computers built entirely out of linear-optical elements -- cannot be efficiently simulated by classical computers. In particular, we define a model of computation in which identical photons are generated, sent through a linear-optical network, then nonadaptively measured to count the number of photons in each mode. This model is not known or believed to be universal for quantum computation, and indeed, we discuss the prospects for realizing the model using current technology. On the other hand, we prove that the model is able to solve sampling problems and search problems that are classically intractable under plausible assumptions. Our first result says that, if there exists a polynomial-time classical algorithm that samples from the same probability distribution as a linear-optical network, then P^#P = BPP^NP, and hence the polynomial hierarchy collapses to the third level. Unfortunately, this result assumes an extremely accurate simulation. Our main result suggests that even an approximate or noisy classical simulation would already imply a collapse of the polynomial hierarchy. For this, we need two unproven conjectures: the "Permanent-of-Gaussians Conjecture", which says that it is #P-hard to approximate the permanent of a matrix A of independent N(0,1) Gaussian entries, with high probability over A; and the "Permanent Anti-Concentration Conjecture", which says that |Per(A)| >= sqrt(n!)/poly(n) with high probability over A. We present evidence for these conjectures, both of which seem interesting even apart from our application. This paper does not assume knowledge of quantum optics. Indeed, part of its goal is to develop the beautiful theory of noninteracting bosons underlying our model, and its connection to the permanent function, in a self-contained way accessible to theoretical computer scientists.
    Comment: 94 pages, 4 figures
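
    As a rough numerical illustration of the quantity in the Permanent Anti-Concentration Conjecture (this is only a sketch, not an experiment from the paper, and it uses real rather than the paper's complex Gaussian entries for simplicity), one can draw small Gaussian matrices and compare |Per(A)| with sqrt(n!), reusing a Ryser-style permanent routine like the one sketched earlier:

        import math
        import numpy as np

        def permanent_ryser(A):
            # Ryser's formula; exponential in n, but fine for the tiny n used here.
            n = A.shape[0]
            total = 0.0
            for subset in range(1, 1 << n):
                cols = [j for j in range(n) if subset >> j & 1]
                total += (-1) ** len(cols) * np.prod(A[:, cols].sum(axis=1))
            return (-1) ** n * total

        rng = np.random.default_rng(0)
        n, trials = 6, 1000
        ratios = [abs(permanent_ryser(rng.standard_normal((n, n)))) / math.sqrt(math.factorial(n))
                  for _ in range(trials)]
        print(f"median |Per(A)| / sqrt(n!) over {trials} draws at n={n}: {np.median(ratios):.3f}")

    The conjecture asserts that, for random Gaussian A, this ratio is rarely smaller than 1/poly(n); the tiny experiment above only illustrates what the ratio looks like at small n and proves nothing either way.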

    Quantum Sampling Problems, BosonSampling and Quantum Supremacy

    There is a large body of evidence for the potential of greater computational power using information carriers that are quantum mechanical rather than governed by the laws of classical mechanics. But the question of the exact nature of the power contributed by quantum mechanics remains only partially answered. Furthermore, there exists doubt over the practicality of achieving a quantum computation large enough to definitively demonstrate quantum supremacy. Recently, the study of computational problems that produce samples from probability distributions has both added to our understanding of the power of quantum algorithms and lowered the requirements for demonstrating fast quantum algorithms. The proposed quantum sampling problems do not require a quantum computer capable of universal operations and also permit physically realistic errors in their operation. This is an encouraging step towards an experimental demonstration of quantum algorithmic supremacy. In this paper, we will review sampling problems and the arguments that have been used to deduce when sampling problems are hard for classical computers to simulate. Two classes of quantum sampling problems that demonstrate the supremacy of quantum algorithms are BosonSampling and IQP Sampling. We will present the details of these classes and recent experimental progress towards demonstrating quantum supremacy in BosonSampling.
    Comment: Survey paper first submitted for publication in October 2016. 10 pages, 4 figures, 1 table

    BosonSampling with Lost Photons

    BosonSampling is an intermediate model of quantum computation where linear-optical networks are used to solve sampling problems expected to be hard for classical computers. Since these devices are not expected to be universal for quantum computation, it remains an open question whether any error-correction techniques can be applied to them, and thus it is important to investigate how robust the model is under natural experimental imperfections, such as losses and imperfect control of parameters. Here we investigate the complexity of BosonSampling under photon losses---more specifically, the case where an unknown subset of the photons are randomly lost at the sources. We show that, if k out of n photons are lost, then we cannot sample classically from a distribution that is 1/n^{\Theta(k)}-close (in total variation distance) to the ideal distribution, unless a BPP^NP machine can estimate the permanents of Gaussian matrices in n^{O(k)} time. In particular, if k is constant, this implies that simulating lossy BosonSampling is hard for a classical computer, under exactly the same complexity assumption used for the original lossless case.
    Comment: 12 pages. v2: extended concluding section
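
    The closeness measure in this result is total variation distance; for finite distributions it is simply half the L1 distance between the probability vectors, as in the small reminder sketch below (an illustration, not code from the paper):

        import numpy as np

        def total_variation_distance(p, q):
            # TVD between two distributions given as probability vectors of equal length.
            p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
            return 0.5 * np.abs(p - q).sum()

        # Toy example: an "ideal" distribution versus a slightly perturbed "lossy" one.
        ideal = np.array([0.50, 0.30, 0.20])
        lossy = np.array([0.45, 0.33, 0.22])
        print(total_variation_distance(ideal, lossy))   # 0.05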