
    The Computational Complexity of Linear Optics

    We give new evidence that quantum computers -- moreover, rudimentary quantum computers built entirely out of linear-optical elements -- cannot be efficiently simulated by classical computers. In particular, we define a model of computation in which identical photons are generated, sent through a linear-optical network, then nonadaptively measured to count the number of photons in each mode. This model is not known or believed to be universal for quantum computation, and indeed, we discuss the prospects for realizing the model using current technology. On the other hand, we prove that the model is able to solve sampling problems and search problems that are classically intractable under plausible assumptions. Our first result says that, if there exists a polynomial-time classical algorithm that samples from the same probability distribution as a linear-optical network, then P^#P=BPP^NP, and hence the polynomial hierarchy collapses to the third level. Unfortunately, this result assumes an extremely accurate simulation. Our main result suggests that even an approximate or noisy classical simulation would already imply a collapse of the polynomial hierarchy. For this, we need two unproven conjectures: the "Permanent-of-Gaussians Conjecture", which says that it is #P-hard to approximate the permanent of a matrix A of independent N(0,1) Gaussian entries, with high probability over A; and the "Permanent Anti-Concentration Conjecture", which says that |Per(A)|>=sqrt(n!)/poly(n) with high probability over A. We present evidence for these conjectures, both of which seem interesting even apart from our application. This paper does not assume knowledge of quantum optics. Indeed, part of its goal is to develop the beautiful theory of noninteracting bosons underlying our model, and its connection to the permanent function, in a self-contained way accessible to theoretical computer scientists.
    Comment: 94 pages, 4 figures
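    As a concrete illustration (not from the paper itself), the matrix permanent central to this abstract can be computed exactly with Ryser's formula, whose O(2^n poly(n)) cost reflects the #P-hardness discussed above; the example values below are chosen purely for demonstration:

    ```python
    from itertools import combinations

    def permanent(a):
        """Permanent of a square matrix via Ryser's inclusion-exclusion
        formula: Per(A) = (-1)^n * sum over nonempty column subsets S of
        (-1)^|S| * prod_i sum_{j in S} a[i][j].  Cost is O(2^n * n^2)."""
        n = len(a)
        total = 0.0
        for r in range(1, n + 1):
            for cols in combinations(range(n), r):
                prod = 1.0
                for row in a:
                    prod *= sum(row[c] for c in cols)
                total += (-1) ** r * prod
        return (-1) ** n * total

    # Per([[1,2],[3,4]]) = 1*4 + 2*3 = 10 (like the determinant, but
    # with all signs positive -- which is what destroys Gaussian elimination)
    print(permanent([[1, 2], [3, 4]]))  # 10.0
    ```

    The sign-free sum is the whole story: the determinant's alternating signs permit cancellation tricks (LU decomposition) that the permanent lacks, which is why no polynomial-time algorithm is known.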

    Spatio-angular Minimum-variance Tomographic Controller for Multi-Object Adaptive Optics systems

    Multi-object astronomical adaptive optics (MOAO) is now a mature wide-field observation mode to enlarge the adaptive-optics-corrected field in a few specific locations over tens of arc-minutes. The work-scope provided by open-loop tomography and pupil conjugation is amenable to a spatio-angular Linear-Quadratic Gaussian (SA-LQG) formulation aiming to provide enhanced correction across the field with improved performance over static reconstruction methods and less stringent computational complexity scaling laws. Starting from our previous work [1], we use stochastic time-progression models coupled to approximate sparse measurement operators to outline a suitable SA-LQG formulation capable of delivering near-optimal correction. Under the spatio-angular framework the wave-fronts are never explicitly estimated in the volume, providing considerable computational savings on 10m-class telescopes and beyond. We find that for Raven, a 10m-class MOAO system with two science channels, the SA-LQG improves the limiting magnitude by two stellar magnitudes when both Strehl-ratio and Ensquared-energy are used as figures of merit. The sky-coverage is therefore improved by a factor of 5.
    Comment: 30 pages, 7 figures, submitted to Applied Optics

    Inefficiency of classically simulating linear optical quantum computing with Fock-state inputs

    Aaronson and Arkhipov recently used computational complexity theory to argue that classical computers very likely cannot efficiently simulate linear, multimode, quantum-optical interferometers with arbitrary Fock-state inputs [Aaronson and Arkhipov, Theory Comput. 9, 143 (2013)]. Here we present an elementary argument that utilizes only techniques from quantum optics. We explicitly construct the Hilbert space for such an interferometer and show that its dimension scales exponentially with all the physical resources. We also show in a simple example just how the Schr\"odinger and Heisenberg pictures of quantum theory, while mathematically equivalent, are not in general computationally equivalent. Finally, we conclude our argument by comparing the symmetry requirements of multiparticle bosonic to fermionic interferometers and, using simple physical reasoning, connect the nonsimulatability of the bosonic device to the complexity of computing the permanent of a large matrix.
    Comment: 7 pages, 1 figure. Published in Phys. Rev. A 89, 022328 (2014)
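    The exponential Hilbert-space scaling mentioned here can be made concrete with a standard stars-and-bars count (a minimal sketch, not code from the paper): the number of Fock basis states for n indistinguishable photons in m modes is the multiset coefficient C(n+m-1, n):

    ```python
    from math import comb

    def fock_dimension(n_photons, m_modes):
        """Dimension of the Fock space for n indistinguishable photons
        distributed over m modes: the multiset coefficient C(n+m-1, n)."""
        return comb(n_photons + m_modes - 1, n_photons)

    # Grow photons and modes together (m = 2n, a typical boson-sampling
    # regime) and watch the basis size explode:
    for n in range(1, 6):
        print(n, fock_dimension(n, 2 * n))
    # 1 2
    # 2 10
    # 3 56
    # 4 330
    # 5 2002
    ```

    Since C(2n-1, n) grows roughly like 4^n/sqrt(n), tracking the full state vector in the Schr\"odinger picture quickly becomes infeasible, consistent with the scaling argument in the abstract.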

    The Equivalence of Sampling and Searching

    In a sampling problem, we are given an input x, and asked to sample approximately from a probability distribution D_x. In a search problem, we are given an input x, and asked to find a member of a nonempty set A_x with high probability. (An example is finding a Nash equilibrium.) In this paper, we use tools from Kolmogorov complexity and algorithmic information theory to show that sampling and search problems are essentially equivalent. More precisely, for any sampling problem S, there exists a search problem R_S such that, if C is any "reasonable" complexity class, then R_S is in the search version of C if and only if S is in the sampling version. As one application, we show that SampP=SampBQP if and only if FBPP=FBQP: in other words, classical computers can efficiently sample the output distribution of every quantum circuit, if and only if they can efficiently solve every search problem that quantum computers can solve. A second application is that, assuming a plausible conjecture, there exists a search problem R that can be solved using a simple linear-optics experiment, but that cannot be solved efficiently by a classical computer unless the polynomial hierarchy collapses. That application will be described in a forthcoming paper with Alex Arkhipov on the computational complexity of linear optics.
    Comment: 16 pages