
    The Computational Complexity of Linear Optics

    We give new evidence that quantum computers -- moreover, rudimentary quantum computers built entirely out of linear-optical elements -- cannot be efficiently simulated by classical computers. In particular, we define a model of computation in which identical photons are generated, sent through a linear-optical network, then nonadaptively measured to count the number of photons in each mode. This model is not known or believed to be universal for quantum computation, and indeed, we discuss the prospects for realizing the model using current technology. On the other hand, we prove that the model is able to solve sampling problems and search problems that are classically intractable under plausible assumptions. Our first result says that, if there exists a polynomial-time classical algorithm that samples from the same probability distribution as a linear-optical network, then P^#P = BPP^NP, and hence the polynomial hierarchy collapses to the third level. Unfortunately, this result assumes an extremely accurate simulation. Our main result suggests that even an approximate or noisy classical simulation would already imply a collapse of the polynomial hierarchy. For this, we need two unproven conjectures: the "Permanent-of-Gaussians Conjecture", which says that it is #P-hard to approximate the permanent of a matrix A of independent N(0,1) Gaussian entries, with high probability over A; and the "Permanent Anti-Concentration Conjecture", which says that |Per(A)| >= sqrt(n!)/poly(n) with high probability over A. We present evidence for these conjectures, both of which seem interesting even apart from our application. This paper does not assume knowledge of quantum optics. Indeed, part of its goal is to develop the beautiful theory of noninteracting bosons underlying our model, and its connection to the permanent function, in a self-contained way accessible to theoretical computer scientists.
    Comment: 94 pages, 4 figures
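    Since the hardness argument runs through the matrix permanent, a short illustration of why the exact quantity is expensive may help: the best general-purpose algorithm, Ryser's formula, still takes on the order of 2^n * n arithmetic operations. The sketch below is illustrative only; the i.i.d. complex Gaussian ensemble is one natural reading of the conjectured N(0,1) entries, and n = 8 is an arbitrary size.

    ```python
    # Minimal sketch: Ryser's O(2^n * n) inclusion-exclusion formula for the
    # permanent, the quantity whose #P-hardness underlies the boson sampling
    # argument (each n-photon outcome probability is |Per(A_S)|^2 for an
    # n x n submatrix A_S of the interferometer unitary).
    import math
    from itertools import combinations
    import numpy as np

    def permanent_ryser(A: np.ndarray) -> complex:
        """Permanent of a square matrix via Ryser's formula."""
        n = A.shape[0]
        total = 0j
        for k in range(1, n + 1):
            sign = (-1) ** k
            for cols in combinations(range(n), k):
                row_sums = A[:, cols].sum(axis=1)  # per-row sum over chosen columns
                total += sign * np.prod(row_sums)
        return (-1) ** n * total

    rng = np.random.default_rng(0)
    n = 8
    # i.i.d. complex Gaussian entries (an assumption about the intended ensemble)
    A = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    # Anti-concentration conjecture: |Per(A)| >= sqrt(n!)/poly(n) w.h.p. over A
    print(abs(permanent_ryser(A)), math.sqrt(math.factorial(n)))
    ```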

    Spatio-angular Minimum-variance Tomographic Controller for Multi-Object Adaptive Optics systems

    Multi-object astronomical adaptive optics (MOAO) is now a mature wide-field observation mode that enlarges the adaptive-optics-corrected field at a few specific locations spread over tens of arc-minutes. The framework provided by open-loop tomography and pupil conjugation is amenable to a spatio-angular Linear-Quadratic Gaussian (SA-LQG) formulation aiming to provide enhanced correction across the field, with improved performance over static reconstruction methods and less stringent computational-complexity scaling laws. Starting from our previous work [1], we use stochastic time-progression models coupled to approximate sparse measurement operators to outline a suitable SA-LQG formulation capable of delivering near-optimal correction. Under the spatio-angular framework the wave-fronts are never explicitly estimated in the volume, providing considerable computational savings on 10m-class telescopes and beyond. We find that for Raven, a 10m-class MOAO system with two science channels, the SA-LQG controller improves the limiting magnitude by two stellar magnitudes when both Strehl ratio and ensquared energy are used as figures of merit. The sky coverage is therefore improved by a factor of 5.
    Comment: 30 pages, 7 figures, submitted to Applied Optics
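    To make the LQG machinery concrete, the sketch below shows the generic predict/update recursion such a controller runs each frame, with a scalar AR(1) time-progression model per mode. All operators and dimensions (H, a, Q, R) are illustrative placeholders, not the paper's sparse spatio-angular operators; the point is only the structure that the SA-LQG formulation specializes.

    ```python
    # Hypothetical, minimal predict/update cycle of an LQG wavefront estimator.
    # None of the matrices below come from the Raven system; they are stand-ins.
    import numpy as np

    rng = np.random.default_rng(1)
    n_modes, n_meas = 10, 20
    a = 0.99                                 # AR(1) temporal coefficient
    H = rng.normal(size=(n_meas, n_modes))   # stand-in measurement operator
    Q = 1e-3 * np.eye(n_modes)               # process-noise covariance
    R = 1e-2 * np.eye(n_meas)                # measurement-noise covariance

    def lqg_step(x_hat, P, y):
        # Predict: propagate the modal state with the temporal model.
        x_pred = a * x_hat
        P_pred = a ** 2 * P + Q
        # Update: Kalman gain, then correct with the new measurement y.
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (y - H @ x_pred)
        P_new = (np.eye(n_modes) - K @ H) @ P_pred
        return x_new, P_new

    x_hat, P = np.zeros(n_modes), np.eye(n_modes)
    y = H @ rng.normal(size=n_modes) + 0.1 * rng.normal(size=n_meas)  # synthetic slopes
    x_hat, P = lqg_step(x_hat, P, y)  # x_hat would drive the DM in open loop
    ```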

    Requirement for quantum computation

    We identify "proper quantum computation" with computational processes that cannot be efficiently simulated on a classical computer. For optical quantum computation, we establish "no-go" theorems for classes of quantum optical experiments that cannot yield proper quantum computation, and we identify requirements for optical proper quantum computation that correspond to violations of the assumptions underpinning the no-go theorems.
    Comment: 11 pages, no figures
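    One standard illustration of the kind of class such no-go theorems cover: circuits built from Gaussian states, linear-optical (Gaussian) operations, and Gaussian measurements can be simulated classically in polynomial time by tracking only first and second moments. The sketch below is a textbook example under assumed conventions (hbar = 2 units, quadrature ordering x1, p1, x2, p2), not a construction from the paper.

    ```python
    # Why Gaussian linear optics alone is classically easy: a Gaussian state is
    # fully described by a mean vector mu and covariance matrix V, and a linear
    # optical element acts by a symplectic matrix S: mu -> S mu, V -> S V S^T.
    import numpy as np

    def beamsplitter_symplectic(theta: float) -> np.ndarray:
        """Symplectic matrix of a two-mode beamsplitter (order x1, p1, x2, p2)."""
        c, s = np.cos(theta), np.sin(theta)
        I2 = np.eye(2)
        return np.block([[c * I2, s * I2],
                         [-s * I2, c * I2]])

    mu = np.array([2.0, 0.0, 0.0, 0.0])     # mode 1 carries a coherent state
    V = np.eye(4)                           # two vacuum-noise-limited modes
    S = beamsplitter_symplectic(np.pi / 4)  # 50:50 beamsplitter
    mu_out, V_out = S @ mu, S @ V @ S.T     # the entire update is poly(modes) time
    print(mu_out)  # the displacement splits between the two output modes
    ```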

    Boson sampling with displaced single-photon Fock states versus single-photon-added coherent states---The quantum-classical divide and computational-complexity transitions in linear optics

    Boson sampling is a specific quantum computation that is likely hard to implement efficiently on a classical computer. The task is to sample the output photon-number distribution of a linear-optical interferometric network that is fed with single-photon Fock-state inputs. A question that has been asked is whether the sampling problems associated with other input quantum states of light (other than the Fock states) to a linear-optical network, together with suitable output detection strategies, are of similar computational complexity to boson sampling. We consider states that differ from the Fock states by a displacement operation, namely the displaced Fock states and the photon-added coherent states. It is easy to show that the sampling problem associated with displaced single-photon Fock states and a displaced photon-number detection scheme is in the same complexity class as boson sampling for all values of displacement. On the other hand, we show that the sampling problem associated with single-photon-added coherent states and the same displaced photon-number detection scheme exhibits a computational-complexity transition: it is just as hard as boson sampling when the input coherent amplitudes are sufficiently small, but becomes classically simulatable in the limit of large coherent amplitudes.
    Comment: 7 pages, 3 figures; published version
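    The two endpoints of this transition can be checked numerically: the normalized single-photon-added coherent state (SPACS) overlaps the ordinary coherent state |alpha> with fidelity |alpha|^2/(1 + |alpha|^2), so it equals the single photon |1> at alpha = 0 and becomes indistinguishable from a coherent state (a classically simulatable input) as |alpha| grows. In the sketch below the Fock-space cutoff and the sample amplitudes are arbitrary choices for illustration.

    ```python
    # Truncated-Fock-space check that a^dagger|alpha> interpolates between the
    # single photon (alpha = 0, boson-sampling-hard) and a near-coherent state
    # (large alpha, classically simulatable).
    import numpy as np
    from math import factorial

    def coherent(alpha: float, dim: int) -> np.ndarray:
        n = np.arange(dim)
        amps = alpha ** n / np.sqrt([float(factorial(int(k))) for k in n])
        return np.exp(-abs(alpha) ** 2 / 2) * amps

    def spacs(alpha: float, dim: int) -> np.ndarray:
        c = coherent(alpha, dim)
        out = np.zeros(dim)
        out[1:] = np.sqrt(np.arange(1, dim)) * c[:-1]  # apply a^dagger, shift up
        return out / np.linalg.norm(out)               # renormalize

    for alpha in [0.0, 0.5, 2.0, 5.0]:
        f = abs(np.dot(coherent(alpha, 60), spacs(alpha, 60))) ** 2
        print(f"alpha={alpha}: fidelity={f:.4f}  analytic={alpha**2/(1+alpha**2):.4f}")
    ```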