
    Quantum rejection sampling

    Rejection sampling is a well-known method to sample from a target distribution, given the ability to sample from another distribution. The method was first formalized by von Neumann (1951) and has many applications in classical computing. We define a quantum analogue of rejection sampling: given a black box producing a coherent superposition of (possibly unknown) quantum states with some amplitudes, the problem is to prepare a coherent superposition of the same states, albeit with different target amplitudes. The main result of this paper is a tight characterization of the query complexity of this quantum state generation problem. We exhibit an algorithm, which we call quantum rejection sampling, and analyze its cost using semidefinite programming. Our proof of a matching lower bound is based on the automorphism principle, which allows one to symmetrize any algorithm over the automorphism group of the problem. Our main technical innovation is an extension of the automorphism principle to continuous groups that arise for quantum state generation problems where the oracle encodes unknown quantum states instead of just classical data. Furthermore, we illustrate how quantum rejection sampling may be used as a primitive in designing quantum algorithms by providing three different applications. We first show that it was implicitly used in the quantum algorithm for linear systems of equations by Harrow, Hassidim and Lloyd. Secondly, we show that it can be used to speed up the main step in the quantum Metropolis sampling algorithm by Temme et al. Finally, we derive a new quantum algorithm for the hidden shift problem of an arbitrary Boolean function and relate its query complexity to "water-filling" of the Fourier spectrum. Comment: 19 pages, 5 figures, minor changes and a more compact style (to appear in proceedings of ITCS 2012)
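
    The classical primitive referenced above (von Neumann's rejection sampling) is easy to state in code. The following Python sketch illustrates that classical method only, not the paper's quantum algorithm; the triangular target density in the usage line is a hypothetical example.

        import random

        # Classical rejection sampling: draw from a target density p using a
        # proposal density q and an envelope constant M with p(x) <= M*q(x).
        def rejection_sample(sample_q, q_pdf, p_pdf, M):
            while True:
                x = sample_q()                    # candidate from the proposal
                u = random.random()               # uniform accept/reject coin
                if u * M * q_pdf(x) <= p_pdf(x):  # accept with prob. p(x)/(M*q(x))
                    return x

        # Example: target p(x) = 2x on [0, 1], uniform proposal q(x) = 1, M = 2.
        samples = [rejection_sample(random.random, lambda x: 1.0, lambda x: 2.0 * x, 2.0)
                   for _ in range(10000)]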

    Information-theoretic interpretation of quantum error-correcting codes

    Quantum error-correcting codes are analyzed from an information-theoretic perspective centered on quantum conditional and mutual entropies. This approach parallels the description of classical error correction in Shannon theory, while clarifying the differences between classical and quantum codes. More specifically, it is shown how quantum information theory accounts for the fact that "redundant" information can be distributed over quantum bits even though this does not violate the quantum "no-cloning" theorem. Such a remarkable feature, which has no counterpart for classical codes, is related to the property that the ternary mutual entropy vanishes for a tripartite system in a pure state. This information-theoretic description of quantum coding is used to derive the quantum analogue of the Singleton bound on the number of logical bits that can be preserved by a code of fixed length that can recover a given number of errors. Comment: 14 pages, RevTeX, 8 Postscript figures. Added appendix. To appear in Phys. Rev.
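
    For orientation, the quantum Singleton bound referred to above takes the following standard form for an [[n, k, d]] code; this is the commonly quoted form, not a statement copied from the paper, whose derivation and notation may differ:

        n - k \ge 2(d - 1), \qquad \text{hence} \qquad k \le n - 4t \quad \text{for a code correcting } t \text{ errors } (d \ge 2t + 1).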

    Quantum Stabilizer Codes and Classical Linear Codes

    We show that within any quantum stabilizer code there lurks a classical binary linear code with similar error-correcting capabilities, thereby demonstrating new connections between quantum codes and classical codes. Using this result -- which applies to degenerate as well as nondegenerate codes -- previously established necessary conditions for classical linear codes can be easily translated into necessary conditions for quantum stabilizer codes. Examples of specific consequences are: for a quantum channel subject to a delta-fraction of errors, the best asymptotic capacity attainable by any stabilizer code cannot exceed H(1/2 + sqrt(2*delta*(1-2*delta))); and, for the depolarizing channel with fidelity parameter delta, the best asymptotic capacity attainable by any stabilizer code cannot exceed 1 - H(delta). Comment: 17 pages, RevTeX, with two figures
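
    The two capacity bounds quoted in the abstract are straightforward to evaluate numerically. The sketch below simply plugs a few illustrative values of delta into the expressions exactly as written above (H is the binary entropy); it is a reading aid, not code from the paper.

        import math

        def H(p):
            """Binary entropy in bits."""
            if p in (0.0, 1.0):
                return 0.0
            return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

        def error_fraction_bound(delta):
            # channel subject to a delta-fraction of errors
            return H(0.5 + math.sqrt(2 * delta * (1 - 2 * delta)))

        def depolarizing_bound(delta):
            # depolarizing channel with fidelity parameter delta
            return 1 - H(delta)

        for delta in (0.01, 0.05, 0.10):
            print(delta, error_fraction_bound(delta), depolarizing_bound(delta))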

    Fast Quantum Modular Exponentiation

    We present a detailed analysis of the impact of architectural features and possible concurrent gate execution on modular exponentiation. Various arithmetic algorithms are evaluated for execution time, potential concurrency, and space tradeoffs. We find that, to exponentiate an n-bit number with storage space 100n (twenty times the minimum of 5n), we can execute modular exponentiation two hundred to seven hundred times faster than optimized versions of the basic algorithms, depending on architecture, for n=128. Addition on a neighbor-only architecture is limited to O(n) time, whereas non-neighbor architectures can reach O(log n), demonstrating that the physical characteristics of a computing device have an important impact on both real-world running time and asymptotic behavior. Our results will help guide experimental implementations of quantum algorithms and devices. Comment: to appear in PRA 71(5); RevTeX, 12 pages, 12 figures; v2 revision is substantial, with new algorithmic variants, much shorter and clearer text, and revised equation formatting
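
    The arithmetic being accelerated is ordinary modular exponentiation, a^x mod N, computed by repeated squaring. The Python sketch below shows only that classical recurrence, which quantum circuits of this kind implement reversibly; it does not reproduce any of the paper's circuit constructions.

        def mod_exp(a, x, N):
            """Compute a^x mod N via the binary expansion of x (square-and-multiply)."""
            result = 1
            base = a % N
            while x > 0:
                if x & 1:                 # multiply in a^(2^i) when bit i of x is set
                    result = (result * base) % N
                base = (base * base) % N  # square for the next bit
                x >>= 1
            return result

        assert mod_exp(7, 1234, 2**128 + 1) == pow(7, 1234, 2**128 + 1)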

    Non-adaptive Measurement-based Quantum Computation and Multi-party Bell Inequalities

    Quantum correlations exhibit behaviour that cannot be explained by a local hidden variable picture of the world. In quantum information, they are also used as resources for information processing tasks, such as Measurement-based Quantum Computation (MQC). In MQC, universal quantum computation can be achieved via adaptive measurements on a suitable entangled resource state. In this paper, we look at a version of MQC in which we remove the adaptivity of measurements and aim to understand what computational abilities still remain in the resource. We show that there are explicit connections between this model of computation and the question of non-classicality in quantum correlations. We demonstrate this by focussing on deterministic computation of Boolean functions, in which natural generalisations of the Greenberger-Horne-Zeilinger (GHZ) paradox emerge; we then explore probabilistic computation, via which multipartite Bell inequalities can be defined. We use this correspondence to define families of multi-party Bell inequalities, which we show to have a number of interesting contrasting properties. Comment: 13 pages, 4 figures, final version accepted for publication
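
    The GHZ paradox that the abstract generalises can be checked numerically in a few lines. The sketch below is an independent illustration of the standard three-qubit paradox (XXX gives +1 with certainty while XYY, YXY and YYX each give -1, an assignment no local hidden variable model can reproduce); it is not code or notation from the paper.

        import numpy as np

        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

        ghz = np.zeros(8, dtype=complex)
        ghz[0] = ghz[7] = 1 / np.sqrt(2)      # (|000> + |111>)/sqrt(2)

        def expectation(op1, op2, op3):
            M = np.kron(np.kron(op1, op2), op3)
            return np.real(ghz.conj() @ M @ ghz)

        print(expectation(X, X, X))   # -> +1
        print(expectation(X, Y, Y))   # -> -1
        print(expectation(Y, X, Y))   # -> -1
        print(expectation(Y, Y, X))   # -> -1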

    Spitzer Infrared Spectrograph Observations of M, L, and T Dwarfs

    We present the first mid-infrared spectra of brown dwarfs, together with observations of a low-mass star. Our targets are the M3.5 dwarf GJ 1001A, the L8 dwarf DENIS-P J0255-4700, and the T1/T6 binary system epsilon Indi Ba/Bb. As expected, the mid-infrared spectral morphology of these objects changes rapidly with spectral class due to the changes in atmospheric chemistry resulting from their differing effective temperatures and atmospheric structures. By taking advantage of the unprecedented sensitivity of the Infrared Spectrograph on the Spitzer Space Telescope, we have detected the 7.8 micron methane and 10 micron ammonia bands for the first time in brown dwarf spectra. Comment: 4 pages, 2 figures

    Localization, Coulomb interactions and electrical heating in single-wall carbon nanotubes/polymer composites

    Low-field and high-field transport properties of carbon nanotubes/polymer composites are investigated for different tube fractions. Above the percolation threshold f_c=0.33%, transport is due to hopping of localized charge carriers with a localization length xi=10-30 nm. Coulomb interactions associated with a soft gap Delta_CG=2.5 meV are present at low temperature close to f_c. We argue that this gap originates from the Coulomb charging energy, which is partly screened by adjacent bundles. The high-field conductivity is described within an electrical heating scheme. All the results suggest that using composites close to the percolation threshold may be a way to access intrinsic properties of the nanotubes through experiments at a macroscopic scale. Comment: 4 pages, 5 figures, Submitted to Phys. Rev.
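
    For context, hopping of localized carriers in the presence of a soft Coulomb gap is conventionally described by the Efros-Shklovskii form of the conductivity. The expression below is the standard textbook form, quoted for orientation only; it is not taken from the paper and the paper's analysis may differ:

        \sigma(T) = \sigma_0 \exp\!\left[-\left(\frac{T_{\mathrm{ES}}}{T}\right)^{1/2}\right], \qquad k_B T_{\mathrm{ES}} \sim \frac{e^2}{4\pi\varepsilon\varepsilon_0\,\xi},

    where xi is the localization length quoted in the abstract.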

    Kepler Presearch Data Conditioning II - A Bayesian Approach to Systematic Error Correction

    With the unprecedented photometric precision of the Kepler Spacecraft, significant systematic and stochastic errors on transit signal levels are observable in the Kepler photometric data. These errors, which include discontinuities, outliers, systematic trends and other instrumental signatures, obscure astrophysical signals. The Presearch Data Conditioning (PDC) module of the Kepler data analysis pipeline tries to remove these errors while preserving planet transits and other astrophysically interesting signals. The completely new noise and stellar variability regime observed in Kepler data poses a significant problem to standard cotrending methods such as SYSREM and TFA. Variable stars are often of particular astrophysical interest, so the preservation of their signals is of significant importance to the astrophysical community. We present a Bayesian Maximum A Posteriori (MAP) approach where a subset of highly correlated and quiet stars is used to generate a cotrending basis vector set, which is in turn used to establish a range of "reasonable" robust fit parameters. These robust fit parameters are then used to generate a Bayesian Prior and a Bayesian Posterior Probability Distribution Function (PDF) which, when maximized, finds the best fit that simultaneously removes systematic effects while reducing the signal distortion and noise injection that commonly afflict simple least-squares (LS) fitting. A numerical and empirical approach is taken where the Bayesian Prior PDFs are generated from fits to the light curve distributions themselves. Comment: 43 pages, 21 figures. Submitted for publication in PASP. Also see the companion paper "Kepler Presearch Data Conditioning I - Architecture and Algorithms for Error Correction in Kepler Light Curves" by Martin C. Stumpe, et al.
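
    As a reading aid, the following Python sketch shows the generic form of a MAP linear fit with a Gaussian prior on cotrending coefficients, contrasted with unconstrained least squares. It is a hypothetical toy illustration of the general technique named in the abstract (all function names and arguments are invented here), not the Kepler PDC implementation.

        import numpy as np

        def map_fit(basis, flux, prior_mean, prior_var, noise_var):
            """MAP coefficients for flux ~ basis @ theta with a diagonal Gaussian prior.

            basis      : (n_samples, n_vectors) cotrending basis vectors
            prior_mean : (n_vectors,) prior coefficient means (e.g., from quiet stars)
            prior_var  : (n_vectors,) prior coefficient variances
            noise_var  : scalar observation-noise variance
            """
            A = basis.T @ basis / noise_var + np.diag(1.0 / prior_var)
            b = basis.T @ flux / noise_var + prior_mean / prior_var
            return np.linalg.solve(A, b)

        def ls_fit(basis, flux):
            """Unregularized least-squares coefficients, for comparison."""
            return np.linalg.lstsq(basis, flux, rcond=None)[0]

    The prior pulls the fitted coefficients toward values typical of quiet, highly correlated stars, which is the mechanism by which a MAP fit can suppress the overfitting and signal distortion of a plain LS fit.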

    Not Just a Theory--The Utility of Mathematical Models in Evolutionary Biology

    Progress in science often begins with verbal hypotheses meant to explain why certain biological phenomena exist. An important purpose of mathematical models in evolutionary research, as in many other fields, is to act as “proof-of-concept” tests of the logic in verbal explanations, paralleling the way in which empirical data are used to test hypotheses. Because not all subfields of biology use mathematics for this purpose, misunderstandings of the function of proof-of-concept modeling are common. In the hope of facilitating communication, we discuss the role of proof-of-concept modeling in evolutionary biology.