
    Fast Decoders for Topological Quantum Codes

    We present a family of algorithms, combining real-space renormalization methods and belief propagation, to estimate the free energy of a topologically ordered system in the presence of defects. Such an algorithm is needed to preserve the quantum information stored in the ground space of a topologically ordered system and to decode topological error-correcting codes. For a system of linear size L, our algorithm runs in time log L, compared to the L^6 needed for the minimum-weight perfect matching algorithm previously used in this context, and achieves a higher depolarizing error threshold. Comment: 4 pages, 4 figures
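
    As a rough, hypothetical illustration of the speedup claimed above (not the authors' implementation), the sketch below counts the log2 L coarse-graining levels a renormalization-group decoder would traverse and contrasts the resulting parallel cost with the quoted L^6 scaling of minimum-weight perfect matching.

```python
import math

def decoder_cost_estimates(L):
    """Crude scaling comparison for a lattice of linear size L (illustrative only)."""
    rg_levels = math.ceil(math.log2(L))  # each level coarse-grains fixed-size cells
    rg_parallel_time = rg_levels         # ~O(log L) if all cells at a level run in parallel
    mwpm_time = L ** 6                   # scaling quoted in the abstract for matching
    return rg_levels, rg_parallel_time, mwpm_time

for L in (8, 16, 32, 64):
    levels, rg, mwpm = decoder_cost_estimates(L)
    print(f"L={L:3d}  RG levels={levels:2d}  parallel RG time ~ {rg:2d}  MWPM ~ L^6 = {mwpm:.1e}")
```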

    Preparing ground states of quantum many-body systems on a quantum computer

    Preparing the ground state of a system of interacting classical particles is an NP-hard problem. Thus, there is in general no better algorithm to solve this problem than exhaustively going through all N configurations of the system to determine the one with lowest energy, requiring a running time proportional to N. A quantum computer, if it could be built, could solve this problem in time sqrt(N). Here, we present a powerful extension of this result to the case of interacting quantum particles, demonstrating that a quantum computer can prepare the ground state of a quantum system as efficiently as it does for classical systems. Comment: 7 pages, 1 figure
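
    The sqrt(N) speedup quoted for the classical case is the standard Grover/amplitude-amplification scaling. The toy sketch below (a hypothetical statevector simulation, not the paper's quantum many-body algorithm) marks the lowest-energy configuration of a small classical energy landscape and recovers it with roughly (pi/4)*sqrt(N) Grover iterations.

```python
import numpy as np

def grover_min_search(energies):
    """Toy Grover search that amplifies the lowest-energy configuration.

    Illustrative only: the marked index is computed classically here, whereas a
    real oracle would compare energies against a threshold without knowing it.
    """
    N = len(energies)
    target = int(np.argmin(energies))
    amp = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition
    iterations = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        amp[target] *= -1.0                     # oracle: phase-flip the marked state
        amp = 2.0 * amp.mean() - amp            # diffusion: inversion about the mean
    return target, iterations, float(amp[target] ** 2)

energies = np.random.default_rng(0).normal(size=256)   # 256 toy configurations
target, k, p_success = grover_min_search(energies)
print(f"marked index {target}, {k} iterations (~sqrt(256) = 16), success prob {p_success:.3f}")
```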

    Quantum error correction benchmarks for continuous weak parity measurements

    We present an experimental procedure to determine the usefulness of a measurement scheme for quantum error correction (QEC). A QEC scheme typically requires the ability to prepare entangled states, to carry out multi-qubit measurements, and to perform certain recovery operations conditioned on measurement outcomes. As a consequence, the experimental benchmarking of a full QEC scheme is a tall order, because it requires the conjunction of many elementary components. Our scheme opens a path to experimental benchmarks of individual components of QEC. Our numerical simulations show that certain parity measurements realized in circuit quantum electrodynamics are on the verge of being useful for QEC.
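
    For context on what a parity measurement must enable, here is a minimal, hypothetical Monte Carlo of the textbook 3-qubit bit-flip code with ideal projective ZZ parity checks; it is not the continuous weak measurement scheme studied in the paper, but it shows how syndromes map to corrections and how the logical error rate is suppressed below threshold.

```python
import random

def bit_flip_code_trial(p, rng):
    """One QEC round: independent X errors with probability p, two ZZ parity
    checks, single-qubit correction inferred from the syndrome (toy model)."""
    errors = [rng.random() < p for _ in range(3)]
    s1 = errors[0] ^ errors[1]               # parity of qubits 0 and 1
    s2 = errors[1] ^ errors[2]               # parity of qubits 1 and 2
    flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s1, s2)]
    if flip is not None:
        errors[flip] ^= True
    return any(errors)                       # residual error means a logical failure

rng = random.Random(2)
trials = 100_000
for p in (0.01, 0.05, 0.10):
    fails = sum(bit_flip_code_trial(p, rng) for _ in range(trials))
    print(f"physical p = {p:.2f}   logical error rate = {fails / trials:.4f}")
```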

    Algebraic and information-theoretic conditions for operator quantum error-correction

    Operator quantum error-correction is a technique for robustly storing quantum information in the presence of noise. It generalizes the standard theory of quantum error-correction and provides a unified framework for topics such as quantum error-correction, decoherence-free subspaces, and noiseless subsystems. This paper develops (a) easily applied algebraic and information-theoretic conditions which characterize when operator quantum error-correction is feasible; (b) a representation theorem for a class of noise processes which can be corrected using operator quantum error-correction; and (c) generalizations of the coherent information and the quantum data processing inequality to the setting of operator quantum error-correction. Comment: 4 pages
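
    One of the special cases mentioned above, a decoherence-free subspace, is easy to check numerically. The hypothetical sketch below (not the paper's algebraic conditions) verifies that states in span{|01>, |10>} survive collective dephasing unchanged, while a state outside that subspace loses coherence.

```python
import numpy as np

def collective_dephasing(rho, phases):
    """Average a two-qubit state over random collective Z rotations (toy channel)."""
    out = np.zeros_like(rho)
    for phi in phases:
        u1 = np.diag([1.0, np.exp(1j * phi)])
        u = np.kron(u1, u1)                  # both qubits pick up the same phase
        out += u @ rho @ u.conj().T
    return out / len(phases)

phases = np.random.default_rng(0).uniform(0, 2 * np.pi, 500)

psi_dfs = np.zeros(4, complex); psi_dfs[1] = psi_dfs[2] = 1 / np.sqrt(2)   # (|01>+|10>)/sqrt(2)
rho_dfs = collective_dephasing(np.outer(psi_dfs, psi_dfs.conj()), phases)
print("fidelity inside the DFS :", np.real(psi_dfs.conj() @ rho_dfs @ psi_dfs))

psi_out = np.zeros(4, complex); psi_out[0] = psi_out[3] = 1 / np.sqrt(2)   # (|00>+|11>)/sqrt(2)
rho_out = collective_dephasing(np.outer(psi_out, psi_out.conj()), phases)
print("fidelity outside the DFS:", np.real(psi_out.conj() @ rho_out @ psi_out))
```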

    Variation in fine-scale genetic structure and local dispersal patterns between peripheral populations of a South American passerine bird

    The distribution of suitable habitat influences natal and breeding dispersal at small spatial scales, resulting in strong microgeographic genetic structure. Although environmental variation can promote interpopulation differences in dispersal behavior and local spatial patterns, the effects of distinct ecological conditions on within-species variation in dispersal strategies and in fine-scale genetic structure remain poorly understood. We studied local dispersal and fine-scale genetic structure in the thorn-tailed rayadito (Aphrastura spinicauda), a South American bird that breeds along a wide latitudinal gradient. We combined capture-mark-recapture data from eight breeding seasons with molecular genetics to compare two peripheral populations with contrasting environments in Chile: Navarino Island, a continuous, low-density habitat, and Fray Jorge National Park, a fragmented, densely populated, and more stressful environment. Natal dispersal showed no sex bias in Navarino but was female-biased in the denser population in Fray Jorge. In the latter, male movements were restricted, and some birds seemed to skip breeding in their first year, suggesting habitat saturation. Breeding dispersal was limited in both populations, with males being more philopatric than females. Spatial genetic autocorrelation analyses using 13 polymorphic microsatellite loci confirmed the observed dispersal patterns: a fine-scale genetic structure was detectable only for males in Fray Jorge, for distances up to 450 m. Furthermore, two-dimensional autocorrelation analyses and estimates of genetic relatedness indicated that related males tended to be spatially clustered in this population. Our study shows evidence for context-dependent variation in natal dispersal and corresponding local genetic structure in peripheral populations of this bird. It seems likely that the costs of dispersal are higher in the fragmented, higher-density environment of Fray Jorge, particularly for males. The observed differences in microgeographic genetic structure for rayaditos might reflect the genetic consequences of population-specific responses to contrasting environmental pressures near the range limits of its distribution. http://onlinelibrary.wiley.com/doi/10.1002/ece3.3342/epd
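
    As a simplified, hypothetical illustration of the kind of fine-scale analysis described above (a binned correlogram, not the two-dimensional autocorrelation method actually used), the sketch below averages pairwise relatedness within geographic distance classes; elevated values in the shortest classes would indicate that related individuals are spatially clustered.

```python
import numpy as np

def distance_class_correlogram(coords, relatedness, bin_edges):
    """Mean pairwise relatedness per geographic distance class (toy sketch).

    coords: (n, 2) capture locations in metres; relatedness: (n, n) symmetric
    matrix of pairwise relatedness estimates; bin_edges: class boundaries in metres.
    """
    n = len(coords)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)                       # count each pair once
    d, r = dists[iu], relatedness[iu]
    means = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_class = (d >= lo) & (d < hi)
        means.append(float(r[in_class].mean()) if in_class.any() else float("nan"))
    return means

rng = np.random.default_rng(3)                         # toy data: 50 birds within 1 km
coords = rng.uniform(0, 1000, size=(50, 2))
rel = rng.normal(0, 0.1, size=(50, 50)); rel = (rel + rel.T) / 2
print(distance_class_correlogram(coords, rel, [0, 150, 300, 450, 600, 1000]))
```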

    Optimal and Efficient Decoding of Concatenated Quantum Block Codes

    We consider the problem of optimally decoding a quantum error correction code -- that is, finding the optimal recovery procedure given the outcomes of partial "check" measurements on the system. In general, this problem is NP-hard. However, we demonstrate that for concatenated block codes, the optimal decoding can be efficiently computed using a message-passing algorithm. We compare the performance of the message-passing algorithm to that of the widespread blockwise hard-decoding technique. Our Monte Carlo results, using the 5-qubit code and Steane's code on a depolarizing channel, demonstrate significant advantages of the message-passing algorithm in two respects: 1) optimal decoding increases by as much as 94% the error threshold below which the error correction procedure can be used to reliably send information over a noisy channel; 2) for noise levels below these thresholds, the probability of error after optimal decoding is suppressed at a significantly higher rate, leading to a substantial reduction of the error correction overhead. Comment: Published version
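
    The gain from message passing over blockwise hard decoding already shows up for a purely classical analogue. The hedged sketch below (my own toy example, not the paper's quantum decoder) decodes a twice-concatenated 3-bit repetition code over a bit-flip channel, comparing inner-block majority voting against passing up soft log-likelihood ratios.

```python
import math
import random

def simulate(p, trials=20_000, seed=1):
    """Hard (blockwise majority) vs soft (message-passing) decoding of a
    twice-concatenated 3-bit repetition code over a bit-flip channel (toy model)."""
    rng = random.Random(seed)
    llr0 = math.log((1 - p) / p)                 # per-bit evidence toward "0"
    hard_fail = soft_fail = 0
    for _ in range(trials):
        # logical 0 encoded as nine 0s; each bit flips independently with prob p
        bits = [1 if rng.random() < p else 0 for _ in range(9)]
        blocks = [bits[i:i + 3] for i in range(0, 9, 3)]
        # hard decoding: majority within each inner block, then majority of blocks
        inner_hard = [1 if sum(b) >= 2 else 0 for b in blocks]
        hard_fail += 1 if sum(inner_hard) >= 2 else 0
        # soft decoding: each inner block passes up a summed LLR, the outer code adds them
        inner_llr = [sum(llr0 if x == 0 else -llr0 for x in b) for b in blocks]
        soft_fail += 0 if sum(inner_llr) > 0 else 1
    return hard_fail / trials, soft_fail / trials

for p in (0.05, 0.10, 0.20):
    hard, soft = simulate(p)
    print(f"p = {p:.2f}   hard decoding: {hard:.4f}   message passing: {soft:.4f}")
```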

    A new look at the cosmic ray positron fraction

    The positron fraction in cosmic rays was found to be a steadily increasing function of energy above ~10 GeV. This behaviour contradicts standard astrophysical mechanisms, in which positrons are secondary particles produced in the interactions of primary cosmic rays during their propagation in the interstellar medium. The observed anomaly in the positron fraction has triggered considerable excitement, as it could be interpreted as an indirect signature of the presence of dark matter species in the Galaxy. Alternatively, it could be produced by nearby astrophysical sources, such as pulsars. Both hypotheses are probed in this work in light of the latest AMS-02 positron fraction measurements. The transport of primary and secondary positrons in the Galaxy is described using a semi-analytic two-zone model. MicrOMEGAs is used to model the positron flux generated by dark matter species. The description of the positron fraction from astrophysical sources is based on the pulsar observations included in the ATNF catalogue. We find that the mass of the favoured dark matter candidates is always larger than 500 GeV. The only dark matter species that fulfils the numerous gamma-ray and cosmic microwave background bounds is a particle annihilating into four leptons through a light scalar or vector mediator, with a mixture of tau (75%) and electron (25%) channels, and a mass between 0.5 and 1 TeV. The positron anomaly can also be explained by a single astrophysical source, and a list of five pulsars from the ATNF catalogue is given. These results are obtained with the cosmic ray transport parameters that best fit the B/C ratio. Uncertainties in the propagation parameters turn out to be very significant: in the plane of WIMP annihilation cross section versus mass, for instance, they overshadow the error contours derived from the positron data. Comment: 20 pages, 16 figures, accepted for publication in A&A, corresponds to published version

    Simulating Particle Dispersions in Nematic Liquid-Crystal Solvents

    A new method is presented for mesoscopic simulations of particle dispersions in nematic liquid-crystal solvents. It allows efficient first-principles simulations of dispersions involving many particles, with many-body interactions mediated by the solvent. A simple demonstration is shown for the aggregation process of a two-dimensional dispersion. Comment: 5 pages, 5 figures

    Markov entropy decomposition: a variational dual for quantum belief propagation

    We present a lower bound for the free energy of a quantum many-body system at finite temperature. This lower bound is expressed as a convex optimization problem with linear constraints, and is derived using strong subadditivity of the von Neumann entropy and a relaxation of the consistency condition of local density operators. The dual to this minimization problem leads to a set of quantum belief propagation equations, thus providing a firm theoretical foundation for that approach. The minimization problem is numerically tractable, and we find good agreement with quantum Monte Carlo for the spin-half Heisenberg antiferromagnet in two dimensions. This lower bound complements other variational upper bounds. We discuss applications to Hamiltonian complexity theory and give a generalization of the structure theorem of Hayden, Jozsa, Petz and Winter to trees in an appendix.
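
    The key inequality behind the bound is strong subadditivity of the von Neumann entropy. The following sketch (an illustrative check under my own toy setup, not the paper's convex program) verifies S(AB) + S(BC) >= S(ABC) + S(B) for a random four-qubit pure state with the fourth qubit traced out.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in nats, dropping numerically zero eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

rng = np.random.default_rng(7)
psi = rng.normal(size=16) + 1j * rng.normal(size=16)
psi /= np.linalg.norm(psi)
t = psi.reshape(2, 2, 2, 2)                       # qubits A, B, C, D

rho_ABC = np.einsum('abcd,efgd->abcefg', t, t.conj()).reshape(8, 8)   # trace out D
rho_AB  = np.einsum('abcd,efcd->abef', t, t.conj()).reshape(4, 4)     # trace out C, D
rho_BC  = np.einsum('abcd,aefd->bcef', t, t.conj()).reshape(4, 4)     # trace out A, D
rho_B   = np.einsum('abcd,aecd->be', t, t.conj())                     # trace out A, C, D

lhs = entropy(rho_AB) + entropy(rho_BC)
rhs = entropy(rho_ABC) + entropy(rho_B)
print(f"S(AB)+S(BC) = {lhs:.4f}  >=  S(ABC)+S(B) = {rhs:.4f}  ->  {lhs >= rhs - 1e-9}")
```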

    Many-body Theory vs Simulations for the pseudogap in the Hubbard model

    The opening of a critical-fluctuation-induced pseudogap (or precursor pseudogap) in the one-particle spectral weight of the half-filled two-dimensional Hubbard model is discussed. This pseudogap, which appears in our Monte Carlo simulations, may be obtained from many-body techniques that use Green functions and vertex corrections at the same level of approximation. Self-consistent theories of the Eliashberg type (such as the Fluctuation Exchange Approximation) use renormalized Green functions and bare vertices in a context where there is no Migdal theorem. They do not find the pseudogap, in quantitative and qualitative disagreement with simulations, suggesting that these methods are inadequate for this problem. Differences between precursor pseudogaps and strong-coupling pseudogaps are also discussed. Comment: Accepted, Phys. Rev. B, 15 March 2000. Expanded version of original submission; LaTeX, 8 pages, epsfig, 5 eps figures (last one new). Discussion on fluctuation-induced and strong-coupling-induced pseudogaps expanded