    Thermalization, Error-Correction, and Memory Lifetime for Ising Anyon Systems

    We consider two-dimensional lattice models that support Ising anyonic excitations and are coupled to a thermal bath. We propose a phenomenological model for the resulting short-time dynamics that includes pair-creation, hopping, braiding, and fusion of anyons. By explicitly constructing topological quantum error-correcting codes for this class of system, we use our thermalization model to estimate the lifetime of the quantum information stored in the encoded spaces. To decode and correct errors in these codes, we adapt several existing topological decoders to the non-Abelian setting. We perform large-scale numerical simulations of these two-dimensional Ising anyon systems and find that the thresholds of these models range between 13% and 25%. To our knowledge, these are the first numerical threshold estimates for quantum codes without explicit additive structure.
    Comment: 34 pages, 9 figures; v2 matches the journal version and corrects a misstatement about the detailed balance condition of our Metropolis simulations. All conclusions from v1 are unaffected by this correction.
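    The Metropolis simulations mentioned above must satisfy detailed balance. As a rough illustration of what such an update can look like (a minimal sketch under simplified assumptions, not the authors' code), the toy model below moves anyons on a periodic lattice with pair-creation, pair-annihilation, and hopping moves; the non-Abelian fusion channels and braiding of actual Ising anyons are omitted, and the energy model is hypothetical.

```python
import math
import random

# Toy anyon gas on an L x L periodic lattice (illustrative sketch only).
# Moves: pair creation (dE = +2*DELTA), pair annihilation (dE = -2*DELTA),
# and hopping (dE = 0). Symmetric proposals plus the Metropolis acceptance
# min(1, exp(-BETA * dE)) satisfy detailed balance for this toy energy.
L = 16
DELTA = 1.0   # hypothetical single-anyon energy gap
BETA = 2.0    # inverse temperature
occupied = set()

def random_site():
    return (random.randrange(L), random.randrange(L))

def random_neighbor(site):
    x, y = site
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    return ((x + dx) % L, (y + dy) % L)

def metropolis_step():
    a = random_site()
    b = random_neighbor(a)
    if a not in occupied and b not in occupied:
        if random.random() < math.exp(-BETA * 2 * DELTA):  # pair creation
            occupied.update([a, b])
    elif a in occupied and b in occupied:
        occupied.difference_update([a, b])                 # pair annihilation
    elif a in occupied:
        occupied.discard(a)                                # hop a -> b
        occupied.add(b)

for _ in range(100_000):
    metropolis_step()
print(f"equilibrium anyon density: {len(occupied) / L**2:.3f}")
```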

    Complexity, parallel computation and statistical physics

    The intuition that a long history is required for the emergence of complexity in natural systems is formalized using the notion of depth. The depth of a system is defined in terms of the number of parallel computational steps needed to simulate it. Depth provides an objective, irreducible measure of history applicable to systems of the kind studied in statistical physics. It is argued that physical complexity cannot occur in the absence of substantial depth and that depth is a useful proxy for physical complexity. The ideas are illustrated for a variety of systems in statistical physics.
    Comment: 21 pages, 7 figures
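    To make the notion concrete with an example of our own (not taken from the paper): a quantity can have a long serial history yet shallow depth if its computation parallelizes well. The sketch below counts the parallel rounds of a tree reduction; summing 1024 numbers takes 1023 serial steps but only 10 parallel rounds, so the sum is shallow in the depth sense.

```python
import math

# Illustrative sketch: "depth" counts parallel rounds, not serial steps.
# A tree reduction sums n numbers in ceil(log2 n) rounds, since in each
# round every processor combines one adjacent pair simultaneously.
def parallel_sum_rounds(values):
    rounds = 0
    while len(values) > 1:
        values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
        rounds += 1
    return values[0], rounds

total, depth = parallel_sum_rounds(list(range(1024)))
print(total, depth)                 # 523776, 10
print(math.ceil(math.log2(1024)))  # 10 parallel rounds vs 1023 serial steps
```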

    Parallel Algorithm and Dynamic Exponent for Diffusion-limited Aggregation

    A parallel algorithm for "diffusion-limited aggregation" (DLA) is described and analyzed from the perspective of computational complexity. The dynamic exponent z of the algorithm is defined with respect to the probabilistic parallel random-access machine (PRAM) model of parallel computation according to T \sim L^{z}, where L is the cluster size, T is the running time, and the algorithm uses a number of processors polynomial in L. It is argued that z = D - D_2/2, where D is the fractal dimension and D_2 is the second generalized dimension. Simulations of DLA are carried out to measure D_2 and to test scaling assumptions employed in the complexity analysis of the parallel algorithm. It is plausible that the parallel algorithm attains the minimum possible value of the dynamic exponent, in which case z characterizes the intrinsic history dependence of DLA.
    Comment: 24 pages, RevTeX, and 2 figures. A major improvement to the algorithm and a smaller dynamic exponent in this version.
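    For orientation, the standard growth process being parallelized is easy to state: walkers are released outside the cluster, random-walk, and stick on first contact. The sketch below is this ordinary serial process only (our illustration; it does not implement the paper's PRAM algorithm, and the release and kill radii are simplifying choices).

```python
import random

# Serial DLA sketch for reference (not the parallel PRAM algorithm).
# A walker is released just outside the cluster, random-walks on the square
# lattice, is re-released if it wanders too far, and sticks on first contact.
cluster = {(0, 0)}
max_radius = 0
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def release(r):
    # random site on the boundary of a square of half-width r
    t = random.randint(-r, r)
    return random.choice([(t, -r), (t, r), (-r, t), (r, t)])

def grow_one():
    global max_radius
    r_in = max_radius + 5
    r_out = 2 * r_in + 20
    x, y = release(r_in)
    while True:
        dx, dy = random.choice(STEPS)
        x, y = x + dx, y + dy
        if max(abs(x), abs(y)) > r_out:      # wandered off: re-release
            x, y = release(r_in)
        elif any((x + ex, y + ey) in cluster for ex, ey in STEPS):
            cluster.add((x, y))              # touched the cluster: stick
            max_radius = max(max_radius, abs(x), abs(y))
            return

for _ in range(200):
    grow_one()
print(f"N = {len(cluster)}, radius = {max_radius}")
```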

    Stochastic gauge: a new technique for quantum simulations

    We review progress towards the direct simulation of quantum dynamics in many-body systems using recently developed stochastic gauge techniques. Master equations, canonical ensemble calculations, and reversible quantum dynamics are compared, as is the general question of strategies for choosing the gauge.
    Comment: 11 pages, 2 figures, to be published in Proceedings of the 16th International Conference on Laser Spectroscopy (ICOLS), Palm Cove, Australia (2003)
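    The common thread of stochastic gauge methods is that a freely chosen "gauge" term can be added to the drift of a stochastic equation without biasing averages, provided each trajectory carries a compensating weight. The toy check below is our own illustration of that invariance on an Ornstein-Uhlenbeck process (not the paper's phase-space formalism): the plain and gauge-weighted estimators of <x^2> should agree within sampling error.

```python
import numpy as np

# Toy gauge-invariance check: adding a constant drift g to dx = -x dt + dW
# leaves weighted averages unchanged if each trajectory carries the Girsanov
# weight exp(-sum g dW - 0.5 * sum g^2 dt). Exact <x^2> at T = 1 starting
# from x = 0 is (1 - exp(-2)) / 2 ~= 0.432.
rng = np.random.default_rng(0)
n_paths, n_steps, dt = 100_000, 200, 0.005   # total time T = 1.0
g = 0.5                                      # arbitrary constant gauge drift

x_plain = np.zeros(n_paths)
x_gauge = np.zeros(n_paths)
log_w = np.zeros(n_paths)

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    x_plain += -x_plain * dt + dW                 # original drift
    x_gauge += (-x_gauge + g) * dt + dW           # gauged drift
    log_w += -g * dW - 0.5 * g**2 * dt            # compensating log-weight

w = np.exp(log_w)
print("plain  <x^2>:", np.mean(x_plain**2))
print("gauged <x^2>:", np.mean(w * x_gauge**2) / np.mean(w))
```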

    Track clustering with a quantum annealer for primary vertex reconstruction at hadron colliders

    Clustering of charged particle tracks along the beam axis is the first step in reconstructing the positions of hadronic interactions, also known as primary vertices, at hadron collider experiments. We use a 2036-qubit D-Wave quantum annealer to perform track clustering in a limited capacity on artificial events where the positions of primary vertices and tracks resemble those measured by the Compact Muon Solenoid experiment at the Large Hadron Collider. The algorithm, which is not a classical-quantum hybrid but relies entirely on quantum annealing, is tested on a variety of event topologies, from 2 primary vertices and 10 tracks up to 5 primary vertices and 15 tracks. It is benchmarked against simulated annealing executed on a commercial CPU and constrained to the same processor time per anneal as the time spent in the physical annealer; performance is found to be comparable for small numbers of vertices, with an intriguing advantage noted for 2 vertices and 16 tracks.
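    The paper encodes this clustering problem as a QUBO for the annealing hardware; the sketch below is only the classical analogue it is benchmarked against, in spirit: plain simulated annealing over track-to-vertex assignments that minimizes the within-cluster pairwise spread of 1-D track positions along the beam axis. The event, noise scale, and cooling schedule here are all hypothetical.

```python
import math
import random

# Classical-analogue sketch: assign 1-D track z-positions to n_vertices
# clusters by simulated annealing, minimizing the summed pairwise distance
# between tracks assigned to the same vertex. Not the paper's QUBO encoding.
random.seed(1)
true_vertices = [-5.0, 0.0, 4.0]                       # hypothetical event
tracks = [v + random.gauss(0, 0.3) for v in true_vertices for _ in range(5)]
n_vertices = len(true_vertices)

def energy(assign):
    e = 0.0
    for i in range(len(tracks)):
        for j in range(i + 1, len(tracks)):
            if assign[i] == assign[j]:
                e += abs(tracks[i] - tracks[j])
    return e

assign = [random.randrange(n_vertices) for _ in tracks]
e = energy(assign)
T = 5.0
for _ in range(20_000):
    i = random.randrange(len(tracks))
    old = assign[i]
    assign[i] = random.randrange(n_vertices)           # propose a relabel
    e_new = energy(assign)
    if e_new <= e or random.random() < math.exp((e - e_new) / T):
        e = e_new                                      # accept
    else:
        assign[i] = old                                # reject
    T *= 0.9995                                        # geometric cooling

clusters = {k: sorted(t for t, a in zip(tracks, assign) if a == k)
            for k in range(n_vertices)}
print(e, {k: [round(t, 2) for t in v] for k, v in clusters.items()})
```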