500 research outputs found

    Ultrahigh Error Threshold for Surface Codes with Biased Noise

    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is in fact at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
    Comment: 6 pages, 5 figures, comments welcome; v2 includes minor improvements to the numerical results, additional references, and an extended discussion; v3 published version (incorporating supplementary material into main body of paper)
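    As a rough illustration of the hashing-bound comparison mentioned in this abstract, the sketch below (our own Python illustration, not the authors' code) numerically locates the zero-rate hashing bound for a biased Pauli channel, i.e. the total error rate p at which the channel entropy H(p_I, p_X, p_Y, p_Z) reaches 1 bit. It assumes the common convention from this line of work that the bias is eta = p_Z / (p_X + p_Y) with p_X = p_Y, so eta = 1/2 is depolarizing noise and eta → ∞ is pure dephasing.

```python
import math

def hashing_bound_threshold(eta, tol=1e-9):
    """Zero-rate hashing bound for a Pauli channel with bias eta = p_Z / (p_X + p_Y),
    assuming p_X = p_Y. Returns the total error rate p at which the channel entropy
    H(p_I, p_X, p_Y, p_Z) equals 1 bit, found by bisection on [0, 0.5]."""
    def entropy(p):
        p_z = p * eta / (eta + 1.0)
        p_x = p_y = p / (2.0 * (eta + 1.0))
        probs = [1.0 - p, p_x, p_y, p_z]
        return -sum(q * math.log2(q) for q in probs if q > 0.0)

    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if entropy(mid) < 1.0:
            lo = mid  # below the bound: the channel is still correctable at rate > 0
        else:
            hi = mid
    return 0.5 * (lo + hi)

# eta = 0.5 is standard depolarizing noise; large eta approaches pure dephasing.
for eta in [0.5, 1, 3, 10, 100, 1e6]:
    print(f"bias {eta:>9}: hashing-bound error rate ~ {hashing_bound_threshold(eta):.3f}")
```

    For pure dephasing this bound approaches 50%, and at bias 10 it sits close to the 28.2(2)% threshold quoted above, which is the sense in which the code's performance tracks the hashing bound.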

    Tailored codes for small quantum memories

    We demonstrate that small quantum memories, realized via quantum error correction in multi-qubit devices, can benefit substantially by choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10,000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code with realistic experimental noise, and without any increase in experimental complexity, as we demonstrate by comparing against the error model observed in a recent 7-qubit trapped ion experiment.
    Comment: 6 pages, 2 figures, supplementary material; v2 published version
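    For concreteness, one common way to parameterize the biased noise model described above (independent bit flips and phase flips at different rates) is as a product of an X-flip channel and a Z-flip channel. The short sketch below is our own illustration, not code from the paper; it computes the induced single-qubit Pauli error probabilities.

```python
def pauli_probs(p_x, p_z):
    """Single-qubit Pauli error probabilities when a bit flip (X) occurs with
    probability p_x and, independently, a phase flip (Z) occurs with probability
    p_z. A simultaneous X and Z flip acts as a Y error (up to a phase)."""
    return {
        "I": (1 - p_x) * (1 - p_z),
        "X": p_x * (1 - p_z),
        "Z": (1 - p_x) * p_z,
        "Y": p_x * p_z,
    }

# Example: phase flips ten times more likely than bit flips.
print(pauli_probs(p_x=0.01, p_z=0.10))
```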

    Toric codes and quantum doubles from two-body Hamiltonians

    We present here a procedure to obtain the Hamiltonians of the toric code and Kitaev quantum double models as the low-energy limits of entirely two-body Hamiltonians. Our construction makes use of a new type of perturbation gadget based on error-detecting subsystem codes. The procedure is motivated by a projected entangled pair states (PEPS) description of the target models, and reproduces the target models' behavior using only couplings that are natural in terms of the original Hamiltonians. This allows our construction to capture the symmetries of the target models.
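    As background for why a perturbative, two-body construction is needed at all: the toric code's native stabilizer terms are four-body operators. The sketch below is a standard textbook construction (our own illustration, not taken from the paper); it enumerates the supports of the star and plaquette terms on a small L x L torus, with one qubit per edge, showing that every term touches exactly four qubits.

```python
from itertools import product

def toric_code_stabilizers(L):
    """Supports of the star (X-type) and plaquette (Z-type) stabilizers of the
    toric code on an L x L torus, with one qubit per edge (2*L*L qubits total).
    Each stabilizer is returned as the list of the 4 qubit indices it acts on."""
    h = lambda i, j: 2 * ((i % L) * L + (j % L))      # horizontal edge leaving vertex (i, j)
    v = lambda i, j: 2 * ((i % L) * L + (j % L)) + 1  # vertical edge leaving vertex (i, j)

    stars = [sorted({h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)})
             for i, j in product(range(L), repeat=2)]
    plaquettes = [sorted({h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)})
                  for i, j in product(range(L), repeat=2)]
    return stars, plaquettes

stars, plaquettes = toric_code_stabilizers(2)
print("star supports:     ", stars)
print("plaquette supports:", plaquettes)
```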

    Tailoring surface codes for highly biased noise

    The surface code, with a simple modification, exhibits ultra-high error correction thresholds when the noise is biased towards dephasing. Here, we identify features of the surface code responsible for these ultra-high thresholds. We provide strong evidence that the threshold error rate of the surface code tracks the hashing bound exactly for all biases, and show how to exploit these features to achieve significant improvement in logical failure rate. First, we consider the infinite bias limit, meaning pure dephasing. We prove that the error threshold of the modified surface code for pure dephasing noise is 50%, i.e., the code can tolerate dephasing right up to the point at which all qubits are fully dephased, and this threshold can be achieved by a polynomial-time decoding algorithm. We demonstrate that the sub-threshold behavior of the code depends critically on the precise shape and boundary conditions of the code. That is, for rectangular surface codes with standard rough/smooth open boundaries, it is controlled by the parameter g = gcd(j, k), where j and k are the dimensions of the surface code lattice. We demonstrate a significant improvement in logical failure rate with pure dephasing for co-prime codes that have g = 1, and for closely related rotated codes, which have a modified boundary. The effect is dramatic: the same logical failure rate achievable with a square surface code and n physical qubits can be obtained with a co-prime or rotated surface code using only O(√n) physical qubits. Finally, we use approximate maximum likelihood decoding to demonstrate that this improvement persists for general Pauli noise biased towards dephasing. In particular, comparing with a square surface code, we observe a significant improvement in logical failure rate against biased noise using a rotated surface code with approximately half the number of physical qubits.
    Comment: 18+4 pages, 24 figures; v2 includes additional coauthor (ASD) and new results on the performance of surface codes in the finite-bias regime, obtained with beveled surface codes and an improved tensor network decoder; v3 published version
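    The role of g = gcd(j, k) described above can be made concrete with a trivial check. The sketch below (our own illustration) computes g for a few lattice shapes and flags the co-prime cases (g = 1), which the abstract identifies as having markedly better sub-threshold logical failure rates under pure dephasing.

```python
from math import gcd

# Lattice dimensions (j, k) of a rectangular surface code with standard
# rough/smooth open boundaries; per the abstract, sub-threshold performance
# under pure dephasing is controlled by g = gcd(j, k).
for j, k in [(8, 8), (9, 9), (8, 9), (9, 10), (13, 21)]:
    g = gcd(j, k)
    tag = "co-prime (g = 1)" if g == 1 else f"g = {g}"
    print(f"{j:>2} x {k:<2} surface code: {tag}")
```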

    Degradation of a quantum directional reference frame as a random walk

    We investigate whether the degradation of a quantum directional reference frame through repeated use can be modeled as a classical direction undergoing a random walk on a sphere. We demonstrate that the behaviour of the fidelity for a degrading quantum directional reference frame, defined as the average probability of correctly determining the orientation of a test system, can be fit precisely using such a model. Physically, the mechanism for the random walk is the uncontrollable back-action on the reference frame due to its use in a measurement of the direction of another system. However, we find that the magnitude of the step size of this random walk is not given by our classical model and must be determined from the full quantum description.
    Comment: 5 pages, no figures. Comments are welcome. v2: several changes to clarify the key results. v3: journal reference added, acknowledgements and references updated
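    As a toy illustration of the classical model described above (not the authors' code), the sketch below simulates a direction on the unit sphere taking small, fixed-size random angular steps and tracks its average alignment with the initial direction. In the paper's picture, this loss of alignment plays the role of the fidelity decay, with the step size a free parameter to be fixed by the full quantum treatment.

```python
import numpy as np

def random_walk_on_sphere(n_steps, step_angle, n_trials=2000, rng=None):
    """Average overlap <z . z0> of a unit vector z undergoing a random walk on the
    sphere, where each step moves z by a fixed angle `step_angle` along a great
    circle in a uniformly random tangent direction."""
    rng = np.random.default_rng() if rng is None else rng
    overlaps = np.zeros(n_steps + 1)
    for _ in range(n_trials):
        z0 = np.array([0.0, 0.0, 1.0])
        z = z0.copy()
        overlaps[0] += 1.0
        for t in range(1, n_steps + 1):
            # Pick a random unit tangent direction orthogonal to z, then step along it.
            r = rng.normal(size=3)
            tangent = r - np.dot(r, z) * z
            tangent /= np.linalg.norm(tangent)
            z = np.cos(step_angle) * z + np.sin(step_angle) * tangent
            overlaps[t] += np.dot(z, z0)
    return overlaps / n_trials

avg = random_walk_on_sphere(n_steps=200, step_angle=0.05)
print("average alignment after 0, 100, 200 steps:", avg[0], avg[100], avg[200])
```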

    Morbidity and Mortality After Living Kidney Donation, 1999–2001: Survey of United States Transplant Centers

    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/75563/1/j.1038-5282.2001.00400.x-i1.pd

    Random subspaces for encryption based on a private shared Cartesian frame

    A private shared Cartesian frame is a novel form of private shared correlation that allows for both private classical and quantum communication. Cryptography using a private shared Cartesian frame has the remarkable property that asymptotically, if perfect privacy is demanded, the private classical capacity is three times the private quantum capacity. We demonstrate that if the requirement for perfect privacy is relaxed, then it is possible to use the properties of random subspaces to nearly triple the private quantum capacity, almost closing the gap between the private classical and quantum capacities.
    Comment: 9 pages, published version