
    Fault Secure Encoder and Decoder for NanoMemory Applications

    Memory cells have been protected from soft errors for more than a decade; due to the increase in soft error rates in logic circuits, the encoder and decoder circuitry around the memory blocks has become susceptible to soft errors as well and must also be protected. We introduce a new approach to designing fault-secure encoder and decoder circuitry for memory designs. The key novel contribution of this paper is identifying and defining a new class of error-correcting codes whose redundancy makes the design of fault-secure detectors (FSD) particularly simple. We further quantify the importance of protecting encoder and decoder circuitry against transient errors, illustrating a scenario where the system failure rate (FIT) is dominated by the failure rate of the encoder and decoder. We prove that Euclidean geometry low-density parity-check (EG-LDPC) codes have the fault-secure detector capability. Using some of the smaller EG-LDPC codes, we can tolerate bit or nanowire defect rates of 10% and fault rates of 10^(-18) upsets/device/cycle, achieving a FIT rate at or below one for the entire memory system and a memory density of 10^(11) bit/cm^2 with a nanowire pitch of 10 nm for memory blocks of 10 Mb or larger. Larger EG-LDPC codes can achieve even higher reliability and lower area overhead.
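
    The core of a fault-secure detector is a syndrome check: recompute the parity checks on the stored word and flag any nonzero result. The sketch below illustrates that idea using the (7,4) Hamming parity-check matrix as a stand-in, since the paper's EG-LDPC matrices are not reproduced here; the matrix and example codeword are assumptions for illustration only.

```python
# Minimal sketch of a syndrome-based error detector. The (7,4) Hamming
# parity-check matrix below is a hypothetical stand-in; the paper uses
# Euclidean-geometry LDPC codes instead.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def detect(word: np.ndarray) -> bool:
    """Return True if the syndrome is nonzero, i.e. an error is detected."""
    syndrome = H @ word % 2
    return bool(syndrome.any())

# A valid codeword passes the check; flipping one bit trips the detector.
codeword = np.array([1, 1, 1, 0, 0, 0, 0], dtype=np.uint8)  # weight-3 Hamming codeword
assert not detect(codeword)
corrupted = codeword.copy()
corrupted[2] ^= 1
assert detect(corrupted)
```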

    Tailored codes for small quantum memories

    We demonstrate that small quantum memories, realized via quantum error correction in multi-qubit devices, can benefit substantially from choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10,000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code under realistic experimental noise, and without any increase in experimental complexity, as we demonstrate by a comparison using the observed error model of a recent 7-qubit trapped-ion experiment. Comment: 6 pages, 2 figures, supplementary material; v2 published version.
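
    To make the comparison concrete, the following sketch estimates a logical failure rate for the Steane code under independent bit- and phase-flip noise with different rates, i.e. the biased model described above. The lookup decoder, noise rates, and failure criterion are illustrative assumptions; the paper's tailored code and its near-optimal decoding are not reproduced here.

```python
# Hedged sketch: Monte Carlo estimate of the Steane code's logical failure
# rate under biased, independent X/Z noise. The rates px, pz and the simple
# lookup decoder are illustrative assumptions, not the paper's tailored code.
import numpy as np

rng = np.random.default_rng(0)

# Parity-check matrix of the (7,4) Hamming code; the Steane code uses it for
# both its X-type and Z-type stabilizers.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

# Map each nonzero syndrome to the single flipped qubit (columns of H).
SYNDROME_TO_QUBIT = {tuple(H[:, q]): q for q in range(7)}

def residual_is_logical(error: np.ndarray) -> bool:
    """Decode one classical error pattern and report a logical failure.

    After the minimum-weight (single-flip) correction the residual is a
    Hamming codeword; it is a logical operator of the Steane code exactly
    when its weight is odd.
    """
    syndrome = tuple(H @ error % 2)
    correction = np.zeros(7, dtype=np.uint8)
    if any(syndrome):
        correction[SYNDROME_TO_QUBIT[syndrome]] = 1
    residual = error ^ correction
    return bool(residual.sum() % 2)

def logical_failure_rate(px: float, pz: float, shots: int = 100_000) -> float:
    """X and Z errors decode independently for a CSS code, so count either."""
    fails = 0
    for _ in range(shots):
        ex = (rng.random(7) < px).astype(np.uint8)
        ez = (rng.random(7) < pz).astype(np.uint8)
        fails += residual_is_logical(ex) or residual_is_logical(ez)
    return fails / shots

# Example: biased noise with phase flips ten times more likely than bit flips.
print(logical_failure_rate(px=0.001, pz=0.01))
```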

    Design of a fault tolerant airborne digital computer. Volume 1: Architecture

    This volume is concerned with the architecture of a fault tolerant digital computer for an advanced commercial aircraft. All of the computations of the aircraft, including those presently carried out by analogue techniques, are to be carried out in this digital computer. Among the important qualities of the computer are the following: (1) The capacity is to be matched to the aircraft environment. (2) The reliability is to be selectively matched to the criticality and deadline requirements of each of the computations. (3) The system is to be readily expandable and contractible. (4) The design is to be appropriate to post-1975 technology. Three candidate architectures are discussed and assessed in terms of the above qualities. Of the three candidates, a newly conceived architecture, Software Implemented Fault Tolerance (SIFT), provides the best match to the above qualities. In addition, SIFT is particularly simple and believable. The other candidates, the Bus Checker System (BUCS), also newly conceived in this project, and the Hopkins multiprocessor, are potentially more efficient than SIFT in the use of redundancy, but are otherwise not as attractive.
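
    The central mechanism behind software-implemented fault tolerance is to run each critical computation on several processors and vote on the results in software. The sketch below shows only that voting idea; the replica functions and fault model are hypothetical and do not reflect the details of the SIFT design.

```python
# Hedged sketch of software-level majority voting over replicated results.
from collections import Counter
from typing import Callable, List, TypeVar

T = TypeVar("T")

def voted_result(replicas: List[Callable[..., T]], *args) -> T:
    """Run every replica and return the value reported by a majority."""
    results = [replica(*args) for replica in replicas]
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: too many faulty replicas")
    return value

# Example: one of three replicas is faulty and returns a corrupted value.
healthy = lambda x: x * x
faulty = lambda x: x * x + 1
print(voted_result([healthy, healthy, faulty], 4))  # -> 16
```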

    Multi-path Summation for Decoding 2D Topological Codes

    Fault tolerance is a prerequisite for scalable quantum computing. Architectures based on 2D topological codes are effective for near-term implementations of fault tolerance. To obtain high performance with these architectures, we require a decoder which can adapt to the wide variety of error models present in experiments. The typical approach to the problem of decoding the surface code is to reduce it to minimum-weight perfect matching in a way that provides a suboptimal threshold error rate, and is specialized to correct a specific error model. Recently, optimal threshold error rates for a variety of error models have been obtained by methods which do not use minimum-weight perfect matching, showing that such thresholds can be achieved in polynomial time. It is an open question whether these results can also be achieved by minimum-weight perfect matching. In this work, we use belief propagation and a novel algorithm for producing edge weights to increase the utility of minimum-weight perfect matching for decoding surface codes. This allows us to correct depolarizing errors using the rotated surface code, obtaining a threshold of 17.76 ± 0.02%. This is larger than the threshold achieved by previous matching-based decoders (14.88 ± 0.02%), though still below the known upper bound of ~18.9%. Comment: 19 pages, 13 figures; published in Quantum, available at https://quantum-journal.org/papers/q-2018-10-19-102
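
    The generic recipe hinted at above is to convert per-edge error probabilities, such as those a belief-propagation pass could supply, into log-likelihood weights and then pair up syndrome defects by minimum-weight perfect matching. The sketch below shows that conversion on a made-up defect graph; the probabilities are assumptions, and networkx stands in for a production matcher rather than the paper's decoder.

```python
# Hedged sketch: probability-to-weight conversion followed by minimum-weight
# perfect matching of syndrome defects. The defect graph and probabilities
# are illustrative; networkx replaces a dedicated matcher such as Blossom V.
import math
import networkx as nx

def matching_weight(p: float) -> float:
    """Log-likelihood weight: less likely error chains cost more."""
    return -math.log(p / (1.0 - p))

# Hypothetical defects (violated checks) and the error probability of the
# most likely chain connecting each pair, e.g. from belief propagation.
pair_probs = {
    ("d0", "d1"): 0.10,
    ("d0", "d2"): 0.01,
    ("d0", "d3"): 0.02,
    ("d1", "d2"): 0.03,
    ("d1", "d3"): 0.01,
    ("d2", "d3"): 0.12,
}

G = nx.Graph()
for (u, v), p in pair_probs.items():
    # max_weight_matching maximizes, so negate to obtain minimum total weight.
    G.add_edge(u, v, weight=-matching_weight(p))

pairs = nx.max_weight_matching(G, maxcardinality=True)
print(pairs)  # pairs d0 with d1 and d2 with d3 (lowest total weight)
```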

    Magic state distillation with punctured polar codes

    We present a scheme for magic state distillation using punctured polar codes. Our results build on recent work by Bardet et al. (ISIT, 2016), who discovered that polar codes can be described algebraically as decreasing monomial codes. Using this powerful framework, we construct tri-orthogonal quantum codes (Bravyi et al., PRA, 2012) that can be used to distill magic states for the T gate. An advantage of these codes is that they permit the use of the successive cancellation decoder, whose time complexity scales as O(N log N). We supplement this with numerical simulations for the erasure channel and the dephasing channel. We obtain estimates of the dimensions and error rates of the resulting codes for block sizes up to 2^20 for the erasure channel and 2^16 for the dephasing channel. The dimension of the triply-even codes we obtain is shown to scale like O(N^0.8) for the binary erasure channel at noise rate 0.01 and O(N^0.84) for the dephasing channel at noise rate 0.001. The corresponding bit error rates drop to roughly 8 × 10^(-28) for the erasure channel and 7 × 10^(-15) for the dephasing channel, respectively. Comment: 18 pages, 4 figures
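
    For background, the sketch below shows the standard polar-code construction for the binary erasure channel: track the Bhattacharyya parameter (erasure probability) of each synthetic channel through the polarization recursion and freeze the least reliable ones. It illustrates the classical machinery the scheme builds on, not the paper's puncturing or tri-orthogonal construction; the code length and rate in the example are arbitrary.

```python
# Hedged sketch of polar-code construction for the binary erasure channel via
# the Bhattacharyya recursion. Parameters in the example are arbitrary.

def bec_bhattacharyya(n: int, erasure_prob: float) -> list[float]:
    """Erasure probabilities of the N = 2**n synthetic channels."""
    z = [erasure_prob]
    for _ in range(n):
        nxt = []
        for zi in z:
            nxt.append(2 * zi - zi * zi)  # "minus" (degraded) channel
            nxt.append(zi * zi)           # "plus"  (upgraded) channel
        z = nxt
    return z

def frozen_set(n: int, erasure_prob: float, k: int) -> set[int]:
    """Freeze all but the k most reliable synthetic channels."""
    z = bec_bhattacharyya(n, erasure_prob)
    best = sorted(range(len(z)), key=lambda i: z[i])[:k]
    return set(range(len(z))) - set(best)

# Example: a length-16 polar code carrying 8 information bits at erasure rate 0.01.
print(sorted(frozen_set(n=4, erasure_prob=0.01, k=8)))
```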