8,080 research outputs found

    Graph Concatenation for Quantum Codes

    Get PDF
    Graphs are closely related to quantum error-correcting codes: every stabilizer code is locally equivalent to a graph code, and every codeword stabilized code can be described by a graph and a classical code. For the construction of good quantum codes of relatively large block length, concatenated quantum codes and their generalizations play an important role. We develop a systematic method for constructing concatenated quantum codes based on "graph concatenation", where graphs representing the inner and outer codes are concatenated via a simple graph operation called "generalized local complementation." Our method applies to both binary and non-binary concatenated quantum codes as well as their generalizations. Comment: 26 pages, 12 figures. Figures of the concatenated [[5,1,3]] and [[7,1,3]] codes are added. Submitted to JM
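    For intuition, local complementation at a vertex v complements the subgraph induced by the neighbourhood of v; the paper's "generalized" variant is defined there, but a minimal sketch of plain local complementation (illustrative only, not the paper's operation) looks like this:

```python
import numpy as np

def local_complement(adj, v):
    """Local complementation at vertex v: toggle every edge between
    two distinct neighbours of v, leaving all other edges unchanged."""
    adj = adj.copy()
    nbrs = np.flatnonzero(adj[v])          # neighbours of v
    for i in range(len(nbrs)):
        for j in range(i + 1, len(nbrs)):
            a, b = nbrs[i], nbrs[j]
            adj[a, b] ^= 1                 # toggle edge a-b
            adj[b, a] ^= 1
    return adj

# Example: path graph 0-1-2; complementing at vertex 1 adds the edge 0-2,
# turning the path into a triangle.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=np.uint8)
print(local_complement(A, 1))
```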

    On the Construction and Decoding of Concatenated Polar Codes

    Full text link
    A scheme for concatenating the recently invented polar codes with interleaved block codes is considered. By concatenating binary polar codes with interleaved Reed-Solomon codes, we prove that the proposed concatenation scheme captures the capacity-achieving property of polar codes, while having a significantly better error-decay rate. We show that for any ε > 0 and total frame length N, the parameters of the scheme can be set such that the frame error probability is less than 2^{-N^{1-ε}}, while the scheme is still capacity achieving. This improves upon 2^{-N^{0.5-ε}}, the frame error probability of Arikan's polar codes. We also propose decoding algorithms for concatenated polar codes, which significantly improve the error-rate performance at finite block lengths while preserving the low decoding complexity.
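    To see how much faster 2^{-N^{1-ε}} decays than 2^{-N^{0.5-ε}}, a small sketch comparing the two exponents (the values of N and ε are illustrative, not taken from the paper):

```python
def log2_frame_error_bound(N, eps, concatenated):
    """Return log2 of the frame-error bound: 2^{-N^{1-eps}} for the
    concatenated scheme versus 2^{-N^{0.5-eps}} for plain polar codes."""
    exponent = (1 - eps) if concatenated else (0.5 - eps)
    return -(N ** exponent)

N, eps = 2 ** 20, 0.1
print("plain polar  :", log2_frame_error_bound(N, eps, False))  # -256
print("concatenated :", log2_frame_error_bound(N, eps, True))   # -262144
```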

    The Error-Pattern-Correcting Turbo Equalizer

    Full text link
    The error-pattern correcting code (EPCC) is incorporated in the design of a turbo equalizer (TE) with the aim of correcting dominant error events of the inter-symbol interference (ISI) channel at the output of its matching Viterbi detector. By targeting the low Hamming-weight interleaved errors of the outer convolutional code, which are responsible for low Euclidean-weight errors in the Viterbi trellis, the turbo equalizer with an error-pattern correcting code (TE-EPCC) exhibits a much lower bit-error rate (BER) floor compared to the conventional non-precoded TE, especially for high-rate applications. A maximum-likelihood upper bound is developed on the BER floor of the TE-EPCC for a generalized two-tap ISI channel, in order to study the TE-EPCC's signal-to-noise ratio (SNR) gain for various channel conditions and design parameters. In addition, the SNR gain of the TE-EPCC relative to an existing precoded TE is compared to demonstrate the present TE's superiority for short interleaver lengths and high coding rates. Comment: This work has been submitted to the special issue of the IEEE Transactions on Information Theory titled "Facets of Coding Theory: from Algorithms to Networks". This work was supported in part by NSF Theoretical Foundation Grant 0728676.
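    For context, a two-tap ISI channel mixes each transmitted symbol with its predecessor before the detector sees it; a minimal simulation sketch (the tap weight and noise level are assumptions for illustration, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def two_tap_isi(symbols, a=0.5, noise_std=0.1):
    """y_k = x_k + a * x_{k-1} + n_k : each received sample combines the
    current symbol with the previous one, plus additive Gaussian noise."""
    x = np.asarray(symbols, dtype=float)
    x_prev = np.concatenate(([0.0], x[:-1]))
    return x + a * x_prev + noise_std * rng.standard_normal(len(x))

bits = rng.integers(0, 2, size=8)
tx = 2 * bits - 1                      # BPSK mapping {0,1} -> {-1,+1}
print(two_tap_isi(tx))                 # ISI-corrupted observations
```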

    End-to-End Error-Correcting Codes on Networks with Worst-Case Symbol Errors

    Full text link
    The problem of coding for networks experiencing worst-case symbol errors is considered. We argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. A new transform metric for errors under the considered model is proposed. Using this metric, we replicate many of the classical results from coding theory. Specifically, we prove new Hamming-type, Plotkin-type, and Elias-Bassalygo-type upper bounds on the network capacity. A commensurate lower bound is shown based on Gilbert-Varshamov-type codes for error correction. The GV codes used to attain the lower bound can be non-coherent, that is, they do not require prior knowledge of the network topology. We also propose a computationally efficient concatenation scheme. The rate achieved by our concatenated codes is characterized by a Zyablov-type lower bound. We provide a generalized minimum-distance decoding algorithm which decodes up to half the minimum distance of the concatenated codes. The end-to-end nature of our design enables our codes to be overlaid on the classical distributed random linear network codes [1]. Furthermore, the potentially intensive computation at internal nodes for link-by-link error correction is unnecessary under our design. Comment: Submitted for publication. arXiv admin note: substantial text overlap with arXiv:1108.239
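    The Zyablov-type bound in the paper is stated for the proposed transform metric; as a point of reference, the classical binary (Hamming-metric) Zyablov bound for concatenated codes can be evaluated numerically, for example:

```python
import math

def h2(p):
    """Binary entropy function."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def zyablov_rate(delta, steps=10_000):
    """Classical Zyablov bound: best achievable rate of a binary
    concatenated code with relative distance delta, obtained by
    optimizing over the inner relative distance d0 in (delta, 1/2)."""
    best = 0.0
    for i in range(1, steps):
        d0 = delta + (0.5 - delta) * i / steps
        best = max(best, (1 - h2(d0)) * (1 - delta / d0))
    return best

print(zyablov_rate(0.1))   # ~0.14: achievable rate at relative distance 0.1
```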

    Quantum Channel Capacity of Very Noisy Channels

    Full text link
    We present a family of additive quantum error-correcting codes whose capacity exceeds that of quantum random coding (hashing) for very noisy channels. These codes provide non-zero capacity in a depolarizing channel for fidelity parameter f when f > 0.80944. Random coding has non-zero capacity only for f > 0.81071; by analogy to the classical Shannon coding limit, this value had previously been conjectured to be a lower bound. We use the method introduced by Shor and Smolin of concatenating a non-random (cat) code within a random code to obtain good codes. The cat code with block size five is shown to be optimal for single concatenation. The best known multiple-concatenated code we found has a block size of 25. We derive a general relation between the capacity attainable by these concatenation schemes and the coherent information of the inner code states. Comment: 31 pages including epsf postscript figures. Replaced to correct important typographical errors in equations 36, 37 and in the text.
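    The quoted random-coding threshold can be checked against the standard hashing rate 1 - S(f), where S(f) is the entropy of the Bell-diagonal state with eigenvalues (f, (1-f)/3, (1-f)/3, (1-f)/3); a quick numerical sketch using this standard formula (not code from the paper):

```python
import math

def hashing_rate(f):
    """Hashing bound for the depolarizing channel with fidelity f:
    rate = 1 - S, with S the entropy of the Bell-diagonal state."""
    probs = [f] + [(1 - f) / 3] * 3
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return 1 - entropy

# Bisect for the fidelity at which the hashing rate crosses zero.
lo, hi = 0.75, 0.95
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if hashing_rate(mid) < 0 else (lo, mid)
print(round(hi, 5))   # ~0.81071, matching the threshold quoted above
```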

    Concatenated Turbo/LDPC codes for deep space communications: performance and implementation

    Get PDF
    Deep space communications require error correction codes able to reach extremely low bit-error rates, possibly with a steep waterfall region and without an error floor. Several schemes have been proposed in the literature to achieve these goals. Most of them rely on the concatenation of different codes, which leads to high hardware implementation complexity and poor resource sharing. This work proposes a scheme based on the concatenation of non-custom LDPC and turbo codes that achieves excellent error correction performance. Moreover, since both LDPC and turbo codes can be decoded with the BCJR algorithm, our preliminary results show that an efficient hardware architecture with high resource reuse can be designed.