
    Optimal Thresholds for GMD Decoding with (L+1)/L-extended Bounded Distance Decoders

    We investigate threshold-based multi-trial decoding of concatenated codes with an inner Maximum-Likelihood decoder and an outer error/erasure (L+1)/L-extended Bounded Distance decoder, i.e. a decoder which corrects e errors and t erasures if e(L+1)/L + t <= d - 1, where d is the minimum distance of the outer code and L is a positive integer. This is a generalization of Forney's GMD decoding, which was considered only for L = 1, i.e. outer Bounded Minimum Distance decoding. One important example of (L+1)/L-extended Bounded Distance decoders is decoding of L-Interleaved Reed-Solomon codes. Our main contribution is a threshold location formula, which allows unreliable inner decoding results to be erased optimally for a given number of decoding trials and parameter L. Here, optimal means that the residual codeword error probability of the concatenated code is minimized. We give an estimate of this probability for any number of decoding trials.
    Comment: Accepted for the 2010 IEEE International Symposium on Information Theory, Austin, TX, USA, June 13-18, 2010. 5 pages, 2 figures.
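
    As a minimal illustration of the correction condition quoted above (not of the paper's threshold-location formula), the helper below simply checks whether e errors and t erasures satisfy e(L+1)/L + t <= d - 1; the function name and example parameters are ours.

        from fractions import Fraction

        def extended_bd_corrects(e, t, d, L):
            """True iff e errors and t erasures satisfy e*(L+1)/L + t <= d - 1,
            i.e. the (L+1)/L-extended bounded distance decoder succeeds."""
            return Fraction(e * (L + 1), L) + t <= d - 1

        # Outer distance d = 17: with L = 1 (classical BMD) only 8 errors are
        # correctable without erasures; with L = 3 an error costs 4/3 instead
        # of 2, so up to 12 errors are correctable.
        print(extended_bd_corrects(8, 0, 17, 1), extended_bd_corrects(9, 0, 17, 1))    # True False
        print(extended_bd_corrects(12, 0, 17, 3), extended_bd_corrects(13, 0, 17, 3))  # True False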

    Binary Message Passing Decoding of Product Codes Based on Generalized Minimum Distance Decoding

    We propose a binary message passing decoding algorithm for product codes based on generalized minimum distance decoding (GMDD) of the component codes, where the last stage of the GMDD makes a decision based on the Hamming distance metric. The proposed algorithm closes half of the gap between conventional iterative bounded distance decoding (iBDD) and turbo product decoding based on the Chase-Pyndiah algorithm, at the expense of some increase in complexity. Furthermore, the proposed algorithm entails only a limited increase in data flow compared to iBDD.
    Comment: Invited paper to the 53rd Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, March 2019. arXiv admin note: text overlap with arXiv:1806.1090
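
    For context, here is a hedged sketch of Forney-style GMD decoding of a single component code, with the last-stage selection done in the Hamming metric as described above: erase an increasing number of least reliable positions, run an error/erasure bounded distance decoder per trial, and keep the candidate closest to the hard-decision word. The bdd_errors_erasures callable is a placeholder, not an interface from the paper.

        import numpy as np

        def gmd_decode(hard_bits, reliabilities, d, bdd_errors_erasures):
            """Generalized minimum distance decoding of one component code.

            hard_bits:           hard-decision word (length n, values 0/1)
            reliabilities:       per-bit reliability, e.g. |LLR| (length n)
            d:                   minimum distance of the component code
            bdd_errors_erasures: callable(word, erasure_mask) -> codeword or None
            """
            order = np.argsort(reliabilities)        # least reliable positions first
            best, best_dist = None, np.inf
            for num_erased in range(0, d, 2):        # GMD trials: 0, 2, ... erasures
                erase = np.zeros(len(hard_bits), dtype=bool)
                erase[order[:num_erased]] = True
                cand = bdd_errors_erasures(hard_bits, erase)
                if cand is None:                     # decoding failure in this trial
                    continue
                dist = np.count_nonzero(cand != hard_bits)
                if dist < best_dist:                 # Hamming-metric selection
                    best, best_dist = cand, dist
            return best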

    Decoding by Sampling: A Randomized Lattice Algorithm for Bounded Distance Decoding

    Despite its reduced complexity, lattice reduction-aided decoding exhibits a widening gap to maximum-likelihood (ML) performance as the dimension increases. To improve its performance, this paper presents randomized lattice decoding based on Klein's sampling technique, which is a randomized version of Babai's nearest plane algorithm (i.e., successive interference cancellation (SIC)). To find the closest lattice point, Klein's algorithm is used to sample some lattice points and the closest among those samples is chosen. Lattice reduction increases the probability of finding the closest lattice point, and only needs to be run once during pre-processing. Further, the sampling can operate very efficiently in parallel. The technical contribution of this paper is two-fold: we analyze and optimize the decoding radius of sampling decoding, resulting in better error performance than Klein's original algorithm, and we propose a very efficient implementation of random rounding. Of particular interest is that a fixed gain in the decoding radius compared to Babai's decoding can be achieved at polynomial complexity. The proposed decoder is useful for moderate dimensions where sphere decoding becomes computationally intensive, while lattice reduction-aided decoding starts to suffer considerable loss. Simulation results demonstrate that near-ML performance is achieved with a moderate number of samples, even when the dimension is as high as 32.
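
    A rough sketch of the underlying primitive, assuming a column basis and QR-based Gram-Schmidt: Babai's nearest-plane (SIC) rounding, extended with a simple Klein-style randomized rounding between the two nearest integers. The parameter s and the Gaussian-like weights are illustrative; the optimized decoding radius and the efficient random-rounding implementation from the paper are not reproduced here.

        import numpy as np

        def nearest_plane(B, target, s=None, rng=None):
            """Babai's nearest-plane decoding for the lattice with basis B (columns).
            s=None gives deterministic rounding; otherwise each coefficient is
            rounded up or down at random with Gaussian-like weights (Klein-style)."""
            rng = np.random.default_rng() if rng is None else rng
            n = B.shape[1]
            Q, R = np.linalg.qr(B)                 # Gram-Schmidt via QR: B = Q R
            t = Q.T @ target                       # target in the orthogonal frame
            c = np.zeros(n)
            for i in range(n - 1, -1, -1):
                # real coefficient of basis vector i given the choices already made
                x = (t[i] - R[i, i + 1:] @ c[i + 1:]) / R[i, i]
                if s is None:
                    c[i] = np.round(x)
                else:
                    lo = np.floor(x)
                    p_up = np.exp(-s * (lo + 1 - x) ** 2)
                    p_dn = np.exp(-s * (x - lo) ** 2)
                    c[i] = lo + (rng.random() < p_up / (p_up + p_dn))
            return B @ c                           # a lattice point close to target

    Sampling decoding would reduce B once in pre-processing, call this routine many times with randomization enabled (possibly in parallel), and return the sampled lattice point closest to the target.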

    Iterative Bounded Distance Decoding of Product Codes with Scaled Reliability

    We propose a modified iterative bounded distance decoding algorithm for product codes. The proposed algorithm is based on exchanging hard messages iteratively and exploiting channel reliabilities to make hard decisions at each iteration. Performance improvements of up to 0.26 dB are achieved.
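
    As an illustration only (not the exact update rule of the proposed algorithm), the hard decision exchanged between component decoders can combine the bounded distance decoder output with the channel reliability through an iteration-dependent scaling weight, so that a strong channel observation can overrule an unreliable BDD decision:

        def scaled_reliability_decision(bdd_bit, channel_llr, weight):
            """Illustrative hard decision combining a BDD output with the channel LLR.
            bdd_bit is 0/1, or None if BDD failed; weight is an assumed scaling factor.
            LLR convention: positive values favour bit 0."""
            extrinsic = 0.0 if bdd_bit is None else weight * (1 - 2 * bdd_bit)
            return 0 if extrinsic + channel_llr >= 0 else 1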

    Error-correcting pairs: a new approach to code-based cryptography

    McEliece proposed the first public-key cryptosystem based on linear error-correcting codes. A code with an efficient bounded distance decoding algorithm is chosen as the secret key, and it is assumed that the chosen code looks like a random code. The known efficient bounded distance decoding algorithms of the code families proposed for code-based cryptography, such as Reed-Solomon codes, Goppa codes, alternant codes and algebraic geometry codes, can be described in terms of error-correcting pairs (ECP). This means that the McEliece cryptosystem is based not only on the intractability of bounded distance decoding but also on the problem of retrieving an error-correcting pair from the public code. In this article we propose the class of codes with a t-ECP whose error-correcting pair is not easily reconstructed from a given generator matrix.
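
    For reference, the notion of a t-error-correcting pair is usually defined as follows (a hedged recap of the standard definition, with $A \ast B$ denoting the component-wise product of codes): a pair (A, B) of linear codes of length n is a t-ECP for C if

        A \ast B \subseteq C^{\perp}, \qquad \dim A > t, \qquad d(B^{\perp}) > t, \qquad d(A) + d(C) > n,

    and any such pair yields a polynomial-time decoder for C correcting up to t errors.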

    Quantum Error Correction beyond the Bounded Distance Decoding Limit

    In this paper, we consider quantum error correction over depolarizing channels with non-binary low-density parity-check codes defined over a Galois field of size 2^p. The proposed quantum error correcting codes are based on binary quasi-cyclic CSS (Calderbank, Shor and Steane) codes. The resulting quantum codes outperform the best known quantum codes and surpass the performance limit of the bounded distance decoder. By increasing the size of the underlying Galois field, i.e., 2^p, the error floors are considerably improved.
    Comment: To appear in IEEE Transactions on Information Theory
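
    For background, a brief recap of the CSS framework that such constructions build on (general CSS facts, not the specific quasi-cyclic design of the paper): a CSS code is specified by two classical parity-check matrices H_X and H_Z whose stabilizers commute, i.e.

        H_X H_Z^{T} = 0 \pmod{2},

    so X-type and Z-type errors can each be corrected by a classical decoder, here a non-binary LDPC decoder over GF(2^p).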

    The complexity of information set decoding

    Information set decoding is an algorithm for decoding any linear code. Expressions for the complexity of the procedure that are logarithmically exact for virtually all codes are presented. The expressions cover the cases of complete minimum distance decoding and bounded hard-decision decoding, as well as the important case of bounded soft-decision decoding. It is demonstrated that these results are vastly better than those for the trivial algorithms of searching through all codewords or through all syndromes, and are significantly better than those for any other general algorithm currently known. For codes over large symbol fields, the procedure tends towards a complexity that is subexponential in the symbol size.
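
    A minimal sketch of the simplest (Prange-style) variant of the procedure for binary codes, with helper names of our choosing: guess a random information set, assume it is error-free, re-encode, and accept if the implied error pattern is light enough. The expected number of trials grows exponentially in general, which is what the complexity expressions analysed above quantify.

        import numpy as np

        def gf2_inv(A):
            """Inverse of a square binary matrix over GF(2), or None if singular."""
            n = A.shape[0]
            M = np.concatenate([A % 2, np.eye(n, dtype=int)], axis=1)
            for col in range(n):
                pivot = next((r for r in range(col, n) if M[r, col]), None)
                if pivot is None:
                    return None
                M[[col, pivot]] = M[[pivot, col]]
                for r in range(n):
                    if r != col and M[r, col]:
                        M[r] ^= M[col]
            return M[:, n:]

        def isd_prange(G, y, w, max_trials=100000, rng=None):
            """Information set decoding (plain form): find a codeword of the code
            generated by G within Hamming distance w of the received word y."""
            rng = np.random.default_rng() if rng is None else rng
            k, n = G.shape
            for _ in range(max_trials):
                idx = rng.permutation(n)[:k]          # candidate information set
                A_inv = gf2_inv(G[:, idx])
                if A_inv is None:                     # columns not independent, retry
                    continue
                cand = (y[idx] @ A_inv @ G) % 2       # re-encode from the info set
                if np.count_nonzero((cand + y) % 2) <= w:
                    return cand
            return None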