
    On the guaranteed error correction capability of LDPC codes

    We investigate the relation between the girth and the guaranteed error correction capability of γ-left-regular LDPC codes when decoded using the bit flipping (serial and parallel) algorithms. A lower bound on the number of variable nodes which expand by a factor of at least 3γ/4 is found based on the Moore bound. An upper bound on the guaranteed correction capability is established by studying the sizes of the smallest possible trapping sets. Comment: 5 pages, submitted to the IEEE International Symposium on Information Theory (ISIT), 2008.
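
    A minimal sketch of the parallel bit flipping rule referred to above, assuming a parity-check-matrix representation of the Tanner graph; the function name, the flipping threshold (strictly more than half of a variable node's checks unsatisfied) and the stopping rule are choices made for this sketch, not details taken from the paper.

```python
import numpy as np

def parallel_bit_flipping(H, y, max_iters=50):
    """Parallel bit flipping over the BSC (illustrative sketch, not the paper's exact variant)."""
    x = y.copy()
    degrees = H.sum(axis=0)            # variable node degrees (gamma for left-regular codes)
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2        # 1 marks an unsatisfied check
        if not syndrome.any():
            return x, True             # valid codeword reached
        unsat = H.T.dot(syndrome)      # unsatisfied checks seen by each variable node
        flip = unsat > degrees / 2     # flip all such bits in parallel
        if not flip.any():
            break                      # stuck, e.g. on a trapping set
        x = (x + flip.astype(int)) % 2
    return x, False
```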

    On Trapping Sets and Guaranteed Error Correction Capability of LDPC Codes and GLDPC Codes

    The relation between the girth and the guaranteed error correction capability of γ-left-regular LDPC codes when decoded using the bit flipping (serial and parallel) algorithms is investigated. A lower bound on the size of variable node sets which expand by a factor of at least 3γ/4 is found based on the Moore bound. An upper bound on the guaranteed error correction capability is established by studying the sizes of the smallest possible trapping sets. The results are extended to generalized LDPC codes. It is shown that generalized LDPC codes can correct a linear fraction of errors under the parallel bit flipping algorithm when the underlying Tanner graph is a good expander. It is also shown, by studying a class of trapping sets, that the bound cannot be improved when γ is even. A lower bound on the size of variable node sets which have the required expansion is established. Comment: 17 pages. Submitted to the IEEE Transactions on Information Theory. Parts of this work have been accepted for presentation at the International Symposium on Information Theory (ISIT'08) and the International Telemetering Conference (ITC'08).
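
    The expansion condition above asks whether a set of variable nodes has at least 3γ/4 times as many distinct check-node neighbours as its own size. A small sketch of that ratio, again assuming a parity-check-matrix representation of the Tanner graph (the function name is hypothetical):

```python
import numpy as np

def expansion_factor(H, var_subset):
    """Return |N(S)| / |S| for a set S of variable nodes (columns of H).

    The expansion arguments in the paper ask whether this ratio reaches
    3 * gamma / 4 for a gamma-left-regular code.
    """
    S = sorted(set(var_subset))
    if not S:
        return float('inf')
    neighbours = np.flatnonzero(H[:, S].sum(axis=1))   # check nodes touched by S
    return len(neighbours) / len(S)
```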

    Check-hybrid GLDPC Codes: Systematic Elimination of Trapping Sets and Guaranteed Error Correction Capability

    In this paper, we propose a new approach to constructing a class of check-hybrid generalized low-density parity-check (CH-GLDPC) codes which are free of small trapping sets. The approach is based on converting selected check nodes involved in a trapping set into super checks corresponding to a 2-error-correcting component code. The construction of the check-hybrid codes serves two main purposes: first, based on the knowledge of the trapping sets of the global LDPC code, single parity checks are replaced by super checks to disable the trapping sets. We show that by converting specific single check nodes of a trapping set, denoted as critical checks, into super checks, the parallel bit flipping (PBF) decoder corrects the errors on that trapping set and hence eliminates it. The second purpose is to minimize the rate loss caused by introducing the super checks, by finding the minimum number of such critical checks. We also present an algorithm to find critical checks in a trapping set of a column-weight-3 LDPC code and then provide upper bounds on the minimum number of such critical checks such that the decoder corrects all error patterns on elementary trapping sets. Moreover, we provide a fixed set for a class of constructed check-hybrid codes. The guaranteed error correction capability of the CH-GLDPC codes is also studied. We show that a CH-GLDPC code in which each variable node is connected to 2 super checks corresponding to a 2-error-correcting component code corrects up to 5 errors. The results are also extended to column-weight-4 LDPC codes. Finally, we investigate the elimination of trapping sets of a column-weight-3 LDPC code under the Gallager B decoding algorithm and generalize the results obtained for the PBF decoder to the Gallager B decoder.
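
    A conceptual sketch of the check-hybrid idea follows: a selected single parity check is replaced by the full set of constraints of a 2-error-correcting component code acting on the same variable-node neighbourhood. The function name and the direct mapping of component-code positions onto the neighbourhood are assumptions of this sketch; the paper's algorithm for choosing critical checks is not reproduced.

```python
import numpy as np

def make_check_hybrid(H, critical_checks, component_H):
    """Upgrade selected check nodes of H to super checks (conceptual sketch).

    H               : (m, n) parity-check matrix of the global LDPC code
    critical_checks : indices of check nodes to upgrade to super checks
    component_H     : (r, rho) parity-check matrix of a 2-error-correcting
                      component code whose length rho equals the degree of
                      each upgraded check node
    """
    rows = []
    for c in range(H.shape[0]):
        if c in critical_checks:
            nbrs = np.flatnonzero(H[c])                 # neighbourhood of the check
            assert len(nbrs) == component_H.shape[1]
            for comp_row in component_H:
                new_row = np.zeros(H.shape[1], dtype=int)
                new_row[nbrs] = comp_row                # constraint on the same variables
                rows.append(new_row)
        else:
            rows.append(H[c].copy())                    # ordinary single parity check kept
    return np.vstack(rows)
```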

    Two-Bit Bit Flipping Decoding of LDPC Codes

    In this paper, we propose a new class of bit flipping algorithms for low-density parity-check (LDPC) codes over the binary symmetric channel (BSC). Compared to the regular (parallel or serial) bit flipping algorithms, the proposed algorithms employ one additional bit at a variable node to represent its "strength." The introduction of this additional bit increases the guaranteed error correction capability by a factor of at least 2. An additional bit can also be employed at a check node to capture information which is beneficial to decoding. A framework for failure analysis of the proposed algorithms is described. These algorithms outperform the Gallager A/B algorithm and the min-sum algorithm at much lower complexity. Concatenation of two-bit bit flipping algorithms shows the potential to approach the performance of belief propagation (BP) decoding in the error floor region, also at lower complexity. Comment: 6 pages. Submitted to the IEEE International Symposium on Information Theory, 2011.
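
    To make the extra "strength" bit concrete, here is a hedged sketch of one possible two-bit flipping iteration; the update rules below (a strong bit is first weakened, a weak bit is flipped) are invented for illustration only and are not the rules derived in the paper.

```python
import numpy as np

def two_bit_flipping(H, y, max_iters=50):
    """Two-bit (value + strength) flipping sketch; rules are illustrative only."""
    n = H.shape[1]
    value = y.copy()
    strength = np.ones(n, dtype=int)          # 1 = strong, 0 = weak; start strong
    degrees = H.sum(axis=0)
    for _ in range(max_iters):
        syndrome = H.dot(value) % 2
        if not syndrome.any():
            return value, True
        unsat = H.T.dot(syndrome)             # unsatisfied checks per variable node
        mostly_unsat = unsat > degrees / 2
        flip = mostly_unsat & (strength == 0)      # weak bits in bad neighbourhoods flip
        weaken = mostly_unsat & (strength == 1)    # strong bits are only weakened
        strengthen = ~mostly_unsat                 # well-supported bits become strong
        value = (value + flip.astype(int)) % 2
        strength = np.where(weaken, 0, np.where(strengthen, 1, strength))
    return value, False
```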

    Construction of Near-Optimum Burst Erasure Correcting Low-Density Parity-Check Codes

    In this paper, a simple, general-purpose and effective tool for the design of low-density parity-check (LDPC) codes for iterative correction of bursts of erasures is presented. The design method starts from the parity-check matrix of an LDPC code and develops an optimized parity-check matrix with the same performance on the memoryless erasure channel that is also suitable for the iterative correction of single bursts of erasures. The parity-check matrix optimization is performed by an algorithm called the pivot searching and swapping (PSS) algorithm, which executes permutations of carefully chosen columns of the parity-check matrix after a local analysis of particular variable nodes called stopping set pivots. This algorithm can in principle be applied to any LDPC code. If the input parity-check matrix is designed for good performance on the memoryless erasure channel, then the code obtained after the application of the PSS algorithm provides good joint correction of independent erasures and single erasure bursts. Numerical results are provided to show the effectiveness of the PSS algorithm when applied to different categories of LDPC codes. Comment: 15 pages, 4 figures. IEEE Trans. on Communications, accepted (submitted in Feb. 2007).
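
    The PSS algorithm itself is not reproduced here, but the iterative erasure correction it targets is the standard peeling decoder, sketched below under the assumption of a binary parity-check matrix. Sliding a burst of L consecutive erasures across all starting positions and running this decoder is one way to check whether a given matrix recovers every single burst of length L.

```python
import numpy as np

def peeling_decode_erasures(H, y, erased):
    """Standard iterative (peeling) erasure decoding sketch.

    H      : (m, n) binary parity-check matrix
    y      : length-n vector, correct outside the erased positions
    erased : boolean mask of erased positions (e.g. a single burst)
    A check connected to exactly one erased variable resolves it by parity;
    decoding fails if it gets stuck on a stopping set.
    """
    x = y.copy()
    erased = erased.copy()
    while erased.any():
        progress = False
        for c in range(H.shape[0]):
            nbrs = np.flatnonzero(H[c])
            unknown = [v for v in nbrs if erased[v]]
            if len(unknown) == 1:
                v = unknown[0]
                known = [u for u in nbrs if not erased[u]]
                x[v] = int(sum(x[u] for u in known) % 2)   # parity fills the erasure
                erased[v] = False
                progress = True
        if not progress:
            return x, False        # stuck on a stopping set
    return x, True
```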