
    Hierarchical and High-Girth QC LDPC Codes

    We present a general approach to designing capacity-approaching high-girth low-density parity-check (LDPC) codes that are friendly to hardware implementation. Our methodology starts by defining a new class of "hierarchical" quasi-cyclic (HQC) LDPC codes that generalizes the structure of quasi-cyclic (QC) LDPC codes. Whereas the parity-check matrices of QC LDPC codes are composed of circulant sub-matrices, those of HQC LDPC codes are composed of a hierarchy of circulant sub-matrices that are in turn constructed from circulant sub-matrices, and so on, through some number of levels. We show how to map any class of codes defined using a protograph into a family of HQC LDPC codes. Next, we present a girth-maximizing algorithm that optimizes the degrees of freedom within the family of codes to yield a high-girth HQC LDPC code. Finally, we discuss how certain characteristics of a code protograph will lead to inevitable short cycles, and show that these short cycles can be eliminated using a "squashing" procedure that results in a high-girth QC LDPC code, although not a hierarchical one. We illustrate our approach with designed examples of girth-10 QC LDPC codes obtained from protographs of one-sided spatially-coupled codes. Comment: Submitted to IEEE Transactions on Information Theory.
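
    As a toy illustration of the nested-circulant idea (and only of the structure, not of the authors' protograph mapping, girth-maximizing algorithm, or squashing procedure), the following Python sketch builds a two-level matrix that is block-circulant with circulant blocks; the block sizes and first rows are arbitrary choices.

import numpy as np

def circulant(first_row):
    """Dense circulant matrix over GF(2), given its first row."""
    n = len(first_row)
    return np.array([np.roll(first_row, k) for k in range(n)], dtype=np.uint8)

def two_level_circulant(block_first_rows):
    """Two-level 'hierarchical' structure: a block-circulant matrix whose
    blocks are themselves circulant; block_first_rows[j] is the first row
    of the block sitting in block-column j of the first block-row."""
    blocks = [circulant(r) for r in block_first_rows]
    m = len(blocks)
    # block-row k is the block-level cyclic shift of the first block-row
    rows = [np.hstack([blocks[(j - k) % m] for j in range(m)]) for k in range(m)]
    return np.vstack(rows)

# Hypothetical example: 3 blocks of size 4 give a 12 x 12 two-level circulant
H = two_level_circulant([[1, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
print(H.shape)  # (12, 12)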

    Design and Analysis of Time-Invariant SC-LDPC Convolutional Codes With Small Constraint Length

    In this paper, we deal with time-invariant spatially coupled low-density parity-check convolutional codes (SC-LDPC-CCs). Classic design approaches usually start from quasi-cyclic low-density parity-check (QC-LDPC) block codes and exploit suitable unwrapping procedures to obtain SC-LDPC-CCs. We show that directly designing the SC-LDPC-CC syndrome former matrix or, equivalently, the symbolic parity-check matrix, leads to codes with smaller syndrome former constraint lengths than the best solutions available in the literature. We provide theoretical lower bounds on the syndrome former constraint length for the most relevant families of SC-LDPC-CCs, under constraints on the minimum length of cycles in their Tanner graphs. We also propose new code design techniques that approach or achieve these theoretical limits. Comment: 30 pages, 5 figures, accepted for publication in IEEE Transactions on Communications.
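
    To make the quantity being minimized concrete, the sketch below computes the syndrome former memory and constraint length directly from a symbolic parity-check matrix H(D), with each entry represented by its list of exponents of D. The convention used here, nu_s = (m_h + 1) * c with m_h the largest exponent and c the number of columns of H(D), is assumed rather than taken from the paper, and the example matrix is hypothetical.

def syndrome_former_constraint_length(H_symbolic):
    """H_symbolic: list of rows; each entry is a list of exponents of D
    (an empty list denotes the zero polynomial)."""
    c = len(H_symbolic[0])                        # code symbols per time unit
    m_h = max((e for row in H_symbolic for entry in row for e in entry),
              default=0)                          # syndrome former memory
    return (m_h + 1) * c

# Hypothetical 2 x 4 symbolic matrix whose largest exponent of D is 3
H_D = [
    [[0], [1], [0, 2], [3]],
    [[2], [0], [1],    [0, 3]],
]
print(syndrome_former_constraint_length(H_D))     # (3 + 1) * 4 = 16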

    Characterization and Efficient Search of Non-Elementary Trapping Sets of LDPC Codes with Applications to Stopping Sets

    In this paper, we propose a characterization of non-elementary trapping sets (NETSs) of low-density parity-check (LDPC) codes. The characterization is based on viewing a NETS as a hierarchy of embedded graphs starting from an elementary trapping set (ETS). The characterization corresponds to an efficient search algorithm that, under certain conditions, is exhaustive. As an application of the proposed characterization/search, we obtain lower and upper bounds on the stopping distance $s_{min}$ of LDPC codes. We examine a large number of regular and irregular LDPC codes, and demonstrate the efficiency and versatility of our technique in finding lower and upper bounds on, and in many cases the exact value of, $s_{min}$. Finding $s_{min}$, or establishing search-based lower or upper bounds, for many of the examined codes is out of the reach of any existing algorithm.
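
    As a reminder of the object being bounded: a stopping set is a set of variable nodes such that no check node has exactly one neighbour in the set, and $s_{min}$ is the size of the smallest nonempty stopping set. The sketch below checks this condition and finds $s_{min}$ by brute force, which is only feasible for very small codes; it is not the NETS-based search proposed in the paper. The (7,4) Hamming parity-check matrix in the example is just an illustration.

import numpy as np
from itertools import combinations

def is_stopping_set(H, S):
    """S (a set of column indices) is a stopping set iff no check node
    (row of H) has exactly one neighbour in S."""
    if not S:
        return False
    row_weights = H[:, sorted(S)].sum(axis=1)
    return not np.any(row_weights == 1)

def stopping_distance_bruteforce(H):
    """Exhaustive search for s_min; exponential, so toy codes only."""
    n = H.shape[1]
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if is_stopping_set(H, set(S)):
                return size
    return None

# Parity-check matrix of the (7,4) Hamming code (illustrative example)
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)
print(stopping_distance_bruteforce(H))  # 3 for this matrix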

    Construction of multiple-rate QC-LDPC codes using hierarchical row-splitting

    In this letter, we propose an improved method called hierarchical row-splitting with edge variation for designing multiple-rate quasi-cyclic low-density parity-check (QC-LDPC) codes, which constructs lower-rate codes from a high-rate mother code by row-splitting operations. Consequently, the obtained QC-LDPC codes with various code rates have the same blocklength and can share common hardware resources to reduce the implementation complexity. Compared with conventional row-combining-based algorithms, a wider range of code rates is supported. Moreover, each individual-rate code can be optimized separately, making it easier to find a set of multiple-rate QC-LDPC codes with good performance at all rates. Simulation results demonstrate that the obtained codes outperform their counterparts from the digital video broadcasting - second generation terrestrial (DVB-T2) standard.
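
    The generic row-splitting step can be sketched as follows: one row of the mother code's exponent matrix is split into two rows, each inheriting a disjoint subset of the original circulant shifts, so the blocklength is unchanged while the rate drops. The toy function below shows only that mechanism; it does not implement the proposed hierarchical row-splitting with edge variation or any optimization of the resulting codes, and the example exponent matrix (with -1 marking all-zero blocks) is made up.

def split_row(exponent_matrix, row_idx, cols_for_first):
    """Split row row_idx of a QC-LDPC exponent matrix into two rows:
    the first keeps the shifts at columns in cols_for_first, the second
    keeps the remaining shifts; -1 denotes the all-zero block."""
    cols_for_first = set(cols_for_first)
    old = exponent_matrix[row_idx]
    first = [e if (j in cols_for_first and e != -1) else -1
             for j, e in enumerate(old)]
    second = [e if (j not in cols_for_first and e != -1) else -1
              for j, e in enumerate(old)]
    return exponent_matrix[:row_idx] + [first, second] + exponent_matrix[row_idx + 1:]

# Hypothetical 2 x 6 mother exponent matrix (rate ~2/3) -> 3 x 6 (rate ~1/2)
B = [[0, 3, -1, 5, 2, 7],
     [4, -1, 1, 0, 6, 3]]
for row in split_row(B, 0, cols_for_first={0, 3, 4}):
    print(row)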

    An Efficient Algorithm for Counting Cycles in QC and APM LDPC Codes

    In this paper, a new method is given for counting cycles in the Tanner graph of a (Type-I) quasi-cyclic (QC) low-density parity-check (LDPC) code; its complexity depends mainly on the base matrix and is independent of the CPM size of the constructed code. Interestingly, for large CPM sizes, this is the first approach that efficiently counts the cycles in the Tanner graphs of QC-LDPC codes. In fact, the algorithm recursively counts the cycles in the parity-check matrix column by column by finding all non-isomorphic tailless backtrackless closed (TBC) walks in the base graph and theoretically enumerating their corresponding cycles in the same equivalence class. Moreover, this approach can be modified in a few steps to find the cycle distributions of a class of LDPC codes based on affine permutation matrices (APM-LDPC codes). Unlike the existing methods, which count cycles only up to length $2g-2$, where $g$ is the girth, the proposed algorithm can be used to enumerate cycles of arbitrary length in the Tanner graph. Moreover, the proposed cycle-searching algorithm improves upon various previously known methods in terms of computational complexity and memory requirements. Comment: 18 pages, 4 figures.
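
    For contrast with the TBC-walk approach, the sketch below counts only length-4 cycles with the elementary matrix-based method: two distinct check nodes that share t variable-node neighbours contribute C(t, 2) four-cycles. This is not the paper's algorithm and, unlike it, does not extend to cycles of arbitrary length; the small parity-check matrix is a made-up example.

import numpy as np
from math import comb

def count_4_cycles(H):
    """Count 4-cycles in the Tanner graph whose biadjacency matrix is H."""
    A = H.astype(np.int64) @ H.T.astype(np.int64)  # check-node co-incidence counts
    m = A.shape[0]
    return sum(comb(int(A[i, j]), 2) for i in range(m) for j in range(i + 1, m))

# Toy example: checks 0 and 1 share variables 0 and 1, giving a 4-cycle
H = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1]], dtype=np.uint8)
print(count_4_cycles(H))  # 3 four-cycles in this small graph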

    Decomposition Methods for Large Scale LP Decoding

    When binary linear error-correcting codes are used over symmetric channels, a relaxed version of the maximum likelihood decoding problem can be stated as a linear program (LP). This LP decoder can be used to decode error-correcting codes at bit-error rates comparable to state-of-the-art belief propagation (BP) decoders, but with significantly stronger theoretical guarantees. However, LP decoding when implemented with standard LP solvers does not easily scale to the block lengths of modern error-correcting codes. In this paper we draw on decomposition methods from optimization theory, specifically the alternating direction method of multipliers (ADMM), to develop efficient distributed algorithms for LP decoding. The key enabling technical result is a "two-slice" characterization of the geometry of the parity polytope, which is the convex hull of all codewords of a single parity-check code. This new characterization simplifies the representation of points in the polytope. Using this simplification, we develop an efficient algorithm for Euclidean norm projection onto the parity polytope. This projection is required by ADMM and allows us to use LP decoding, with all its theoretical guarantees, to decode large-scale error-correcting codes efficiently. We present numerical results for LDPC codes of length greater than 1000. The waterfall region of LP decoding is seen to initiate at a slightly higher signal-to-noise ratio than for sum-product BP; however, unlike BP, LP decoding does not exhibit an error floor. Our implementation of LP decoding using ADMM executes as fast as our baseline sum-product BP decoder, is fully parallelizable, and can be seen to implement a type of message passing with a particularly simple schedule. Comment: 35 pages, 11 figures. An early version of this work appeared at the 49th Annual Allerton Conference, September 2011. This version to appear in IEEE Transactions on Information Theory.
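
    For context, the LP relaxation mentioned in the first sentence (Feldman-style LP decoding) can be written out explicitly and handed to a generic LP solver, which is exactly the baseline that does not scale to long block lengths. The sketch below does this with scipy's linprog over the forbidden-set inequalities of each check; the parity-check matrix and LLR vector are hypothetical, and the paper's ADMM decoder with its fast parity-polytope projection is not implemented here.

import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def lp_decode(H, llr):
    """Feldman-style LP relaxation of ML decoding, solved with a generic solver.
    H: binary parity-check matrix; llr: per-bit log-likelihood ratios."""
    m, n = H.shape
    A_ub, b_ub = [], []
    for j in range(m):
        N = np.flatnonzero(H[j])
        # For every odd-sized subset V of the check's neighbourhood N(j):
        #   sum_{i in V} x_i - sum_{i in N(j)\V} x_i <= |V| - 1
        for size in range(1, len(N) + 1, 2):
            for V in combinations(N, size):
                row = np.zeros(n)
                row[N] = -1.0
                row[list(V)] = 1.0
                A_ub.append(row)
                b_ub.append(size - 1)
    res = linprog(c=llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x

# Hypothetical 3 x 6 code and channel LLRs
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)
llr = np.array([1.2, -0.3, 0.8, 0.5, -1.1, 0.7])
print(np.round(lp_decode(H, llr), 3))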