
    From Cages to Trapping Sets and Codewords: A Technique to Derive Tight Upper Bounds on the Minimum Size of Trapping Sets and Minimum Distance of LDPC Codes

    Full text link
    Cages, defined as regular graphs with the minimum number of nodes for a given girth, are well-studied in graph theory. Trapping sets are graphical structures responsible for the error floor of low-density parity-check (LDPC) codes, and are well investigated in coding theory. In this paper, we make connections between cages and trapping sets. In particular, starting from a cage (or a modified cage), we construct a trapping set in multiple steps. Based on this connection, we then use available results on cages from graph theory to derive tight upper bounds on the size of the smallest trapping sets for variable-regular LDPC codes with a given variable degree and girth. The derived upper bounds in many cases meet the best known lower bounds and thus provide the actual size of the smallest trapping sets. Considering that non-zero codewords are a special case of trapping sets, we also derive tight upper bounds on the minimum weight of such codewords, i.e., the minimum distance, of variable-regular LDPC codes as a function of variable degree and girth.
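
    As a concrete anchor for the cage terminology used above, the sketch below computes the classical Moore bound, a standard graph-theory fact that lower-bounds the order of any k-regular graph with girth g and is attained by several well-known cages. The function name and the use of Python are illustrative choices, not taken from the paper, and the code does not reproduce the paper's trapping-set bounds.

```python
def moore_bound(k: int, g: int) -> int:
    """Moore (lower) bound on the number of nodes of a k-regular graph
    with girth g; the order of the (k, g)-cage is at least this value."""
    if g % 2 == 1:                        # odd girth g = 2r + 1
        r = (g - 1) // 2
        return 1 + k * sum((k - 1) ** i for i in range(r))
    # even girth g = 2r
    r = g // 2
    return 2 * sum((k - 1) ** i for i in range(r))

# The Petersen graph (10 nodes) is the (3, 5)-cage and meets the bound,
# as does the Heawood graph (14 nodes) for girth 6.
print(moore_bound(3, 5), moore_bound(3, 6))   # -> 10 14
```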

    Characterization and Efficient Search of Non-Elementary Trapping Sets of LDPC Codes with Applications to Stopping Sets

    Full text link
    In this paper, we propose a characterization for non-elementary trapping sets (NETSs) of low-density parity-check (LDPC) codes. The characterization is based on viewing a NETS as a hierarchy of embedded graphs starting from an ETS. The characterization corresponds to an efficient search algorithm that under certain conditions is exhaustive. As an application of the proposed characterization/search, we obtain lower and upper bounds on the stopping distance $s_{min}$ of LDPC codes. We examine a large number of regular and irregular LDPC codes, and demonstrate the efficiency and versatility of our technique in finding lower and upper bounds on, and in many cases the exact value of, $s_{min}$. Finding $s_{min}$, or establishing search-based lower or upper bounds, for many of the examined codes is out of the reach of any existing algorithm.
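
    For reference, the stopping distance $s_{min}$ bounded above is the size of the smallest stopping set: a set S of variable nodes such that no check node is connected to S exactly once. A minimal sketch of this membership test is shown below; the parity-check matrix and function name are hypothetical, and the paper's actual NETS-based search is far more elaborate than this check.

```python
import numpy as np

def is_stopping_set(H: np.ndarray, S: set) -> bool:
    """True if the variable-node subset S is a stopping set of the binary
    parity-check matrix H: every check node with a neighbor in S has at
    least two neighbors in S (no check is connected to S exactly once)."""
    weights = H[:, sorted(S)].sum(axis=1)   # per-check connections to S
    return not np.any(weights == 1)

# Hypothetical toy parity-check matrix and a set that qualifies:
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
print(is_stopping_set(H, {0, 1, 2}))        # every check sees the set twice -> True
```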

    The Trapping Redundancy of Linear Block Codes

    Full text link
    We generalize the notion of the stopping redundancy in order to study the smallest size of a trapping set in Tanner graphs of linear block codes. In this context, we introduce the notion of the trapping redundancy of a code, which quantifies the relationship between the number of redundant rows in any parity-check matrix of a given code and the size of its smallest trapping set. Trapping sets with certain parameter sizes are known to cause error floors in the performance curves of iterative belief propagation decoders, and it is therefore important to identify decoding matrices that avoid such sets. Bounds on the trapping redundancy are obtained using probabilistic and constructive methods, and the analysis covers both general and elementary trapping sets. Numerical values for these bounds are computed for the [2640,1320] Margulis code and the class of projective geometry codes, and compared with some new code-specific trapping set size estimates. (Comment: 12 pages, 4 tables, 1 figure; accepted for publication in IEEE Transactions on Information Theory.)
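
    The trapping sets in question are the standard (a, b) trapping sets: sets of a variable nodes whose induced subgraph contains exactly b odd-degree check nodes. The brute-force sketch below, feasible only for toy matrices, illustrates the quantity whose smallest size the trapping redundancy controls; all names and the example matrix are illustrative and not taken from the paper.

```python
from itertools import combinations
import numpy as np

def smallest_trapping_set(H: np.ndarray, b: int, max_a: int = 6):
    """Exhaustive search for the smallest (a, b) trapping set: a set of a
    variable nodes whose induced subgraph has exactly b odd-degree check
    nodes.  Exponential in a, so usable only on toy matrices."""
    n = H.shape[1]
    for a in range(1, max_a + 1):
        for S in combinations(range(n), a):
            odd = int(np.sum(H[:, list(S)].sum(axis=1) % 2))
            if odd == b:
                return a, set(S)
    return None

# With b = 0 this returns the support of a minimum-weight nonzero codeword.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
print(smallest_trapping_set(H, b=0))        # -> (3, {0, 1, 2})
```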

    Check-hybrid GLDPC Codes: Systematic Elimination of Trapping Sets and Guaranteed Error Correction Capability

    Full text link
    In this paper, we propose a new approach to construct a class of check-hybrid generalized low-density parity-check (CH-GLDPC) codes which are free of small trapping sets. The approach is based on converting some selected check nodes involving a trapping set into super checks corresponding to a 2-error-correcting component code. Specifically, we pursue two main goals in constructing the check-hybrid codes. First, based on the knowledge of the trapping sets of the global LDPC code, single parity checks are replaced by super checks to disable the trapping sets. We show that by converting specified single check nodes, denoted as critical checks, to super checks in a trapping set, the parallel bit flipping (PBF) decoder corrects the errors on a trapping set and hence eliminates the trapping set. The second goal is to minimize the rate loss caused by the super checks by finding the minimum number of such critical checks. We also present an algorithm to find critical checks in a trapping set of a column-weight 3 LDPC code and then provide upper bounds on the minimum number of such critical checks such that the decoder corrects all error patterns on elementary trapping sets. Moreover, we provide a fixed set for a class of constructed check-hybrid codes. The guaranteed error correction capability of the CH-GLDPC codes is also studied. We show that a CH-GLDPC code in which each variable node is connected to 2 super checks corresponding to a 2-error-correcting component code corrects up to 5 errors. The results are also extended to column-weight 4 LDPC codes. Finally, we investigate the elimination of trapping sets of a column-weight 3 LDPC code using the Gallager B decoding algorithm and generalize the results obtained for the PBF decoder to the Gallager B decoding algorithm.
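
    The parallel bit flipping (PBF) decoder referenced above is, in one common formulation, the rule of flipping every bit for which more than half of its neighboring checks are unsatisfied. The sketch below follows that formulation as an assumption; the paper may use a different flipping threshold, and the toy matrix is purely illustrative.

```python
import numpy as np

def pbf_decode(H: np.ndarray, y: np.ndarray, max_iter: int = 20) -> np.ndarray:
    """Parallel bit flipping over GF(2): in each iteration, flip every bit
    for which more than half of its neighboring checks are unsatisfied."""
    x = y.copy() % 2
    col_deg = H.sum(axis=0)                  # variable-node degrees
    for _ in range(max_iter):
        syndrome = H @ x % 2                 # 1 marks an unsatisfied check
        if not syndrome.any():
            break                            # valid codeword reached
        unsat = H.T @ syndrome               # unsatisfied checks per bit
        flip = unsat > col_deg / 2           # majority-of-checks rule
        if not flip.any():
            break                            # stuck, e.g. on a trapping set
        x = (x + flip.astype(int)) % 2
    return x

# Toy example: [3, 1] repetition code, a single error is corrected.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
print(pbf_decode(H, np.array([1, 0, 0])))    # -> [0 0 0]
```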

    An Efficient Algorithm for Finding Dominant Trapping Sets of LDPC Codes

    Full text link
    This paper presents an efficient algorithm for finding the dominant trapping sets of a low-density parity-check (LDPC) code. The algorithm can be used to estimate the error floor of LDPC codes or as part of the apparatus for designing LDPC codes with low error floors. For regular codes, the algorithm is initiated with a set of short cycles as the input. For irregular codes, in addition to short cycles, variable nodes with low degree and cycles with low approximate cycle extrinsic message degree (ACE) are also used as the initial inputs. The initial inputs are then expanded recursively to dominant trapping sets of increasing size. At the core of the algorithm lies the analysis of the graphical structure of dominant trapping sets and the relationship of such structures to short cycles, low-degree variable nodes and cycles with low ACE. The algorithm is universal in the sense that it can be used for an arbitrary graph and that it can be tailored to find other graphical objects, such as absorbing sets and Zyablov-Pinsker (ZP) trapping sets, known to dominate the performance of LDPC codes in the error floor region over different channels and for different iterative decoding algorithms. Simulation results on several LDPC codes demonstrate the accuracy and efficiency of the proposed algorithm. In particular, the algorithm is significantly faster than existing search algorithms for dominant trapping sets.
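
    Two of the ingredients named above have simple standard forms: the ACE of a cycle is the sum of (degree - 2) over its variable nodes, and recursive expansion can be pictured as repeatedly adding variable nodes while keeping the number of odd-degree checks small. The sketch below illustrates both; the greedy rule is an illustrative stand-in under that assumption, not the paper's actual expansion procedure.

```python
import numpy as np

def cycle_ace(H: np.ndarray, cycle_vars) -> int:
    """ACE of a cycle: sum of (variable-node degree - 2) over the variable
    nodes lying on the cycle."""
    return int(sum(H[:, v].sum() - 2 for v in cycle_vars))

def expand_once(H: np.ndarray, S: set) -> set:
    """One greedy expansion step: add the outside variable node that keeps
    the number of odd-degree check nodes in the induced subgraph smallest.
    Illustrative stand-in for the paper's recursive expansion of initial
    cycles into larger candidate trapping sets."""
    def odd_checks(T):
        return int(np.sum(H[:, sorted(T)].sum(axis=1) % 2))
    best = min(set(range(H.shape[1])) - S, key=lambda v: odd_checks(S | {v}))
    return S | {best}
```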