
    Refined Upper Bounds on Stopping Redundancy of Binary Linear Codes

    The $l$-th stopping redundancy $\rho_l(\mathcal{C})$ of a binary $[n,k,d]$ code $\mathcal{C}$, $1 \le l \le d$, is defined as the minimum number of rows in a parity-check matrix of $\mathcal{C}$ such that the smallest stopping set has size at least $l$. The stopping redundancy $\rho(\mathcal{C})$ is defined as $\rho_d(\mathcal{C})$. In this work, we improve on the probabilistic analysis of stopping redundancy proposed by Han, Siegel and Vardy, which yields the best bounds known today. In our approach, we judiciously select the first few rows of the parity-check matrix, and then continue with the probabilistic method. Using similar techniques, we also improve on the best known bounds on $\rho_l(\mathcal{C})$ for $1 \le l \le d$. Our approach is compared to the existing methods by numerical computations.
    Comment: 5 pages; ITW 201
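    To make the definitions concrete: a stopping set is a set of column indices whose restriction of the parity-check matrix contains no row of weight one, and the stopping redundancy asks how many (possibly redundant) rows are needed before the smallest such set reaches size $d$. A minimal brute-force sketch in Python (names ours; exhaustive search, so usable only for toy codes):

        from itertools import combinations

        def is_stopping_set(H, S):
            # S (column indices) is a stopping set of H if no row of H,
            # restricted to the columns in S, has weight exactly one.
            return all(sum(row[j] for j in S) != 1 for row in H)

        def smallest_stopping_set_size(H):
            # Exhaustive search for the smallest non-empty stopping set.
            n = len(H[0])
            for size in range(1, n + 1):
                if any(is_stopping_set(H, S)
                       for S in combinations(range(n), size)):
                    return size
            return None

        # [7,4,3] Hamming code: all columns are distinct and non-zero, so
        # the smallest stopping set already has size 3 = d with these 3
        # rows; in general, redundant rows must be added to reach d.
        H = [[1, 1, 1, 0, 1, 0, 0],
             [1, 1, 0, 1, 0, 1, 0],
             [1, 0, 1, 1, 0, 0, 1]]
        print(smallest_stopping_set_size(H))  # -> 3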

    The Trapping Redundancy of Linear Block Codes

    We generalize the notion of the stopping redundancy in order to study the smallest size of a trapping set in Tanner graphs of linear block codes. In this context, we introduce the notion of the trapping redundancy of a code, which quantifies the relationship between the number of redundant rows in any parity-check matrix of a given code and the size of its smallest trapping set. Trapping sets with certain parameters are known to cause error floors in the performance curves of iterative belief propagation decoders, and it is therefore important to identify decoding matrices that avoid such sets. Bounds on the trapping redundancy are obtained using probabilistic and constructive methods, and the analysis covers both general and elementary trapping sets. Numerical values for these bounds are computed for the [2640, 1320] Margulis code and the class of projective geometry codes, and compared with some new code-specific trapping set size estimates.
    Comment: 12 pages, 4 tables, 1 figure, accepted for publication in IEEE Transactions on Information Theory
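    For illustration (a sketch of the combinatorial object itself, not of the paper's bounds): an $(a, b)$ trapping set is a set of $a$ variable nodes whose induced subgraph contains exactly $b$ odd-degree check nodes, and it is elementary when every induced check node has degree one or two. A hypothetical checker in Python:

        def trapping_parameters(H, S):
            # For a set S of variable (column) indices, return (a, b):
            # a = |S|, b = number of check nodes (rows of H) joined to S
            # an odd number of times; S is then an (a, b) trapping set.
            b = sum(1 for row in H if sum(row[j] for j in S) % 2 == 1)
            return len(S), b

        def is_elementary(H, S):
            # Elementary trapping set: every check node of the induced
            # subgraph has degree 1 or 2 (degree 0 means "not induced").
            return all(sum(row[j] for j in S) <= 2 for row in H)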

    Permutation Decoding and the Stopping Redundancy Hierarchy of Cyclic and Extended Cyclic Codes

    We introduce the notion of the stopping redundancy hierarchy of a linear block code as a measure of the trade-off between performance and complexity of iterative decoding for the binary erasure channel. We derive lower and upper bounds for the stopping redundancy hierarchy via the Lovász Local Lemma and Bonferroni-type inequalities, and specialize them for codes with cyclic parity-check matrices. Based on the observed properties of parity-check matrices with good stopping redundancy characteristics, we develop a novel decoding technique, termed automorphism group decoding, that combines iterative message passing and permutation decoding. We also present bounds on the smallest number of permutations of an automorphism group decoder needed to correct any set of erasures up to a prescribed size. Simulation results demonstrate that for a large number of algebraic codes, the performance of the new decoding method is close to that of maximum likelihood decoding.
    Comment: 40 pages, 6 figures, 10 tables, submitted to IEEE Transactions on Information Theory
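    The interplay of message passing and permutation decoding can be sketched as follows (our illustration, not the authors' exact algorithm): permute the erasure pattern by code automorphisms until a plain peeling decoder succeeds, then undo the permutation after decoding.

        def peel(H, erased):
            # Skeleton of iterative erasure decoding: repeatedly find a
            # check (row) touching exactly one erased position and resolve
            # it. Returns the unresolved erasures (empty set = success).
            erased, progress = set(erased), True
            while progress and erased:
                progress = False
                for row in H:
                    touched = [j for j in erased if row[j] == 1]
                    if len(touched) == 1:
                        erased.discard(touched[0])  # implied by the check
                        progress = True
            return erased

        def automorphism_group_decode(H, erased, perms):
            # Try permutations from (a subset of) the automorphism group
            # until the permuted erasure pattern peels completely; the
            # caller then decodes and applies the inverse permutation.
            for p in perms:                  # p maps position j to p[j]
                if not peel(H, [p[j] for j in erased]):
                    return p
            return None                      # no permutation worked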

    Optimization of Parity-Check Matrices of LDPC Codes

    Low-density parity-check (LDPC) codes are widely used in communications due to their excellent practical performance. The error probability of an LDPC code under iterative decoding on the binary erasure channel is determined by a class of combinatorial objects called stopping sets. Stopping sets of small size are the cause of decoder failures. The stopping redundancy is defined as the minimum number of rows in a parity-check matrix of the code such that the matrix contains no small stopping sets. Han, Siegel and Vardy derive upper bounds on the stopping redundancy of general binary linear codes by using probabilistic analysis. For many families of codes, these bounds are the best currently known. In this work, we improve on the results of Han, Siegel and Vardy by modifying their analysis. Our approach differs in that we judiciously select the first and second rows of the parity-check matrix, and then proceed with the probabilistic analysis. Numerical experiments confirm that the bounds obtained in this thesis are superior to those of Han, Siegel and Vardy for two codes: the extended Golay code and the quadratic residue code of length 48.

    Stopping Set Distributions of Some Linear Codes

    Stopping sets and the stopping set distribution of a low-density parity-check code are used to determine the performance of the code under iterative decoding over a binary erasure channel (BEC). Let $C$ be a binary $[n,k]$ linear code with parity-check matrix $H$, where the rows of $H$ may be dependent. A stopping set $S$ of $C$ with parity-check matrix $H$ is a subset of column indices of $H$ such that the restriction of $H$ to $S$ does not contain a row of weight one. The stopping set distribution $\{T_i(H)\}_{i=0}^n$ enumerates the number of stopping sets of size $i$ of $C$ with parity-check matrix $H$. Note that stopping sets and the stopping set distribution depend on the parity-check matrix $H$ of $C$. Let $H^*$ be the parity-check matrix of $C$ formed by all the non-zero codewords of its dual code $C^\perp$. A parity-check matrix $H$ is called BEC-optimal if $T_i(H) = T_i(H^*)$ for $i = 0, 1, \ldots, n$ and $H$ has the smallest number of rows. On the BEC, the iterative decoder of $C$ with a BEC-optimal parity-check matrix is an optimal decoder with much lower decoding complexity than the exhaustive decoder. In this paper, we study stopping sets, stopping set distributions and BEC-optimal parity-check matrices of binary linear codes. Using finite geometry, we obtain BEC-optimal parity-check matrices and then determine the stopping set distributions for the Simplex codes, the Hamming codes, the first-order Reed-Muller codes and the extended Hamming codes.
    Comment: 33 pages, submitted to IEEE Trans. Inform. Theory, Feb. 201
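    As a toy illustration of the quantity being enumerated (assuming 0/1 integer lists for $H$; by the definition above, the empty set is a stopping set of size 0):

        from itertools import combinations

        def stopping_set_distribution(H):
            # T[i] = number of stopping sets of size i for this H.
            # Exhaustive over all column subsets: toy lengths only.
            n = len(H[0])
            T = [0] * (n + 1)
            T[0] = 1  # the empty set is vacuously a stopping set
            for size in range(1, n + 1):
                for S in combinations(range(n), size):
                    if all(sum(row[j] for j in S) != 1 for row in H):
                        T[size] += 1
            return T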

    On generic erasure correcting sets and related problems

    Motivated by iterative decoding techniques for the binary erasure channel, Hollmann and Tolhuizen introduced and studied the notion of generic erasure correcting sets for linear codes. A generic $(r,s)$-erasure correcting set generates, for every code of codimension $r$, a parity-check matrix that allows iterative decoding of all correctable erasure patterns of size $s$ or less. The problem is to derive bounds on the minimum size $F(r,s)$ of generic erasure correcting sets and to find constructions for such sets. In this paper we continue the study of these sets. We derive better lower and upper bounds. Hollmann and Tolhuizen also introduced the stronger notion of $(r,s)$-sets and derived bounds for their minimum size $G(r,s)$. Here also we improve these bounds. We observe that these two concepts are closely related to so-called $s$-wise intersecting codes, an area in which $G(r,s)$ has been studied primarily with respect to ratewise performance. We derive connections. Finally, we observe that hypergraph covering can be used for both problems to derive good upper bounds.
    Comment: 9 pages, to appear in IEEE Transactions on Information Theory
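    The role of such a set can be sketched as follows (a hypothetical helper of ours, under the definition above): the rows $a \cdot H_0$ over GF(2), for $a$ in the set $A$, form the redundant parity-check matrix on which iterative decoding is run.

        def extend_parity_checks(H0, A):
            # H0: an r-row parity-check matrix of a code of codimension r.
            # A:  a set of binary combining vectors of length r.
            # Returns the matrix whose rows are the GF(2) products a*H0.
            # If A is a generic (r, s)-erasure correcting set, iterative
            # decoding on this matrix recovers every correctable erasure
            # pattern of size <= s -- for ANY choice of H0.
            r, n = len(H0), len(H0[0])
            return [[sum(a[i] * H0[i][j] for i in range(r)) % 2
                     for j in range(n)]
                    for a in A]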

    Invertible Bloom Lookup Tables with Listing Guarantees

    The Invertible Bloom Lookup Table (IBLT) is a probabilistic concise data structure for set representation that supports a listing operation, i.e., the recovery of the elements of the represented set. Its applications can be found in network synchronization and traffic monitoring, as well as in error-correcting codes. An IBLT can list its elements with a probability that depends on the size of the allocated memory and the size of the represented set, so that it can fail with small probability even for relatively small sets. While previous works studied only the failure probability of the IBLT, this work initiates a worst-case analysis of the IBLT that guarantees successful listing for all sets of a certain size. The worst-case study is important since a failure of the IBLT imposes high overhead. We describe a novel approach that guarantees successful listing when the set satisfies a tunable upper bound on its size. To this end, we develop multiple constructions based on various coding techniques, such as stopping sets and the stopping redundancy of error-correcting codes, Steiner systems, and covering arrays, as well as new methodologies we develop. We analyze the sizes of IBLTs with listing guarantees obtained by the various methods, as well as their mapping memory consumption. Lastly, we study lower bounds on the achievable sizes of IBLTs with listing guarantees and verify the results in the paper by simulations.
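    A minimal insert-only IBLT sketch in Python shows the structure whose worst case the paper studies; here the cell positions come from plain hashing, whereas the constructions above choose them from coding-theoretic designs precisely so that listing cannot fail (names and parameters below are illustrative):

        import hashlib

        class IBLT:
            # Minimal insert-only IBLT: m cells, k cells per (integer)
            # key; each cell stores a counter and the XOR of its keys.
            def __init__(self, m, k=3):
                self.m, self.k = m, k
                self.count = [0] * m
                self.key_sum = [0] * m

            def _cells(self, key):
                # k pseudo-random cell indices for this key.
                digests = (hashlib.sha256(f"{i}:{key}".encode()).digest()
                           for i in range(self.k))
                return [int.from_bytes(d[:4], "big") % self.m
                        for d in digests]

            def insert(self, key):
                for c in self._cells(key):
                    self.count[c] += 1
                    self.key_sum[c] ^= key

            def list_entries(self):
                # Peeling: a cell with count 1 holds exactly one key, so
                # recover it and remove it from all of its cells; listing
                # fails (partial list) when only cells of count >= 2
                # remain -- the analogue of a stopping set.
                out, progress = [], True
                while progress:
                    progress = False
                    for c in range(self.m):
                        if self.count[c] == 1:
                            key = self.key_sum[c]
                            out.append(key)
                            for d in self._cells(key):
                                self.count[d] -= 1
                                self.key_sum[d] ^= key
                            progress = True
                return out

        t = IBLT(m=20)
        for x in (5, 9, 42):
            t.insert(x)
        print(sorted(t.list_entries()))  # [5, 9, 42] unless peeling stalls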

    Density Evolution and Functional Threshold for the Noisy Min-Sum Decoder

    This paper investigates the behavior of the Min-Sum decoder running on noisy devices. The aim is to evaluate the robustness of the decoder in the presence of computation noise, e.g. due to faulty logic in the processing units, which represents a new source of errors that may occur during the decoding process. To this end, we first introduce probabilistic models for the arithmetic and logic units of the finite-precision Min-Sum decoder, and then carry out the density evolution analysis of the noisy Min-Sum decoder. We show that in some particular cases, the noise introduced by the device can help the Min-Sum decoder escape from fixed-point attractors, and may actually result in an increased correction capacity with respect to the noiseless decoder. We also reveal the existence of a specific threshold phenomenon, referred to as the functional threshold. The behavior of the noisy decoder is demonstrated in the asymptotic limit of the code length -- by using "noisy" density evolution equations -- and is also verified in the finite-length case by Monte Carlo simulation.
    Comment: 46 pages (draft version); extended version of the paper with the same title, submitted to IEEE Transactions on Communications
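    As a toy illustration of where such computation noise enters (a deliberately simplified noise model of ours, not the paper's probabilistic models): a finite-precision Min-Sum check-node update whose outputs are occasionally corrupted by the hardware.

        import random

        def noisy_min_sum_check_update(msgs, flip_prob=0.01, levels=7):
            # One check-node update of finite-precision Min-Sum: the
            # message toward edge i is the product of the signs of the
            # other incoming messages times their minimum magnitude,
            # saturated to [-levels, +levels]. Each output is then
            # sign-flipped with probability flip_prob, modeling faulty
            # logic in the processing unit. Assumes check degree >= 2.
            out = []
            for i in range(len(msgs)):
                others = msgs[:i] + msgs[i + 1:]
                sign = 1
                for m in others:
                    sign = -sign if m < 0 else sign
                mag = min(abs(m) for m in others)
                val = max(-levels, min(levels, sign * mag))
                if random.random() < flip_prob:
                    val = -val          # the injected computation noise
                out.append(val)
            return out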