
    Stopping Set Distributions of Some Linear Codes

    Stopping sets and the stopping set distribution of a low-density parity-check code are used to determine the performance of this code under iterative decoding over a binary erasure channel (BEC). Let $C$ be a binary $[n,k]$ linear code with parity-check matrix $H$, where the rows of $H$ may be dependent. A stopping set $S$ of $C$ with parity-check matrix $H$ is a subset of column indices of $H$ such that the restriction of $H$ to $S$ does not contain a row of weight one. The stopping set distribution $\{T_i(H)\}_{i=0}^{n}$ enumerates the number of stopping sets of size $i$ of $C$ with parity-check matrix $H$. Note that stopping sets and the stopping set distribution depend on the parity-check matrix $H$ of $C$. Let $H^{*}$ be the parity-check matrix of $C$ formed by all the non-zero codewords of its dual code $C^{\perp}$. A parity-check matrix $H$ is called BEC-optimal if $T_i(H) = T_i(H^{*})$ for $i = 0, 1, \ldots, n$ and $H$ has the smallest number of rows. On the BEC, the iterative decoder of $C$ with a BEC-optimal parity-check matrix is an optimal decoder with much lower decoding complexity than the exhaustive decoder. In this paper, we study stopping sets, stopping set distributions and BEC-optimal parity-check matrices of binary linear codes. Using finite geometry in combinatorics, we obtain BEC-optimal parity-check matrices and then determine the stopping set distributions for the Simplex codes, the Hamming codes, the first-order Reed-Muller codes and the extended Hamming codes.

    Comment: 33 pages, submitted to IEEE Trans. Inform. Theory, Feb. 201
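
    The definition above is easy to exercise directly. Below is a minimal brute-force sketch (not the paper's finite-geometry method) that checks whether a set of column indices is a stopping set and enumerates $\{T_i(H)\}$ for a toy parity-check matrix; the $[7,4]$ Hamming matrix used here is only an illustration.

```python
from itertools import combinations

import numpy as np

def is_stopping_set(H, S):
    """S is a stopping set of H if the restriction of H to the columns
    indexed by S contains no row of weight one."""
    sub = H[:, sorted(S)]
    return not np.any(sub.sum(axis=1) == 1)

def stopping_set_distribution(H):
    """Brute-force enumeration of {T_i(H)}: count the stopping sets of
    each size i (exponential in n, so toy parameters only)."""
    n = H.shape[1]
    return [sum(is_stopping_set(H, S) for S in combinations(range(n), i))
            for i in range(n + 1)]

# A parity-check matrix of the [7,4] Hamming code (illustrative choice).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
print(stopping_set_distribution(H))  # T_0 = 1: the empty set is a stopping set
```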

    Multilevel Generalised Low-Density Parity-Check Codes

    Multilevel coding employing generalised low-density parity-check (GLDPC) component codes is proposed, which is capable of outperforming classic low-density parity-check component codes at reduced decoding latency.

    Construction of Near-Optimum Burst Erasure Correcting Low-Density Parity-Check Codes

    In this paper, a simple, general-purpose and effective tool for the design of low-density parity-check (LDPC) codes for iterative correction of bursts of erasures is presented. The design method starts from the parity-check matrix of an LDPC code and develops an optimized parity-check matrix with the same performance on the memoryless erasure channel, which is also suitable for the iterative correction of single bursts of erasures. The parity-check matrix optimization is performed by an algorithm called the pivot searching and swapping (PSS) algorithm, which executes permutations of carefully chosen columns of the parity-check matrix after a local analysis of particular variable nodes called stopping set pivots. This algorithm can in principle be applied to any LDPC code. If the input parity-check matrix is designed for good performance on the memoryless erasure channel, then the code obtained after the application of the PSS algorithm provides good joint correction of independent erasures and single erasure bursts. Numerical results are provided to show the effectiveness of the PSS algorithm when applied to different categories of LDPC codes.

    Comment: 15 pages, 4 figures. IEEE Trans. on Communications, accepted (submitted in Feb. 2007)
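
    The pivot analysis inside the PSS algorithm is not detailed in the abstract, but the decoder it serves is the standard iterative (peeling) erasure decoder, sketched below under that assumption: each pass fills in any erased bit covered by a check with exactly one erasure, and decoding stalls precisely when the remaining erased positions contain a nonempty stopping set.

```python
def peel_erasures(H, received):
    """Iterative (peeling) erasure decoding on the BEC.

    `H` is a 0/1 parity-check matrix given as a list of rows; `received`
    is a list of 0/1 values with None marking erasures.
    """
    r = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [j for j, h in enumerate(row) if h and r[j] is None]
            if len(erased) == 1:
                # The erased bit equals the parity of the known bits in this check.
                r[erased[0]] = sum(h * v for h, v in zip(row, r)
                                   if h and v is not None) % 2
                progress = True
    return r  # any remaining None values sit inside a stopping set

H = [[1, 1, 0, 1, 0],
     [0, 1, 1, 0, 1]]
print(peel_erasures(H, [1, None, 0, 0, 1]))  # recovers the erased bit
```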

    On parity check collections for iterative erasure decoding that correct all correctable erasure patterns of a given size

    Recently there has been interest in the construction of small parity-check sets for iterative decoding of the Hamming code, with the property that each uncorrectable (or stopping) set of size three is the support of a codeword and hence uncorrectable anyway. Here we reformulate and generalise the problem, and improve on this construction. First we show that a parity-check collection that corrects all correctable erasure patterns of size m for the r-th order Hamming code (i.e., the Hamming code with codimension r) provides for all codes of codimension r a corresponding "generic" parity-check collection with this property. This leads naturally to a necessary and sufficient condition on such generic parity-check collections. We use this condition to construct a generic parity-check collection for codes of codimension r correcting all correctable erasure patterns of size at most m, for all r and m <= r, thus generalising the known construction for m = 3. Then we discuss the optimality of our construction and show that it can be improved for m >= 3 and r large enough. Finally we discuss some directions for further research.

    Comment: 13 pages, no figures. Submitted to IEEE Transactions on Information Theory, July 28, 200
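
    As a concrete reading of the property in question, the sketch below (our interpretation, not code from the paper) verifies by exhaustive search that a given parity-check collection corrects, via peeling, every erasure pattern of size at most m that is correctable at all, i.e. whose erased columns are linearly independent over GF(2).

```python
from itertools import combinations

import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = (M % 2).copy()
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def peels(H_coll, E):
    """True if peeling with the check collection H_coll resolves every
    erased position in E (no nonempty stopping set inside E)."""
    E, progress = set(E), True
    while progress and E:
        progress = False
        for row in H_coll:
            touched = [j for j in E if row[j]]
            if len(touched) == 1:
                E.remove(touched[0])
                progress = True
    return not E

def corrects_all_correctable(H_coll, H_full, m):
    """Check: every ML-correctable erasure pattern of size <= m (i.e. the
    erased columns of H_full are linearly independent) is peeled clean."""
    n = H_full.shape[1]
    for size in range(1, m + 1):
        for E in combinations(range(n), size):
            correctable = gf2_rank(H_full[:, list(E)]) == size
            if correctable and not peels(H_coll, E):
                return False
    return True
```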

    Compact QC-LDPC Block and SC-LDPC Convolutional Codes for Low-Latency Communications

    Low decoding latency and complexity are two important requirements of channel codes used in many applications, like machine-to-machine communications. In this paper, we show how these requirements can be fulfilled by using some special quasi-cyclic low-density parity-check (QC-LDPC) block codes and spatially coupled low-density parity-check (SC-LDPC) convolutional codes that we denote as compact. They are defined by parity-check matrices designed according to a recent approach based on sequentially multiplied columns. This method allows obtaining codes with girth up to 12. Many numerical examples of practical codes are provided.

    Comment: 5 pages, 1 figure, presented at IEEE PIMRC 201
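
    The sequentially-multiplied-columns design itself is not described in the abstract; as background, here is a minimal sketch of the quasi-cyclic structure it produces, assembling a parity-check matrix from p x p circulant permutation blocks given an exponent matrix (the exponents below are illustrative, not taken from the paper).

```python
import numpy as np

def circulant_permutation(p, shift):
    """The p x p identity matrix with its columns cyclically shifted."""
    return np.roll(np.eye(p, dtype=int), shift, axis=1)

def qc_ldpc_matrix(p, exponents):
    """Assemble a QC-LDPC parity-check matrix from an exponent matrix:
    entry e >= 0 becomes a circulant permutation with shift e, and
    entry -1 becomes the p x p all-zero block."""
    zero = np.zeros((p, p), dtype=int)
    return np.block([[circulant_permutation(p, e) if e >= 0 else zero
                      for e in row] for row in exponents])

# Toy exponent matrix (illustrative only).
E = [[0, 1, 3],
     [0, 2, 6]]
H = qc_ldpc_matrix(7, E)
print(H.shape)  # (14, 21): each entry expanded to a 7 x 7 block
```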

    Analysis of reaction and timing attacks against cryptosystems based on sparse parity-check codes

    In this paper we study reaction and timing attacks against cryptosystems based on sparse parity-check codes, which encompass low-density parity-check (LDPC) codes and moderate-density parity-check (MDPC) codes. We show that the feasibility of these attacks is not strictly associated to the quasi-cyclic (QC) structure of the code, but is related to the intrinsically probabilistic decoding of any sparse parity-check code. Hence, these attacks not only work against QC codes, but can be generalized to broader classes of codes. We provide a novel algorithm that, in the case of a QC code, allows recovering a larger amount of information than that retrievable through existing attacks, and we use this algorithm to characterize new side-channel information leakages. We devise a theoretical model for the decoder that describes and justifies our results. Numerical simulations are provided that confirm the effectiveness of our approach.