
    A Decoding Algorithm for LDPC Codes Over Erasure Channels with Sporadic Errors

    An efficient decoding algorithm for low-density parity-check (LDPC) codes on erasure channels with sporadic errors (i.e., binary error-and-erasure channels with error probability much smaller than the erasure probability) is proposed and its performance analyzed. A general single-error multiple-erasure (SEME) decoding algorithm is first described, which may in principle be used with any binary linear block code. The algorithm is optimum whenever the non-erased part of the received word is affected by at most one error, and is capable of performing error detection of multiple errors. An upper bound on the average block error probability under SEME decoding is derived for the linear random code ensemble. The bound is tight and easy to implement. The algorithm is then adapted to LDPC codes, resulting in a simple modification to a previously proposed efficient maximum likelihood LDPC erasure decoder which exploits the parity-check matrix sparseness. Numerical results reveal that LDPC codes under efficient SEME decoding can closely approach the average performance of random codes.
    G. Liva; E. Paolini; B. Matuz; M. Chiani
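
    The abstract leaves the algorithm at a high level; the brute-force sketch below (an illustration only, not the paper's efficient structured decoder, and using an arbitrary toy parity-check matrix) conveys the single-error multiple-erasure idea: first try to resolve the erasures from the parity checks, and if the checks are inconsistent, hypothesise a single flipped bit among the non-erased positions and retry.

        import numpy as np

        def solve_gf2(A, b):
            """Solve A x = b over GF(2); return one solution, or None if inconsistent."""
            A = A.copy() % 2
            b = b.copy() % 2
            m, n = A.shape
            pivots = []
            row = 0
            for col in range(n):
                hits = np.nonzero(A[row:, col])[0]
                if len(hits) == 0:
                    continue
                p = row + hits[0]
                A[[row, p]] = A[[p, row]]
                b[[row, p]] = b[[p, row]]
                for r in range(m):
                    if r != row and A[r, col]:
                        A[r] ^= A[row]
                        b[r] ^= b[row]
                pivots.append(col)
                row += 1
                if row == m:
                    break
            if b[row:].any():                      # a zero row with syndrome 1: no solution
                return None
            x = np.zeros(n, dtype=int)
            for r, col in enumerate(pivots):
                x[col] = b[r]
            return x

        def seme_decode(H, y, erased):
            """Toy single-error multiple-erasure (SEME) decoder.
            H: parity-check matrix, y: received bits (erased positions ignored),
            erased: boolean mask of erased positions.  Returns a codeword or None."""
            known = ~erased
            s = (H[:, known] @ y[known]) % 2       # syndrome of the non-erased part
            x_e = solve_gf2(H[:, erased], s)       # erasure-only decoding
            if x_e is not None:
                out = y.copy(); out[erased] = x_e
                return out
            for j in np.flatnonzero(known):        # hypothesise one error at position j
                y_try = y.copy(); y_try[j] ^= 1
                s = (H[:, known] @ y_try[known]) % 2
                x_e = solve_gf2(H[:, erased], s)
                if x_e is not None:
                    y_try[erased] = x_e
                    return y_try
            return None                            # more than one error detected

        # toy run: [8,4] extended Hamming code, all-zero codeword transmitted,
        # position 2 erased by the channel and position 6 flipped
        H = np.array([[1,1,1,1,1,1,1,1],
                      [0,0,0,0,1,1,1,1],
                      [0,0,1,1,0,0,1,1],
                      [0,1,0,1,0,1,0,1]])
        y = np.zeros(8, dtype=int); y[6] = 1
        erased = np.zeros(8, dtype=bool); erased[2] = True
        print(seme_decode(H, y, erased))           # recovers the all-zero codeword

    As the abstract notes, the paper's decoder instead reuses an efficient maximum-likelihood erasure decoder and exploits the sparseness of the parity-check matrix, avoiding the brute-force loop above.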

    From Polar to Reed-Muller Codes: a Technique to Improve the Finite-Length Performance

    We explore the relationship between polar and RM codes and we describe a coding scheme which improves upon the performance of the standard polar code at practical block lengths. Our starting point is the experimental observation that RM codes have a smaller error probability than polar codes under MAP decoding. This motivates us to introduce a family of codes that "interpolates" between RM and polar codes, call this family ${\mathcal C}_{\rm inter} = \{C_{\alpha} : \alpha \in [0, 1]\}$, where $C_{\alpha}\big|_{\alpha = 1}$ is the original polar code and $C_{\alpha}\big|_{\alpha = 0}$ is an RM code. Based on numerical observations, we remark that the error probability under MAP decoding is an increasing function of $\alpha$. MAP decoding has in general exponential complexity, but empirically the performance of polar codes at finite block lengths is boosted by moving along the family ${\mathcal C}_{\rm inter}$ even under low-complexity decoding schemes such as, for instance, belief propagation or successive cancellation list decoding. We demonstrate the performance gain via numerical simulations for transmission over the erasure channel as well as the Gaussian channel.
    Comment: 8 pages, 7 figures, in IEEE Transactions on Communications, 2014 and in ISIT'14
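
    The abstract does not give the interpolation rule, so the sketch below uses one plausible stand-in (an assumption, not necessarily the authors' construction): choose a fraction $\alpha$ of the K information indices by synthetic-channel reliability, here computed for a BEC design channel, and the remainder by row weight of the polar kernel's Kronecker power, so that $\alpha = 1$ reduces to a polar code and $\alpha = 0$ to an RM code for suitable K.

        import numpy as np

        def bec_bhattacharyya(n, eps):
            """Erasure probabilities (Bhattacharyya parameters) of the 2**n synthetic
            channels of a BEC(eps), in the natural index order of the n-fold
            Kronecker power of the polar kernel."""
            z = np.array([eps])
            for _ in range(n):
                out = np.empty(2 * len(z))
                out[0::2] = 2 * z - z ** 2      # "minus" (degraded) channels
                out[1::2] = z ** 2              # "plus" (upgraded) channels
                z = out
            return z

        def row_weights(n):
            """Hamming weight of row i of the n-fold Kronecker power: 2**popcount(i)."""
            return np.array([2 ** bin(i).count("1") for i in range(2 ** n)])

        def interpolated_info_set(n, K, alpha, eps=0.5):
            """Pick round(alpha*K) indices by channel reliability (polar criterion)
            and the remaining ones by row weight (RM criterion)."""
            z = bec_bhattacharyya(n, eps)
            w = row_weights(n)
            k_polar = int(round(alpha * K))
            chosen = [int(i) for i in np.argsort(z)[:k_polar]]   # most reliable channels
            picked = set(chosen)
            for i in np.argsort(-w):                             # then heaviest rows
                if len(chosen) == K:
                    break
                if int(i) not in picked:
                    chosen.append(int(i))
                    picked.add(int(i))
            return sorted(chosen)

        # blocklength 128, rate 1/2: alpha = 0 gives exactly RM(3,7),
        # alpha = 1 the polar code designed for BEC(0.5)
        print(interpolated_info_set(7, 64, alpha=0.0)[:8])
        print(interpolated_info_set(7, 64, alpha=1.0)[:8])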

    Minimum-Variance Importance-Sampling Bernoulli Estimator for Fast Simulation of Linear Block Codes over Binary Symmetric Channels

    In this paper the choice of the Bernoulli distribution as the biased distribution for importance sampling (IS) Monte Carlo (MC) simulation of linear block codes over binary symmetric channels (BSCs) is studied. Based on the analytical derivation of the optimal IS Bernoulli distribution, with explicit calculation of the variance of the corresponding IS estimator, two novel algorithms for fast simulation of linear block codes are proposed. For sufficiently high signal-to-noise ratios (SNRs) one of the proposed algorithms is SNR-invariant, i.e., the IS estimator does not depend on the cross-over probability of the channel. Also, the proposed algorithms are shown to be suitable for the estimation of the error-correcting capability of the code and the decoder. Finally, the effectiveness of the algorithms is confirmed through simulation results, in comparison to the standard Monte Carlo method.
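
    As a minimal illustration of the mechanism (not of the paper's minimum-variance estimator), the sketch below estimates the probability that a BSC(p) produces more than t errors in n uses, i.e. the word error probability of a hypothetical t-error-correcting bounded-distance decoder, by drawing error patterns from a biased Bernoulli(q) channel and reweighting with the likelihood ratio; the choice q = (t+1)/n is a heuristic assumption.

        import numpy as np
        from math import comb

        rng = np.random.default_rng(0)

        def is_estimate(n, t, p, q, num_samples=100_000):
            """Importance-sampling estimate of P(more than t errors in n BSC(p) uses),
            drawing error patterns from a biased BSC(q) and reweighting each sample by
            the likelihood ratio of the true channel to the biased one."""
            errors = rng.random((num_samples, n)) < q
            weights = np.where(errors, p / q, (1 - p) / (1 - q)).prod(axis=1)
            fail = errors.sum(axis=1) > t              # bounded-distance decoder fails
            return (weights * fail).mean()

        n, t, p = 63, 2, 1e-4
        exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1, n + 1))
        print("exact      :", exact)
        print("IS estimate:", is_estimate(n, t, p, q=(t + 1) / n))
        # plain Monte Carlo with the same budget almost surely returns 0 at this p
        print("plain MC   :", ((rng.random((100_000, n)) < p).sum(axis=1) > t).mean())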

    Coding with Encoding Uncertainty

    We study the channel coding problem when errors and uncertainty occur in the encoding process. For simplicity we assume the channel between the encoder and the decoder is perfect. Focusing on linear block codes, we model the encoding uncertainty as erasures on the edges in the factor graph of the encoder generator matrix. We first take a worst-case approach and find the maximum tolerable number of erasures for perfect error correction. Next, we take a probabilistic approach and derive a sufficient condition on the rate of a set of codes, such that the decoding error probability vanishes as the blocklength tends to infinity. In both scenarios, due to the inherent asymmetry of the problem, we derive the results from first principles, which indicates that robustness to encoding errors requires new code properties, different from the classical ones.
    Comment: 12 pages; a shorter version of this work will appear in the proceedings of ISIT 201
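
    The worst-case question invites a small brute-force experiment. The sketch below rests on a simplified reading that is an assumption, not the paper's model: an erased edge of the generator-matrix factor graph makes the corresponding codeword bit unusable, the decoder is assumed to know which bits are affected, and perfect correction means every message is still uniquely recoverable from the surviving bits.

        import numpy as np
        from itertools import combinations

        def gf2_rank(M):
            """Rank of a binary matrix over GF(2)."""
            M = M.copy() % 2
            rank = 0
            for col in range(M.shape[1]):
                hits = np.nonzero(M[rank:, col])[0]
                if len(hits) == 0:
                    continue
                p = rank + hits[0]
                M[[rank, p]] = M[[p, rank]]
                for r in range(M.shape[0]):
                    if r != rank and M[r, col]:
                        M[r] ^= M[rank]
                rank += 1
            return rank

        def max_tolerable_edge_erasures(G):
            """Largest e such that ANY e erased edges (nonzero entries of G) still
            leave every message uniquely recoverable, when each codeword bit touched
            by an erased edge is discarded."""
            k, n = G.shape
            edges = [(i, j) for i in range(k) for j in range(n) if G[i, j]]
            e = 0
            while e < len(edges):
                for erased in combinations(edges, e + 1):
                    lost = {j for _, j in erased}
                    keep = [j for j in range(n) if j not in lost]
                    if gf2_rank(G[:, keep]) < k:       # two messages become confusable
                        return e
                e += 1
            return e

        # toy example: generator matrix of the [7,4] Hamming code
        G = np.array([[1,0,0,0,1,1,0],
                      [0,1,0,0,1,0,1],
                      [0,0,1,0,0,1,1],
                      [0,0,0,1,1,1,1]])
        print(max_tolerable_edge_erasures(G))          # 2 under this toy rule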

    The Error Probability of Sparse Superposition Codes with Approximate Message Passing Decoding

    Sparse superposition codes, or sparse regression codes (SPARCs), are a recent class of codes for reliable communication over the AWGN channel at rates approaching the channel capacity. Approximate message passing (AMP) decoding, a computationally efficient technique for decoding SPARCs, has been proven to be asymptotically capacity-achieving for the AWGN channel. In this paper, we refine the asymptotic result by deriving a large deviations bound on the probability of AMP decoding error. This bound gives insight into the error performance of the AMP decoder for large but finite problem sizes, providing an error exponent as well as guidance on how the code parameters should be chosen at finite block lengths. For an appropriate choice of code parameters, we show that for any fixed rate less than the channel capacity, the decoding error probability decays exponentially in $n/(\log n)^{2T}$, where $T$, the number of AMP iterations required for successful decoding, is bounded in terms of the gap from capacity.
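
    For context, here is a minimal sketch of an AMP iteration for SPARCs, assuming a flat power allocation and small illustrative dimensions; the capacity-approaching regime analysed in the paper relies on carefully chosen power allocations and much larger problem sizes, and the parameter names below are not from the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def amp_decode_sparc(y, A, L, M, P, num_iters=30):
            """AMP decoding of a sparse regression code with flat power allocation.
            A is n x (L*M) with i.i.d. N(0, 1/n) entries; the message vector has one
            nonzero entry of value sqrt(n*P/L) in each of its L length-M sections."""
            n = len(y)
            c = np.sqrt(n * P / L)                     # nonzero value per section
            beta = np.zeros(L * M)
            z = y.copy()
            for _ in range(num_iters):
                tau2 = z @ z / n                       # effective noise variance estimate
                s = (beta + A.T @ z).reshape(L, M)     # test statistics, per section
                u = s * c / tau2
                u -= u.max(axis=1, keepdims=True)      # numerically stable softmax
                expu = np.exp(u)
                beta_new = (c * expu / expu.sum(axis=1, keepdims=True)).reshape(-1)
                onsager = (z / tau2) * (P - beta_new @ beta_new / n)
                z = y - A @ beta_new + onsager         # residual with Onsager correction
                beta = beta_new
            hard = np.zeros(L * M)                     # hard decision: max per section
            hard[np.arange(L) * M + beta.reshape(L, M).argmax(axis=1)] = c
            return hard

        # tiny illustrative run (dimensions far too small for the asymptotic guarantees)
        L, M, R = 32, 16, 0.8                          # R in bits per channel use
        n = int(round(L * np.log2(M) / R))
        P, sigma2 = 7.0, 1.0                           # snr = 7, capacity ~ 1.5 bits/use
        A = rng.normal(0, 1 / np.sqrt(n), size=(n, L * M))
        beta0 = np.zeros(L * M)
        beta0[np.arange(L) * M + rng.integers(M, size=L)] = np.sqrt(n * P / L)
        y = A @ beta0 + rng.normal(0, np.sqrt(sigma2), size=n)
        beta_hat = amp_decode_sparc(y, A, L, M, P)
        correct = (beta_hat.reshape(L, M).argmax(1) == beta0.reshape(L, M).argmax(1)).sum()
        print("sections decoded correctly:", int(correct), "/", L)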