    Finite Length Analysis of LDPC Codes

    In this paper, we study the performance of finite-length LDPC codes in the waterfall region. We propose an algorithm that predicts the error performance of finite-length LDPC codes over various binary memoryless channels. Numerical results show that our technique yields more accurate performance predictions than existing techniques.
    Comment: Submitted to WCNC 201
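    The abstract does not detail the prediction algorithm itself, so the sketch below only illustrates the quantity such a technique predicts: a Monte Carlo baseline for waterfall-region frame error rates over the binary erasure channel, using a hypothetical (3,6)-regular code and a peeling decoder. All parameters and helper names are illustrative assumptions, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def random_regular_H(n, dv, dc, rng):
            """Random (dv, dc)-regular parity-check matrix via a shuffled edge-socket model.
            Parallel edges cancel modulo 2, so the result is only approximately regular."""
            m = n * dv // dc
            sockets = np.repeat(np.arange(n), dv)          # dv sockets per variable node
            rng.shuffle(sockets)
            H = np.zeros((m, n), dtype=np.uint8)
            for e, v in enumerate(sockets):
                H[e // dc, v] ^= 1                         # edge e attaches to check node e // dc
            return H

        def peeling_decode(H, erased):
            """BEC peeling decoder: any check with exactly one erased neighbour recovers it."""
            erased = erased.copy()
            progress = True
            while progress and erased.any():
                progress = False
                for row in H:
                    idx = np.flatnonzero(row & erased)
                    if len(idx) == 1:
                        erased[idx[0]] = False
                        progress = True
            return not erased.any()                        # True means the frame was decoded

        def frame_error_rate(H, eps, trials, rng):
            n = H.shape[1]
            fails = sum(not peeling_decode(H, rng.random(n) < eps) for _ in range(trials))
            return fails / trials

        H = random_regular_H(n=504, dv=3, dc=6, rng=rng)
        for eps in (0.35, 0.40, 0.45):                     # around the (3,6) BEC threshold of about 0.429
            print(f"eps={eps:.2f}  FER={frame_error_rate(H, eps, trials=300, rng=rng):.3f}")

    Sweeping the erasure probability through the waterfall region yields the simulated frame error rates against which a finite-length prediction technique of this kind would be judged.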

    Scattered EXIT Charts for Finite Length LDPC Code Design

    We introduce the Scattered Extrinsic Information Transfer (S-EXIT) chart as a tool for optimizing degree profiles of short-length Low-Density Parity-Check (LDPC) codes under iterative decoding. As degree profile optimization is typically done in the asymptotic length regime, there is room for further improvement when the finite-length behavior is taken into account. We propose to treat the average extrinsic information as a random variable and to exploit its distribution properties to guide code design. We explain, step by step, how to generate an S-EXIT chart for short-length LDPC codes. We show that this approach achieves gains of 0.5 dB and 0.6 dB over the additive white Gaussian noise (AWGN) channel for codeword lengths of 128 and 180 bits, respectively, at a target bit error rate (BER) of 10^{-4} when compared to conventional Extrinsic Information Transfer (EXIT) chart-based optimization. A performance gain over the Binary Erasure Channel (BEC) for a block (i.e., codeword) length of 180 bits is also shown.
    Comment: in IEEE International Conference on Communications (ICC), May 201
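    As a small illustration of the finite-length effect that motivates the S-EXIT chart, the sketch below treats the per-block average extrinsic information as a random variable under the standard consistent-Gaussian LLR model used in EXIT analysis (an assumption of this sketch; the paper's S-EXIT construction additionally tracks these quantities through the iterative decoder). For 128-bit blocks, the per-block mutual information scatters noticeably around the single asymptotic EXIT value.

        import numpy as np

        rng = np.random.default_rng(1)

        def extrinsic_info(llr):
            """Monte Carlo mutual-information estimate I = 1 - E[log2(1 + e^{-L})],
            valid for LLRs conditioned on the all-zero codeword."""
            return 1.0 - np.mean(np.log2(1.0 + np.exp(-llr)), axis=-1)

        sigma = 3.0                                  # LLR quality; J(sigma) is the asymptotic I
        n_bits, n_blocks = 128, 10000                # short blocks, as in the 128-bit case above
        llr = rng.normal(sigma**2 / 2, sigma, size=(n_blocks, n_bits))

        I_asymptotic = extrinsic_info(llr.ravel())   # the single point a classical EXIT chart uses
        I_per_block = extrinsic_info(llr)            # the scatter an S-EXIT chart works with

        print(f"asymptotic I: {I_asymptotic:.3f}")
        print(f"per-block I : mean {I_per_block.mean():.3f}, std {I_per_block.std():.3f}, "
              f"5th-95th percentile {np.percentile(I_per_block, 5):.3f}-{np.percentile(I_per_block, 95):.3f}")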

    Decomposition Methods for Large Scale LP Decoding

    When binary linear error-correcting codes are used over symmetric channels, a relaxed version of the maximum likelihood decoding problem can be stated as a linear program (LP). This LP decoder can be used to decode error-correcting codes at bit-error rates comparable to state-of-the-art belief propagation (BP) decoders, but with significantly stronger theoretical guarantees. However, LP decoding, when implemented with standard LP solvers, does not easily scale to the block lengths of modern error-correcting codes. In this paper we draw on decomposition methods from optimization theory, specifically the Alternating Direction Method of Multipliers (ADMM), to develop efficient distributed algorithms for LP decoding. The key enabling technical result is a "two-slice" characterization of the geometry of the parity polytope, which is the convex hull of all codewords of a single parity-check code. This new characterization simplifies the representation of points in the polytope. Using this simplification, we develop an efficient algorithm for Euclidean-norm projection onto the parity polytope. This projection is required by ADMM and allows us to use LP decoding, with all its theoretical guarantees, to decode large-scale error-correcting codes efficiently. We present numerical results for LDPC codes with block lengths greater than 1000. The waterfall region of LP decoding is seen to start at a slightly higher signal-to-noise ratio than for sum-product BP; however, unlike BP, no error floor is observed for LP decoding. Our implementation of LP decoding using ADMM executes as fast as our baseline sum-product BP decoder, is fully parallelizable, and can be seen to implement a type of message passing with a particularly simple schedule.
    Comment: 35 pages, 11 figures. An early version of this work appeared at the 49th Annual Allerton Conference, September 2011. This version to appear in IEEE Transactions on Information Theory.
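    The projection onto the parity polytope is the subroutine the abstract highlights. Below is a minimal Python sketch of one such Euclidean projection, written from the cut-search viewpoint found in the LP-decoding literature rather than as a reproduction of the paper's exact two-slice algorithm: clip to the unit cube, identify the single odd-set facet that can be violated, and reduce the remaining step to a probability-simplex projection.

        import numpy as np

        def project_simplex(y):
            """Euclidean projection onto the probability simplex (sort-based, O(d log d))."""
            u = np.sort(y)[::-1]
            css = np.cumsum(u)
            rho = np.flatnonzero(u + (1.0 - css) / np.arange(1, len(y) + 1) > 0)[-1]
            theta = (1.0 - css[rho]) / (rho + 1)
            return np.maximum(y + theta, 0.0)

        def project_parity_polytope(v):
            """Project v in R^d onto conv{x in {0,1}^d : sum(x) is even}."""
            z = np.clip(v, 0.0, 1.0)
            S = z > 0.5                               # cut search: most violated odd-set facet
            if S.sum() % 2 == 0:
                S[np.argmin(np.abs(z - 0.5))] ^= True # force |S| odd with the cheapest toggle
            a = np.where(S, 1.0, -1.0)
            if a @ z <= S.sum() - 1:                  # clipped point already satisfies the facet
                return z
            # Project v onto [0,1]^d intersected with the facet's halfspace.  Reflecting the
            # coordinates outside S turns the constraint into sum(t) <= d - 1; a second
            # reflection u = 1 - t maps the active face to the probability simplex.
            t = np.where(S, v, 1.0 - v)
            t_proj = 1.0 - project_simplex(1.0 - t)
            return np.where(S, t_proj, 1.0 - t_proj)

        # Quick check on a length-3 example (true projection lies on x1 + x2 + x3 = 2).
        print(project_parity_polytope(np.array([0.9, 0.8, 0.7])))   # ~[0.767, 0.667, 0.567]

    Inside an ADMM decoder this routine would be called once per check node per iteration, which is why a projection with O(d log d) cost matters at large block lengths.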

    Polytope of Correct (Linear Programming) Decoding and Low-Weight Pseudo-Codewords

    We analyze Linear Programming (LP) decoding of graphical binary codes operating over soft-output, symmetric and log-concave channels. We show that the error surface, separating the domain of correct decoding from the domain of erroneous decoding, is a polytope. We formulate the problem of finding the lowest-weight pseudo-codeword as a non-convex optimization (maximization of a convex function) over a polytope, with the cost function defined by the channel and the polytope defined by the structure of the code. This formulation suggests new provably convergent heuristics for finding the lowest-weight pseudo-codewords that improve in quality upon those previously discussed. The algorithm's performance is tested on the example of the Tanner [155, 64, 20] code over the Additive White Gaussian Noise (AWGN) channel.
    Comment: 6 pages, 2 figures, accepted for IEEE ISIT 201
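    The optimization structure described here, maximizing a convex function over a polytope, admits a simple provably convergent local-ascent heuristic: linearize the convex cost at the current point and jump to a point of the polytope that maximizes the linearization, which strictly increases the cost until no first-order improvement remains. The sketch below illustrates that idea on a toy polytope and cost; the fundamental polytope and channel-dependent cost of the paper are not reproduced here.

        import numpy as np
        from scipy.optimize import linprog

        def convex_maximize_over_polytope(f, grad_f, A_ub, b_ub, x0, tol=1e-9, max_iter=100):
            """Local ascent for maximizing a convex f over {x : A_ub x <= b_ub}.
            Convexity gives f(y) >= f(x) + grad_f(x).(y - x), so any step with positive
            linearized gain strictly increases f; with finitely many vertices the loop
            stops at a local maximum."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                g = grad_f(x)
                res = linprog(-g, A_ub=A_ub, b_ub=b_ub, bounds=(None, None), method="highs")
                y = res.x                             # maximizer of the linearized cost
                if g @ (y - x) <= tol:                # no first-order improvement left
                    break
                x = y
            return x, f(x)

        # Toy stand-in: maximize ||x||^2 (convex) over the box [-1, 2]^3 written as A x <= b.
        A = np.vstack([np.eye(3), -np.eye(3)])
        b = np.array([2.0, 2.0, 2.0, 1.0, 1.0, 1.0])
        x_star, val = convex_maximize_over_polytope(
            f=lambda x: float(x @ x), grad_f=lambda x: 2.0 * x,
            A_ub=A, b_ub=b, x0=np.full(3, 0.5))
        print(x_star, val)                            # converges to the corner [2, 2, 2], value 12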