
    Pseudocodewords of linear programming decoding of 3-dimensional turbo codes

    In this work, we consider pseudocodewords of (relaxed) linear programming (LP) decoding of 3-dimensional turbo codes (3D-TCs), recently introduced by Berrou et al. Here, we consider binary 3D-TCs, while the original work of Berrou et al. considered double-binary codes. We present a relaxed LP decoder for 3D-TCs, which is an adaptation of the relaxed LP decoder for conventional turbo codes proposed by Feldman in his thesis. The vertices of this relaxed polytope are the pseudocodewords. We show that the support set of any pseudocodeword is a stopping set of iterative decoding of 3D-TCs using maximum a posteriori constituent decoders on the binary erasure channel. Furthermore, we present a numerical study of small block length 3D-TCs, which shows that typically the minimum pseudoweight (on the additive white Gaussian noise (AWGN) channel) is smaller than both the minimum distance and the stopping distance. In particular, we performed an exhaustive search over all interleaver pairs in the 3D-TC (with input block length K = 128) based on quadratic permutation polynomials over integer rings with a quadratic inverse. The search shows that the best minimum AWGN pseudoweight is strictly smaller than the best minimum/stopping distance.
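
    For reference, the AWGN pseudoweight referred to here is the standard one (this formula is general background, not specific to the paper): for a nonzero pseudocodeword $\omega \ge 0$,

        w_{\mathrm{AWGN}}(\omega) = \frac{\bigl(\sum_{i} \omega_i\bigr)^{2}}{\sum_{i} \omega_i^{2}},

    which reduces to the Hamming weight when $\omega$ is a 0/1-valued codeword; the minimum pseudoweight is the minimum of this quantity over the nonzero vertices of the relaxed polytope.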

    Minimum Pseudoweight Analysis of 3-Dimensional Turbo Codes

    In this work, we consider pseudocodewords of (relaxed) linear programming (LP) decoding of 3-dimensional turbo codes (3D-TCs). We present a relaxed LP decoder for 3D-TCs, adapting the relaxed LP decoder for conventional turbo codes proposed by Feldman in his thesis. We show that the 3D-TC polytope is proper and C-symmetric, and make a connection to finite graph covers of the 3D-TC factor graph. This connection is used to show that the support set of any pseudocodeword is a stopping set of iterative decoding of 3D-TCs using maximum a posteriori constituent decoders on the binary erasure channel. Furthermore, we compute ensemble-average pseudoweight enumerators of 3D-TCs and perform a finite-length minimum pseudoweight analysis for small cover degrees. Also, an explicit description of the fundamental cone of the 3D-TC polytope is given. Finally, we present an extensive numerical study of small-to-medium block length 3D-TCs, which shows that 1) typically (i.e., in most cases) when the minimum distance $d_{\rm min}$ and/or the stopping distance $h_{\rm min}$ is high, the minimum pseudoweight (on the additive white Gaussian noise channel) is strictly smaller than both $d_{\rm min}$ and $h_{\rm min}$, and 2) the minimum pseudoweight grows with the block length, at least for small-to-medium block lengths. Comment: To appear in IEEE Transactions on Communications.
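
    Since the stopping-set connection is central in both papers above, the following minimal Python sketch (our own illustration; the matrix and function name are assumptions, not from the papers) checks the defining property on a generic parity-check matrix: a set S of variable nodes is a stopping set if every check node with a neighbor in S has at least two neighbors in S.

        import numpy as np

        def is_stopping_set(H, S):
            """Return True if the variable-node set S is a stopping set of H:
            every row of H with a 1 in some column of S has 1s in at least
            two columns of S."""
            S = list(S)
            if not S:
                return True                      # the empty set is trivially a stopping set
            row_deg = H[:, S].sum(axis=1)        # how often each check touches S
            return bool(np.all((row_deg == 0) | (row_deg >= 2)))

        # Toy usage with a [7,4] Hamming parity-check matrix
        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])
        print(is_stopping_set(H, [0, 1, 2]))     # support of the codeword 1110000 -> True
        print(is_stopping_set(H, [0]))           # a single column -> False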

    Low-Complexity Approaches to Slepian–Wolf Near-Lossless Distributed Data Compression

    This paper discusses the Slepian–Wolf problem of distributed near-lossless compression of correlated sources. We introduce practical new tools for communicating at all rates in the achievable region. The technique employs a simple “source-splitting” strategy that does not require common sources of randomness at the encoders and decoders. This approach allows for pipelined encoding and decoding so that the system operates with the complexity of a single user encoder and decoder. Moreover, when this splitting approach is used in conjunction with iterative decoding methods, it produces a significant simplification of the decoding process. We demonstrate this approach for synthetically generated data. Finally, we consider the Slepian–Wolf problem when linear codes are used as syndrome-formers and consider a linear programming relaxation to maximum-likelihood (ML) sequence decoding. We note that the fractional vertices of the relaxed polytope compete with the optimal solution in a manner analogous to that observed when the “min-sum” iterative decoding algorithm is applied. This relaxation exhibits the ML-certificate property: if an integral solution is found, it is the ML solution. For symmetric binary joint distributions, we show that selecting easily constructible “expander”-style low-density parity-check (LDPC) codes as syndrome-formers admits a positive error exponent and therefore provably good performance.
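
    As a toy illustration of the syndrome-former step described above (the matrix, vectors, and brute-force decoder below are our own small example, not from the paper), one encoder can send only the syndrome of its block, and the decoder recovers the block from that syndrome and the correlated side information:

        import numpy as np
        from itertools import product

        H = np.array([[1, 1, 0, 1, 1, 0, 0],             # [7,4] Hamming parity-check matrix
                      [1, 0, 1, 1, 0, 1, 0],              # used as a syndrome-former
                      [0, 1, 1, 1, 0, 0, 1]])
        x = np.array([1, 0, 1, 1, 0, 0, 1])               # 7-bit source block at the encoder
        s = H @ x % 2                                     # 3-bit syndrome sent to the decoder
        y = np.array([1, 0, 1, 0, 0, 0, 1])               # side information: x with one bit flipped

        # Brute-force coset decoding: among all words with syndrome s, pick the
        # one closest to the side information y (LP or iterative decoding would
        # replace this exhaustive search at realistic block lengths).
        x_hat = min((np.array(v) for v in product([0, 1], repeat=7)
                     if np.array_equal(H @ np.array(v) % 2, s)),
                    key=lambda v: int(np.sum(v != y)))
        print(x_hat)                                      # recovers x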

    Mathematical Programming Decoding of Binary Linear Codes: Theory and Algorithms

    Mathematical programming is a branch of applied mathematics and has recently been used to derive new decoding approaches, challenging established but often heuristic algorithms based on iterative message passing. Concepts from mathematical programming used in the context of decoding include linear, integer, and nonlinear programming, network flows, notions of duality as well as matroid and polyhedral theory. This survey article reviews and categorizes decoding methods based on mathematical programming approaches for binary linear codes over binary-input memoryless symmetric channels. Comment: 17 pages, submitted to the IEEE Transactions on Information Theory. Published July 201
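
    As a concrete, toy-scale illustration of the LP decoding formulation this survey covers, the sketch below builds Feldman's relaxation for a short code and hands it to an off-the-shelf LP solver; the parity-check matrix, LLR values, and function name are our own assumptions, and enumerating the odd-subset inequalities explicitly is only practical for low-degree checks.

        import numpy as np
        from itertools import combinations
        from scipy.optimize import linprog

        def lp_decode(H, llr):
            """Feldman-style LP decoding: minimize sum_i llr[i] * x[i] over the
            relaxed polytope cut out by the odd-subset ("forbidden set")
            inequalities of each check.  Returns (x, is_integral); an integral
            solution carries the ML certificate."""
            m, n = H.shape
            A, b = [], []
            for j in range(m):
                Nj = np.flatnonzero(H[j])                    # variables in check j
                for size in range(1, len(Nj) + 1, 2):        # odd-sized subsets S of N(j)
                    for S in combinations(Nj, size):
                        row = np.zeros(n)
                        row[list(S)] = 1.0                   # +x_i for i in S
                        row[np.setdiff1d(Nj, S)] = -1.0      # -x_i for i in N(j) \ S
                        A.append(row)
                        b.append(len(S) - 1.0)               # sum_S x_i - sum_rest x_i <= |S| - 1
            res = linprog(llr, A_ub=np.array(A), b_ub=np.array(b),
                          bounds=[(0.0, 1.0)] * n, method="highs")
            x = res.x
            return x, bool(np.all(np.isclose(x, np.round(x))))

        # Toy usage on the [7,4] Hamming code; a negative LLR means "bit likely 1"
        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])
        llr = np.array([-2.1, 1.5, -0.3, -1.8, 0.9, 1.2, -0.7])
        x_hat, integral = lp_decode(H, llr)
        print(np.round(x_hat, 3), integral)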

    Pseudocodewords of Parity-Check Codes

    The success of modern algorithms for the decoding problem such as message-passing iterative decoding and linear programming decoding lies in their local nature. This feature allows the algorithms to be extremely fast and capable of correcting more errors than guaranteed by the classical minimum distance of the code. Nonetheless, the performance of these decoders depends crucially on the Tanner graph representation of the code. In order to understand this choice of representation, we need to analyze the pseudocodewords of the Tanner graph of a code. These pseudocodewords are outputs of local decoding algorithms which may not be legitimate codewords. In this dissertation, we introduce a lifted fundamental cone and show that there is a one-to-one correspondence between graph cover pseudocodewords of a binary code and integer points in the lifted fundamental cone. We use this fact to prove the rationality of the generating function of the pseudocodewords for a general binary parity-check code. Our approach also yields algorithms for producing this generating function and provides tools for studying the irreducible pseudocodewords. Understanding irreducible pseudocodewords is crucial to determining the best representation of a code. Moreover, combining these techniques with the recent characterization of the fundamental cone over F_3, we can analyze ternary parity-check codes. Finally, we make progress in the study of more general nonbinary codes by determining constraints satisfied by all pseudocodewords of a code over F_p, where p is prime.
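
    For context, the standard (unlifted) fundamental cone of a binary parity-check matrix $H$, in the usual Koetter–Vontobel description (general background, not the lifted cone constructed in the dissertation), is

        \mathcal{K}(H) = \Bigl\{\, \omega \in \mathbb{R}_{\ge 0}^{n} \;:\; \omega_i \le \sum_{i' \in N(j)\setminus\{i\}} \omega_{i'} \ \text{ for every check } j \text{ and every } i \in N(j) \,\Bigr\},

    where $N(j)$ denotes the set of variable nodes participating in check $j$; the lifted cone of the dissertation refines this object so that its integer points correspond one-to-one to graph-cover pseudocodewords.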

    On Pseudocodewords and Improved Union Bound of Linear Programming Decoding of HDPC Codes

    In this paper, we present an improved union bound on the linear programming (LP) decoding performance of binary linear codes transmitted over an additive white Gaussian noise channel. The bounding technique is based on a second-order Bonferroni-type inequality from probability theory, and the bound is minimized using Prim's minimum spanning tree algorithm. The bound calculation needs the fundamental cone generators of a given parity-check matrix rather than only their weight spectrum, but involves relatively low computational complexity. It is targeted at high-density parity-check (HDPC) codes, where the number of generators is extremely large and the generators are spread densely in Euclidean space. We explore the generator density and compare different parity-check matrix representations; this density affects how much the proposed bound improves on the conventional LP union bound. The paper also presents a complete pseudo-weight distribution of the fundamental cone generators for the BCH[31,21,5] code.
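
    The second-order bounding step described above is, as we read it, of the Hunter spanning-tree type (the inequality below is the generic form, stated here as background): for decoding-error events $A_1, \ldots, A_N$, one per fundamental cone generator, and any tree $T$ on the index set,

        \Pr\Bigl(\,\bigcup_{k=1}^{N} A_k\Bigr) \;\le\; \sum_{k=1}^{N} \Pr(A_k) \;-\; \sum_{(k,l)\in T} \Pr(A_k \cap A_l),

    so the tightest bound of this form subtracts a maximum-weight spanning tree of the pairwise joint probabilities, which a Prim-style spanning-tree search (equivalently, a minimum spanning tree on negated weights) provides.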

    Introduction to Mathematical Programming-Based Error-Correction Decoding

    Decoding error-correcting codes by methods of mathematical optimization, most importantly linear programming, has become an important alternative approach to both algebraic and iterative decoding methods since its introduction by Feldman et al. At first celebrated mainly for its analytical power, LP decoding is now within reach of real-world application thanks to recent research. This document gives an elaborate introduction to both mathematical optimization and coding theory as well as a review of the contributions by which these two areas have found common ground. Comment: LaTeX sources maintained here: https://github.com/supermihi/lpdintr