    A Universal Theory of Pseudocodewords

    Three types of pseudocodewords for LDPC codes are found in the literature: graph cover pseudocodewords, linear programming pseudocodewords, and computation tree pseudocodewords. In this paper we first review these three notions and the known connections between them. We then propose a new decoding rule, universal cover decoding, for LDPC codes. This new decoding rule carries its own associated notion of pseudocodeword, and this fourth notion provides a framework in which we can better understand the other three.

    Applications of Linear Programming to Coding Theory

    Maximum-likelihood decoding is often the optimal decoding rule one can use, but it is very costly to implement in a general setting. Much effort has therefore been dedicated to finding efficient decoding algorithms that either achieve or approximate the error-correcting performance of the maximum-likelihood decoder. This dissertation examines two approaches to this problem. In 2003, Feldman and his collaborators defined the linear programming decoder, which operates by solving a linear programming relaxation of the maximum-likelihood decoding problem. As with many modern decoding algorithms, it is possible for the linear programming decoder to output vectors that do not correspond to codewords; such vectors are known as pseudocodewords. In this work, we completely classify the set of linear programming pseudocodewords for the family of cycle codes. For the case of the binary symmetric channel, another approximation of maximum-likelihood decoding was introduced by Omura in 1972. This decoder employs an iterative algorithm whose behavior closely mimics that of the simplex algorithm. We generalize Omura's decoder to operate on any binary-input memoryless channel, thus obtaining a soft-decision decoding algorithm. Further, we prove that the probability of the generalized algorithm returning the maximum-likelihood codeword approaches 1 as the number of iterations goes to infinity.
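
    As a concrete illustration of the relaxation described above, here is a minimal Python sketch of a Feldman-style LP decoder built with scipy; the parity-check matrix, the channel LLR vector, and the name lp_decode are assumptions made for this example, not taken from the dissertation.

        # Minimal sketch of LP decoding (Feldman-style relaxation), assuming a
        # toy parity-check matrix and illustrative channel LLRs.
        from itertools import combinations
        import numpy as np
        from scipy.optimize import linprog

        def lp_decode(H, llr):
            # Minimize llr . f over the fundamental polytope: for each check j
            # and each odd-sized subset S of its neighborhood N(j), impose
            #   sum_{i in S} f_i - sum_{i in N(j)\S} f_i <= |S| - 1.
            m, n = H.shape
            A, b = [], []
            for j in range(m):
                nj = list(np.flatnonzero(H[j]))
                for size in range(1, len(nj) + 1, 2):  # odd-sized subsets only
                    for S in combinations(nj, size):
                        row = np.zeros(n)
                        row[list(S)] = 1.0
                        row[[i for i in nj if i not in S]] = -1.0
                        A.append(row)
                        b.append(len(S) - 1)
            res = linprog(llr, A_ub=np.array(A), b_ub=np.array(b),
                          bounds=[(0.0, 1.0)] * n, method="highs")
            return res.x  # integral -> ML codeword; fractional -> pseudocodeword

        # Toy run on a length-3 repetition code; this LLR vector decodes to 000.
        H = np.array([[1, 1, 0],
                      [0, 1, 1]])
        print(np.round(lp_decode(H, np.array([0.8, -0.3, 0.9])), 3))

    Note that the number of forbidden-set inequalities grows exponentially in the check degree, so this brute-force construction is only sensible for small toy codes.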

    Session 12: Applications - Predicting the Origins of Artwork Found in Rural Churches

    A few years ago, I was contacted by Rodney Oppegard, a church historian. He had spent many years collecting information on ecclesiastical furnishings and artwork found in the Lutheran churches of rural North Dakota, and while his data set was extensive, it was by no means complete. Some artwork was unsigned or its signature obscured, other pieces had been transferred to different churches, and in some cases the church itself had been destroyed by fire years before, leaving only incomplete records and fading memories as clues to the original church’s configuration. Mr. Oppegard wanted to know whether there was a mathematical way to use the existing data to “fill in the holes” of his data set. In this talk, I will outline how geospatial and rudimentary archival data were used to construct and evaluate models for determining which of several popular artists was responsible for a particular church’s altar painting.

    A Generalization of Omura's Decoding Algorithm and a Proof of Convergence


    Towards Universal Cover Decoding

    Low-complexity decoding of low-density parity-check (LDPC) codes may be obtained from the application of iterative message-passing decoding algorithms to the bipartite Tanner graph of the code. Arguably, the two most important decoding algorithms for LDPC codes are the sum-product decoder and the min-sum (MS) decoder. On a bipartite graph without cycles (a tree), the sum-product decoder minimizes the probability of bit error, while the min-sum decoder minimizes the probability of word error [9]. While the behavior of sum-product and min-sum is easily understood when operating on trees, their behavior becomes much more difficult to characterize when the Tanner graph has cycles. Wiberg [9] showed that decoding can be modeled by finding minimal-cost configurations on computation trees that are formed at successive iterations of sum-product/min-sum, and returning the value assigned to the root nodes of these trees. Additionally, he proved that for an error to occur at a particular variable node, there must exist a deviation of non-positive cost on the computation tree rooted at this node. In this paper, we are interested in analyzing the non-codeword errors that occur during parallel, iterative decoding with the min-sum decoder. Recently, work has been done relating the min-sum decoder to the linear programming (LP) decoder via graph covers [8]. The LP decoder, as defined by Feldman [3], recasts the problem of decoding as an optimization problem whose feasible set is a polytope defined by the parity-check matrix of a code. In [8], it is shown that LP decoding can be realized as a decoder operating on graph covers. The notion that non-codeword outputs of LP decoding are related to non-codeword outputs of min-sum decoding is attractive from an analytical perspective. However, the performances of LP and min-sum decoding are not consistently related [2]. Therefore, a different theoretical model is needed to explore the relationship between decoding on graph covers and decoding on computation trees. To bridge this gap, we turn to the notion of decoding on the universal cover. Universal covers can be thought of as both infinite computation trees and infinite graph covers. For this reason, decoding on universal covers provides an intuitive link between LP decoding and min-sum decoding of LDPC codes. This paper is an extension of previous work done by the authors in [2]; thus, much of the requisite background material is drawn from [2]. Section 2 introduces the definition of universal covers. Properties related to configurations on universal covers and their corresponding costs are established in Section 3. Finally, in Section 4 a preliminary definition of the universal cover decoder is given, and it is shown that under certain conditions the universal cover decoder agrees with the LP decoder.
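
    To make the min-sum update concrete, below is a minimal Python sketch of parallel, iterative min-sum decoding on a toy Tanner graph; the matrix H, the LLR values, and the helper name min_sum_decode are illustrative assumptions, not drawn from the paper.

        # Minimal sketch of min-sum decoding, assuming a toy parity-check
        # matrix and illustrative channel LLRs.
        import numpy as np

        def min_sum_decode(H, llr, iters=20):
            m, n = H.shape
            v2c = H * llr  # variable-to-check messages, seeded with channel LLRs
            for _ in range(iters):
                # Check update: sign product and magnitude minimum over the
                # other neighbors of each check.
                c2v = np.zeros_like(v2c)
                for j in range(m):
                    nj = np.flatnonzero(H[j])
                    for v in nj:
                        others = [u for u in nj if u != v]
                        sign = np.prod(np.sign(v2c[j, others]))
                        c2v[j, v] = sign * np.min(np.abs(v2c[j, others]))
                # Variable update: channel LLR plus all incoming check messages,
                # excluding each target check's own contribution.
                total = llr + c2v.sum(axis=0)
                v2c = H * (total - c2v)
                hard = (total < 0).astype(int)  # negative LLR -> bit 1
                if not (H @ hard % 2).any():    # all parity checks satisfied
                    return hard
            return hard  # may be a non-codeword output after iters iterations

        H = np.array([[1, 1, 0, 1],
                      [0, 1, 1, 1]])
        print(min_sum_decode(H, np.array([1.2, -0.4, 0.9, 0.7])))

    When the loop exits without satisfying every check, the hard decision need not be a codeword; these are precisely the non-codeword errors analyzed here.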

    A Universal Theory of Decoding and Pseudocodewords

    The discovery of turbo codes [5] and the subsequent rediscovery of low-density parity-check (LDPC) codes [9, 18] represent a major milestone in the field of coding theory. These two classes of codes can achieve realistic bit error rates, between 10^-5 and 10^-12, with signal-to-noise ratios that are only slightly above the minimum possible for a given channel and code rate established by Shannon's original capacity theorems. In this sense, these codes are said to be near-capacity-achieving codes and are sometimes considered to have solved (in the engineering sense, at least) the coding problem for the additive white Gaussian noise (AWGN) channel and its derivative channels. Perhaps the most important commonality between turbo and low-density parity-check codes is that they both utilize iterative message-passing decoding algorithms. For turbo codes, one uses the so-called turbo decoding algorithm, and for LDPC codes, both the sum-product (SP) and the min-sum (MS) algorithms are used. The success of the various iterative message-passing algorithms is sometimes said to have ushered in a new era of “modern” coding theory in which the design emphasis has shifted from optimizing some code property, such as minimum distance, to optimizing the corresponding decoding structure of the code, such as the degree profile [24, 25], with respect to the behavior of a message-passing decoder. As successful as these codes and decoders have been in terms of application, there are several major questions that must be answered before a complete understanding of them can be achieved. The theoretical research in the area of capacity-achieving codes is focused on two main themes. The first theme is whether different types of capacity-achieving codes have common encoder and structural properties. In [17], it was claimed that turbo codes could be viewed as LDPC codes, but the relationship was not made explicit. More recently, Pérez, his student Jiang, and others [11, 16] developed a construction for the parity-check matrices of arbitrary turbo codes that clearly connects the components of the turbo encoder to the resulting structure of the parity-check matrix. From a more abstract perspective, turbo codes and low-density parity-check codes are examples of codes with long block lengths that exhibit the random structure inherent in Shannon's original theorems. The second and more active research theme is the determination of the behavior of iterative message-passing decoding and the relationships between the various decoding algorithms. The dominant problem in this area is to understand the non-codeword decoder errors that occur in computer simulations of LDPC codes with iterative message-passing decoders. Motivated by empirical observations of the non-codeword outputs of LDPC decoders, the notion of stopping sets was first introduced by Forney et al. [8] in 2001. Two years later, a formal definition of stopping sets was given by Changyan et al. [6]. They demonstrated that the bit and block error probabilities of iteratively decoded LDPC codes on the binary erasure channel (BEC) can be determined exactly from the stopping sets of the parity-check matrix. (Here, a stopping set S is a subset of the set of variable nodes such that all neighboring check nodes of S are connected to S at least twice.)
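
    The parenthetical definition of a stopping set translates directly into a simple test; the Python sketch below checks it for a hypothetical toy parity-check matrix (H and the sets tested are assumptions for illustration).

        # Minimal sketch of the stopping-set test stated above.
        import numpy as np

        def is_stopping_set(H, S):
            # S is a stopping set iff no check node touches S exactly once,
            # i.e. each row of H meets the columns in S zero or >= 2 times.
            counts = H[:, sorted(S)].sum(axis=1)
            return not np.any(counts == 1)

        H = np.array([[1, 1, 0, 1, 0],
                      [0, 1, 1, 0, 1],
                      [1, 0, 1, 0, 1]])
        print(is_stopping_set(H, {2, 4}))  # True: each check meets S 0 or 2 times
        print(is_stopping_set(H, {0}))     # False: checks 0 and 2 each meet S once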