
    Linear Programming Decoding of Spatially Coupled Codes

    For a given family of spatially coupled codes, we prove that the LP threshold on the BSC of the graph cover ensemble is the same as the LP threshold on the BSC of the derived spatially coupled ensemble. This result is in contrast with the fact that the BP threshold of the derived spatially coupled ensemble is believed to be larger than the BP threshold of the graph cover ensemble, as noted in the work of Kudekar et al. (2011, 2012). To prove this, we establish some properties related to the dual witness for LP decoding, which was introduced by Feldman et al. (2007) and simplified by Daskalakis et al. (2008). More precisely, we prove that the existence of a dual witness, which was previously known to be sufficient for LP decoding success, is also necessary and is equivalent to the existence of certain acyclic hyperflows. We also derive a sublinear (in the block length) upper bound on the weight of any edge in such hyperflows, both for regular LDPC codes and for spatially coupled codes, and we prove that the bound is asymptotically tight for regular LDPC codes. Moreover, we show how to trade crossover probability for "LP excess" on all the variable nodes, for any binary linear code. Comment: 37 pages; added tightness construction, expanded abstract.
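
    The results above are stated for the LP decoder introduced by Feldman et al. As a point of reference, the sketch below shows that base LP relaxation on the BSC for a toy code, using scipy's LP solver; the (7,4) Hamming code, the crossover probability, and the helper name lp_decode are illustrative choices of mine, not the paper's spatially coupled construction.

```python
# Minimal sketch of LP decoding over the relaxed (fundamental) polytope.
import itertools
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, y, p=0.05):
    """Minimize the BSC log-likelihood cost subject to the local check constraints."""
    m, n = H.shape
    # Cost vector: positive entries favor bit value 0, negative entries favor 1.
    cost = np.where(y == 0, 1.0, -1.0) * np.log((1 - p) / p)
    A_ub, b_ub = [], []
    for j in range(m):
        nbrs = list(np.nonzero(H[j])[0])
        # One inequality per odd-sized subset S of the check's neighborhood:
        #   sum_{i in S} f_i - sum_{i in N(j)\S} f_i <= |S| - 1
        for r in range(1, len(nbrs) + 1, 2):
            for S in itertools.combinations(nbrs, r):
                row = np.zeros(n)
                row[nbrs] = -1.0
                row[list(S)] = 1.0
                A_ub.append(row)
                b_ub.append(len(S) - 1)
    res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x  # an integral optimum is guaranteed to be the ML codeword

# Toy example: (7,4) Hamming code, all-zeros codeword with one bit flipped.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.array([0, 0, 0, 0, 0, 1, 0])
print(np.round(lp_decode(H, y), 3))
```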

    Impact of redundant checks on the LP decoding thresholds of LDPC codes

    Feldman et al. (2005) asked whether the performance of the LP decoder can be improved by adding redundant parity checks to tighten the LP relaxation. We prove that for LDPC codes, even if we include all redundant checks, asymptotically there is no gain in the LP decoder threshold on the BSC under certain conditions on the base Tanner graph. First, we show that if the graph has bounded check-degree and satisfies a condition which we call asymptotic strength, then including high-degree redundant checks in the LP does not significantly improve the threshold, in the following sense: for each constant delta > 0, there is a constant k > 0 such that the threshold of the LP decoder containing all redundant checks of degree at most k improves by at most delta upon adding to the LP all redundant checks of degree larger than k. We conclude that if the graph satisfies a rigidity condition, then including all redundant checks does not improve the threshold of the base LP. We call the graph asymptotically strong if the LP decoder corrects a constant fraction of errors even if the LLRs of the correct variables are arbitrarily small. By building on the work of Feldman et al. (2007) and Viderman (2013), we show that asymptotic strength follows from sufficiently large expansion. We also give a geometric interpretation of asymptotic strength in terms of pseudocodewords. We call the graph rigid if the minimum weight of a sum of check nodes involving a cycle tends to infinity as the block length tends to infinity. Under the assumptions that the graph girth is logarithmic and the minimum check degree is at least 3, rigidity is equivalent to the nondegeneracy property that adding at least logarithmically many checks does not give a constant-weight check. We argue that nondegeneracy is a typical property of random check-regular graphs.
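
    For concreteness, a redundant check is a GF(2) sum of rows of the parity-check matrix; appending such rows to H adds constraints to the LP relaxation without changing the code. The snippet below is my own toy helper (not the paper's construction) that enumerates sums of a few rows and keeps those whose degree stays below a cap.

```python
# Hedged sketch: enumerate redundant parity checks of bounded degree by
# XOR-ing small subsets of rows of H (toy parameters, not from the paper).
import itertools
import numpy as np

def redundant_checks(H, max_degree, depth=2):
    """GF(2) sums of 2..depth rows of H whose Hamming weight is <= max_degree."""
    m, _ = H.shape
    extra = []
    for r in range(2, depth + 1):
        for rows in itertools.combinations(range(m), r):
            check = np.bitwise_xor.reduce(H[list(rows)], axis=0)
            if 0 < check.sum() <= max_degree:
                extra.append(check)
    return np.array(extra)

# Toy example: the (7,4) Hamming code; every pairwise row sum has degree 4.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
print(redundant_checks(H, max_degree=4))
# Stacking these rows under H only tightens the LP polytope; the result above
# says that for LDPC codes, under the stated conditions, this tightening does
# not move the asymptotic BSC threshold of the LP decoder.
```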

    Minimum distance of error correcting codes versus encoding complexity, symmetry, and pseudorandomness

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (leaves 207-214). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections.
    We study the minimum distance of binary error correcting codes from the following perspectives:
    * The problem of deriving bounds on the minimum distance of a code given constraints on the computational complexity of its encoder.
    * The minimum distance of linear codes that are symmetric in the sense of being invariant under the action of a group on the bits of the codewords.
    * The derandomization capabilities of probability measures on the Hamming cube based on binary linear codes with good distance properties, and their variations.
    Highlights of our results include:
    * A general theorem asserting that if the encoder uses linear time and sub-linear memory in the general binary branching program model, then the minimum distance of the code cannot grow linearly with the block length when the rate is nonvanishing.
    * New upper bounds on the minimum distance of various types of Turbo-like codes.
    * The first ensemble of asymptotically good Turbo-like codes: we prove that depth-three serially concatenated Turbo codes can be asymptotically good.
    * The first ensemble of asymptotically good codes that are ideals in the group algebra of a group: we argue that, for infinitely many block lengths, a random ideal in the group algebra of the dihedral group is an asymptotically good rate-half code with high probability.
    * An explicit rate-half code whose codewords are in one-to-one correspondence with special hyperelliptic curves over a finite field of prime order, where the number of zeros of a codeword corresponds to the number of rational points.
    * A sharp O(k^{-1/2}) upper bound on the probability that a random binary string generated according to a k-wise independent probability measure has any given weight.
    * An assertion that any sufficiently log-wise independent probability measure looks random to all polynomially small read-once DNF formulas.
    * An elaborate study of the problem of derandomizability of AC⁰ by any sufficiently polylog-wise independent probability measure.
    * An elaborate study of the problem of approximability of high-degree parity functions on binary linear codes by low-degree polynomials with coefficients in fields of odd characteristic.
    by Louay M.J. Bazzi. Ph.D.
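
    The code-based probability measures mentioned above rest on a standard fact: the uniform distribution over a linear code is k-wise independent exactly when the dual code has minimum distance greater than k. The snippet below is an illustrative numerical check of that fact for the (7,4) Hamming code; it is my own toy example, not taken from the thesis.

```python
import itertools
import numpy as np

# Generator matrix of the [7,4] Hamming code; its dual (the simplex code)
# has minimum distance 4, so uniform codewords are 3-wise independent.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Enumerate all 16 codewords.
codewords = np.array([(np.array(msg) @ G) % 2
                      for msg in itertools.product([0, 1], repeat=4)])

# Every triple of coordinates takes each of the 8 binary patterns equally often.
for coords in itertools.combinations(range(7), 3):
    _, counts = np.unique(codewords[:, list(coords)], axis=0, return_counts=True)
    assert len(counts) == 8 and all(counts == 2), coords
print("uniform over the [7,4] Hamming code is 3-wise independent")
```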

    Robust algorithms for model-based object recognition and localization

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (p. 86-87). We consider the problem of model-based object recognition and localization in the presence of noise, spurious features, and occlusion. We address the case where the model is allowed to be transformed by elements of a given space of allowable transformations. Known algorithms for the problem either treat noise very accurately at an unacceptable worst-case running time, or may produce unreliable output when noise is allowed. We introduce the notion of tolerance, which measures the robustness of a recognition and localization method in the presence of noise. We present a collection of algorithms for the problem, each achieving a different degree of tolerance. The main result is a localization algorithm that achieves any desired tolerance in a relatively low-order worst-case asymptotic running time. The time constant of the algorithm depends on the ratio of the noise bound to the given tolerance bound. The solution we provide is general enough to handle different spaces of allowable transformations, such as planar affine transformations and scaled rigid motions in arbitrary dimensions. by Louay Mohamad Jamil Bazzi. S.M.

    The Solution of Linear Probabilistic Recurrence Relations

    Linear probabilistic divide-and-conquer recurrence relations arise when analyzing the running time of divide-and-conquer randomized algorithms. We consider first the problem of finding the expected value of the random process T(x), described as the output of a randomized recursive algorithm T: on input x, T generates a sample (h_1, ..., h_k) from a given probability distribution on [0, 1]^k and recurses by returning g(x) + Σ_{i=1}^{k} c_i T(h_i x), until a constant is returned when x becomes less than a given number. Under some minor assumptions on the problem parameters, we present a closed-form asymptotic solution for the expected value of T(x). We show that E[T(x)] = Θ(x^p + x^p ∫_1^x (g(u)/u^{p+1}) du), where p is the unique nonnegative solution of the equation Σ_{i=1}^{k} c_i E[h_i^p] = 1. This generalizes the result in [1], where we considered the deterministic version of the recurrence. Then, following [2], we argue that the solution holds under a broad class of perturbations, including the floors and ceilings that usually accompany the recurrences arising in the analysis of randomized divide-and-conquer algorithms. Key words: randomized algorithms, divide-and-conquer algorithms, recurrence relations.
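
    To make the closed form concrete, here is a small numerical illustration of my own (not from the paper): take k = 2, c_1 = c_2 = 1, g(x) = x, and h_1, h_2 independent and uniform on [0, 1]. Then E[h_i^p] = 1/(p + 1), the characteristic equation 2/(p + 1) = 1 gives p = 1, and the theorem predicts E[T(x)] = Θ(x + x ∫_1^x (u/u^2) du) = Θ(x log x).

```python
import numpy as np

rng = np.random.default_rng(0)

def T(x, cutoff=1.0):
    """One sample of the random process T(x) = x + T(h1*x) + T(h2*x)."""
    if x < cutoff:
        return 1.0          # base case: a constant once x is small enough
    h1, h2 = rng.random(2)  # (h1, h2) uniform on [0, 1]^2
    return x + T(h1 * x) + T(h2 * x)

# Empirical means grow like x*log(x) up to a constant factor, matching the
# Theta(x log x) prediction for p = 1: the ratio below roughly stabilizes.
for x in (10.0, 100.0, 1000.0):
    mean = np.mean([T(x) for _ in range(100)])
    print(f"x = {x:6.0f}   mean T(x) = {mean:10.1f}   mean / (x*ln x) = {mean / (x * np.log(x)):5.2f}")
```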