
    Linear complexity universal decoding with exponential error probability decay

    In this manuscript we consider linear-complexity binary linear block encoders and decoders that operate universally with exponential error probability decay. Such settings may be relevant in wireless environments, where probability distributions may not be fully characterized owing to their dynamic nature. More specifically, we consider the setting of fixed-length to fixed-length near-lossless data compression of a memoryless binary source of unknown probability distribution, as well as the dual setting of communicating over a binary symmetric channel (BSC) with unknown crossover probability. We introduce a new 'min-max distance' metric, analogous to minimum distance, that addresses the universal binary setting and has the same properties that minimum distance has on BSCs with known crossover probability. The code construction and decoding algorithm are universal extensions of the 'expander codes' framework of Barg and Zemor and have identical complexity and exponential error probability performance.
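
    For context, the sketch below (an illustration only, not the paper's construction) shows the classical baseline that the 'min-max distance' is described as generalizing: on a BSC with known crossover probability p < 1/2, maximum-likelihood decoding reduces to minimum Hamming distance decoding, which is why minimum distance governs performance in the known-channel case. The toy codebook is a hypothetical example.

        # Classical baseline (illustration only): on a BSC with known crossover
        # probability p < 1/2, maximum-likelihood decoding reduces to minimum
        # Hamming distance decoding over the codebook. The paper's universal
        # 'min-max distance' metric is not reproduced here.

        def hamming(a, b):
            """Hamming distance between two equal-length binary tuples."""
            return sum(x != y for x, y in zip(a, b))

        def min_distance_decode(received, codebook):
            """Return the codeword closest to `received` in Hamming distance."""
            return min(codebook, key=lambda c: hamming(received, c))

        # Tiny [5,2] codebook with minimum distance 3 (hypothetical example).
        codebook = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
        print(min_distance_decode((1, 1, 0, 0, 0), codebook))   # -> (1, 1, 1, 0, 0)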

    Towards practical minimum-entropy universal decoding

    Minimum-entropy decoding is a universal decoding algorithm used in decoding block compression of discrete memoryless sources as well as block transmission of information across discrete memoryless channels. Extensions can also be applied to multiterminal decoding problems, such as the Slepian-Wolf source coding problem. The 'method of types' has been used to show that there exist linear codes for which minimum-entropy decoders achieve the same error exponent as maximum-likelihood decoders. Since minimum-entropy decoding is NP-hard in general, minimum-entropy decoders have existed primarily in the theory literature. We introduce practical approximation algorithms for minimum-entropy decoding. Our approach, which relies on ideas from linear programming, exploits two key observations. First, the 'method of types' shows that the number of distinct types grows polynomially in n. Second, recent results in the optimization literature have illustrated polytope projection algorithms with complexity that is a function of the number of vertices of the projected polytope. Combining these two ideas, we leverage recent results on linear programming relaxations for error-correcting codes to construct polynomial-complexity algorithms for this setting. In the binary case, we explicitly demonstrate linear code constructions that admit provably good performance.
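
    To make the objective concrete, here is a brute-force sketch of minimum-entropy decoding in the near-lossless compression setting, where a source word x is stored as its syndrome s = Hx (mod 2) and the decoder searches the coset {z : Hz = s} for the member whose empirical distribution has the smallest entropy. The exhaustive search is exponential in the blocklength, which is exactly the obstacle the linear-programming approximations are meant to remove; the LP method itself is not shown, and the toy parity-check matrix below is a hypothetical example.

        # Brute-force minimum-entropy (syndrome) decoding for a binary source
        # compressed to s = H x (mod 2). The decoder searches the coset
        # {z : Hz = s} for the member with the smallest empirical entropy.
        # The exhaustive search is exponential in n; the paper replaces it
        # with linear-programming approximations (not shown here).

        import itertools
        from math import log2

        import numpy as np

        def empirical_entropy(z):
            """Binary empirical entropy of a 0/1 sequence z."""
            p = float(sum(z)) / len(z)
            if p in (0.0, 1.0):
                return 0.0
            return -p * log2(p) - (1 - p) * log2(1 - p)

        def min_entropy_decode(H, s):
            """Exhaustively search the coset {z : Hz = s (mod 2)} for its
            minimum-empirical-entropy member."""
            n = H.shape[1]
            best = None
            for bits in itertools.product((0, 1), repeat=n):
                z = np.array(bits)
                if np.array_equal((H @ z) % 2, s):
                    if best is None or empirical_entropy(z) < empirical_entropy(best):
                        best = z
            return best

        # Toy parity-check matrix and source word (hypothetical example).
        H = np.array([[1, 1, 0, 1],
                      [0, 1, 1, 1]])
        x = np.array([1, 0, 0, 0])        # source word
        s = (H @ x) % 2                   # syndrome = the compressed representation
        print(min_entropy_decode(H, s))   # recovers [1 0 0 0] in this toy case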

    Improved Nearly-MDS Expander Codes

    A construction of expander codes is presented with the following three properties: (i) the codes lie close to the Singleton bound, (ii) they can be encoded in time complexity that is linear in their code length, and (iii) they have a linear-time bounded-distance decoder. By using a version of the decoder that also corrects erasures, the codes can replace MDS outer codes in concatenated constructions, thus resulting in linear-time encodable and decodable codes that approach the Zyablov bound or the capacity of memoryless channels. The presented construction improves on an earlier result by Guruswami and Indyk in that any rate and relative minimum distance lying below the Singleton bound is attainable for a significantly smaller alphabet size.
    Comment: Part of this work was presented at the 2004 IEEE Int'l Symposium on Information Theory (ISIT'2004), Chicago, Illinois (June 2004). This work was submitted to IEEE Transactions on Information Theory on January 21, 2005. To appear in IEEE Transactions on Information Theory, August 2006. 12 pages.
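
    For reference, the two benchmarks named above can be evaluated numerically from their standard textbook statements (these formulas are not taken from the paper itself): the Singleton bound asymptotically reads δ ≤ 1 - R, and the binary Zyablov bound, the rate commonly cited as achievable by linear-time concatenated constructions, is R_Z(δ) = max over δ ≤ μ ≤ 1/2 of (1 - H(μ))(1 - δ/μ).

        # Numerical look at the two benchmarks named in the abstract, using the
        # standard textbook statements (not formulas from the paper itself):
        #   Singleton bound (asymptotic form):  delta <= 1 - R
        #   Binary Zyablov bound:  R_Z(delta) = max_{delta <= mu <= 1/2}
        #                                       (1 - H(mu)) * (1 - delta/mu)

        from math import log2

        def h2(p):
            """Binary entropy function H(p)."""
            if p in (0.0, 1.0):
                return 0.0
            return -p * log2(p) - (1 - p) * log2(1 - p)

        def zyablov_rate(delta, steps=10000):
            """Grid-search approximation of the binary Zyablov bound at relative distance delta."""
            best = 0.0
            for i in range(1, steps + 1):
                mu = delta + (0.5 - delta) * i / steps
                best = max(best, (1.0 - h2(mu)) * (1.0 - delta / mu))
            return best

        for delta in (0.05, 0.10, 0.20):
            print(f"delta={delta:.2f}: Singleton rate {1 - delta:.3f}, Zyablov rate {zyablov_rate(delta):.3f}")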

    GROTESQUE: Noisy Group Testing (Quick and Efficient)

    Group testing refers to the problem of identifying (with high probability) a (small) subset of D defectives from a (large) set of N items via a "small" number of "pooled" tests. For ease of presentation in this work we focus on the regime where D = O(N^{1-γ}) for some γ > 0. The tests may be noiseless or noisy, and the testing procedure may be adaptive (the pool defining a test may depend on the outcome of a previous test) or non-adaptive (each test is performed independently of the outcomes of other tests). A rich body of literature demonstrates that Θ(D log N) tests are information-theoretically necessary and sufficient for the group-testing problem, and provides algorithms that achieve this performance. However, it is only recently that reconstruction algorithms with computational complexity that is sub-linear in N have started being investigated (recent work in [GurI:04, IndN:10, NgoP:11] gave some of the first such algorithms). In the scenario with adaptive tests and noisy outcomes, we present the first scheme that is simultaneously order-optimal (up to small constant factors) in both the number of tests and the decoding complexity (O(D log N) in both performance metrics). The total number of stages of our adaptive algorithm is "small" (O(log D)). Similarly, in the scenario with non-adaptive tests and noisy outcomes, we present the first scheme that is simultaneously near-optimal in both the number of tests and the decoding complexity (via an algorithm that requires O(D log D log N) tests and has a decoding complexity of O(D(log N + log^2 D))). Finally, we present an adaptive algorithm that requires only 2 stages, and for which both the number of tests and the decoding complexity scale as O(D(log N + log^2 D)). For all three settings the probability of error of our algorithms scales as O(1/poly(D)).
    Comment: 26 pages, 5 figures.
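
    For intuition about the problem setup, the sketch below implements a standard noiseless non-adaptive baseline: Bernoulli random pools with the textbook COMP decoder, which clears any item appearing in a negative test. It is not the GROTESQUE scheme, which additionally tolerates noisy outcomes and achieves sub-linear decoding time; all parameters are illustrative.

        # Noiseless non-adaptive baseline (illustration only, not the GROTESQUE
        # scheme): Bernoulli(1/D) random pools plus the textbook COMP decoder,
        # which declares non-defective every item that appears in at least one
        # negative test and declares the remaining items defective.

        import random

        def comp_group_testing(N, defectives, num_tests, seed=0):
            rng = random.Random(seed)
            D = max(len(defectives), 1)
            pools = [{i for i in range(N) if rng.random() < 1.0 / D}
                     for _ in range(num_tests)]
            outcomes = [bool(pool & defectives) for pool in pools]  # noiseless OR of each pool

            cleared = set()                     # items seen in some negative pool
            for pool, positive in zip(pools, outcomes):
                if not positive:
                    cleared |= pool
            return set(range(N)) - cleared      # declared defectives

        N, defectives = 1000, {3, 250, 777}     # illustrative parameters
        print(comp_group_testing(N, defectives, num_tests=150))
        # With on the order of D*log(N) tests (modest constants), the output
        # typically equals the true defective set in the noiseless setting.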

    Correcting a Fraction of Errors in Nonbinary Expander Codes with Linear Programming

    A linear-programming decoder for nonbinary expander codes is presented. It is shown that the proposed decoder has the maximum-likelihood certificate property. It is also shown that this decoder corrects any pattern of errors of relative weight up to approximately (1/4) δ_A δ_B, where δ_A and δ_B are the relative minimum distances of the constituent codes.
    Comment: Part of this work was presented at the IEEE International Symposium on Information Theory 2009, Seoul, Korea.
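
    To illustrate the flavor of LP decoding and the maximum-likelihood certificate property, here is a minimal Feldman-style sketch for a small binary code; the paper's decoder handles nonbinary expander codes and is not reproduced here. The relaxation uses the standard forbidden-set inequalities for each parity check, and an integral LP optimum is the ML codeword. The [7,4] Hamming code and the specific received word are illustrative choices, not from the paper.

        # Feldman-style LP decoding of a small *binary* code (illustration of
        # the LP-decoding idea and the ML-certificate property; the paper's
        # decoder for nonbinary expander codes is not reproduced here).
        # For each parity check j and each odd-sized subset S of its
        # neighborhood N(j), the relaxation adds
        #     sum_{i in S} x_i - sum_{i in N(j)\S} x_i <= |S| - 1.
        # If the LP optimum is integral, it is the maximum-likelihood codeword.

        from itertools import combinations

        import numpy as np
        from scipy.optimize import linprog

        def lp_decode_bsc(H, y):
            """LP decoding for a BSC with crossover probability < 1/2."""
            m, n = H.shape
            c = 1.0 - 2.0 * np.asarray(y, dtype=float)   # minimizing c.x minimizes Hamming distance to y
            A, b = [], []
            for j in range(m):
                nbrs = np.flatnonzero(H[j])
                for size in range(1, len(nbrs) + 1, 2):  # odd-sized subsets S
                    for S in combinations(nbrs, size):
                        row = np.zeros(n)
                        row[nbrs] = -1.0
                        row[list(S)] = 1.0
                        A.append(row)
                        b.append(size - 1)
            res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=[(0.0, 1.0)] * n)
            return res.x

        # [7,4] Hamming code (standard example, not from the paper).
        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])
        y = [1, 1, 0, 0, 1, 0, 0]   # codeword 1101100 with its fourth bit flipped
        print(np.round(lp_decode_bsc(H, y)))   # an integral optimum is the ML codeword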