
    Explicit List-Decodable Codes with Optimal Rate for Computationally Bounded Channels

    A stochastic code is a pair of encoding and decoding procedures, where the encoding procedure receives a $k$-bit message $m$ and a $d$-bit uniform string $S$. The code is $(p,L)$-list-decodable against a class $\mathcal{C}$ of "channel functions" from $n$ bits to $n$ bits if, for every message $m$ and every channel $C \in \mathcal{C}$ that induces at most $pn$ errors, applying decoding to the "received word" $C(\mathrm{Enc}(m,S))$ produces a list of at most $L$ messages that contains $m$ with high probability (over the choice of the uniform $S$). Note that neither the channel $C$ nor the decoding algorithm $\mathrm{Dec}$ receives the random variable $S$. The rate of a code is the ratio between the message length and the encoding length, and a code is explicit if $\mathrm{Enc}$ and $\mathrm{Dec}$ run in time $\mathrm{poly}(n)$. Guruswami and Smith (J. ACM, to appear) showed that for all constants $0 < p < 1/2$, $\epsilon > 0$, and $c > 1$ there are Monte-Carlo explicit constructions of stochastic codes with rate $R \ge 1-H(p)-\epsilon$ that are $(p, L=\mathrm{poly}(1/\epsilon))$-list decodable for size $n^c$ channels. Monte-Carlo means that the encoding and decoding need to share a public, uniformly chosen $\mathrm{poly}(n^c)$-bit string $Y$, and the constructed stochastic code is $(p,L)$-list decodable with high probability over the choice of $Y$. Guruswami and Smith posed the open problem of giving fully explicit (that is, not Monte-Carlo) codes with the same parameters, under hardness assumptions. In this paper we resolve this open problem, using a minimal assumption: the existence of poly-time computable pseudorandom generators for small circuits, which follows from standard complexity assumptions by Impagliazzo and Wigderson (STOC 97). Guruswami and Smith also asked for fully explicit, unconditional constructions with the same parameters against $O(\log n)$-space online channels. (These are channels that have space $O(\log n)$ and are allowed to read the input codeword in one pass.) We resolve this open problem as well. Finally, we consider a tighter notion of explicitness, in which the running time of the encoding and list-decoding algorithms does not increase when the complexity of the channel increases. We give explicit constructions (with rate approaching $1-H(p)$ for every $p < 1/2$) for channels that are circuits of size $2^{n^{\Omega(1/d)}}$ and depth $d$. Here, the running time of encoding and decoding is a fixed polynomial (that does not depend on $d$). Our approach builds on the machinery developed by Guruswami and Smith, replacing some probabilistic arguments with explicit constructions. We also present a simplified and more general approach that makes the reductions in the proof more efficient, so that we can handle weak classes of channels.
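    The rate bound above is easy to evaluate numerically. Here is a minimal sketch (the helper names binary_entropy and optimal_rate are ours, not the paper's):

```python
# Illustrative sketch (assumed helper names, not from the paper): evaluating the
# optimal rate R >= 1 - H(p) - epsilon stated in the abstract.
import math

def binary_entropy(p: float) -> float:
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def optimal_rate(p: float, eps: float) -> float:
    """Rate achievable by the (p, L = poly(1/eps))-list-decodable codes above."""
    return 1.0 - binary_entropy(p) - eps

# Against a channel inducing at most pn = 0.1n errors, with slack eps = 0.01:
print(f"{optimal_rate(0.1, 0.01):.3f}")  # ~0.521
```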

    Reed-Muller codes for random erasures and errors

    This paper studies the parameters for which Reed-Muller (RM) codes over $GF(2)$ can correct random erasures and random errors with high probability, and in particular when they can achieve capacity for these two classical channels. Necessarily, the paper also studies properties of evaluations of multivariate $GF(2)$ polynomials on random sets of inputs. For erasures, we prove that RM codes achieve capacity both for very high rate and very low rate regimes. For errors, we prove that RM codes achieve capacity for very low rate regimes, and for very high rates, we show that they can uniquely decode at about square root of the number of errors at capacity. The proofs of these four results are based on different techniques, which we find interesting in their own right. In particular, we study the following questions about $E(m,r)$, the matrix whose rows are truth tables of all monomials of degree $\le r$ in $m$ variables. What is the most (resp. least) number of random columns in $E(m,r)$ that define a submatrix having full column rank (resp. full row rank) with high probability? We obtain tight bounds for very small (resp. very large) degrees $r$, which we use to show that RM codes achieve capacity for erasures in these regimes. Our decoding from random errors follows from the following novel reduction. For every linear code $C$ of sufficiently high rate we construct a new code $C'$, also of very high rate, such that for every subset $S$ of coordinates, if $C$ can recover from erasures in $S$, then $C'$ can recover from errors in $S$. Specializing this to RM codes and using our results for erasures imply our result on unique decoding of RM codes at high rate. Finally, two of our capacity-achieving results require tight bounds on the weight distribution of RM codes. We obtain such bounds by extending the recent \cite{KLP} bounds from constant-degree to linear-degree polynomials.
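    To make the object $E(m,r)$ concrete, here is a short sketch that builds it for tiny parameters (the function name monomial_matrix is ours, not the paper's):

```python
# A sketch of the matrix E(m, r) from the abstract: rows are truth tables of all
# monomials of degree <= r in m variables over GF(2), columns are the points of
# {0,1}^m.
from itertools import combinations, product

def monomial_matrix(m: int, r: int):
    points = list(product((0, 1), repeat=m))       # 2^m column indices
    rows = []
    for deg in range(r + 1):
        for vars_ in combinations(range(m), deg):  # a monomial = product of vars_
            rows.append([int(all(x[v] for v in vars_)) for x in points])
    return rows  # a (sum_{i<=r} C(m,i)) x 2^m 0/1 matrix

E = monomial_matrix(3, 2)  # 1 + 3 + 3 = 7 rows, 2^3 = 8 columns
for row in E:
    print(row)
```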

    Transparent Error Correcting in a Computationally Bounded World

    We construct uniquely decodable codes against channels which are computationally bounded. Our construction requires only a public-coin (transparent) setup. All prior work for such channels either required a setup with secret keys and states, could not achieve unique decoding, or achieved worse rates (for a given bound on codeword corruptions). On the other hand, our construction relies on a strong cryptographic hash function with security properties that we only instantiate in the random oracle model.
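    The "public-coin (transparent) setup" can be illustrated with a hedged sketch: all parties derive shared code parameters from published random coins through a hash modeled as a random oracle. The details below (SHA-256 as the oracle, the seed length, the helper names) are our assumptions for illustration, not the paper's construction.

```python
# Hedged illustration of a transparent (public-coin) setup: the setup publishes
# uniformly random coins, and everyone derives shared parameters from them via a
# hash modeled as a random oracle. SHA-256 and the derivation scheme are our
# choices here; the paper's actual hash requirements are stronger and specific.
import hashlib
import os

def transparent_setup(num_bytes: int = 32) -> bytes:
    """Public coins: no hidden state or secret key for anyone to keep."""
    return os.urandom(num_bytes)  # published to all parties, including the channel

def derive_parameter(public_coins: bytes, label: bytes) -> bytes:
    """Derive a code parameter from the public coins (random-oracle style)."""
    return hashlib.sha256(public_coins + b"|" + label).digest()

coins = transparent_setup()
print(derive_parameter(coins, b"permutation-seed").hex())
```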

    List Decoding Random Euclidean Codes and Infinite Constellations

    We study the list decodability of different ensembles of codes over the real alphabet under the assumption of an omniscient adversary. It is a well-known result that when the source and the adversary have power constraints $P$ and $N$ respectively, the list decoding capacity is equal to $\frac{1}{2}\log\frac{P}{N}$. Random spherical codes achieve constant list sizes, and the goal of the present paper is to obtain a better understanding of the smallest achievable list size as a function of the gap to capacity. We show a reduction from arbitrary codes to spherical codes, and derive a lower bound on the list size of typical random spherical codes. We also give an upper bound on the list size achievable using nested Construction-A lattices and infinite Construction-A lattices. We then define and study a class of infinite constellations that generalize Construction-A lattices and prove upper and lower bounds for the same. Other goodness properties, such as packing goodness and AWGN goodness of infinite constellations, are proved along the way. Finally, we consider random lattices sampled from the Haar distribution and show that if a certain number-theoretic conjecture is true, then the list size grows as a polynomial function of the gap to capacity.
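    For concreteness, the capacity expression can be evaluated directly (the helper name is ours; we pick base-2 logarithms so the answer is in bits, since the abstract leaves the base unspecified):

```python
# The list decoding capacity quoted in the abstract: with source power P and
# adversary power N, capacity = (1/2) * log(P/N) per dimension.
import math

def list_decoding_capacity(P: float, N: float) -> float:
    return 0.5 * math.log2(P / N)

print(list_decoding_capacity(4.0, 1.0))  # 1.0 bit per dimension
```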

    Synchronization Strings: Codes for Insertions and Deletions Approaching the Singleton Bound

    We introduce synchronization strings as a novel way of efficiently dealing with synchronization errors, i.e., insertions and deletions. Synchronization errors are strictly more general and much harder to deal with than the commonly considered half-errors, i.e., symbol corruptions and erasures. For every $\epsilon > 0$, synchronization strings allow one to index a sequence with an alphabet of size $\epsilon^{-O(1)}$ such that one can efficiently transform $k$ synchronization errors into $(1+\epsilon)k$ half-errors. This powerful new technique has many applications. In this paper, we focus on designing insdel codes, i.e., error correcting block codes (ECCs) for insertion-deletion channels. While ECCs for both half-errors and synchronization errors have been intensely studied, the latter has largely resisted progress. Indeed, it took until 1999 for the first insdel codes with constant rate, constant distance, and constant alphabet size to be constructed by Schulman and Zuckerman. Insdel codes for asymptotically large or small noise rates were given in 2016 by Guruswami et al., but these codes are still polynomially far from the optimal rate-distance tradeoff. This makes the understanding of insdel codes up to this work equivalent to what was known for regular ECCs after Forney introduced concatenated codes in his doctoral thesis 50 years ago. A direct application of our synchronization-string-based indexing method gives a simple black-box construction which transforms any ECC into an equally efficient insdel code with a slightly larger alphabet size. This instantly transfers much of the highly developed understanding for regular ECCs over large constant alphabets into the realm of insdel codes. Most notably, we obtain efficient insdel codes which get arbitrarily close to the optimal rate-distance tradeoff given by the Singleton bound for the complete noise spectrum.
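    The indexing step itself can be sketched in a few lines: pair each codeword symbol with the matching symbol of a synchronization string, so the decoder can re-synchronize positions after insertions and deletions. Constructing a good synchronization string is the paper's actual contribution; in this hedged sketch the string is simply assumed to be given.

```python
# Sketch of the indexing method from the abstract: each codeword symbol is paired
# with the corresponding symbol of a synchronization string over a small alphabet.
# The `sync` string is assumed to be given here; building one with the required
# properties is what the paper shows how to do.
def index_with_sync_string(codeword, sync):
    assert len(codeword) == len(sync)
    return list(zip(codeword, sync))  # alphabet grows by the sync alphabet's size

# hypothetical example: a 6-symbol codeword indexed by a 6-symbol sync string
print(index_with_sync_string("abcdef", [0, 2, 1, 3, 0, 2]))
```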

    The Capacity of Online (Causal) $q$-ary Error-Erasure Channels

    In the $q$-ary online (or "causal") channel coding model, a sender wishes to communicate a message to a receiver by transmitting a codeword $\mathbf{x} = (x_1,\ldots,x_n) \in \{0,1,\ldots,q-1\}^n$ symbol by symbol via a channel limited to at most $pn$ errors and/or $p^{*}n$ erasures. The channel is "online" in the sense that at the $i$th step of communication the channel decides whether to corrupt the $i$th symbol or not based on its view so far, i.e., its decision depends only on the transmitted symbols $(x_1,\ldots,x_i)$. This is in contrast to the classical adversarial channel, in which the corruption is chosen by a channel that has full knowledge of the sent codeword $\mathbf{x}$. In this work we study the capacity of $q$-ary online channels for a combined corruption model, in which the channel may impose at most $pn$ errors and at most $p^{*}n$ erasures on the transmitted codeword. The online channel (in both the error and erasure case) has seen a number of recent studies which present both upper and lower bounds on its capacity. In this work, we give a full characterization of the capacity as a function of $q$, $p$, and $p^{*}$.
    Comment: This is a new version of the binary case, which can be found at arXiv:1412.637
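    A toy simulation makes the causality constraint concrete: at step $i$ the channel sees only $x_1,\ldots,x_i$ and spends its budgets of at most $pn$ errors and $p^{*}n$ erasures. Everything below (function names, the greedy placeholder adversary) is our illustration, not the paper's model analysis.

```python
# Toy simulation of the online (causal) channel from the abstract. The `decide`
# callback is the adversary: it sees only the transmitted prefix x_1..x_i, never
# the full codeword, and the channel enforces the error/erasure budgets.
import random

def online_channel(x, q, p, p_star, decide):
    n = len(x)
    errors, erasures = int(p * n), int(p_star * n)   # budgets pn and p*n
    y = []
    for i, xi in enumerate(x):
        action = decide(x[: i + 1], errors, erasures)  # causal: prefix only
        if action == "error" and errors > 0:
            errors -= 1
            y.append((xi + random.randrange(1, q)) % q)  # change to another symbol
        elif action == "erase" and erasures > 0:
            erasures -= 1
            y.append(None)  # erasure mark
        else:
            y.append(xi)
    return y

# hypothetical greedy adversary: corrupt while any budget remains
print(online_channel([0, 1, 2, 3, 0, 1], q=4, p=0.34, p_star=0.17,
                     decide=lambda prefix, e, s: "error" if e else ("erase" if s else "pass")))
```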

    Error Correcting Codes for Uncompressed Messages

    Most types of messages we transmit (e.g., video, audio, images, text) are not fully compressed, since they do not have known efficient and information-theoretically optimal compression algorithms. When transmitting such messages, standard error correcting codes fail to take advantage of the fact that messages are not fully compressed. We show that in this setting, it is sub-optimal to use standard error correction. We consider a model where there is a set of "valid messages" which the sender may send, that may not be efficiently compressible, but where it is possible for the receiver to recognize valid messages. In this model, we construct a (probabilistic) encoding procedure that achieves better tradeoffs between data rate and error-resilience (compared to just applying a standard error correcting code). Additionally, our techniques yield improved efficiently decodable (probabilistic) codes for fully compressed messages (the standard setting where the set of valid messages is all binary strings) in the high-rate regime.
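    One natural way a receiver could exploit the ability to recognize valid messages, sketched under our own assumptions (this illustrates the model, not necessarily the paper's construction): list-decode the received word, then keep only the candidates that pass the validity check.

```python
# Hedged sketch: combining any list decoder with a validity predicate, as the
# model in the abstract permits. `list_decode` and `is_valid` are assumed to be
# supplied by the user; nothing here is the paper's actual encoding procedure.
from typing import Callable, List

def decode_with_validity(received: bytes,
                         list_decode: Callable[[bytes], List[bytes]],
                         is_valid: Callable[[bytes], bool]) -> List[bytes]:
    candidates = list_decode(received)             # small list containing the message
    return [m for m in candidates if is_valid(m)]  # prune messages no sender would send

# toy usage with stub decoder and predicate:
print(decode_with_validity(b"rx", lambda r: [b"hello", b"h3ll0"], lambda m: m.isalpha()))
```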