4,771 research outputs found

    Upper bound on list-decoding radius of binary codes

    Full text link
    Consider the problem of packing Hamming balls of a given relative radius subject to the constraint that they cover any point of the ambient Hamming space with multiplicity at most $L$. For odd $L \ge 3$ an asymptotic upper bound on the rate of any such packing is proven. The resulting bound improves the best known bound (due to Blinovsky, 1986) for rates below a certain threshold. The method is a superposition of the linear-programming idea of Ashikhmin, Barg and Litsyn (used previously to improve the estimates of Blinovsky for $L=2$) and a Ramsey-theoretic technique of Blinovsky. As an application, it is shown that for all odd $L$ the slope of the rate-radius tradeoff is zero at zero rate. Comment: IEEE Trans. Inform. Theory, accepted
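
    The packing condition above is the combinatorial form of $(p, L)$-list-decodability: no Hamming ball of the given relative radius may contain more than $L$ codewords. A minimal brute-force sketch of that check on a toy binary code (the code, radius, and list size below are illustrative choices, not taken from the paper):

```python
from itertools import product

def hamming(u, v):
    """Number of coordinates in which u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def is_list_decodable(code, radius, L):
    """Packing check: every center of the ambient Hamming space has
    at most L codewords within the given (absolute) radius."""
    n = len(code[0])
    for center in product((0, 1), repeat=n):
        if sum(1 for c in code if hamming(center, c) <= radius) > L:
            return False
    return True

# Toy example: the length-4 even-weight code, radius 1, list size L = 4.
code = [c for c in product((0, 1), repeat=4) if sum(c) % 2 == 0]
print(is_list_decodable(code, radius=1, L=4))   # True for this toy code
```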

    Combinatorial limitations of average-radius list-decoding

    Full text link
    We study certain combinatorial aspects of list-decoding, motivated by the exponential gap between the known upper bound (of $O(1/\gamma)$) and lower bound (of $\Omega_p(\log(1/\gamma))$) for the list-size needed to decode up to radius $p$ with rate $\gamma$ away from capacity, i.e., rate $1-h(p)-\gamma$, where $h(\cdot)$ is the binary entropy function (here $p \in (0,1/2)$ and $\gamma > 0$). Our main result is the following: We prove that in any binary code $C \subseteq \{0,1\}^n$ of rate $1-h(p)-\gamma$, there must exist a set $\mathcal{L} \subset C$ of $\Omega_p(1/\sqrt{\gamma})$ codewords such that the average distance of the points in $\mathcal{L}$ from their centroid is at most $pn$. In other words, there must exist $\Omega_p(1/\sqrt{\gamma})$ codewords with low "average radius." The standard notion of list-decoding corresponds to working with the maximum distance of a collection of codewords from a center instead of the average distance. The average-radius form is in itself quite natural and is implied by the classical Johnson bound. The remaining results concern the standard notion of list-decoding, and help clarify the combinatorial landscape of list-decoding: 1. We give a short simple proof, over all fixed alphabets, of the above-mentioned $\Omega_p(\log(1/\gamma))$ lower bound. Earlier, this bound followed from a complicated, more general result of Blinovsky. 2. We show that one {\em cannot} improve the $\Omega_p(\log(1/\gamma))$ lower bound via techniques based on identifying the zero-rate regime for list decoding of constant-weight codes. 3. We show a "reverse connection": constant-weight codes for list decoding imply general codes for list decoding with higher rate. 4. We give simple second-moment-based proofs of tight (up to constant factors) lower bounds on the list-size needed for list decoding random codes and random linear codes from errors as well as erasures. Comment: 28 pages. Extended abstract in RANDOM 201
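
    The "average radius" of a list used here is the average Hamming distance of its codewords from the best single center; over the binary alphabet that optimum center is simply the coordinate-wise majority word. A small sketch of the quantity (the three words below are made-up examples, not codewords of any particular code):

```python
def average_radius(words):
    """Average Hamming distance of the words from the best single center,
    which over the binary alphabet is the coordinate-wise majority word."""
    n = len(words[0])
    # A majority vote in each coordinate minimizes the summed Hamming distance.
    center = [1 if 2 * sum(w[i] for w in words) > len(words) else 0
              for i in range(n)]
    total = sum(sum(a != b for a, b in zip(w, center)) for w in words)
    return total / len(words)

# Three illustrative words of length 6.
words = [(0, 0, 1, 1, 0, 1),
         (0, 1, 1, 0, 0, 1),
         (1, 0, 1, 1, 0, 0)]
print(average_radius(words))   # 4/3 for this toy list
```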

    List Decoding Tensor Products and Interleaved Codes

    Full text link
    We design the first efficient algorithms and prove new combinatorial bounds for list decoding tensor products of codes and interleaved codes. We show that for {\em every} code, the ratio of its list decoding radius to its minimum distance stays unchanged under the tensor product operation (rather than squaring, as one might expect). This gives the first efficient list decoders and new combinatorial bounds for some natural codes, including multivariate polynomials where the degree in each variable is bounded. We show that for {\em every} code, its list decoding radius remains unchanged under $m$-wise interleaving for an integer $m$. This generalizes a recent result of Dinur et al. \cite{DGKS}, who proved such a result for interleaved Hadamard codes (equivalently, linear transformations). Using the notion of generalized Hamming weights, we give better list size bounds for {\em both} tensoring and interleaving of binary linear codes. By analyzing the weight distribution of these codes, we reduce the task of bounding the list size to bounding the number of close-by low-rank codewords. For decoding linear transformations, using rank-reduction together with other ideas, we obtain list size bounds that are tight over small fields. Comment: 32 pages
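
    For linear codes the tensor product has a concrete description: codewords of $C_1 \otimes C_2$ are exactly the matrices whose columns all lie in $C_1$ and whose rows all lie in $C_2$. A hedged sketch of that structure over GF(2) (the two small generator matrices are illustrative choices; numpy is assumed):

```python
import numpy as np
from itertools import product

# Illustrative generator matrices over GF(2):
# C1 = length-3 repetition code, C2 = [3,2] single-parity-check code.
G1 = np.array([[1, 1, 1]])                    # k1 x n1
G2 = np.array([[1, 0, 1], [0, 1, 1]])         # k2 x n2

def codewords(G):
    """All codewords of the binary linear code generated by G."""
    k = G.shape[0]
    return {tuple((np.array(m) @ G) % 2) for m in product((0, 1), repeat=k)}

C1, C2 = codewords(G1), codewords(G2)

# Codewords of C1 (x) C2 are (G1^T M G2) mod 2 over all k1 x k2 message
# matrices M; every column lies in C1 and every row lies in C2.
k1, k2 = G1.shape[0], G2.shape[0]
for bits in product((0, 1), repeat=k1 * k2):
    M = np.array(bits).reshape(k1, k2)
    W = (G1.T @ M @ G2) % 2                   # an n1 x n2 tensor codeword
    assert all(tuple(col) in C1 for col in W.T)
    assert all(tuple(row) in C2 for row in W)
print("all tensor codewords have columns in C1 and rows in C2")
```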

    A Combinatorial Bound on the List Size

    Get PDF
    In this paper we study the scenario in which a server sends dynamic data over a single broadcast channel to a number of passive clients. We consider the data to consist of discrete packets, where each update is sent in a separate packet. On demand, each client listens to the channel in order to obtain the most recent data packet. Such scenarios arise in many practical applications, such as the distribution of weather and traffic updates to wireless mobile devices and the broadcasting of stock price information over the Internet. To satisfy a request, a client must listen to at least one packet from beginning to end. We thus consider the design of a broadcast schedule which minimizes the time that passes between a client's request and the time that it hears a new data packet, i.e., the waiting time of the client. Previous studies have addressed this objective, assuming that client requests are distributed uniformly over time. However, in the general setting, the clients' behavior is difficult to predict and might not be known to the server. In this work we consider the design of universal schedules that guarantee a short waiting time for any possible client behavior. We define the model of dynamic broadcasting in the universal setting, and prove various results regarding the waiting time achievable in this framework.
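
    One simplified way to make the objective concrete: with unit-length packets whose transmission start times are known, a client requesting at time t must wait until the first packet that starts at or after t has been fully received. A minimal sketch of the resulting worst-case waiting time (the simplification and the schedule below are illustrative assumptions, not the paper's construction):

```python
def worst_case_waiting_time(starts, horizon):
    """Worst-case wait, over request times in [0, horizon), until a client
    has heard one unit-length packet from beginning to end.
    `starts` are the packet transmission start times, sorted ascending."""
    worst = 0.0
    # The wait is maximized by requests arriving just after a packet starts,
    # so it suffices to check time 0 and each start time plus epsilon.
    eps = 1e-9
    for t in [0.0] + [s + eps for s in starts if s + eps < horizon]:
        nxt = min((s for s in starts if s >= t), default=None)
        if nxt is None:
            continue  # no complete packet starts before the horizon
        worst = max(worst, nxt + 1.0 - t)
    return worst

# Packets broadcast back-to-back at times 0, 1, 2, ...: the wait approaches 2.
print(worst_case_waiting_time(starts=[0, 1, 2, 3, 4], horizon=5))
```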

    Two Theorems in List Decoding

    Full text link
    We prove the following results concerning the list decoding of error-correcting codes: (i) We show that for \textit{any} code with relative distance $\delta$ (over a large enough alphabet), the following result holds for \textit{random errors}: With high probability, for a $\rho \le \delta - \epsilon$ fraction of random errors (for any $\epsilon > 0$), the received word will have only the transmitted codeword in a Hamming ball of radius $\rho$ around it. Thus, for random errors, one can correct twice the number of errors uniquely correctable from worst-case errors for any code. A variant of our result also gives a simple algorithm to decode Reed-Solomon codes from random errors that, to the best of our knowledge, runs faster than known algorithms for certain ranges of parameters. (ii) We show that concatenated codes can achieve the list decoding capacity for erasures. A similar result for worst-case errors was proven by Guruswami and Rudra (SODA 08), although their result does not directly imply our result. Our results show that a subset of the random ensemble of codes considered by Guruswami and Rudra also achieves the list decoding capacity for erasures. Our proofs employ simple counting and probabilistic arguments. Comment: 19 pages, 0 figures
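
    Result (i) concerns random rather than worst-case error patterns: with high probability no other codeword lands within the decoding radius of the received word. A Monte Carlo sketch of that event on a small Reed-Solomon code (the parameters, trial count, and helper names are illustrative, and the toy size only loosely reflects the asymptotic statement):

```python
import random
from itertools import product

q, n, k = 13, 13, 3      # toy Reed-Solomon code: distance d = n - k + 1 = 11
points = list(range(q))  # evaluation points

def encode(msg):
    """Evaluate the polynomial with coefficient tuple `msg` at every point mod q."""
    return tuple(sum(c * pow(x, i, q) for i, c in enumerate(msg)) % q
                 for x in points)

code = [encode(msg) for msg in product(range(q), repeat=k)]

def unique_fraction(sent, weight, trials=200):
    """Fraction of random weight-`weight` error patterns after which `sent`
    is the only codeword within Hamming distance `weight` of the received word."""
    good = 0
    for _ in range(trials):
        pos = random.sample(range(n), weight)
        received = list(sent)
        for i in pos:
            received[i] = (received[i] + random.randrange(1, q)) % q
        close = [c for c in code
                 if sum(a != b for a, b in zip(c, received)) <= weight]
        good += (close == [sent])
    return good / trials

sent = code[1]
# Worst-case unique decoding stops at 5 errors here; random errors of
# weight 6 are still almost always uniquely decodable in this toy run.
print(unique_fraction(sent, weight=6))
```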

    The Capacity of Online (Causal) qq-ary Error-Erasure Channels

    Full text link
    In the $q$-ary online (or "causal") channel coding model, a sender wishes to communicate a message to a receiver by transmitting a codeword $\mathbf{x} = (x_1,\ldots,x_n) \in \{0,1,\ldots,q-1\}^n$ symbol by symbol via a channel limited to at most $pn$ errors and/or $p^{*}n$ erasures. The channel is "online" in the sense that at the $i$th step of communication the channel decides whether to corrupt the $i$th symbol or not based on its view so far, i.e., its decision depends only on the transmitted symbols $(x_1,\ldots,x_i)$. This is in contrast to the classical adversarial channel, in which the corruption is chosen by a channel that has full knowledge of the sent codeword $\mathbf{x}$. In this work we study the capacity of $q$-ary online channels for a combined corruption model, in which the channel may impose at most $pn$ {\em errors} and at most $p^{*}n$ {\em erasures} on the transmitted codeword. The online channel (in both the error and erasure case) has seen a number of recent studies which present both upper and lower bounds on its capacity. In this work, we give a full characterization of the capacity as a function of $q$, $p$, and $p^{*}$. Comment: This is a new version of the binary case, which can be found at arXiv:1412.637
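
    The defining feature is the causality constraint: the channel's decision about symbol $i$ may depend only on $x_1,\ldots,x_i$ and its remaining corruption budgets. A hypothetical simulation harness for that interface (the function names, budget handling, and toy adversary below are assumptions for illustration, not the model analyzed in the paper):

```python
import random

def transmit_online(x, adversary, p, p_star, q):
    """Pass codeword x through a causal adversary limited to at most
    p*n symbol errors and p_star*n erasures.  The adversary sees only
    the prefix transmitted so far (the causality constraint)."""
    n = len(x)
    err_budget, era_budget = int(p * n), int(p_star * n)
    y = []
    for i in range(n):
        action = adversary(x[:i + 1], err_budget, era_budget)
        if action == "erase" and era_budget > 0:
            y.append(None)                                   # erasure symbol
            era_budget -= 1
        elif action == "error" and err_budget > 0:
            y.append((x[i] + random.randrange(1, q)) % q)    # corrupted symbol
            err_budget -= 1
        else:
            y.append(x[i])
    return y

# Illustrative causal adversary: try to corrupt whenever the current symbol is 0.
def zero_attacker(prefix, err_budget, era_budget):
    return "error" if prefix[-1] == 0 else "pass"

x = [random.randrange(4) for _ in range(20)]   # q = 4, n = 20
print(transmit_online(x, zero_attacker, p=0.25, p_star=0.1, q=4))
```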