
    Upper bound on list-decoding radius of binary codes

    Consider the problem of packing Hamming balls of a given relative radius subject to the constraint that they cover any point of the ambient Hamming space with multiplicity at most $L$. For odd $L \ge 3$ an asymptotic upper bound on the rate of any such packing is proven. The resulting bound improves the best known bound (due to Blinovsky, 1986) for rates below a certain threshold. The method is a superposition of the linear-programming idea of Ashikhmin, Barg and Litsyn (used previously to improve Blinovsky's estimates for $L=2$) and a Ramsey-theoretic technique of Blinovsky. As an application, it is shown that for all odd $L$ the slope of the rate-radius tradeoff is zero at zero rate. Comment: IEEE Trans. Inform. Theory, accepted
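
    To make the packing condition concrete, the following minimal Python sketch (not from the paper; the codes, radius and multiplicity below are illustrative) brute-forces the check for a small binary code: Hamming balls of a given radius around the codewords must cover every point of {0,1}^n at most L times.

        from itertools import product

        def multiplicity_ok(code, radius, L, n):
            """Check that the Hamming balls of the given radius around the codewords
            cover each point of {0,1}^n at most L times (brute force; small n only)."""
            for point in product((0, 1), repeat=n):
                hits = sum(
                    1 for c in code
                    if sum(a != b for a, b in zip(c, point)) <= radius
                )
                if hits > L:
                    return False
            return True

        n = 4
        repetition = [(0, 0, 0, 0), (1, 1, 1, 1)]
        even_weight = [c for c in product((0, 1), repeat=n) if sum(c) % 2 == 0]
        print(multiplicity_ok(repetition, radius=1, L=1, n=n))   # True: the two balls are disjoint
        print(multiplicity_ok(even_weight, radius=1, L=2, n=n))  # False: odd-weight points lie in 4 balls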

    Multiple Packing: Lower Bounds via Infinite Constellations

    We study the problem of high-dimensional multiple packing in Euclidean space. Multiple packing is a natural generalization of sphere packing and is defined as follows. Let $N>0$ and $L\in\mathbb{Z}_{\ge 2}$. A multiple packing is a set $\mathcal{C}$ of points in $\mathbb{R}^n$ such that any point in $\mathbb{R}^n$ lies in the intersection of at most $L-1$ balls of radius $\sqrt{nN}$ around points in $\mathcal{C}$. Given a well-known connection with coding theory, multiple packings can be viewed as the Euclidean analog of list-decodable codes, which are well-studied for finite fields. In this paper, we derive the best known lower bounds on the optimal density of list-decodable infinite constellations for constant $L$ under a stronger notion called average-radius multiple packing. To this end, we apply tools from high-dimensional geometry and large deviation theory. Comment: The paper arXiv:2107.05161 has been split into three parts with new results added and significant revision. This paper is one of the three parts. The other two are arXiv:2211.04408 and arXiv:2211.0440
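
    As a hypothetical illustration of the definition (not taken from the paper), note that for a finite candidate set the condition "no point of R^n lies in L of the balls" is equivalent to requiring that every L-subset of centers has a minimum enclosing ball of radius larger than sqrt(nN). The Python sketch below approximates that radius with the Badoiu-Clarkson iteration; all parameters are illustrative.

        import itertools
        import numpy as np

        def meb_radius(points, iters=200):
            """Approximate minimum-enclosing-ball radius via the Badoiu-Clarkson
            iteration (repeatedly step toward the current farthest point)."""
            pts = np.asarray(points, dtype=float)
            center = pts[0].copy()
            for k in range(1, iters + 1):
                far = pts[np.argmax(np.linalg.norm(pts - center, axis=1))]
                center += (far - center) / (k + 1)
            # Slight overestimate of the true radius; good enough for a sketch.
            return np.linalg.norm(pts - center, axis=1).max()

        def is_multiple_packing(C, L, radius):
            """True if no point of R^n lies in L of the radius-r balls around C:
            every L-subset must have an enclosing ball strictly larger than r."""
            return all(meb_radius(s) > radius for s in itertools.combinations(C, L))

        rng = np.random.default_rng(0)
        n, N, L = 8, 0.1, 3                    # illustrative parameters
        C = rng.normal(size=(20, n))           # toy candidate point set
        print(is_multiple_packing(C, L, np.sqrt(n * N)))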

    It'll probably work out: improved list-decoding through random operations

    In this work, we introduce a framework to study the effect of random operations on the combinatorial list-decodability of a code. The operations we consider correspond to row and column operations on the matrix obtained from the code by stacking the codewords together as columns. This captures many natural transformations on codes, such as puncturing, folding, and taking subcodes; we show that many such operations can improve the list-decoding properties of a code. There are two main points to this. First, our goal is to advance our (combinatorial) understanding of list-decodability, by understanding what structure (or lack thereof) is necessary to obtain it. Second, we use our more general results to obtain a few interesting corollaries for list decoding: (1) We show the existence of binary codes that are combinatorially list-decodable from a $1/2-\epsilon$ fraction of errors with optimal rate $\Omega(\epsilon^2)$ that can be encoded in linear time. (2) We show that any code with $\Omega(1)$ relative distance, when randomly folded, is combinatorially list-decodable from a $1-\epsilon$ fraction of errors with high probability. This formalizes the intuition for why the folding operation has been successful in obtaining codes with optimal list decoding parameters; previously, all arguments used algebraic methods and worked only with specific codes. (3) We show that any code which is list-decodable with suboptimal list sizes has many subcodes which have near-optimal list sizes, while retaining the error correcting capabilities of the original code. This generalizes recent results where subspace evasive sets have been used to reduce list sizes of codes that achieve list decoding capacity.
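
    The row/column view of these operations can be made concrete with a small Python sketch (illustrative only; the specific code and parameters are not from the paper): codewords are stacked as columns, puncturing keeps a subset of rows, taking a subcode keeps a subset of columns, and folding groups blocks of rows into symbols over a larger alphabet.

        import numpy as np

        def code_matrix(code):
            """Stack codewords as columns: rows are positions, columns are codewords."""
            return np.array(code, dtype=int).T

        def puncture(M, rows_to_keep):
            """Puncturing = keeping a subset of rows (coordinate positions)."""
            return M[rows_to_keep, :]

        def subcode(M, cols_to_keep):
            """Taking a subcode = keeping a subset of columns (codewords)."""
            return M[:, cols_to_keep]

        def fold(M, m):
            """Folding with parameter m: group consecutive blocks of m rows into
            one symbol over {0,1}^m, giving a code of length n/m."""
            n, size = M.shape
            assert n % m == 0
            return M.reshape(n // m, m, size)  # [i, :, j] = i-th folded symbol of codeword j

        code = [c for c in np.ndindex(2, 2, 2, 2) if sum(c) % 2 == 0]  # [4, 3] even-weight code
        M = code_matrix(code)                  # shape (4, 8)
        print(puncture(M, [0, 2]).shape)       # (2, 8): length-2 punctured code
        print(subcode(M, [0, 1, 2]).shape)     # (4, 3): a 3-codeword subcode
        print(fold(M, 2).shape)                # (2, 2, 8): length-2 code over {0,1}^2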

    A Lower Bound on List Size for List Decoding

    A $q$-ary error-correcting code $C \subseteq \{1, 2, \ldots, q\}^n$ is said to be list decodable to radius $\rho$ with list size $L$ if every Hamming ball of radius $\rho$ contains at most $L$ codewords of $C$. We prove that in order for a $q$-ary code to be list-decodable up to radius $(1 - 1/q)(1 - \epsilon)n$, we must have $L = \Omega(1/\epsilon^2)$. Specifically, we prove that there exists a constant $c_q > 0$ and a function $f_q$ such that for small enough $\epsilon > 0$, if $C$ is list-decodable to radius $(1 - 1/q)(1 - \epsilon)n$ with list size $c_q/\epsilon^2$, then $C$ has at most $f_q(\epsilon)$ codewords, independent of $n$. This result is asymptotically tight (treating $q$ as a constant), since such codes with an exponential (in $n$) number of codewords are known for list size $L = O(1/\epsilon^2)$. A result similar to ours is implicit in Blinovsky [Bli1] for the binary ($q = 2$) case. Our proof is simpler and works for all alphabet sizes, and provides more intuition for why the lower bound arises.
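
    To get a feel for the list-size versus epsilon tradeoff in the statement, the toy Python experiment below (not from the paper; all parameters are illustrative) samples a random q-ary code and probes the largest number of codewords inside Hamming balls of radius (1 - 1/q)(1 - epsilon)n around randomly drawn centers. The probe only gives a rough lower estimate of the true worst case, but it shows the list size growing as epsilon shrinks.

        import numpy as np

        def probe_list_size(code, radius, num_centers, q, rng):
            """Rough lower estimate of the worst-case list size at the given radius,
            obtained by counting codewords in balls around random centers."""
            n = code.shape[1]
            worst = 0
            for _ in range(num_centers):
                center = rng.integers(q, size=n)
                dists = (code != center).sum(axis=1)
                worst = max(worst, int((dists <= radius).sum()))
            return worst

        rng = np.random.default_rng(1)
        q, n, M = 2, 64, 200                       # alphabet size, length, number of codewords
        code = rng.integers(q, size=(M, n))        # random q-ary code
        for eps in (0.3, 0.2, 0.1):
            radius = (1 - 1 / q) * (1 - eps) * n
            print(eps, probe_list_size(code, radius, num_centers=2000, q=q, rng=rng))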

    Combinatorial limitations of average-radius list-decoding

    We study certain combinatorial aspects of list-decoding, motivated by the exponential gap between the known upper bound (of $O(1/\gamma)$) and lower bound (of $\Omega_p(\log(1/\gamma))$) for the list-size needed to decode up to radius $p$ with rate $\gamma$ away from capacity, i.e., $1-h(p)-\gamma$ (here $p\in(0,1/2)$ and $\gamma>0$). Our main result is the following: We prove that in any binary code $C \subseteq \{0,1\}^n$ of rate $1-h(p)-\gamma$, there must exist a set $\mathcal{L} \subset C$ of $\Omega_p(1/\sqrt{\gamma})$ codewords such that the average distance of the points in $\mathcal{L}$ from their centroid is at most $pn$. In other words, there must exist $\Omega_p(1/\sqrt{\gamma})$ codewords with low "average radius." The standard notion of list-decoding corresponds to working with the maximum distance of a collection of codewords from a center instead of average distance. The average-radius form is in itself quite natural and is implied by the classical Johnson bound. The remaining results concern the standard notion of list-decoding, and help clarify the combinatorial landscape of list-decoding:
    1. We give a short simple proof, over all fixed alphabets, of the above-mentioned $\Omega_p(\log(1/\gamma))$ lower bound. Earlier, this bound followed from a complicated, more general result of Blinovsky.
    2. We show that one cannot improve the $\Omega_p(\log(1/\gamma))$ lower bound via techniques based on identifying the zero-rate regime for list decoding of constant-weight codes.
    3. We show a "reverse connection" showing that constant-weight codes for list decoding imply general codes for list decoding with higher rate.
    4. We give simple second moment based proofs of tight (up to constant factors) lower bounds on the list-size needed for list decoding random codes and random linear codes from errors as well as erasures.
    Comment: 28 pages. Extended abstract in RANDOM 201
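
    The "average radius" in the main result is easy to compute for a concrete set of codewords. The Python sketch below (illustrative; the paper's exact convention for the center may differ) reports both the average l1 distance to the real-valued centroid and the average Hamming distance to the coordinate-wise majority vector, which minimizes the average Hamming distance over binary centers.

        import numpy as np

        def average_radius(codewords):
            """Average distance of binary codewords from a central point, under two
            natural readings: the real-valued centroid and the majority vector."""
            X = np.asarray(codewords, dtype=float)
            centroid = X.mean(axis=0)
            avg_to_centroid = np.abs(X - centroid).sum(axis=1).mean()
            majority = (centroid >= 0.5).astype(float)
            avg_to_majority = np.abs(X - majority).sum(axis=1).mean()
            return avg_to_centroid, avg_to_majority

        rng = np.random.default_rng(2)
        words = rng.integers(2, size=(5, 16))      # a handful of random length-16 codewords
        print(average_radius(words))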

    Multiple Packing: Lower and Upper Bounds

    We study the problem of high-dimensional multiple packing in Euclidean space. Multiple packing is a natural generalization of sphere packing and is defined as follows. Let $N>0$ and $L\in\mathbb{Z}_{\ge 2}$. A multiple packing is a set $\mathcal{C}$ of points in $\mathbb{R}^n$ such that any point in $\mathbb{R}^n$ lies in the intersection of at most $L-1$ balls of radius $\sqrt{nN}$ around points in $\mathcal{C}$. We study the multiple packing problem for both bounded point sets whose points have norm at most $\sqrt{nP}$ for some constant $P>0$ and unbounded point sets whose points are allowed to be anywhere in $\mathbb{R}^n$. Given a well-known connection with coding theory, multiple packings can be viewed as the Euclidean analog of list-decodable codes, which are well-studied for finite fields. In this paper, we derive various bounds on the largest possible density of a multiple packing in both bounded and unbounded settings. A related notion called average-radius multiple packing is also studied. Some of our lower bounds exactly pin down the asymptotics of certain ensembles of average-radius list-decodable codes, e.g., (expurgated) Gaussian codes and (expurgated) spherical codes. In particular, our lower bound obtained from spherical codes is the best known lower bound on the optimal multiple packing density and is the first lower bound that approaches the known large $L$ limit under the average-radius notion of multiple packing. To derive these results, we apply tools from high-dimensional geometry and large deviation theory. Comment: The paper arXiv:2107.05161 has been split into three parts with new results added and significant revision. This paper is one of the three parts. The other two are arXiv:2211.04408 and arXiv:2211.0440
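
    One way to make the average-radius notion concrete for a finite point set is the following Python sketch. The normalization used here (every L-subset must have average squared distance from its centroid exceeding nN) is an assumption made for illustration; the paper's exact definition should be consulted.

        import itertools
        import numpy as np

        def avg_radius_sq(points):
            """Average squared distance of a set of points from their centroid."""
            pts = np.asarray(points, dtype=float)
            centroid = pts.mean(axis=0)
            return float(((pts - centroid) ** 2).sum(axis=1).mean())

        def is_avg_radius_multiple_packing(C, L, n, N):
            """Hypothetical average-radius (L-1)-multiple packing check: every
            L-subset's average squared distance from its centroid must exceed nN
            (this normalization is an assumption, not the paper's text)."""
            return all(avg_radius_sq(s) > n * N for s in itertools.combinations(C, L))

        rng = np.random.default_rng(3)
        n, N, L = 8, 0.1, 3                        # illustrative parameters
        C = rng.normal(size=(15, n))               # toy point set
        print(is_avg_radius_multiple_packing(C, L, n, N))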

    List Decoding Random Euclidean Codes and Infinite Constellations

    We study the list decodability of different ensembles of codes over the real alphabet under the assumption of an omniscient adversary. It is a well-known result that when the source and the adversary have power constraints $P$ and $N$ respectively, the list decoding capacity is equal to $\frac{1}{2}\log\frac{P}{N}$. Random spherical codes achieve constant list sizes, and the goal of the present paper is to obtain a better understanding of the smallest achievable list size as a function of the gap to capacity. We show a reduction from arbitrary codes to spherical codes, and derive a lower bound on the list size of typical random spherical codes. We also give an upper bound on the list size achievable using nested Construction-A lattices and infinite Construction-A lattices. We then define and study a class of infinite constellations that generalize Construction-A lattices and prove upper and lower bounds for the same. Other goodness properties such as packing goodness and AWGN goodness of infinite constellations are proved along the way. Finally, we consider random lattices sampled from the Haar distribution and show that if a certain number-theoretic conjecture is true, then the list size grows as a polynomial function of the gap-to-capacity.
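
    As a toy illustration of the setup behind the (1/2) log(P/N) formula (not the paper's construction; all parameters are illustrative and far from the asymptotic regime), the Python sketch below draws a random spherical code with power P, perturbs one codeword by a noise vector of norm slightly below sqrt(nN), and counts the codewords within sqrt(nN) of the received point; that count is the list size for this particular noise realization.

        import numpy as np

        rng = np.random.default_rng(4)
        n, P, N = 40, 1.0, 0.25
        capacity = 0.5 * np.log2(P / N)            # list-decoding capacity, bits per dimension
        rate = 0.3                                 # illustrative rate, well below capacity
        M = int(2 ** (rate * n))                   # number of codewords

        # Random spherical code: rows uniform on the sphere of radius sqrt(nP).
        code = rng.normal(size=(M, n))
        code *= np.sqrt(n * P) / np.linalg.norm(code, axis=1, keepdims=True)

        # Transmit codeword 0; add noise of norm just under sqrt(nN) to avoid boundary effects.
        noise = rng.normal(size=n)
        noise *= 0.99 * np.sqrt(n * N) / np.linalg.norm(noise)
        received = code[0] + noise

        # List = all codewords within distance sqrt(nN) of the received point.
        list_size = int((np.linalg.norm(code - received, axis=1) <= np.sqrt(n * N)).sum())
        print(capacity, M, list_size)              # at this toy scale the list is typically just {code[0]}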

    Multiple Packing: Lower Bounds via Error Exponents

    We derive lower bounds on the maximal rates for multiple packings in high-dimensional Euclidean spaces. Multiple packing is a natural generalization of the sphere packing problem. For any $N>0$ and $L\in\mathbb{Z}_{\ge 2}$, a multiple packing is a set $\mathcal{C}$ of points in $\mathbb{R}^n$ such that any point in $\mathbb{R}^n$ lies in the intersection of at most $L-1$ balls of radius $\sqrt{nN}$ around points in $\mathcal{C}$. We study this problem for both bounded point sets whose points have norm at most $\sqrt{nP}$ for some constant $P>0$ and unbounded point sets whose points are allowed to be anywhere in $\mathbb{R}^n$. Given a well-known connection with coding theory, multiple packings can be viewed as the Euclidean analog of list-decodable codes, which are well-studied for finite fields. We derive the best known lower bounds on the optimal multiple packing density. This is accomplished by establishing a curious inequality which relates the list-decoding error exponent for additive white Gaussian noise channels, a quantity of average-case nature, to the list-decoding radius, a quantity of worst-case nature. We also derive various bounds on the list-decoding error exponent in both bounded and unbounded settings which are of independent interest beyond multiple packing. Comment: The paper arXiv:2107.05161 has been split into three parts with new results added and significant revision. This paper is one of the three parts. The other two are arXiv:2211.04407 and arXiv:2211.0440
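
    The average-case quantity named in the abstract, the list-decoding error probability over an AWGN channel, can be estimated by simulation at toy scale. The Python sketch below (an illustration with assumed parameters, not the paper's method) declares an error whenever the transmitted codeword is not among the L codewords closest to the channel output, and averages over fresh random Gaussian codebooks.

        import numpy as np

        def list_error_rate(n, rate_bits, P, N, L, trials, rng):
            """Monte Carlo estimate of the list-decoding error probability of random
            Gaussian codebooks over an AWGN channel with noise variance N: an error
            occurs when the transmitted codeword is not among the L closest codewords."""
            M = max(2, int(2 ** (rate_bits * n)))
            errors = 0
            for _ in range(trials):
                code = rng.normal(scale=np.sqrt(P), size=(M, n))   # fresh random codebook
                received = code[0] + rng.normal(scale=np.sqrt(N), size=n)
                dists = np.linalg.norm(code - received, axis=1)
                if 0 not in np.argsort(dists)[:L]:
                    errors += 1
            return errors / trials

        rng = np.random.default_rng(5)
        print(list_error_rate(n=20, rate_bits=0.4, P=1.0, N=0.25, L=2, trials=200, rng=rng))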