On list recovery of high-rate tensor codes
We continue the study of list recovery properties of high-rate tensor codes, initiated by Hemenway, Ron-Zewi, and Wootters (FOCS'17). In that work it was shown that the tensor product of an efficient (poly-time) high-rate globally list recoverable code is approximately locally list recoverable, as well as globally list recoverable in probabilistic near-linear time. This was used in turn to give the first capacity-achieving list decodable codes with (1) local list decoding algorithms, and (2) probabilistic near-linear time global list decoding algorithms. This also yielded constant-rate codes approaching the Gilbert-Varshamov bound with probabilistic near-linear time global unique decoding algorithms. In the current work we obtain the following results:

1. The tensor product of an efficient (poly-time) high-rate globally list recoverable code is globally list recoverable in deterministic near-linear time. This yields in turn the first capacity-achieving list decodable codes with deterministic near-linear time global list decoding algorithms. It also gives constant-rate codes approaching the Gilbert-Varshamov bound with deterministic near-linear time global unique decoding algorithms.

2. If the base code is additionally locally correctable, then the tensor product is (genuinely) locally list recoverable. This yields in turn (non-explicit) constant-rate codes approaching the Gilbert-Varshamov bound that are locally correctable with query complexity and running time N^{o(1)}. This improves over prior work by Gopi et al. (SODA'17; IEEE Transactions on Information Theory'18) that only gave query complexity N^ε with rate that is exponentially small in 1/ε.

3. A nearly-tight combinatorial lower bound on output list size for list recovering high-rate tensor codes. This bound in turn implies a nearly-tight lower bound of N^{Ω(1/log log N)} on the product of query complexity and output list size for locally list recovering high-rate tensor codes.
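The tensor product operation underlying these results has a simple concrete description for linear codes: a codeword of C ⊗ C is a matrix all of whose rows and columns are codewords of C. The following toy sketch (not from the paper; the specific [3,2] parity code and the generator-matrix construction G^T·A·G are illustrative assumptions) verifies this row/column structure exhaustively for a tiny binary code.

```python
# Toy illustration (an assumption for exposition, not the paper's construction):
# for a linear code C with generator matrix G, the tensor code C ⊗ C consists
# of the matrices G^T · A · G over GF(2) -- exactly the matrices whose rows
# and columns all lie in C.
import itertools
import numpy as np

G = np.array([[1, 0, 1],
              [0, 1, 1]])  # generator of the [3,2] even-parity code over GF(2)

# All 4 codewords of the base code C.
codewords = {tuple(np.mod(np.array(m) @ G, 2))
             for m in itertools.product([0, 1], repeat=2)}

# Enumerate all 2^4 tensor codewords and check the row/column structure.
for entries in itertools.product([0, 1], repeat=4):
    A = np.array(entries).reshape(2, 2)           # 2x2 message matrix
    M = np.mod(G.T @ A @ G, 2)                    # a 3x3 codeword of C ⊗ C
    assert all(tuple(row) in codewords for row in M)    # every row is in C
    assert all(tuple(col) in codewords for col in M.T)  # every column is in C
print("all 16 tensor codewords have rows and columns in C")
```

This row/column structure is what local algorithms exploit: querying a single row or column of a tensor codeword yields a codeword of the (much shorter) base code.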
Bounds for list-decoding and list-recovery of random linear codes
A family of error-correcting codes is list-decodable from error fraction p if, for every code in the family, the number of codewords in any Hamming ball of fractional radius p is less than some integer L. It is said to be list-recoverable for input list size ℓ if, for every subset of at least L codewords, there is a coordinate where the codewords take more than ℓ values. In this work, we study the list size of random linear codes for both list-decoding and list-recovery as the rate approaches capacity. We show that the following claims hold with high probability over the choice of the code (below q is the alphabet size, and ε > 0 is the gap to capacity).

(1) A random linear code of rate 1 − log_q(ℓ) − ε requires list size L ≥ ℓ^{Ω(1/ε)} for list-recovery from input list size ℓ.

(2) A random linear code of rate 1 − h_q(p) − ε requires list size L ≥ ⌊h_q(p)/ε + 0.99⌋ for list-decoding from error fraction p.

(3) A random binary linear code of rate 1 − h_2(p) − ε is list-decodable from average error fraction p with list size L ≤ ⌊h_2(p)/ε⌋ + 2.

Our lower bounds follow by exhibiting an explicit subset of codewords so that this subset, or some symbol-wise permutation of it, lies in a random linear code with high probability. Our upper bound follows by strengthening a result of Li and Wootters (2018).
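The list-decoding definition above can be made concrete at toy scale: sample a random binary linear code and compute the largest number of codewords in any Hamming ball of radius ⌊pn⌋. The parameters below are illustrative assumptions chosen for exhaustive search, far from the capacity-approaching regime the abstract studies.

```python
# Toy sketch of the list-decoding definition (illustrative parameters,
# not the paper's regime): the worst-case list size of a random binary
# linear code is the maximum number of codewords in any Hamming ball
# of radius ⌊pn⌋.
import itertools
import random

n, k, p = 10, 3, 0.2          # block length, dimension, error fraction
radius = int(p * n)           # Hamming balls of radius ⌊pn⌋

random.seed(0)
basis = [[random.randint(0, 1) for _ in range(n)] for _ in range(k)]

# Span of the random basis over GF(2): the random linear code.
code = set()
for coeffs in itertools.product([0, 1], repeat=k):
    cw = [sum(c * b[i] for c, b in zip(coeffs, basis)) % 2 for i in range(n)]
    code.add(tuple(cw))

def ball_count(center):
    """Number of codewords within Hamming distance `radius` of `center`."""
    return sum(1 for cw in code
               if sum(a != b for a, b in zip(cw, center)) <= radius)

# The code is list-decodable from error fraction p with list size L
# iff every such ball contains fewer than L codewords.
worst = max(ball_count(c) for c in itertools.product([0, 1], repeat=n))
print("worst-case list size:", worst)
```

The paper's bounds concern how this worst-case L must grow as the rate approaches 1 − h_q(p), where exhaustive search is of course infeasible.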