Deletion codes in the high-noise and high-rate regimes
The noise model of deletions poses significant challenges in coding theory,
with basic questions like the capacity of the binary deletion channel still
being open. In this paper, we study the harder model of worst-case deletions,
with a focus on constructing efficiently decodable codes for the two extreme
regimes of high-noise and high-rate. Specifically, we construct polynomial-time
decodable codes with the following trade-offs (for any eps > 0):
(1) Codes that can correct a fraction 1-eps of deletions with rate poly(eps)
over an alphabet of size poly(1/eps);
(2) Binary codes of rate 1-O~(sqrt(eps)) that can correct a fraction eps of
deletions; and
(3) Binary codes that can be list decoded from a fraction (1/2-eps) of
deletions with rate poly(eps).
Our work is the first to achieve the qualitative goals of correcting a
deletion fraction approaching 1 over bounded alphabets, and correcting a
constant fraction of bit deletions with rate approaching 1. The above results
bring our understanding of deletion code constructions in these regimes on
par with that for worst-case errors.
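As a concrete reminder of what the worst-case deletion model demands: a length-n code corrects t adversarial deletions exactly when no two distinct codewords share a common subsequence of length n - t. A minimal Python sketch (the toy code and helper names are illustrative, not from the paper) brute-forces this criterion:

```python
def lcs(a: str, b: str) -> int:
    # Classic dynamic program for longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def max_correctable_deletions(code: list) -> int:
    # A length-n code corrects t worst-case deletions iff every pair of
    # distinct codewords has LCS at most n - t - 1; return the largest such t.
    n = len(code[0])
    worst = max(lcs(u, v) for u in code for v in code if u != v)
    return n - worst - 1

# Toy example: the two-word code {000000, 111111} of length 6.
print(max_correctable_deletions(["000000", "111111"]))  # -> 5
```

With disjoint alphabets in the two codewords the LCS is 0, so even five deletions leave a symbol that identifies the codeword.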
Subquadratic time encodable codes beating the Gilbert-Varshamov bound
We construct explicit algebraic geometry codes built from the
Garcia-Stichtenoth function field tower beating the Gilbert-Varshamov bound for
alphabet sizes at least 192. Messages are identified with functions in certain
Riemann-Roch spaces associated with divisors supported on multiple places.
Encoding amounts to evaluating these functions at degree one places. By
exploiting algebraic structures particular to the Garcia-Stichtenoth tower, we
devise an intricate deterministic encoding algorithm with runtime exponent
\omega/2 < 1.19, and randomized (unique and list) decoding algorithms with
expected runtime exponent 1+\omega/2 < 2.19. Here \omega < 2.373 is the matrix
multiplication exponent.
If \omega = 2, as widely believed, the encoding and decoding runtimes are
respectively nearly linear and nearly quadratic. Prior to this work, the
encoding (resp. decoding) time of code families beating the Gilbert-Varshamov
bound was quadratic (resp. cubic) or worse.
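For context, "beating the Gilbert-Varshamov bound" can be checked numerically: algebraic geometry codes on optimal towers such as Garcia-Stichtenoth achieve rate R = 1 - \delta - 1/(\sqrt{q} - 1) for square alphabet sizes q, which exceeds the q-ary GV rate 1 - H_q(\delta) once q is large enough. A quick sketch (q = 256 and \delta = 0.5 are illustrative choices, not the paper's parameters):

```python
import math

def gv_rate(q, delta):
    # q-ary Gilbert-Varshamov bound: R >= 1 - H_q(delta),
    # with H_q the q-ary entropy function.
    hq = (delta * math.log(q - 1, q)
          - delta * math.log(delta, q)
          - (1 - delta) * math.log(1 - delta, q))
    return 1 - hq

def ag_rate(q, delta):
    # Rate of AG codes on an optimal tower (such as Garcia-Stichtenoth)
    # for square q: R = 1 - delta - 1/(sqrt(q) - 1).
    return 1 - delta - 1 / (math.isqrt(q) - 1)

q, delta = 256, 0.5          # illustrative square alphabet size, q = 16^2
print(gv_rate(q, delta) < ag_rate(q, delta))  # -> True: the AG bound wins here
```

At these parameters the GV rate is about 0.375 while the AG bound gives about 0.433.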
It'll probably work out: improved list-decoding through random operations
In this work, we introduce a framework to study the effect of random
operations on the combinatorial list-decodability of a code. The operations we
consider correspond to row and column operations on the matrix obtained from
the code by stacking the codewords together as columns. This captures many
natural transformations on codes, such as puncturing, folding, and taking
subcodes; we show that many such operations can improve the list-decoding
properties of a code. There are two main points to this. First, our goal is to
advance our (combinatorial) understanding of list-decodability, by
understanding what structure (or lack thereof) is necessary to obtain it.
Second, we use our more general results to obtain a few interesting corollaries
for list decoding:
(1) We show the existence of binary codes that are combinatorially
list-decodable from a given fraction of errors with optimal rate, and that
can be encoded in linear time.
(2) We show that any code with sufficiently large relative distance, when
randomly folded, is combinatorially list-decodable from a correspondingly
large fraction of errors with high probability. This formalizes the intuition
for why the folding operation has been successful in obtaining codes with
optimal list-decoding parameters; previously, all arguments used algebraic
methods and worked only with specific codes.
(3) We show that any code which is list-decodable with suboptimal list sizes
has many subcodes which have near-optimal list sizes, while retaining the error
correcting capabilities of the original code. This generalizes recent results
where subspace evasive sets have been used to reduce list sizes of codes that
achieve list decoding capacity.
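To make the folding operation concrete: folding with parameter m bundles m coordinates of a codeword into a single symbol over the alphabet \Sigma^m, and random folding applies a random permutation to the coordinates first. A minimal sketch (function names are my own):

```python
import random

def fold(codeword, m, perm=None):
    # Folding bundles m consecutive symbols into one symbol over the
    # alphabet Sigma^m; "random folding" permutes the coordinates first.
    if perm is not None:
        codeword = [codeword[i] for i in perm]
    assert len(codeword) % m == 0
    return [tuple(codeword[i:i + m]) for i in range(0, len(codeword), m)]

c = [0, 1, 1, 0, 1, 0, 0, 1]
print(fold(c, 2))                        # -> [(0, 1), (1, 0), (1, 0), (0, 1)]
perm = random.sample(range(len(c)), len(c))
print(fold(c, 2, perm))                  # a random folding of the same word
```

Note that a single symbol error in the folded word touches at most m coordinates of the original, which is why folding trades alphabet size for list-decoding radius.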
Linear-algebraic list decoding of folded Reed-Solomon codes
Folded Reed-Solomon codes are an explicit family of codes that achieve the
optimal trade-off between rate and error-correction capability: specifically,
for any \eps > 0, the author and Rudra (2006, 2008) presented an n^{O(1/\eps)}
time algorithm to list decode appropriate folded RS codes of rate R from a
fraction 1-R-\eps of errors. The algorithm is based on multivariate
polynomial interpolation and root-finding over extension fields. It was noted
by Vadhan that interpolating a linear polynomial suffices if one settles for a
smaller decoding radius (but still enough for a statement of the above form).
Here we give a simple linear-algebra based analysis of this variant that
eliminates the need for the computationally expensive root-finding step over
extension fields (and indeed any mention of extension fields). The entire list
decoding algorithm is linear-algebraic, solving one linear system for the
interpolation step, and another linear system to find a small subspace of
candidate solutions. Except for the step of pruning this subspace, the
algorithm can be implemented to run in {\em quadratic} time. The theoretical
drawback of folded RS codes is that both the decoding complexity and the
proven worst-case list-size bound are n^{\Omega(1/\eps)}. By combining the above
idea with a pseudorandom subset of all polynomials as messages, we get a Monte
Carlo construction achieving a list size bound of O(1/\eps^2) which is quite
close to the existential O(1/\eps) bound (however, the decoding complexity
remains n^{\Omega(1/\eps)}). Our work highlights that constructing an
explicit {\em subspace-evasive} subset that has small intersection with
low-dimensional subspaces could lead to explicit codes with better
list-decoding guarantees.
Comment: 16 pages. Extended abstract in Proc. of IEEE Conference on
Computational Complexity (CCC), 201
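To illustrate the "two linear systems" structure, here is a sketch of the degenerate unfolded case, where the linear-algebraic approach reduces to classical Welch-Berlekamp unique decoding of Reed-Solomon codes. This is a standard textbook illustration, not the paper's folded-code algorithm, and all parameters are toy choices:

```python
import random

p = 101                       # a small prime field F_p
n, k = 12, 4                  # code length and dimension
e = (n - k) // 2              # unique-decoding error budget

def poly_eval(c, x):
    # Evaluate sum_j c[j] x^j over F_p (coefficients low-to-high).
    return sum(cj * pow(x, j, p) for j, cj in enumerate(c)) % p

def poly_divmod(a, b):
    # Polynomial division of coefficient lists over F_p.
    a = a[:]
    while b and b[-1] == 0:
        b = b[:-1]
    binv = pow(b[-1], p - 2, p)
    q = [0] * max(1, len(a) - len(b) + 1)
    for i in range(len(a) - len(b), -1, -1):
        q[i] = a[i + len(b) - 1] * binv % p
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - q[i] * bj) % p
    return q, a

def nullspace_vector(rows, cols):
    # Gaussian elimination mod p; returns one nonzero kernel vector.
    M = [r[:] for r in rows]
    pivots, free, r = {}, [], 0
    for c in range(cols):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            free.append(c)
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [v * inv % p for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [(x - M[i][c] * y) % p for x, y in zip(M[i], M[r])]
        pivots[c] = r
        r += 1
    v = [0] * cols
    v[free[0]] = 1                    # more unknowns than equations here,
    for c, row in pivots.items():     # so a free variable always exists
        v[c] = -M[row][free[0]] % p
    return v

def decode(points):
    # Linear system 1 (interpolation): nonzero Q(X,Y) = A0(X) + A1(X)*Y
    # with deg A0 < e + k, deg A1 <= e, and Q(x_i, y_i) = 0 for all i.
    rows = [[pow(x, j, p) for j in range(e + k)] +
            [y * pow(x, j, p) % p for j in range(e + 1)]
            for x, y in points]
    v = nullspace_vector(rows, 2 * e + k + 1)
    A0, A1 = v[:e + k], v[e + k:]
    # Linear system 2 (retrieval): A0 + A1*f = 0, so f = -A0 / A1,
    # an exact polynomial division when at most e errors occurred.
    f, _ = poly_divmod([-a % p for a in A0], A1)
    return f[:k]

msg = [3, 1, 4, 1]                    # message polynomial f, degree < k
xs = list(range(n))
ys = [poly_eval(msg, x) for x in xs]
for i in random.sample(range(n), e):  # inject e worst-case errors
    ys[i] = (ys[i] + 1 + random.randrange(p - 1)) % p
print(decode(list(zip(xs, ys))))      # -> [3, 1, 4, 1]
```

Both steps are plain linear algebra, which mirrors the abstract's point: no root-finding over extension fields is needed.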
Some Applications of Coding Theory in Computational Complexity
Error-correcting codes and related combinatorial constructs play an important
role in several recent (and old) results in computational complexity theory. In
this paper we survey results on locally-testable and locally-decodable
error-correcting codes, and their applications to complexity theory and to
cryptography.
Locally decodable codes are error-correcting codes with sub-linear time
error-correcting algorithms. They are related to private information retrieval
(a type of cryptographic protocol), and they are used in average-case
complexity and to construct ``hard-core predicates'' for one-way permutations.
Locally testable codes are error-correcting codes with sub-linear time
error-detection algorithms, and they are the combinatorial core of
probabilistically checkable proofs.
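The textbook example of sub-linear time error correction is the 2-query local decoder for the Hadamard code: to recover message bit i from a corrupted codeword, query a random position r and position r \oplus e_i, and XOR the two answers. A minimal sketch (all parameters illustrative):

```python
import random

def inner(a, b):
    # Inner product over F_2 of the bit-vectors encoded by integers a, b.
    return bin(a & b).count("1") % 2

def hadamard_encode(msg, n):
    # Position a of the codeword holds <msg, a> over F_2; length 2^n.
    return [inner(msg, a) for a in range(2 ** n)]

def local_decode(word, n, i, trials=31):
    # 2-query local decoder for message bit i: word[r] XOR word[r ^ e_i]
    # equals bit i whenever both queried positions are uncorrupted;
    # take a majority vote over a few random choices of r.
    votes = sum(word[r] ^ word[r ^ (1 << i)]
                for r in (random.randrange(2 ** n) for _ in range(trials)))
    return int(votes > trials // 2)

n, msg = 4, 0b1011
word = hadamard_encode(msg, n)
word[random.randrange(2 ** n)] ^= 1      # flip one of the 16 positions
# With high probability every bit is recovered despite the corruption:
print([local_decode(word, n, i) for i in range(n)])
```

Each decoding call reads only 2 * trials positions of the 2^n-length codeword, which is the "sub-linear time" property the survey discusses.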
Optimal rate list decoding via derivative codes
The classical family of Reed-Solomon codes over a field \F_q consists of the
evaluations of polynomials f \in \F_q[X] of degree less than k at n distinct
field elements. In this work, we consider a closely related family of codes,
called (order m) {\em derivative codes} and defined over fields of large
characteristic, which consist of the evaluations of f as well as its first
m-1 formal derivatives at n distinct field elements. For large enough m, we
show that these codes can be list-decoded in polynomial time from an error
fraction approaching 1-R, where R is the rate of the code. This gives an
alternate construction to folded Reed-Solomon codes for achieving the optimal
trade-off between rate and list error-correction radius. Our decoding
algorithm is linear-algebraic, and involves solving a linear system to
interpolate a multivariate polynomial, and then solving another structured
linear system to retrieve the list of candidate polynomials f. The algorithm
for derivative codes offers some advantages compared to a similar one for
folded Reed-Solomon codes in terms of efficient unique decoding in the presence
of side information.
Comment: 11 pages
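A minimal encoder sketch for an order-m derivative code, assuming a prime field whose characteristic exceeds the polynomial degrees involved (the field, parameters, and function names are illustrative, not from the paper):

```python
p = 10007                        # prime characteristic, larger than deg f

def formal_derivative(c):
    # d/dX of sum_j c[j] X^j: coefficient j*c[j] shifts down one degree.
    return [(j * cj) % p for j, cj in enumerate(c)][1:]

def evaluate(c, x):
    return sum(cj * pow(x, j, p) for j, cj in enumerate(c)) % p

def derivative_encode(f, points, m):
    # Order-m derivative code: each coordinate is the column
    # (f(x), f'(x), ..., f^{(m-1)}(x)) at an evaluation point x.
    cw = []
    for x in points:
        c, col = f, []
        for _ in range(m):
            col.append(evaluate(c, x))
            c = formal_derivative(c)
        cw.append(tuple(col))
    return cw

f = [2, 0, 5, 1]                 # f(X) = 2 + 5 X^2 + X^3
print(derivative_encode(f, [0, 1, 2], m=2))
# -> [(2, 0), (8, 13), (30, 32)]
```

The large-characteristic assumption matters because in small characteristic the formal derivatives can collapse (e.g. the derivative of X^p vanishes), losing information about f.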