145 research outputs found
Erasure List-Decodable Codes from Random and Algebraic Geometry Codes
Erasure list decoding was introduced to correct a larger number of erasures
by outputting a list of possible candidates. In the present paper, we consider
both random linear codes and algebraic geometry codes for list decoding erasure
errors. The contributions of this paper are two-fold. Firstly, we show that,
with high probability, a random linear code is an erasure list-decodable code
with constant list size that corrects a fraction of erasures matching the
information-theoretically optimal trade-off between information rate and
fraction of erasure errors (the rate and the gap to optimality can be chosen
independently). Secondly, we show that algebraic geometry codes are good
erasure list-decodable codes. Precisely speaking, a q-ary algebraic geometry
code of any fixed rate from the Garcia-Stichtenoth tower corrects, with
bounded list size, a fraction of erasure errors beyond what is guaranteed by
the Johnson bound applied to algebraic
geometry codes. Furthermore, list decoding of these algebraic geometry codes
can be implemented in polynomial time.
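As a toy illustration of the erasure list-decoding notion used above (not the paper's random-code or AG constructions), the sketch below brute-forces the list for a small binary linear code: the output is every codeword agreeing with the received word on the unerased positions. The [7,4] generator matrix and erasure pattern are hypothetical examples.

```python
from itertools import product

# Brute-force erasure list decoding for a tiny binary linear code: output
# every codeword that agrees with the received word on the unerased
# positions. (Purely illustrative; the paper's random linear and AG codes
# are far larger, and their list sizes are what the theorems bound.)

def encode(msg, G):
    n = len(G[0])
    return [sum(m * row[j] for m, row in zip(msg, G)) % 2 for j in range(n)]

def erasure_list_decode(received, G):
    """received: 0/1 at known positions, None at erasures."""
    known = [j for j, s in enumerate(received) if s is not None]
    out = []
    for msg in product((0, 1), repeat=len(G)):   # all 2^k messages
        cw = encode(msg, G)
        if all(cw[j] == received[j] for j in known):
            out.append(cw)
    return out

# Hypothetical [7,4] generator matrix and a 3-erasure pattern.
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
cw = encode((1, 0, 1, 1), G)
rx = [None if j in (0, 2, 4) else s for j, s in enumerate(cw)]
candidates = erasure_list_decode(rx, G)
print(len(candidates), candidates[0] == cw)
```

The list is exactly the set of messages solving a linear system over the unerased columns of G; its size is a power of two determined by the rank of that column submatrix (here the rank is full, so the list is just the transmitted codeword).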
Two Theorems in List Decoding
We prove the following results concerning the list decoding of
error-correcting codes:
(i) We show that for any code with relative distance δ (over a large enough
alphabet), the following result holds for random errors: with high
probability, under a ρ ≤ δ − ε fraction of random errors (for any ε > 0),
the received word has only the transmitted codeword in the Hamming ball of
relative radius ρ around it. Thus, for random errors,
one can correct twice the number of errors uniquely correctable from worst-case
errors for any code. A variant of our result also gives a simple algorithm to
decode Reed-Solomon codes from random errors that, to the best of our
knowledge, runs faster than known algorithms for certain ranges of parameters.
(ii) We show that concatenated codes can achieve the list decoding capacity
for erasures. A similar result for worst-case errors was proven by Guruswami
and Rudra (SODA 08), although their result does not directly imply our result.
Our results show that a subset of the random ensemble of codes considered by
Guruswami and Rudra also achieves the list decoding capacity for erasures.
Our proofs employ simple counting and probabilistic arguments.
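Claim (i) can be sanity-checked empirically. The sketch below simulates a random code over a large alphabet, applies a fraction of random errors well beyond half the minimum distance, and checks that the transmitted codeword is the only one within that radius of the received word. All parameters are illustrative choices, not taken from the paper.

```python
import random

# Empirical sanity check of claim (i): over a large alphabet, after a
# rho-fraction of *random* errors the received word typically has only the
# transmitted codeword within Hamming distance rho*n, even though rho is
# well beyond half the minimum distance (the worst-case unique radius).

q, n = 251, 40
random.seed(7)
code = [[random.randrange(q) for _ in range(n)] for _ in range(50)]

def dist(a, b):
    return sum(x != y for x, y in zip(a, b))

mindist = min(dist(a, b) for i, a in enumerate(code) for b in code[i + 1:])
errors = int(0.9 * mindist)        # far beyond the mindist/2 worst-case radius
sent = code[0]
received = sent[:]
for j in random.sample(range(n), errors):  # corrupt random positions with
    received[j] = random.randrange(q)      # uniformly random symbols
inside = [c for c in code if dist(received, c) <= errors]
print(errors > mindist // 2, inside == [sent])
```

The point of the large alphabet is visible in the simulation: a uniformly random error symbol almost never happens to agree with another codeword, so other codewords stay far from the received word.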
Linear-time list recovery of high-rate expander codes
We show that expander codes, when properly instantiated, are high-rate list
recoverable codes with linear-time list recovery algorithms. List recoverable
codes have been useful recently in constructing efficiently list-decodable
codes, as well as explicit constructions of matrices for compressive sensing
and group testing. Previous list recoverable codes with linear-time decoding
algorithms have all had rate at most 1/2; in contrast, our codes can have
rate 1 − ε for any ε > 0. We can plug our high-rate codes into a
construction of Meir (2014) to obtain linear-time list recoverable codes of
arbitrary rates, which approach the optimal trade-off between the number of
non-trivial lists provided and the rate of the code. While list-recovery is
interesting on its own, our primary motivation is applications to
list-decoding. A slight strengthening of our result would imply linear-time
and optimally list-decodable codes for all rates, and our work is a step in
the direction of solving this important problem.
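List recovery itself is easy to state. The brute-force sketch below (nothing like the linear-time expander-code algorithms above) shows the zero-error version on a toy ternary code with made-up parameters: given a small input list per coordinate, return every codeword lying in the product set.

```python
from itertools import product

# Brute-force list recovery (zero-error case) on a toy ternary code:
# given an input list L_j per coordinate, return every codeword lying in
# the product set L_0 x ... x L_{n-1}. Real list-recoverable codes, like
# the expander codes above, achieve this in linear time at high rate.

def encode(msg, G, q=3):
    n = len(G[0])
    return tuple(sum(m * row[j] for m, row in zip(msg, G)) % q
                 for j in range(n))

def list_recover(lists, G, q=3):
    out = []
    for msg in product(range(q), repeat=len(G)):  # all q^k messages
        cw = encode(msg, G, q)
        if all(c in L for c, L in zip(cw, lists)):
            out.append(cw)
    return out

# Hypothetical [4,2] ternary generator matrix and size-2 input lists.
G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]
lists = [{0, 1}, {1, 2}, {0, 2}, {0, 1}]
print(list_recover(lists, G))
```

List decoding is the special case where each input list is the single received symbol at the unerased positions, which is why list-recoverable codes are a common building block for list-decodable ones.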
Algebraic Methods in Computational Complexity
Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test
are some of the most prominent examples. In some of the most exciting recent progress in Computational Complexity, the algebraic theme still plays a central role. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). The areas of derandomization and coding theory have also seen important advances. The seminar aimed to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas rely on ever more sophisticated and specialized mathematics, and the goal of the seminar was to play an important role in educating a diverse community about the latest techniques.
AG codes achieve list decoding capacity over constant-sized fields
The recently-emerging field of higher order MDS codes has sought to unify a
number of concepts in coding theory. Such areas captured by higher order MDS
codes include maximally recoverable (MR) tensor codes, codes with optimal
list-decoding guarantees, and codes with constrained generator matrices (as in
the GM-MDS theorem).
By proving these equivalences, Brakensiek-Gopi-Makam showed the existence of
optimally list-decodable Reed-Solomon codes over exponential sized fields.
Building on this, recent breakthroughs by Guo-Zhang and Alrabiah-Guruswami-Li
have shown that randomly punctured Reed-Solomon codes achieve list-decoding
capacity (which is a relaxation of optimal list-decodability) over linear size
fields. We extend these works by developing a formal theory of relaxed higher
order MDS codes. In particular, we show that there are two inequivalent
relaxations which we call lower and upper relaxations. The lower relaxation is
equivalent to relaxed optimal list-decodable codes and the upper relaxation is
equivalent to relaxed MR tensor codes with a single parity check per column.
We then generalize the techniques of GZ and AGL to show that both these
relaxations can be constructed over constant size fields by randomly puncturing
suitable algebraic-geometric codes. For this, we crucially use the generalized
GM-MDS theorem for polynomial codes recently proved by Brakensiek-Dhar-Gopi. We
obtain the following corollaries from our main result. First, randomly
punctured AG codes of any fixed rate achieve list-decoding capacity with
list size and field size that depend only on the gap to capacity. Prior to
this work, AG codes were not even known to achieve list-decoding capacity.
Second, by
randomly puncturing AG codes, we can construct relaxed MR tensor codes with a
single parity check per column over constant-sized fields, whereas
(non-relaxed) MR tensor codes require exponential field size.
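Random puncturing, the operation behind these corollaries, is simple to state: keep a uniformly random subset of coordinate positions and restrict every codeword to it. A minimal sketch (the codebook here is a made-up toy, not an AG code):

```python
import random

# Sketch of random puncturing: keep a uniformly random subset of
# coordinates and restrict every codeword to the surviving positions.
# (Toy codebook; the results above puncture algebraic-geometric codes.)

def random_puncture(codebook, n_kept, rng=random):
    n = len(codebook[0])
    keep = sorted(rng.sample(range(n), n_kept))  # surviving positions
    return keep, [tuple(cw[j] for j in keep) for cw in codebook]

codebook = [(0, 0, 0, 0, 0, 0), (1, 1, 1, 0, 0, 0),
            (0, 0, 1, 1, 1, 0), (1, 1, 0, 1, 1, 0)]
random.seed(1)
keep, punctured = random_puncture(codebook, 4)
print(keep, punctured)
```

Puncturing raises the rate (same number of codewords, shorter length); the technical content of the results above is that for suitable mother codes, a random choice of surviving positions preserves strong list-decoding guarantees.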
Efficiently decoding Reed-Muller codes from random errors
Reed-Muller codes encode an m-variate polynomial of degree r over F_2 by
evaluating it on all points in {0,1}^m. We denote this code by RM(m, r). The
minimal distance of RM(m, r) is 2^(m-r), and so the code cannot correct more
than half that number of errors in the worst case. For random errors one may
hope for a better result.
In this work we give an efficient algorithm (in the block length n = 2^m) for
decoding random errors in Reed-Muller codes far beyond the minimal distance.
Specifically, for low-rate codes (of small degree) we can correct, with high
probability, a random set of errors whose density approaches 1/2. For
high-rate codes (of degree m − r for small r), we can correct far more random
errors than is possible in the worst case.
More generally, our algorithm can correct any error pattern in a given
Reed-Muller code whenever the same pattern, viewed as erasures, can be
corrected in a related Reed-Muller code of higher rate. The results above
are obtained by applying recent results
of Abbe, Shpilka and Wigderson (STOC, 2015), Kumar and Pfister (2015) and
Kudekar et al. (2015) regarding the ability of Reed-Muller codes to correct
random erasures.
The algorithm is based on solving a carefully defined set of linear equations
and thus it is significantly different than other algorithms for decoding
Reed-Muller codes that are based on the recursive structure of the code. It can
be seen as a more explicit proof of a result of Abbe et al. that shows a
reduction from correcting erasures to correcting errors, and it also bears
some similarities with the famous Berlekamp-Welch algorithm for decoding
Reed-Solomon codes.
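To illustrate the linear-algebraic flavor of this approach, the sketch below implements the easier erasure version: interpolate the coefficients of the degree-r polynomial from the unerased evaluations by Gaussian elimination over GF(2). The paper's algorithm for errors solves a more carefully defined system; this toy is only the erasure baseline it builds on.

```python
from itertools import combinations, product

# Erasure decoding of RM(m, r) as a linear system in the polynomial's
# coefficients, solved by Gaussian elimination over GF(2). This is the
# (easier) erasure baseline; the algorithm above handles *errors* by
# solving a more carefully defined system.

def monomials(m, r):
    return [S for d in range(r + 1) for S in combinations(range(m), d)]

def eval_monomial(S, x):
    val = 1
    for i in S:
        val &= x[i]
    return val

def erasure_decode(received, m, r):
    """received: {evaluation point (bit tuple): bit} for unerased points.
    Returns the coefficient vector if the system determines it uniquely."""
    mons = monomials(m, r)
    # One GF(2) equation per unerased point: sum_S c_S * prod_{i in S} x_i = b.
    rows = [[eval_monomial(S, x) for S in mons] + [b]
            for x, b in received.items()]
    npiv = 0
    for col in range(len(mons)):
        piv = next((i for i in range(npiv, len(rows)) if rows[i][col]), None)
        if piv is None:
            return None          # too many erasures: system underdetermined
        rows[npiv], rows[piv] = rows[piv], rows[npiv]
        for i in range(len(rows)):
            if i != npiv and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[npiv])]
        npiv += 1
    return [rows[i][-1] for i in range(len(mons))]

# RM(3, 1): coefficients for monomials (), x0, x1, x2; f = 1 + x0 + x2.
m, r = 3, 1
support = {(), (0,), (2,)}
full = {x: sum(eval_monomial(S, x) for S in support) % 2
        for x in product((0, 1), repeat=m)}
erased = {x: b for x, b in full.items() if x not in [(0, 0, 0), (1, 1, 1)]}
print(erasure_decode(erased, m, r))      # coefficient vector of f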
Pauli Manipulation Detection codes and Applications to Quantum Communication over Adversarial Channels
We introduce and explicitly construct a quantum code we coin a "Pauli
Manipulation Detection" code (or PMD), which detects every Pauli error with
high probability. We apply them to construct the first near-optimal codes for
two tasks in quantum communication over adversarial channels. Our main
application is an approximate quantum code over n qubits which can efficiently
correct a number of (worst-case) erasure errors approaching the quantum
Singleton bound. Our construction is based on the composition of a PMD code
with a stabilizer code which is list-decodable from erasures.
Our second application is a quantum authentication code for "qubit-wise"
channels, which does not require a secret key. Remarkably, this gives an
example of a task in quantum communication which is provably impossible
classically. Our construction is based on a combination of PMD codes,
stabilizer codes, and classical non-malleable codes (Dziembowski et al., 2009),
and achieves "minimal redundancy".
- …