Deletion codes in the high-noise and high-rate regimes
The noise model of deletions poses significant challenges in coding theory,
with basic questions like the capacity of the binary deletion channel still
being open. In this paper, we study the harder model of worst-case deletions,
with a focus on constructing efficiently decodable codes for the two extreme
regimes of high-noise and high-rate. Specifically, we construct polynomial-time
decodable codes with the following trade-offs (for any eps > 0):
(1) Codes that can correct a fraction 1-eps of deletions with rate poly(eps)
over an alphabet of size poly(1/eps);
(2) Binary codes of rate 1-O~(sqrt(eps)) that can correct a fraction eps of
deletions; and
(3) Binary codes that can be list decoded from a fraction (1/2-eps) of
deletions with rate poly(eps).
Our work is the first to achieve the qualitative goals of correcting a
deletion fraction approaching 1 over bounded alphabets, and correcting a
constant fraction of bit deletions with rate approaching 1. The above results
bring our understanding of deletion code constructions in these regimes to a
level similar to that for worst-case errors.
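Unlike an erasure, a deletion removes a symbol without leaving any marker of its position, which is what makes the model hard: distinct strings can collapse to the same shorter subsequence. A minimal illustration in Python (the strings below are arbitrary examples, not codewords from the paper's constructions):

```python
def apply_deletions(word: str, positions: set) -> str:
    """Delete the symbols at the given indices; the receiver sees only
    the surviving symbols, with no gap markers left behind."""
    return "".join(c for i, c in enumerate(word) if i not in positions)

# Two distinct 3-bit strings become indistinguishable after one deletion
# each: from the subsequence "11" alone, a decoder cannot tell which of
# the two was sent.
print(apply_deletions("101", {1}))  # -> "11"
print(apply_deletions("110", {2}))  # -> "11"
```

This ambiguity, absent for erasures (where positions are known), is why even the capacity of the binary deletion channel remains open.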
A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
Two Theorems in List Decoding
We prove the following results concerning the list decoding of
error-correcting codes:
(i) We show that for \textit{any} code with a relative distance of $\delta$
(over a large enough alphabet), the following result holds for \textit{random
errors}: with high probability, for a $\rho \le \delta - \eps$ fraction of random
errors (for any $\eps > 0$), the received word will have only the transmitted
codeword in a Hamming ball of radius $\rho$ around it. Thus, for random errors,
one can correct twice the number of errors uniquely correctable from worst-case
errors for any code. A variant of our result also gives a simple algorithm to
decode Reed-Solomon codes from random errors that, to the best of our
knowledge, runs faster than known algorithms for certain ranges of parameters.
(ii) We show that concatenated codes can achieve the list decoding capacity
for erasures. A similar result for worst-case errors was proven by Guruswami
and Rudra (SODA 08), although their result does not directly imply our result.
Our results show that a subset of the random ensemble of codes considered by
Guruswami and Rudra also achieves the list decoding capacity for erasures.
Our proofs employ simple counting and probabilistic arguments.
Comment: 19 pages, 0 figures
A simplified procedure for correcting both errors and erasures of a Reed-Solomon code using the Euclidean algorithm
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
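This abstract and the earlier VLSI-comparison one rest on the same mechanism: run the extended Euclidean algorithm on x^(2t) and the syndrome polynomial (or, for erasures, on modified initial conditions), stopping once the remainder degree falls below a threshold; the remainder and the tracked Bezout coefficient then give the evaluator and locator polynomials, up to scaling. A minimal sketch of that mechanism over the toy prime field GF(7) (the papers work over GF(2^m); the field, example polynomials, and stopping degree here are illustrative only):

```python
P = 7  # toy prime field GF(7); real RS decoders work over GF(2^m)

def trim(a):
    """Drop trailing zero coefficients (lists are lowest-degree first)."""
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

def deg(a):
    a = trim(a)
    return -1 if a == [0] else len(a) - 1

def poly_sub(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return trim([(x - y) % P for x, y in zip(a, b)])

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return trim(out)

def poly_divmod(a, b):
    """Polynomial long division over GF(P); returns (quotient, remainder)."""
    b, r = trim(b), trim(a[:])
    inv_lead = pow(b[-1], P - 2, P)  # inverse of b's leading coefficient
    q = [0] * max(deg(r) - deg(b) + 1, 1)
    while deg(r) >= deg(b):
        d = deg(r) - deg(b)
        c = (r[deg(r)] * inv_lead) % P
        q[d] = c
        for i, coef in enumerate(b):
            r[i + d] = (r[i + d] - c * coef) % P
        r = trim(r)
    return trim(q), r

def euclid_key_equation(a, b, stop_deg):
    """Run Euclid on (a, b) until the remainder degree drops below
    stop_deg, tracking the Bezout coefficient of b.  Returns
    (evaluator, locator) satisfying locator * b = evaluator (mod a):
    with a = x^(2t) and b the syndrome polynomial, these solve the key
    equation up to a common scalar."""
    r0, r1 = trim(a), trim(b)
    t0, t1 = [0], [1]
    while deg(r1) >= stop_deg:
        q, rem = poly_divmod(r0, r1)
        r0, r1 = r1, rem
        t0, t1 = t1, poly_sub(t0, poly_mul(q, t1))
    return r1, t1
```

Changing only the initial values of `r1` and `t1` to the Forney syndrome and erasure locator polynomials, as both abstracts describe, extends the same loop to errata (errors plus erasures) without any new machinery.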
Efficiently decoding Reed-Muller codes from random errors
Reed-Muller codes encode an $m$-variate polynomial of degree $r$ by
evaluating it on all points in $\{0,1\}^m$. We denote this code by $RM(m,r)$.
The minimal distance of $RM(m,r)$ is $2^{m-r}$ and so it cannot correct more
than half that number of errors in the worst case. For random errors one may
hope for a better result.
In this work we give an efficient algorithm (in the block length $n = 2^m$) for
decoding random errors in Reed-Muller codes far beyond the minimal distance.
Specifically, for low rate codes (of degree $r = o(\sqrt{m})$) we can correct a
random set of $(1/2 - o(1))n$ errors with high probability. For high rate codes
(of degree $m - r$ for $r = o(\sqrt{m/\log m})$), we can correct roughly
$m^{r/2}$ errors.
More generally, for any integer $r$, our algorithm can correct any error
pattern in $RM(m, m-(2r+2))$ for which the same erasure pattern can be corrected
in $RM(m, m-(r+1))$. The results above are obtained by applying recent results
of Abbe, Shpilka and Wigderson (STOC, 2015), Kumar and Pfister (2015) and
Kudekar et al. (2015) regarding the ability of Reed-Muller codes to correct
random erasures.
The algorithm is based on solving a carefully defined set of linear equations
and thus is significantly different from other algorithms for decoding
Reed-Muller codes that are based on the recursive structure of the code. It can
be seen as a more explicit proof of a result of Abbe et al. that shows a
reduction from correcting erasures to correcting errors, and it also bears some
similarities with the famous Berlekamp-Welch algorithm for decoding
Reed-Solomon codes.
Comment: 18 pages, 2 figures
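The reduction above turns error correction into erasure correction, and erasure correction in any linear code is just linear algebra: the unerased coordinates give a linear system for the message. A toy sketch for the first-order code RM(3,1) over GF(2) (the code parameters, message, and erasure pattern are illustrative; the paper's algorithm solves a more carefully chosen system):

```python
from itertools import product

M = 3  # RM(3,1): evaluations of degree-<=1 polynomials on {0,1}^3
POINTS = list(product([0, 1], repeat=M))
# Generator rows: the monomials 1, x1, x2, x3 evaluated at every point.
GEN = [[1] * len(POINTS)] + [[p[i] for p in POINTS] for i in range(M)]

def encode(msg):
    """Evaluate msg[0] + msg[1]*x1 + msg[2]*x2 + msg[3]*x3 on all points."""
    return [sum(m * g[i] for m, g in zip(msg, GEN)) % 2
            for i in range(len(POINTS))]

def decode_erasures(word):
    """Recover the message from a word with None at erased positions,
    by Gaussian elimination over GF(2) on the unerased coordinates."""
    k = len(GEN)
    rows = [[GEN[j][i] for j in range(k)] + [word[i]]
            for i in range(len(word)) if word[i] is not None]
    pivots, r = [], 0
    for col in range(k):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            raise ValueError("too many erasures: system is underdetermined")
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[r])]
        pivots.append(col)
        r += 1
    msg = [0] * k
    for i, col in enumerate(pivots):
        msg[col] = rows[i][-1]
    return msg
```

Since RM(3,1) has distance 4, any 3 erasures leave a full-rank system and the message is recovered exactly; the paper's contribution is showing which *error* patterns admit such a solvable system.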
List Decoding Tensor Products and Interleaved Codes
We design the first efficient algorithms and prove new combinatorial bounds
for list decoding tensor products of codes and interleaved codes. We show that
for {\em every} code, the ratio of its list decoding radius to its minimum
distance stays unchanged under the tensor product operation (rather than
squaring, as one might expect). This gives the first efficient list decoders
and new combinatorial bounds for some natural codes including multivariate
polynomials where the degree in each variable is bounded. We show that for {\em
every} code, its list decoding radius remains unchanged under $m$-wise
interleaving for an integer $m$. This generalizes a recent result of Dinur et
al \cite{DGKS}, who proved such a result for interleaved Hadamard codes
(equivalently, linear transformations). Using the notion of generalized Hamming
weights, we give better list size bounds for {\em both} tensoring and
interleaving of binary linear codes. By analyzing the weight distribution of
these codes, we reduce the task of bounding the list size to bounding the
number of close-by low-rank codewords. For decoding linear transformations,
using rank-reduction together with other ideas, we obtain list size bounds that
are tight over small fields.
Comment: 32 pages
Fast transform decoding of nonsystematic Reed-Solomon codes
A Reed-Solomon (RS) code is considered to be a special case of a redundant residue polynomial (RRP) code, and a fast transform decoding algorithm to correct both errors and erasures is presented. This decoding scheme is an improvement of the decoding algorithm for the RRP code suggested by Shiozaki and Nishida, and can be realized readily on very large scale integration (VLSI) chips.
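Transform-domain decoding views the syndromes as a finite-field Fourier transform of the error pattern: a single error of value e at position i has the purely geometric spectrum e*alpha^(ij). A toy transform over GF(7) with the primitive element 3 of multiplicative order 6 (the paper works over GF(2^m); field, element, and length here are illustrative):

```python
P, ALPHA, N = 7, 3, 6  # GF(7); 3 has multiplicative order 6

def gf_dft(v):
    """Finite-field DFT: V[j] = sum_i v[i] * ALPHA^(i*j) in GF(P).
    In transform-domain RS decoding, the transform of the received word
    exposes the syndromes directly."""
    return [sum(v[i] * pow(ALPHA, i * j, P) for i in range(N)) % P
            for j in range(N)]

def gf_idft(V):
    """Inverse transform: v[i] = N^(-1) * sum_j V[j] * ALPHA^(-i*j)."""
    n_inv = pow(N % P, P - 2, P)       # 6^(-1) = 6 in GF(7)
    alpha_inv = pow(ALPHA, P - 2, P)   # 3^(-1) = 5 in GF(7)
    return [n_inv * sum(V[j] * pow(alpha_inv, i * j, P) for j in range(N)) % P
            for i in range(N)]
```

The transform is invertible, so once the error spectrum has been completed from the syndromes, one inverse transform yields the error pattern in the time domain.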