Two Theorems in List Decoding
We prove the following results concerning the list decoding of
error-correcting codes:
(i) We show that for \textit{any} code with a relative distance of $\delta$
(over a large enough alphabet), the following result holds for \textit{random
errors}: with high probability, for a $\rho \le \delta - \epsilon$ fraction of random
errors (for any $\epsilon > 0$), the received word will have only the transmitted
codeword in a Hamming ball of radius $\rho$ around it. Thus, for random errors,
one can correct twice the number of errors uniquely correctable from worst-case
errors for any code. A variant of our result also gives a simple algorithm to
decode Reed-Solomon codes from random errors that, to the best of our
knowledge, runs faster than known algorithms for certain ranges of parameters.
(ii) We show that concatenated codes can achieve the list decoding capacity
for erasures. A similar result for worst-case errors was proven by Guruswami
and Rudra (SODA 08), although their result does not directly imply our result.
Our results show that a subset of the random ensemble of codes considered by
Guruswami and Rudra also achieves the list decoding capacity for erasures.
Our proofs employ simple counting and probabilistic arguments.
Comment: 19 pages, 0 figures
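As a toy illustration of claim (i), and not the paper's algorithm, the sketch below corrupts a small Reed-Solomon codeword in rho = 8 random positions (twice the worst-case unique-decoding radius of 4) and exhaustively lists every codeword in the Hamming ball of radius rho around the received word. With high probability over the random errors (for a large enough field), only the transmitted codeword survives. All parameters (field size 7919, length 12, dimension 3) are illustrative choices.

```python
import random
from itertools import combinations

q, n, k = 7919, 12, 3          # prime field size, code length, dimension
pts = list(range(1, n + 1))    # distinct evaluation points
# minimum distance d = n - k + 1 = 10; worst-case unique decoding
# corrects floor((d-1)/2) = 4 errors, but we inject rho = 8.

def interpolate(xs, ys):
    """Evaluate the Lagrange interpolant through (xs, ys) mod q."""
    def p(x):
        total = 0
        for i, xi in enumerate(xs):
            num, den = 1, 1
            for j, xj in enumerate(xs):
                if j != i:
                    num = num * (x - xj) % q
                    den = den * (xi - xj) % q
            total = (total + ys[i] * num * pow(den, q - 2, q)) % q
        return total
    return p

def codewords_in_ball(recv, rho):
    """All codewords within Hamming distance rho of recv: any such
    codeword agrees with recv on >= n - rho positions, hence it is
    the interpolant of some k-subset of the received symbols."""
    found = set()
    for idx in combinations(range(n), k):
        p = interpolate([pts[i] for i in idx], [recv[i] for i in idx])
        cw = tuple(p(x) for x in pts)
        if sum(a != b for a, b in zip(cw, recv)) <= rho:
            found.add(cw)
    return found

random.seed(1)
coeffs = [random.randrange(q) for _ in range(k)]
sent = tuple(sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
             for x in pts)
rho = 8
recv = list(sent)
for i in random.sample(range(n), rho):
    recv[i] = (recv[i] + random.randrange(1, q)) % q  # random nonzero error
ball = codewords_in_ball(tuple(recv), rho)
print(sent in ball, len(ball))
```

The transmitted codeword is always in the ball (it agrees with the received word on the n - rho untouched positions); the point of the theorem is that, for random errors, it is typically the only one there.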
Analysis and Design of Tuned Turbo Codes
It has been widely observed that there exists a fundamental trade-off between
the minimum (Hamming) distance properties and the iterative decoding
convergence behavior of turbo-like codes. While capacity-achieving code
ensembles are typically asymptotically bad, in the sense that their minimum
distance does not grow linearly with block length, and therefore exhibit
an error floor at moderate-to-high signal-to-noise ratios, asymptotically good
codes usually converge further away from channel capacity. In this paper, we
introduce the concept of tuned turbo codes, a family of asymptotically good
hybrid concatenated code ensembles, where asymptotic minimum distance growth
rates, convergence thresholds, and code rates can be traded off using two
tuning parameters, {\lambda} and {\mu}. By decreasing {\lambda}, the asymptotic
minimum distance growth rate is reduced in exchange for improved iterative
decoding convergence behavior, while increasing {\lambda} raises the asymptotic
minimum distance growth rate at the expense of worse convergence behavior, and
thus the code performance can be tuned to fit the desired application. By
decreasing {\mu}, a similar tuning behavior can be achieved for higher rate
code ensembles.
Comment: Accepted for publication in IEEE Transactions on Information Theory
Deterministic Rateless Codes for BSC
A rateless code encodes a finite length information word into an infinitely
long codeword such that longer prefixes of the codeword can tolerate a larger
fraction of errors. A rateless code achieves capacity for a family of channels
if, for every channel in the family, reliable communication is obtained by a
prefix of the code whose rate is arbitrarily close to the channel's capacity.
As a result, a universal encoder can communicate over all channels in the
family while simultaneously achieving optimal communication overhead. In this
paper, we construct the first \emph{deterministic} rateless code for the binary
symmetric channel. Our code can be encoded and decoded in time per
bit and in almost logarithmic parallel time of , where
is any (arbitrarily slow) super-constant function. Furthermore, the error
probability of our code is almost exponentially small .
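The capacity-achieving prefix condition can be made concrete with a little arithmetic (our own illustration, with toy parameters, not taken from the paper): over BSC(p) the capacity is 1 - h(p) bits per channel use, so reliable transmission of k information bits needs a prefix of length just above k / (1 - h(p)), and noisier channels simply use longer prefixes of the same codeword.

```python
from math import log2

def h(p):
    """Binary entropy function."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

k = 1000  # information bits (toy value)
for p in [0.01, 0.05, 0.11]:
    capacity = 1 - h(p)        # capacity of BSC(p), bits per use
    m = int(k / capacity) + 1  # shortest prefix with rate k/m below capacity
    print(f"p={p}: capacity={capacity:.3f}, prefix length m={m}")
```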
Previous rateless codes are probabilistic (i.e., based on code ensembles),
require polynomial time per bit for decoding, and have inferior asymptotic
error probabilities. Our main technical contribution is a constructive proof
of the existence of an infinite generating matrix, each of whose prefixes
induces a weight distribution that approximates the expected weight distribution
of a random linear code.
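What "approximates the expected weight distribution of a random linear code" means can be illustrated with a toy sketch (our own illustration, not the paper's construction, which is deterministic; here we simply sample a random generating matrix): for a prefix of length m, the expected number of nonzero codewords of weight w in a random binary linear code is (2^k - 1) * C(m, w) / 2^m.

```python
import itertools
import random
from math import comb

k, n = 8, 24                   # info bits and full length (toy sizes)
random.seed(0)
# random k x n binary generating matrix; a length-m prefix of the code
# is generated by the first m columns
G = [[random.randrange(2) for _ in range(n)] for _ in range(k)]

def weight_dist(m):
    """Weight distribution of the length-m prefix code (nonzero messages)."""
    dist = [0] * (m + 1)
    for msg in itertools.product([0, 1], repeat=k):
        if any(msg):
            w = sum(sum(msg[i] * G[i][j] for i in range(k)) % 2
                    for j in range(m))
            dist[w] += 1
    return dist

m = 16
empirical = weight_dist(m)
expected = [(2**k - 1) * comb(m, w) / 2**m for w in range(m + 1)]
for w in range(m + 1):
    print(w, empirical[w], round(expected[w], 1))
```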