8,021 research outputs found
Two Theorems in List Decoding
We prove the following results concerning the list decoding of
error-correcting codes:
(i) We show that for \textit{any} code with a relative distance of $\delta$
(over a large enough alphabet), the following result holds for \textit{random
errors}: With high probability, for a $\rho \le \delta - \eps$ fraction of random
errors (for any $\eps > 0$), the received word will have only the transmitted
codeword in a Hamming ball of relative radius $\rho$ around it. Thus, for random errors,
one can correct twice the number of errors uniquely correctable from worst-case
errors for any code. A variant of our result also gives a simple algorithm to
decode Reed-Solomon codes from random errors that, to the best of our
knowledge, runs faster than known algorithms for certain ranges of parameters.
(ii) We show that concatenated codes can achieve the list decoding capacity
for erasures. A similar result for worst-case errors was proven by Guruswami
and Rudra (SODA 08), although their result does not directly imply our result.
Our results show that a subset of the random ensemble of codes considered by
Guruswami and Rudra also achieves the list decoding capacity for erasures.
Our proofs employ simple counting and probabilistic arguments.
Comment: 19 pages, 0 figures
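The random-error phenomenon in (i) is easy to observe numerically. Below is a minimal Python sketch with illustrative parameters (it is not the paper's algorithm): a random code over a 64-ary alphabet on 32 coordinates has minimum distance close to 32, so worst-case unique decoding is guaranteed only up to about 16 errors; yet after 20 \textit{random} errors the transmitted codeword is still, with high probability, the unique closest codeword.

```python
import random

def hamming(u, v):
    """Number of positions where u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def random_code(M, n, q, rng):
    """M random codewords of length n over a q-ary alphabet."""
    return [[rng.randrange(q) for _ in range(n)] for _ in range(M)]

def corrupt(c, k, q, rng):
    """Replace k distinct positions of c with fresh random symbols."""
    y = list(c)
    for i in rng.sample(range(len(c)), k):
        y[i] = rng.randrange(q)
    return y

rng = random.Random(1)
n, q, M = 32, 64, 16                 # large alphabet, few codewords
code = random_code(M, n, q, rng)
y = corrupt(code[0], k=20, q=q, rng=rng)   # ~0.63n random errors, well beyond half
                                           # the (near-n) minimum distance

dists = [hamming(y, c) for c in code]
# With high probability the transmitted codeword is the unique closest one.
print(dists.index(min(dists)), sorted(dists)[:2])
```

Because every other random codeword sits at distance roughly $n(1 - 1/q) \approx n$ from the received word, the 20 random errors are not enough to create a second candidate inside the ball.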
Slepian-Wolf Coding for Broadcasting with Cooperative Base-Stations
We propose a base-station (BS) cooperation model for broadcasting a discrete
memoryless source in a cellular or heterogeneous network. The model allows the
receivers to use helper BSs to improve network performance, and it permits the
receivers to have prior side information about the source. We establish the
model's information-theoretic limits in two operational modes: In Mode 1, the
helper BSs are given information about the channel codeword transmitted by the
main BS, and in Mode 2 they are provided correlated side information about the
source. Optimal codes for Mode 1 use \emph{hash-and-forward coding} at the
helper BSs; while, in Mode 2, optimal codes use source codes from Wyner's
\emph{helper source-coding problem} at the helper BSs. We prove the optimality
of both approaches by way of a new list-decoding generalisation of [8, Thm. 6],
and, in doing so, show an operational duality between Modes 1 and 2.Comment: 16 pages, 1 figur
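As rough intuition for \emph{hash-and-forward}, the helper BS need not describe its observation in full: a short hash suffices for the receiver to disambiguate a list of candidates it has formed from its own channel output and side information. A hypothetical Python sketch (the hash function and the candidate list are illustrative, not the paper's construction):

```python
import hashlib

def short_hash(seq, bits=16):
    """A bits-bit hash of a symbol sequence (stand-in for a random binning map)."""
    digest = hashlib.sha256(bytes(seq)).digest()
    return int.from_bytes(digest[:4], "big") % (1 << bits)

# Hypothetical decoder list: candidates the receiver cannot distinguish
# using its own channel output and side information alone.
true_word = [3, 1, 4, 1, 5, 9, 2, 6]
candidates = [true_word, [3, 1, 4, 1, 5, 9, 2, 7], [0, 1, 4, 1, 5, 9, 2, 6]]

# The helper BS, which knows the word, forwards only its short hash.
forwarded = short_hash(true_word)

# The receiver keeps exactly the list entries whose hash matches.
survivors = [c for c in candidates if short_hash(c) == forwarded]
print(survivors)
```

The true word always survives its own hash; a wrong candidate survives only on a hash collision, which a modest hash length makes unlikely for short lists.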
On the Geometry of Balls in the Grassmannian and List Decoding of Lifted Gabidulin Codes
The finite Grassmannian $\mathcal{G}_q(k,n)$ is defined as the set of all
$k$-dimensional subspaces of the ambient space $\mathbb{F}_q^n$. Subsets of
the finite Grassmannian are called constant dimension codes and have recently
found an application in random network coding. In this setting codewords from
$\mathcal{G}_q(k,n)$ are sent through a network channel and, since errors may
occur during transmission, the received words can possibly lie in
$\mathcal{G}_q(k',n)$, where $k' \neq k$. In this paper, we study the balls in
$\mathcal{G}_q(k,n)$ with center that is not necessarily in
$\mathcal{G}_q(k,n)$. We describe the balls with respect to two different
metrics, namely the subspace and the injection metric. Moreover, we use two
different techniques for describing these balls: one is the Pl\"ucker embedding
of $\mathcal{G}_q(k,n)$, and the second is a rational parametrization of
the matrix representation of the codewords.
With these results, we consider the problem of list decoding a certain family
of constant dimension codes, called lifted Gabidulin codes. We describe a way
of representing these codes by linear equations in either the matrix
representation or a subset of the Pl\"ucker coordinates. The union of these
equations and the equations which arise from the description of the ball of a
given radius in the Grassmannian describe the list of codewords with distance
less than or equal to the given radius from the received word.
Comment: To be published in Designs, Codes and Cryptography (Springer).
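Both metrics above can be computed from ranks alone: for subspaces $U, V$, the subspace distance is $d_S(U,V) = \dim U + \dim V - 2\dim(U \cap V)$ and the injection distance is $d_I(U,V) = \max(\dim U, \dim V) - \dim(U \cap V)$, where $\dim(U \cap V)$ follows from the rank of the stacked generator matrices. A small sketch over $\mathbb{F}_2$, with generator-matrix rows encoded as integer bitmasks (helper names are illustrative):

```python
def rank_gf2(rows):
    """Rank over GF(2) of a matrix whose rows are given as integer bitmasks."""
    rows, rank = list(rows), 0
    nbits = max((r.bit_length() for r in rows), default=0)
    for bit in reversed(range(nbits)):
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> bit & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> bit & 1:
                rows[i] ^= rows[rank]   # eliminate this bit from every other row
        rank += 1
    return rank

def subspace_dist(U, V):
    """d_S(U,V) = dim U + dim V - 2 dim(U meet V) = 2 rank([U; V]) - dim U - dim V."""
    return 2 * rank_gf2(U + V) - rank_gf2(U) - rank_gf2(V)

def injection_dist(U, V):
    """d_I(U,V) = max(dim U, dim V) - dim(U meet V) = rank([U; V]) - min(dim U, dim V)."""
    return rank_gf2(U + V) - min(rank_gf2(U), rank_gf2(V))

U = [0b100, 0b010]   # span{e1, e2} in F_2^3
V = [0b100, 0b001]   # span{e1, e3}
print(subspace_dist(U, V), injection_dist(U, V))   # prints: 2 1
```

Here $U$ and $V$ intersect in the line spanned by $e_1$, so $d_S = 2 + 2 - 2 = 2$ and $d_I = 2 - 1 = 1$.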
The Capacity of Online (Causal) $q$-ary Error-Erasure Channels
In the $q$-ary online (or "causal") channel coding model, a sender wishes to
communicate a message to a receiver by transmitting a codeword
$x = (x_1, \ldots, x_n)$ symbol by symbol via a channel limited to at most a
$p$ fraction of errors and/or a $p^*$ fraction of erasures. The channel is
"online" in the sense that at the $i$th step of communication the channel
decides whether to corrupt the $i$th symbol or not based on its view so far,
i.e., its decision depends only on the transmitted symbols $(x_1, \ldots, x_i)$.
This is in contrast to the classical adversarial channel in which the
corruption is chosen by a channel that has full knowledge of the sent
codeword $x$.
In this work we study the capacity of $q$-ary online channels for a combined
corruption model, in which the channel may impose at most $pn$ {\em errors} and
at most $p^* n$ {\em erasures} on the transmitted codeword. The online
channel (in both the error and erasure case) has seen a number of recent
studies which present both upper and lower bounds on its capacity. In this
work, we give a full characterization of the capacity as a function of $q$,
$p$, and $p^*$.
Comment: This is a new version of the binary case, which can be found at
arXiv:1412.637
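The causality constraint has a simple operational reading: the channel's decision at step $i$ may depend only on $x_1, \ldots, x_i$. A toy Python sketch of such a combined error-erasure online channel (the corruption policy here is an arbitrary illustration, not an optimal jamming strategy):

```python
def online_channel(codeword, q, max_errors, max_erasures, policy):
    """Corrupt a q-ary codeword symbol by symbol; `policy` sees only the
    prefix transmitted so far, mirroring the causal restriction."""
    out, errs, eras = [], 0, 0
    for i, x in enumerate(codeword):
        action = policy(codeword[: i + 1], errs, eras)
        if action == "erase" and eras < max_erasures:
            out.append(None)                 # None marks an erasure
            eras += 1
        elif action == "error" and errs < max_errors:
            out.append((x + 1) % q)          # any symbol different from x
            errs += 1
        else:
            out.append(x)                    # budget exhausted or no attack
    return out

# A toy causal policy: erase every third symbol, flip symbols equal to 0.
def policy(prefix, errs, eras):
    i = len(prefix) - 1
    if i % 3 == 2:
        return "erase"
    if prefix[-1] == 0:
        return "error"
    return "pass"

y = online_channel([0, 1, 2, 0, 1, 2], q=3, max_errors=2, max_erasures=2,
                   policy=policy)
print(y)   # prints: [1, 1, None, 1, 1, None]
```

An offline adversary could instead pick its corruption pattern after seeing the whole codeword; the capacity gap between the two models is exactly what the abstract's characterization quantifies.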
Generalizations of Fano's Inequality for Conditional Information Measures via Majorization Theory
Fano's inequality is one of the most elementary, ubiquitous, and important
tools in information theory. Using majorization theory, Fano's inequality is
generalized to a broad class of information measures, which contains those of
Shannon and R\'{e}nyi. When specialized to these measures, it recovers and
generalizes the classical inequalities. Key to the derivation is the
construction of an appropriate conditional distribution inducing a desired
marginal distribution on a countably infinite alphabet. The construction is
based on the infinite-dimensional version of Birkhoff's theorem proven by
R\'{e}v\'{e}sz [Acta Math. Hungar. 1962, 3, 188--198], and the
constraint of maintaining a desired marginal distribution is similar to
coupling in probability theory. Using our Fano-type inequalities for Shannon's
and R\'{e}nyi's information measures, we also investigate the asymptotic
behavior of the sequence of Shannon's and R\'{e}nyi's equivocations when the
error probabilities vanish. This asymptotic behavior provides a novel
characterization of the asymptotic equipartition property (AEP) via Fano's
inequality.
Comment: 44 pages, 3 figures
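For Shannon's measures, the classical inequality reads $H(X \mid Y) \le h(P_e) + P_e \log(|\mathcal{X}| - 1)$, where $h$ is the binary entropy and $P_e$ the error probability of an estimator $\hat{X}(Y)$. A quick numerical check in Python (the toy source and channel are illustrative):

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def fano_bound(p_err, alphabet_size):
    """Right-hand side of Fano's inequality: h(Pe) + Pe * log2(|X| - 1)."""
    return h2(p_err) + p_err * log2(alphabet_size - 1)

def cond_entropy(joint):
    """H(X|Y) in bits for a joint pmf given as {(x, y): p}."""
    p_y = {}
    for (x, y), p in joint.items():
        p_y[y] = p_y.get(y, 0.0) + p
    return -sum(p * log2(p / p_y[y]) for (x, y), p in joint.items() if p > 0)

# Toy setup: X uniform on {0,1,2}; Y = X w.p. 0.9, else Y = X+1 mod 3.
joint = {}
for x in range(3):
    joint[(x, x)] = 0.9 / 3
    joint[(x, (x + 1) % 3)] = 0.1 / 3

pe = 0.1   # error probability of the decoder x_hat(y) = y
print(cond_entropy(joint), "<=", fano_bound(pe, 3))
```

By symmetry $H(X \mid Y) = h(0.1) \approx 0.469$ bits here, strictly below the Fano bound $h(0.1) + 0.1 \cdot \log_2 2 \approx 0.569$ bits.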
Optimal Error Rates for Interactive Coding II: Efficiency and List Decoding
We study coding schemes for error correction in interactive communications.
Such interactive coding schemes simulate any $n$-round interactive protocol
using $N$ rounds over an adversarial channel that corrupts up to $\rho N$
transmissions. Important performance measures for a coding scheme are its
maximum tolerable error rate $\rho$, communication complexity $N$, and
computational complexity.
We give the first coding scheme for the standard setting which performs
optimally in all three measures: Our randomized non-adaptive coding scheme has
a near-linear computational complexity and tolerates any error rate
$\rho < 1/4$ with a linear communication complexity. This improves over
prior results, each of which performed well in only two of these measures.
We also give results for other settings of interest, namely, the first
computationally and communication-efficient schemes that tolerate
$\rho < 2/7$ adaptively, $\rho < 1/3$ if only one party is required to
decode, and $\rho < 1/2$ if list decoding is allowed. These are the
optimal tolerable error rates for the respective settings. These coding schemes
also have near-linear computational and communication complexity.
These results are obtained via two techniques: We give a general black-box
reduction which reduces unique decoding, in various settings, to list decoding.
We also show how to boost the computational and communication efficiency of any
list decoder to become near linear.
Comment: preliminary version
Coding theorems for turbo code ensembles
This paper is devoted to a Shannon-theoretic study of turbo codes. We prove that ensembles of parallel and serial turbo codes are "good" in the following sense. For a turbo code ensemble defined by a fixed set of component codes (subject only to mild necessary restrictions), there exists a positive number $\gamma_0$ such that for any binary-input memoryless channel whose Bhattacharyya noise parameter is less than $\gamma_0$, the average maximum-likelihood (ML) decoder block error probability approaches zero, at least as fast as $n^{-\beta}$, where $\beta$ is the "interleaver gain" exponent defined by Benedetto et al. in 1996.
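The Bhattacharyya noise parameter in the statement is $\gamma = \sum_y \sqrt{W(y|0)\,W(y|1)}$, and the theorem says that whenever $\gamma < \gamma_0$ the ensemble-average ML block error probability decays like $n^{-\beta}$. A small Python sketch computing $\gamma$ for two standard channels (illustrative, not from the paper):

```python
from math import sqrt

def bhattacharyya(w0, w1):
    """Bhattacharyya parameter gamma = sum_y sqrt(W(y|0) * W(y|1)),
    with the two output pmfs given as dicts over output symbols."""
    return sum(sqrt(w0.get(y, 0.0) * w1.get(y, 0.0)) for y in set(w0) | set(w1))

# Binary symmetric channel with crossover probability p: gamma = 2*sqrt(p*(1-p)).
p = 0.05
bsc0 = {0: 1 - p, 1: p}
bsc1 = {0: p, 1: 1 - p}

# Binary erasure channel with erasure probability eps ("?" output): gamma = eps.
eps = 0.3
bec0 = {0: 1 - eps, "?": eps}
bec1 = {1: 1 - eps, "?": eps}

print(bhattacharyya(bsc0, bsc1))   # 2*sqrt(0.05*0.95)
print(bhattacharyya(bec0, bec1))   # 0.3
```

The smaller $\gamma$ is, the less noisy the channel; the theorem's condition $\gamma < \gamma_0$ is a channel-quality threshold for the ensemble.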