Asymmetric Evaluations of Erasure and Undetected Error Probabilities
The problem of channel coding with the erasure option is revisited for
discrete memoryless channels. The interplay between the code rate, the
undetected and total error probabilities is characterized. Using the
information spectrum method, a sequence of codes of increasing blocklengths
is designed to illustrate this tradeoff. Furthermore, for additive discrete
memoryless channels with uniform input distribution, we establish that our
analysis is tight with respect to the ensemble average. This is done by
analysing the ensemble performance in terms of a tradeoff between the code
rate, the undetected and the total errors. This tradeoff is parametrized by the
threshold in a generalized likelihood ratio test. Two asymptotic regimes are
studied. First, the code rate tends to the capacity of the channel at a rate
slower than that corresponding to the moderate deviations regime. In this
case, both error probabilities decay subexponentially and asymmetrically. The
precise decay rates are characterized. Second, the code rate tends to capacity
at a rate of . In this case, the total error probability is
asymptotically a positive constant while the undetected error probability
decays as for some . The proof techniques involve
applications of a modified (or "shifted") version of the G\"artner-Ellis
theorem and the type class enumerator method to characterize the asymptotic
behavior of a sequence of cumulant generating functions.
Comment: 28 pages, no figures. In IEEE Transactions on Information Theory, 201
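The threshold tradeoff described above can be illustrated with a toy example. The sketch below (my own construction, not the paper's ensemble) uses a two-codeword repetition code over a BSC and a likelihood-ratio threshold test in the spirit of the generalized likelihood ratio test mentioned in the abstract: raising the threshold drives the undetected error probability down while the total (undetected plus erasure) error probability goes up. The function name and parameters are illustrative only.

```python
from itertools import product

def erasure_decoding_tradeoff(n=7, p=0.2, threshold=2.0):
    """Exact undetected/total error probabilities for a two-codeword
    repetition code over a BSC(p), decoded with a likelihood-ratio
    threshold test (erase when the ratio test fails).
    Toy construction for illustration, not the paper's ensemble."""
    # Codewords: all-zeros and all-ones; by symmetry, assume all-zeros is sent.
    undetected = erased = 0.0
    for y in product((0, 1), repeat=n):
        d = sum(y)                            # Hamming weight of the received word
        prob = p**d * (1 - p)**(n - d)        # P(y | all-zeros sent)
        like0 = p**d * (1 - p)**(n - d)       # likelihood of the all-zeros codeword
        like1 = p**(n - d) * (1 - p)**d       # likelihood of the all-ones codeword
        best, other = max(like0, like1), min(like0, like1)
        if best < threshold * other:
            erased += prob                    # ratio test fails: declare an erasure
        elif like1 > like0:
            undetected += prob                # confident but wrong: undetected error
    total = undetected + erased
    return undetected, total

for t in (1.0, 5.0, 100.0):
    ue, te = erasure_decoding_tradeoff(threshold=t)
    print(f"threshold={t:6.1f}  undetected={ue:.5f}  total={te:.5f}")
```

With threshold 1.0 the decoder reduces to plain maximum-likelihood decoding (no erasures); as the threshold grows, borderline received words are erased instead of decoded, trading total error for undetected error.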
Entanglement-assisted capacity of constrained quantum channel
In this paper we fill the gap in previous works by proving the formula for
the entanglement-assisted capacity of a quantum channel with an additive
constraint (such as a bosonic Gaussian channel). The main tools are the coding theorem for
classical-quantum constrained channels and a finite dimensional approximation
of the input density operators for entanglement-assisted capacity. The new
version contains an improved formulation of the sufficient conditions under
which the suprema in the capacity formulas are attained.
Comment: Extended version of a paper presented at the Quantum Informatics Symposium,
Zvenigorod, 1-4.10.200
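For context, the formula in question is the Bennett-Shor-Smolin-Thapliyal quantum mutual information, maximized over constrained inputs. The rendering below is a standard form sketched as background rather than quoted from the paper; here $S$ is the von Neumann entropy, $S(\rho, \Phi)$ the entropy exchange, and $H$ the constraint operator with bound $E$:

```latex
C_{\mathrm{ea}}(\Phi; H, E)
  = \sup_{\rho \,:\, \operatorname{Tr}\rho H \le E} I(\rho, \Phi),
\qquad
I(\rho, \Phi) = S(\rho) + S(\Phi(\rho)) - S(\rho, \Phi).
```

The abstract's remark about sufficient conditions under which the suprema are attained concerns when this supremum can be replaced by a maximum.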
Density Evolution for Asymmetric Memoryless Channels
Density evolution is one of the most powerful analytical tools for
low-density parity-check (LDPC) codes and graph codes with message passing
decoding algorithms. With channel symmetry as one of its fundamental
assumptions, density evolution (DE) has been widely and successfully applied to
different channels, including binary erasure channels, binary symmetric
channels, binary additive white Gaussian noise channels, etc. This paper
generalizes density evolution for non-symmetric memoryless channels, which in
turn broadens the applications to general memoryless channels, e.g. z-channels,
composite white Gaussian noise channels, etc. The central theorem underpinning
this generalization is the convergence to perfect projection for any fixed size
supporting tree. A new iterative formula of the same complexity is then
presented, and the theorems necessary for performance concentration
are developed. Several properties of the new density evolution method are
explored, including stability results for general asymmetric memoryless
channels. Simulations, code optimizations, and possible new applications
suggested by this new density evolution method are also provided. This result
is also used to prove the typicality of linear LDPC codes among the coset code
ensemble when the minimum check node degree is sufficiently large. It is shown
that the convergence to perfect projection is essential to the belief
propagation algorithm even when only symmetric channels are considered. Hence
the proof of the convergence to perfect projection serves also as a completion
of the theory of classical density evolution for symmetric memoryless channels.
Comment: To appear in the IEEE Transactions on Information Theory
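As a baseline for the generalization described above, classical density evolution for the binary erasure channel (the simplest symmetric case mentioned in the abstract) collapses to a one-dimensional recursion on the erasure probability of a message. The sketch below uses the (3,6)-regular LDPC ensemble as an illustrative choice; the function name and defaults are mine, not the paper's.

```python
def de_bec(eps, lam=lambda x: x**2, rho=lambda x: x**5, iters=200, tol=1e-12):
    """Classical density evolution for the binary erasure channel.
    Over the BEC the message densities collapse to a single erasure
    probability x, giving the recursion x <- eps * lam(1 - rho(1 - x)),
    where lam and rho are the edge-perspective degree polynomials.
    Defaults correspond to the (3,6)-regular LDPC ensemble."""
    x = eps
    for _ in range(iters):
        x_new = eps * lam(1 - rho(1 - x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Below the (3,6) BEC threshold (about 0.4294) the erasure fraction dies
# out; above it, the recursion stalls at a nonzero fixed point.
print(de_bec(0.40))  # essentially zero
print(de_bec(0.50))  # a positive fixed point
```

The asymmetric-channel generalization in the paper replaces this scalar recursion with an iteration over full message densities, but the fixed-point structure it analyzes is of the same kind.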
Discriminated Belief Propagation
Near optimal decoding of good error control codes is generally a difficult
task. However, for a certain type of (sufficiently) good codes an efficient
decoding algorithm with near optimal performance exists. These codes are
defined via a combination of constituent codes with low complexity trellis
representations. Their decoding algorithm is an instance of (loopy) belief
propagation and is based on an iterative transfer of constituent beliefs. The
beliefs are thereby given by the symbol probabilities computed in the
constituent trellises. Even though weak constituent codes are employed,
close-to-optimal performance is obtained, i.e., the encoder/decoder pair (almost)
achieves the information theoretic capacity. However, (loopy) belief
propagation only performs well for a rather specific set of codes, which limits
its applicability.
In this paper a generalisation of iterative decoding is presented. It is
proposed to transfer more values than just the constituent beliefs. This is
achieved by the transfer of beliefs obtained by independently investigating
parts of the code space. This leads to the concept of discriminators, which are
used to improve the decoder resolution within certain areas and define
discriminated symbol beliefs. It is shown that these beliefs approximate the
overall symbol probabilities. This leads to an iteration rule that (below
channel capacity) typically only admits the solution of the overall decoding
problem. Via a Gauss approximation a low complexity version of this algorithm
is derived. Moreover, the approach may then be applied to a wide range of
channel maps without a significant complexity increase.
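The constituent beliefs discussed above are symbol probabilities computed in a trellis, i.e., the output of a forward-backward (BCJR) recursion. The sketch below shows such a computation for a deliberately tiny setup of my own choosing (a rate-1 accumulator observed through a BSC), not the paper's code construction, to make the notion of a trellis-computed symbol belief concrete.

```python
def bcjr_posteriors(r, p):
    """Exact symbol posteriors P(u_t = 1 | r) via the forward-backward
    (BCJR) recursion on the two-state trellis of a rate-1 accumulator
    y_t = u_t XOR y_{t-1} (with y_{-1} = 0), observed through a BSC(p).
    The inputs u_t are a priori uniform."""
    n = len(r)

    def gamma(t, s, u):
        # Branch metric: prior of u times channel likelihood of output s ^ u.
        y = s ^ u
        return 0.5 * ((1 - p) if r[t] == y else p)

    # Forward recursion; the state is the previous output bit.
    alpha = [[0.0, 0.0] for _ in range(n + 1)]
    alpha[0][0] = 1.0
    for t in range(n):
        for s in range(2):
            for u in range(2):
                alpha[t + 1][s ^ u] += alpha[t][s] * gamma(t, s, u)

    # Backward recursion.
    beta = [[0.0, 0.0] for _ in range(n + 1)]
    beta[n] = [1.0, 1.0]
    for t in range(n - 1, -1, -1):
        for s in range(2):
            for u in range(2):
                beta[t][s] += gamma(t, s, u) * beta[t + 1][s ^ u]

    # Combine into normalized symbol posteriors (the "beliefs").
    post = []
    for t in range(n):
        num = [0.0, 0.0]
        for s in range(2):
            for u in range(2):
                num[u] += alpha[t][s] * gamma(t, s, u) * beta[t + 1][s ^ u]
        post.append(num[1] / (num[0] + num[1]))
    return post

print([round(q, 6) for q in bcjr_posteriors([1, 0, 1, 1], 0.1)])
# -> [0.9, 0.82, 0.82, 0.18]
```

In an iterative decoder these per-symbol beliefs are what the constituent decoders exchange; the discriminators proposed in the paper transfer additional values beyond them.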
Concentration of Measure Inequalities in Information Theory, Communications and Coding (Second Edition)
During the last two decades, concentration inequalities have been the subject
of exciting developments in various areas, including convex geometry,
functional analysis, statistical physics, high-dimensional statistics, pure and
applied probability theory, information theory, theoretical computer science,
and learning theory. This monograph focuses on some of the key modern
mathematical tools that are used for the derivation of concentration
inequalities, on their links to information theory, and on their various
applications to communications and coding. In addition to being a survey, this
monograph also includes various new recent results derived by the authors. The
first part of the monograph introduces classical concentration inequalities for
martingales, as well as some recent refinements and extensions. The power and
versatility of the martingale approach is exemplified in the context of codes
defined on graphs and iterative decoding algorithms, as well as codes for
wireless communication. The second part of the monograph introduces the entropy
method, an information-theoretic technique for deriving concentration
inequalities. The basic ingredients of the entropy method are discussed first
in the context of logarithmic Sobolev inequalities, which underlie the
so-called functional approach to concentration of measure, and then from a
complementary information-theoretic viewpoint based on transportation-cost
inequalities and probability in metric spaces. Some representative results on
concentration for dependent random variables are briefly summarized, with
emphasis on their connections to the entropy method. Finally, we discuss
several applications of the entropy method to problems in communications and
coding, including strong converses, empirical distributions of good channel
codes, and an information-theoretic converse for concentration of measure.
Comment: Foundations and Trends in Communications and Information Theory, vol.
10, no. 1-2, pp. 1-248, 2013. The second edition was published in October 2014.
ISBN of the printed book: 978-1-60198-906-
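The martingale inequalities surveyed in the monograph's first part can be checked numerically in the simplest case. The sketch below (a toy of my own, not taken from the monograph) compares the Azuma-Hoeffding tail bound against a Monte Carlo estimate for a sum of independent Rademacher steps, the most elementary bounded-difference martingale.

```python
import math
import random

def azuma_demo(n=100, t=20, trials=50_000, seed=0):
    """Empirically check the Azuma-Hoeffding tail bound for the simplest
    bounded-difference martingale: S_n, a sum of n independent +/-1
    (Rademacher) steps. The bound reads P(S_n >= t) <= exp(-t^2 / (2n))."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(1 if rng.random() < 0.5 else -1 for _ in range(n))
        if s >= t:
            hits += 1
    empirical = hits / trials
    bound = math.exp(-t * t / (2 * n))
    return empirical, bound

emp, bnd = azuma_demo()
print(f"empirical tail = {emp:.4f}  <=  Azuma bound = {bnd:.4f}")
```

As expected for such a generic inequality, the bound (here exp(-2), about 0.135) holds with room to spare against the true tail probability (about 0.028), which is one motivation for the refinements the monograph develops.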