A General Formula for the Mismatch Capacity
The fundamental limits of channels with mismatched decoding are addressed. A
general formula is established for the mismatch capacity of a general channel,
defined as a sequence of conditional distributions with a general sequence of
decoding metrics. We deduce an identity between the Verd\'{u}-Han general
channel capacity formula and the mismatch capacity formula applied to the
maximum-likelihood decoding metric. Further, several upper bounds on the capacity are
provided, and a simpler expression for a lower bound is derived for the case of
a non-negative decoding metric. The general formula is specialized to the case
of finite input and output alphabet channels with a type-dependent metric. The
closely related problem of threshold mismatched decoding is also studied, and a
general expression for the threshold mismatch capacity is obtained. As an
example of threshold mismatch capacity, we state a general expression for the
erasures-only capacity of the finite input and output alphabet channel. We
observe that for every channel there exists a (matched) threshold decoder which
is capacity achieving. Additionally, necessary and sufficient conditions are
stated for a channel to have a strong converse. Csisz\'{a}r and Narayan's
conjecture is proved for bounded metrics, providing a positive answer to the
open problem introduced in [1], i.e., that the "product-space" improvement of
the random coding lower bound, $C_q^{(\infty)}(W)$, is indeed the mismatch
capacity of the discrete memoryless channel $W$. We conclude by presenting an
identity between the threshold capacity and $C_q^{(\infty)}(W)$ in the DMC
case.
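For context, here is a minimal sketch of the product-space bound referenced above, as it is commonly defined in the mismatched-decoding literature (the notation $C_q^{(1)}$ for the single-letter random coding lower bound is assumed here for illustration): applying the single-letter bound to the $k$-fold memoryless extension $W^k$ with the product metric and normalizing by $k$ gives
$$C_q^{(\infty)}(W) = \lim_{k \to \infty} \frac{1}{k}\, C_q^{(1)}\big(W^k\big), \qquad q^{(k)}(x^k, y^k) = \prod_{i=1}^{k} q(x_i, y_i).$$
The abstract's claim is that, for bounded metrics, this limit equals the mismatch capacity of the DMC.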
Information-Theoretic Foundations of Mismatched Decoding
Shannon's channel coding theorem characterizes the maximal rate of
information that can be reliably transmitted over a communication channel when
optimal encoding and decoding strategies are used. In many scenarios, however,
practical considerations such as channel uncertainty and implementation
constraints rule out the use of an optimal decoder. The mismatched decoding
problem addresses such scenarios by considering the case that the decoder
cannot be optimized, but is instead fixed as part of the problem statement.
This problem is not only of direct interest in its own right, but also has
close connections with other long-standing theoretical problems in information
theory. In this monograph, we survey both classical literature and recent
developments on the mismatched decoding problem, with an emphasis on achievable
random-coding rates for memoryless channels. We present two widely-considered
achievable rates known as the generalized mutual information (GMI) and the LM
rate, and overview their derivations and properties. In addition, we survey
several improved rates via multi-user coding techniques, as well as recent
developments and challenges in establishing upper bounds on the mismatch
capacity, and an analogous mismatched encoding problem in rate-distortion
theory. Throughout the monograph, we highlight a variety of applications and
connections with other prominent information theory problems.
Comment: Published in Foundations and Trends in Communications and Information Theory (Volume 17, Issue 2-3).
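For reference, the two rates named above admit well-known dual forms; a sketch, assuming an input distribution $Q$, channel $W$, and decoding metric $q$, with $(X, Y) \sim Q \times W$ and $\bar{X} \sim Q$ drawn independently of $(X, Y)$:
$$I_{\text{GMI}}(Q) = \sup_{s \ge 0} \mathbb{E}\left[\log \frac{q(X,Y)^s}{\mathbb{E}\big[q(\bar{X},Y)^s \mid Y\big]}\right], \qquad I_{\text{LM}}(Q) = \sup_{s \ge 0,\, a(\cdot)} \mathbb{E}\left[\log \frac{q(X,Y)^s e^{a(X)}}{\mathbb{E}\big[q(\bar{X},Y)^s e^{a(\bar{X})} \mid Y\big]}\right].$$
Setting $a \equiv 0$ recovers the GMI from the LM rate, so $I_{\text{LM}}(Q) \ge I_{\text{GMI}}(Q)$ for every $Q$.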
Random Coding Error Exponents for the Two-User Interference Channel
This paper derives lower bounds on the error exponents for the two-user
interference channel under the random coding regime, for several ensembles.
Specifically, we first analyze the standard random coding ensemble, where the
codebooks consist of independent and identically distributed
(i.i.d.) codewords. For this ensemble, we focus on optimum decoding, which is
in contrast to other, suboptimal decoding rules that have been used in the
literature (e.g., joint typicality decoding, treating interference as noise,
etc.). The fact that the interfering signal is a codeword, rather than an
i.i.d. noise process, complicates the application of conventional techniques of
performance analysis of the optimum decoder. Also, unfortunately, these
conventional techniques result in loose bounds. Using analytical tools rooted
in statistical physics, as well as advanced union bounds, we derive
single-letter formulas for the random coding error exponents. We compare our
results with the best known lower bound on the error exponent, and show that
our exponents can be strictly better. Then, in the second part of this paper,
we consider more complicated coding ensembles, and find a lower bound on the
error exponent associated with the celebrated Han-Kobayashi (HK) random coding
ensemble, which is based on superposition coding.
Comment: Accepted to the IEEE Transactions on Information Theory.
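For orientation, the classical single-user benchmark that such single-letter exponent formulas generalize is Gallager's random coding exponent; a sketch for a DMC $W$ with input distribution $Q$:
$$E_r(R) = \max_{0 \le \rho \le 1} \big[E_0(\rho, Q) - \rho R\big], \qquad E_0(\rho, Q) = -\log \sum_{y} \Big(\sum_{x} Q(x)\, W(y|x)^{\frac{1}{1+\rho}}\Big)^{1+\rho}.$$
The difficulty addressed in the paper is that the interfering signal is a codeword rather than i.i.d. noise, which is precisely what invalidates the assumptions behind this classical bound.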
The Dispersion of Nearest-Neighbor Decoding for Additive Non-Gaussian Channels
We study the second-order asymptotics of information transmission using
random Gaussian codebooks and nearest neighbor (NN) decoding over a
power-limited stationary memoryless additive non-Gaussian noise channel. We
show that the dispersion term depends on the non-Gaussian noise only through
its second and fourth moments, thus complementing the capacity result
(Lapidoth, 1996), which depends only on the second moment. Furthermore, we
characterize the second-order asymptotics of point-to-point codes over
$K$-sender interference networks with non-Gaussian additive noise.
Specifically, we assume that each user's codebook is Gaussian and that NN
decoding is employed, i.e., that interference from the unintended users
(Gaussian interfering signals) is treated as noise at each decoder. We show
that while the first-order term in the asymptotic expansion of the maximum
number of messages depends on the power of the interfering codewords only
through their sum, this does not hold for the second-order term.
Comment: 12 pages, 3 figures, IEEE Transactions on Information Theory.
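To make the setting concrete, a sketch of the decoder and the standard form of a second-order expansion (the symbols $P$ for the power limit, $\sigma^2$ for the noise variance, $V$ for the dispersion, and $Q^{-1}$ for the inverse Gaussian tail function are the usual conventions, assumed here):
$$\hat{m} = \operatorname*{arg\,min}_{m} \|y^n - x^n(m)\|^2, \qquad \log M^*(n, \epsilon) = nC - \sqrt{nV}\, Q^{-1}(\epsilon) + O(\log n),$$
where $C = \frac{1}{2}\log(1 + P/\sigma^2)$ is the first-order rate from Lapidoth's result, and the paper's contribution is that $V$ depends on the noise law only through its second and fourth moments.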