Mismatched Multi-Letter Successive Decoding for the Multiple-Access Channel
This paper studies channel coding for the discrete memoryless multiple-access channel with a given (possibly suboptimal) decoding rule. A multi-letter successive decoding rule depending on an arbitrary non-negative decoding metric is considered, and achievable rate regions and error exponents are derived both for the standard MAC (independent codebooks) and for the cognitive MAC (one user knows both messages) with superposition coding. In the cognitive case, the rate region and error exponent are shown to be tight with respect to the ensemble average. The rate regions are compared with those of the commonly considered decoder that chooses the message pair maximizing the decoding metric, and numerical examples are given for which successive decoding yields a strictly higher sum rate for a given pair of input distributions. This work was supported in part by the European Research Council (ERC) under Grant 259663 and Grant 725411, in part by the European Union's 7th Framework Programme under Grant 303633, and in part by the Spanish Ministry of Economy and Competitiveness under Grant RYC-2011-08150, Grant TEC2012-38800-C03-03, and Grant TEC2016-78434-C3-1-R.
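For orientation (standard matched-decoding background, not a result of this paper): with optimal decoding, successive decoding that decodes user 1 first, treating user 2's signal as noise, and then decodes user 2 with user 1's codeword known achieves the MAC corner point

```latex
R_1 \le I(X_1; Y), \qquad R_2 \le I(X_2; Y \mid X_1).
```

The paper asks what replaces these expressions when the decoder instead applies an arbitrary non-negative metric q.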
Information-Theoretic Foundations of Mismatched Decoding
Shannon's channel coding theorem characterizes the maximal rate of
information that can be reliably transmitted over a communication channel when
optimal encoding and decoding strategies are used. In many scenarios, however,
practical considerations such as channel uncertainty and implementation
constraints rule out the use of an optimal decoder. The mismatched decoding
problem addresses such scenarios by considering the case that the decoder
cannot be optimized, but is instead fixed as part of the problem statement.
This problem is not only of direct interest in its own right, but also has
close connections with other long-standing theoretical problems in information
theory. In this monograph, we survey both classical literature and recent
developments on the mismatched decoding problem, with an emphasis on achievable
random-coding rates for memoryless channels. We present two widely-considered
achievable rates known as the generalized mutual information (GMI) and the LM
rate, and overview their derivations and properties. In addition, we survey
several improved rates via multi-user coding techniques, as well as recent
developments and challenges in establishing upper bounds on the mismatch
capacity, and an analogous mismatched encoding problem in rate-distortion
theory. Throughout the monograph, we highlight a variety of applications and
connections with other prominent information theory problems.
Comment: Published in Foundations and Trends in Communications and Information Theory (Volume 17, Issue 2-3).
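For reference, the dual forms of these two rates as given in the mismatched decoding literature (for input distribution Q, channel W, and metric q, with (X, Y) ~ Q × W and X̄ ~ Q independent of (X, Y)) are:

```latex
I_{\mathrm{GMI}}(Q) = \sup_{s \ge 0} \, \mathbb{E}\left[ \log \frac{q(X,Y)^{s}}{\mathbb{E}\left[ q(\bar{X},Y)^{s} \,\middle|\, Y \right]} \right],
\qquad
I_{\mathrm{LM}}(Q) = \sup_{s \ge 0,\, a(\cdot)} \, \mathbb{E}\left[ \log \frac{q(X,Y)^{s}\, e^{a(X)}}{\mathbb{E}\left[ q(\bar{X},Y)^{s}\, e^{a(\bar{X})} \,\middle|\, Y \right]} \right].
```

Both reduce to the mutual information I(X; Y) when the metric q equals the true channel W; the LM rate improves on the GMI through the extra optimization over the function a(·).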
Random Coding Error Exponents for the Two-User Interference Channel
This paper derives lower bounds on the error exponents for the
two-user interference channel under the random coding regime for several
ensembles. Specifically, we first analyze the standard random coding ensemble,
where the codebooks are comprised of independently and identically distributed
(i.i.d.) codewords. For this ensemble, we focus on optimum decoding, which is
in contrast to other, suboptimal decoding rules that have been used in the
literature (e.g., joint typicality decoding, treating interference as noise,
etc.). The fact that the interfering signal is a codeword, rather than an
i.i.d. noise process, complicates the application of conventional techniques of
performance analysis of the optimum decoder. Also, unfortunately, these
conventional techniques result in loose bounds. Using analytical tools rooted
in statistical physics, as well as advanced union bounds, we derive
single-letter formulas for the random coding error exponents. We compare our
results with the best known lower bound on the error exponent, and show that
our exponents can be strictly better. Then, in the second part of this paper,
we consider more complicated coding ensembles, and find a lower bound on the
error exponent associated with the celebrated Han-Kobayashi (HK) random coding
ensemble, which is based on superposition coding.
Comment: Accepted to IEEE Transactions on Information Theory.
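As background for the error-exponent quantities discussed above (a single-user analogue, not the interference-channel exponents derived in the paper), Gallager's random coding exponent E_r(R) = max over 0 ≤ ρ ≤ 1 of E_0(ρ, Q) − ρR can be evaluated numerically. The sketch below (channel and parameters chosen purely for illustration) does so for a binary symmetric channel:

```python
import numpy as np

def E0(rho, Q, W):
    """Gallager's E_0(rho, Q) = -log2 sum_y [sum_x Q(x) W(y|x)^{1/(1+rho)}]^{1+rho}."""
    inner = (Q[:, None] * W ** (1.0 / (1.0 + rho))).sum(axis=0)
    return -np.log2((inner ** (1.0 + rho)).sum())

def random_coding_exponent(R, Q, W, grid=2001):
    """E_r(R) = max_{0 <= rho <= 1} E_0(rho, Q) - rho * R, via a grid search."""
    return max(E0(rho, Q, W) - rho * R for rho in np.linspace(0.0, 1.0, grid))

# Binary symmetric channel with crossover probability 0.1, uniform inputs.
p = 0.1
W = np.array([[1.0 - p, p], [p, 1.0 - p]])   # W[x, y] = P(Y = y | X = x)
Q = np.array([0.5, 0.5])
C = 1.0 - (-p * np.log2(p) - (1 - p) * np.log2(1 - p))  # capacity of BSC(0.1)
```

The exponent is strictly positive for rates below capacity and vanishes at capacity, which is the sense in which such bounds certify exponentially decaying error probability.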
Optimization of Information Rate Upper and Lower Bounds for Channels with Memory
We consider the problem of minimizing upper bounds and maximizing lower
bounds on information rates of stationary and ergodic discrete-time channels
with memory. The channels we consider can have a finite number of states, such
as partial response channels, or they can have an infinite state-space, such as
time-varying fading channels. We optimize recently-proposed information rate
bounds for such channels, which make use of auxiliary finite-state machine
channels (FSMCs). Our main contribution in this paper is to provide iterative
expectation-maximization (EM) type algorithms to optimize the parameters of the
auxiliary FSMC to tighten these bounds. We provide an explicit, iterative
algorithm that improves the upper bound at each iteration. We also provide an
effective method for iteratively optimizing the lower bound. To demonstrate the
effectiveness of our algorithms, we provide several examples of partial
response and fading channels, where the proposed optimization techniques
significantly tighten the initial upper and lower bounds. Finally, we compare
our results with an improved variation of the \emph{simplex} local optimization
algorithm, called \emph{Soblex}. This comparison shows that our proposed
algorithms are superior to the Soblex method, both in terms of robustness in
finding the tightest bounds and in computational efficiency. Interestingly,
from a channel coding/decoding perspective, optimizing the lower bound is
related to increasing the achievable mismatched information rate, i.e., the
information rate of a communication system where the decoder at the receiver is
matched to the auxiliary channel, and not to the original channel.
Comment: Submitted to IEEE Transactions on Information Theory, November 24, 200
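The lower-bound optimization above builds on simulation-based information-rate estimation for channels with memory: simulate a long input/output sequence, run the forward (sum-product) recursion of the auxiliary FSMC to compute p(y^n), and use −(1/n) log p(y^n) as an entropy-rate estimate. The following is a hedged sketch of that recursion, not the paper's algorithm; the two-state "noisy differential" channel and all parameters are invented for illustration. Here h(Y|X) equals H_b(eps) because, given the inputs, the output flips are i.i.d.:

```python
import numpy as np

rng = np.random.default_rng(0)
eps, p1, n = 0.1, 0.3, 50_000

# Simulate a two-state FSMC: state = previous input bit,
# y_k = x_k XOR x_{k-1}, flipped with probability eps.
x = (rng.random(n) < p1).astype(int)
s = np.concatenate(([0], x[:-1]))
y = (x ^ s) ^ (rng.random(n) < eps).astype(int)

# Forward (sum-product) recursion for p(y^n), with per-step scaling
# to avoid numerical underflow.
px = np.array([1.0 - p1, p1])
alpha = np.array([1.0, 0.0])                  # initial state s_1 = 0
log_p = 0.0
for yk in y:
    new = np.zeros(2)
    for xk in (0, 1):                         # the next state is x_k itself
        pe = np.where((xk ^ np.arange(2)) == yk, 1.0 - eps, eps)
        new[xk] = px[xk] * (alpha * pe).sum()
    c = new.sum()
    log_p += np.log2(c)
    alpha = new / c

h_y = -log_p / n                              # estimate of the output entropy rate h(Y)
hb = -eps * np.log2(eps) - (1 - eps) * np.log2(1 - eps)
info_rate = h_y - hb                          # I(X;Y) estimate, since h(Y|X) = H_b(eps)
```

In the paper's setting the recursion is run on an auxiliary FSMC rather than the true channel, and the EM-type algorithms adjust the auxiliary parameters to tighten the resulting bound.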
CHARACTERIZATION OF FUNDAMENTAL COMMUNICATION LIMITS OF STATE-DEPENDENT INTERFERENCE NETWORKS
Interference management is one of the key techniques that drive the evolution of wireless networks from one generation to another. Techniques in current cellular networks deal with interference by following the basic principle of orthogonalizing transmissions in time, frequency, code, and space. My PhD work investigates information theoretic models that represent a new perspective/technique for interference management. The idea is to exploit the fact that an interferer knows, noncausally, the interference that it causes to other users and can/should use this information to cancel the interference. In this way, users can transmit simultaneously and the throughput of wireless networks can be substantially improved. We refer to interference treated in this way as "dirty interference" or noncausal state.
Towards designing a dirty interference cancelation framework, my PhD thesis investigates two classes of information theoretic models and develops dirty interference cancelation schemes that achieve the fundamental communication limits. One class of models (referred to as state-dependent interference channels) captures scenarios in which users help each other to cancel dirty interference. The other class of models (referred to as state-dependent channels with a helper) captures scenarios in which one dominant user interferes with a number of other users and assists those users in canceling its dirty interference. For both classes of models, we develop dirty interference cancelation schemes and compare the corresponding achievable rate regions (i.e., inner bounds on the capacity region) with outer bounds on the capacity region. We characterize the channel parameters under which the developed inner bounds meet the outer bounds either partially or fully, and thus establish the capacity regions or partial boundaries of the capacity regions.
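The dirty interference cancelation idea above is rooted in Costa's dirty paper coding result: with the auxiliary variable U = X + αS and α = P/(P+N), the penalty from the state vanishes entirely. A small numerical sketch (powers chosen purely for illustration) confirms that maximizing Costa's achievable rate over α recovers the interference-free AWGN capacity:

```python
import numpy as np

P, Q, N = 1.0, 5.0, 0.5   # signal power, state (interference) power, noise power

def dpc_rate(alpha):
    """Costa's achievable rate I(U;Y) - I(U;S) in bits/use for U = X + alpha*S."""
    num = P * (P + Q + N)
    den = P * Q * (1.0 - alpha) ** 2 + N * (P + alpha ** 2 * Q)
    return 0.5 * np.log2(num / den)

alphas = np.linspace(0.0, 1.0, 10_001)
rates = dpc_rate(alphas)
best, a_star = rates.max(), alphas[rates.argmax()]
clean = 0.5 * np.log2(1.0 + P / N)   # interference-free AWGN capacity
# best matches clean, attained near alpha = P / (P + N)
```

The models in the thesis differ in who knows the state (a helper rather than the interfered transmitter itself), which is why the inner and outer bounds do not always meet.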
Information Theoretic Limits of State-dependent Networks
We investigate the information theoretic limits of two types of state-dependent models in this dissertation. These models capture a wide range of wireless communication scenarios in which there is interference cognition among transmitters. Hence, information theoretic studies of these models provide useful guidelines for designing new interference cancellation schemes in practical wireless networks.
In particular, we first study the two-user state-dependent Gaussian multiple access channel (MAC) with a helper. The channel is corrupted by an additive Gaussian state sequence known to neither the transmitters nor the receiver, but known noncausally to a helper, which assists state cancellation at the receiver. Inner and outer bounds on the capacity region are first derived, which improve upon the state-of-the-art bounds given in the literature. Further comparison of these bounds yields either segments of the capacity region boundary or the full capacity region under various regimes of channel parameters.
We then study the two-user Gaussian state-dependent Z-interference channel (Z-IC), in which two receivers are corrupted respectively by two correlated states that are noncausally known to transmitters, but unknown to receivers. Three interference regimes are studied, and the capacity region or the sum capacity boundary is characterized either fully or partially under various channel parameters. The impact of the correlation between the states on the cancellation of state and interference as well as the achievability of the capacity is demonstrated via numerical analysis.
Finally, we extend our results on the state-dependent Z-IC to the state-dependent regular IC. As both receivers in the regular IC are subject to interference, more sophisticated achievable schemes are designed. For the very strong regime, the capacity region is achieved by a scheme in which the two transmitters implement cooperative dirty paper coding. For the strong but not very strong regime, the sum-rate capacity is characterized by rate splitting, layered dirty paper coding, and successive cancellation. For the weak regime, the sum-rate capacity is achieved via dirty paper coding performed individually at the two transmitters as well as treating interference as noise. Numerical investigation indicates that for the regular IC, the correlation between states impacts the achievability of the channel capacity in a different way from that of the Z-IC.