
    Information-Theoretic Foundations of Mismatched Decoding

    Shannon's channel coding theorem characterizes the maximal rate of information that can be reliably transmitted over a communication channel when optimal encoding and decoding strategies are used. In many scenarios, however, practical considerations such as channel uncertainty and implementation constraints rule out the use of an optimal decoder. The mismatched decoding problem addresses such scenarios by considering the case in which the decoder cannot be optimized, but is instead fixed as part of the problem statement. This problem is not only of direct interest in its own right, but also has close connections with other long-standing theoretical problems in information theory. In this monograph, we survey both classical literature and recent developments on the mismatched decoding problem, with an emphasis on achievable random-coding rates for memoryless channels. We present two widely considered achievable rates known as the generalized mutual information (GMI) and the LM rate, and overview their derivations and properties. In addition, we survey several improved rates via multi-user coding techniques, as well as recent developments and challenges in establishing upper bounds on the mismatch capacity, and an analogous mismatched encoding problem in rate-distortion theory. Throughout the monograph, we highlight a variety of applications and connections with other prominent information theory problems.
    Comment: Published in Foundations and Trends in Communications and Information Theory (Volume 17, Issue 2-3).
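    As a concrete illustration of the first of these rates, the GMI of a discrete memoryless channel $W$ with input distribution $Q$ and decoding metric $q$ can be evaluated numerically by maximizing a single-letter expression over a scalar parameter $s > 0$. The Python sketch below follows the standard single-letter form of the GMI; the function name and array conventions are ours, not the monograph's:

        import numpy as np
        from scipy.optimize import minimize_scalar

        def gmi(Q, W, q, s_max=50.0):
            # Q: (|X|,) input distribution; W: (|X|, |Y|) channel, W[x, y] = W(y|x)
            # q: (|X|, |Y|) strictly positive decoding metric q(x, y)
            def neg_gmi(s):
                # GMI(s) = E[ log( q(X,Y)^s / sum_x' Q(x') q(x',Y)^s ) ]
                denom = Q @ (q ** s)                    # (|Y|,)
                ratio = np.log((q ** s) / denom)        # (|X|, |Y|)
                return -np.sum(Q[:, None] * W * ratio)  # expectation over P(x, y)
            res = minimize_scalar(neg_gmi, bounds=(1e-6, s_max), method="bounded")
            return -res.fun

    As a sanity check, with the matched metric q = W the objective at $s = 1$ equals the mutual information $I(Q, W)$.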

    Asymmetric Evaluations of Erasure and Undetected Error Probabilities

    The problem of channel coding with the erasure option is revisited for discrete memoryless channels. The interplay between the code rate and the undetected and total error probabilities is characterized. Using the information spectrum method, a sequence of codes of increasing blocklengths $n$ is designed to illustrate this tradeoff. Furthermore, for additive discrete memoryless channels with uniform input distribution, we establish that our analysis is tight with respect to the ensemble average. This is done by analysing the ensemble performance in terms of a tradeoff between the code rate and the undetected and total errors. This tradeoff is parametrized by the threshold in a generalized likelihood ratio test. Two asymptotic regimes are studied. First, the code rate tends to the capacity of the channel at a rate slower than $n^{-1/2}$, corresponding to the moderate deviations regime. In this case, both error probabilities decay subexponentially and asymmetrically. The precise decay rates are characterized. Second, the code rate tends to capacity at a rate of $n^{-1/2}$. In this case, the total error probability is asymptotically a positive constant while the undetected error probability decays as $\exp(-b n^{1/2})$ for some $b > 0$. The proof techniques involve applications of a modified (or "shifted") version of the Gärtner-Ellis theorem and the type class enumerator method to characterize the asymptotic behavior of a sequence of cumulant generating functions.
    Comment: 28 pages, no figures; in IEEE Transactions on Information Theory, 201
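    The generalized likelihood ratio test mentioned above can be sketched as follows: the decoder declares the most likely message only if its likelihood exceeds the combined likelihood of all competitors by a threshold, and otherwise declares an erasure. Below is a minimal Python sketch of such a Forney-style rule; the helper name and the exact parametrization of the threshold are our assumptions, not necessarily the paper's:

        import numpy as np

        def glrt_decode(log_liks, threshold):
            # log_liks: (M,) values of log P(y | x_m) for each of the M codewords.
            # Raising the threshold trades undetected errors for more erasures.
            m_hat = int(np.argmax(log_liks))
            # Stable log of the summed likelihoods of all competing messages.
            log_rest = np.logaddexp.reduce(np.delete(log_liks, m_hat))
            # Declare m_hat only if it beats the competitors by the required
            # margin; otherwise output an erasure (None).
            return m_hat if log_liks[m_hat] - log_rest >= threshold else None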

    Nonasymptotic noisy lossy source coding

    This paper gives new general nonasymptotic achievability and converse bounds, and performs their dispersion analysis, for the lossy compression problem in which the compressor observes the source through a noisy channel. While this problem is asymptotically equivalent to a noiseless lossy source coding problem with a modified distortion function, nonasymptotically there is a noticeable gap in how fast their minimum achievable coding rates approach the common rate-distortion function, as evidenced both by the refined asymptotic analysis (dispersion) and by the numerical results. The size of the gap between the dispersions of the noisy problem and the asymptotically equivalent noiseless problem depends on the stochastic variability of the channel through which the compressor observes the source.
    Comment: IEEE Transactions on Information Theory, 201
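    The modified distortion function referred to above is the classical surrogate distortion: when the compressor sees a noisy observation Z of the source X, the noisy problem is asymptotically equivalent to noiseless compression of Z under $\tilde d(z, y) = \mathbb{E}[d(X, y) \mid Z = z]$. A short Python sketch of this construction for finite alphabets (array names are hypothetical):

        import numpy as np

        def surrogate_distortion(P_X, P_Z_given_X, d):
            # P_X:         (|X|,) source distribution
            # P_Z_given_X: (|X|, |Z|) observation channel
            # d:           (|X|, |Yhat|) original distortion d(x, y)
            # Returns d_tilde(z, y) = E[d(X, y) | Z = z] as a (|Z|, |Yhat|) array.
            joint = P_X[:, None] * P_Z_given_X   # P(x, z)
            P_Z = joint.sum(axis=0)              # marginal of Z (assumed positive)
            P_X_given_Z = joint / P_Z            # columns are P(x | z)
            return P_X_given_Z.T @ d             # average d over the posterior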

    Variable-length compression allowing errors

    This paper studies the fundamental limits of the minimum average length of lossless and lossy variable-length compression, allowing a nonzero error probability $\epsilon$. For lossless compression, we give non-asymptotic bounds on the minimum average length in terms of Erokhin's rate-distortion function, and we use those bounds to obtain a Gaussian approximation of the speed of approach to the limit, which is quite accurate for all but small blocklengths: $(1 - \epsilon) k H(\mathsf{S}) - \sqrt{\frac{k V(\mathsf{S})}{2 \pi}}\, e^{-\frac{(Q^{-1}(\epsilon))^2}{2}}$, where $Q^{-1}(\cdot)$ is the functional inverse of the standard Gaussian complementary cdf and $V(\mathsf{S})$ is the source dispersion. A nonzero error probability thus not only reduces the asymptotically achievable rate by a factor of $1 - \epsilon$, but this asymptotic limit is approached from below, i.e., larger source dispersions and shorter blocklengths are beneficial. Variable-length lossy compression under an excess distortion constraint is shown to exhibit similar properties.
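    The Gaussian approximation above is straightforward to evaluate numerically. The sketch below does so for $k$ source symbols, using SciPy's inverse survival function for $Q^{-1}$; the function name is ours:

        import numpy as np
        from scipy.stats import norm

        def min_avg_length_approx(k, H, V, eps):
            # (1 - eps) * k * H(S) - sqrt(k * V(S) / (2*pi)) * exp(-Qinv(eps)^2 / 2)
            # H: source entropy H(S); V: source dispersion V(S); eps: error prob.
            q_inv = norm.isf(eps)  # Q^{-1}(eps), inverse complementary Gaussian cdf
            return ((1 - eps) * k * H
                    - np.sqrt(k * V / (2 * np.pi)) * np.exp(-q_inv ** 2 / 2))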

    Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities

    This monograph presents a unified treatment of single- and multi-user problems in Shannon's information theory in which we depart from the requirement that the error probability decays asymptotically in the blocklength. Instead, the error probabilities for various problems are bounded above by a non-vanishing constant, and the spotlight is shone on achievable coding rates as functions of the growing blocklengths. This represents the study of asymptotic estimates with non-vanishing error probabilities. In Part I, after reviewing the fundamentals of information theory, we discuss Strassen's seminal result for binary hypothesis testing, where the type-I error probability is non-vanishing and the rate of decay of the type-II error probability with a growing number of independent observations is characterized. In Part II, we use this basic hypothesis testing result to develop second- and, sometimes, even third-order asymptotic expansions for point-to-point communication. Finally, in Part III, we consider network information theory problems for which the second-order asymptotics are known. These problems include some classes of channels with random state, the multiple-encoder distributed lossless source coding (Slepian-Wolf) problem, and special cases of the Gaussian interference and multiple-access channels. We close by discussing avenues for further research.
    Comment: Further comments welcome.
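    The flavor of the second-order expansions developed in Part II is captured by the normal approximation $\log M^*(n, \epsilon) \approx n C - \sqrt{n V}\, Q^{-1}(\epsilon)$, where $C$ is the capacity and $V$ the channel dispersion. The Python sketch below evaluates this for the binary symmetric channel with crossover probability $p$, for which $C$ and $V$ have closed forms; it is a standard instance of the expansions the monograph surveys, not code from the monograph, and third-order terms are omitted:

        import numpy as np
        from scipy.stats import norm

        def normal_approx_bsc(n, p, eps):
            # Approximate maximal rate (bits/channel use) at blocklength n and
            # error probability eps: (1/n) log2 M*(n, eps) ~ C - sqrt(V/n) Qinv(eps)
            h2 = -p * np.log2(p) - (1 - p) * np.log2(1 - p)  # binary entropy
            C = 1.0 - h2                                     # BSC capacity (bits)
            V = p * (1 - p) * np.log2((1 - p) / p) ** 2      # channel dispersion
            return C - np.sqrt(V / n) * norm.isf(eps)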