
    On Relations Between the Relative Entropy and $\chi^2$-Divergence, Generalizations and Applications

    The relative entropy and chi-squared divergence are fundamental divergence measures in information theory and statistics. This paper is focused on a study of integral relations between the two divergences, the implications of these relations, their information-theoretic applications, and some generalizations pertaining to the rich class of $f$-divergences. Applications that are studied in this paper refer to lossless compression, the method of types and large deviations, strong data-processing inequalities, bounds on contraction coefficients and maximal correlation, and the convergence rate to stationarity of a type of discrete-time Markov chains. Comment: Published in the Entropy journal, May 18, 2020. Journal version (open access) is available at https://www.mdpi.com/1099-4300/22/5/56
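
To make the objects in this abstract concrete, here is a minimal numerical sketch (not taken from the paper) using the standard definitions $D(P\|Q)=\sum_i p_i\log(p_i/q_i)$ and $\chi^2(P\|Q)=\sum_i (p_i-q_i)^2/q_i$. It checks one integral relation of the kind the paper studies, $D(P\|Q)=\int_0^1 \chi^2\bigl(P \,\|\, (1-s)P+sQ\bigr)\,s^{-1}\,ds$, which can be verified by differentiating along the mixture path; the distributions `p`, `q` and the grid size are arbitrary illustrative choices.

```python
import numpy as np

def kl(p, q):
    """Relative entropy D(P||Q) = sum_i p_i log(p_i / q_i), in nats."""
    return float(np.sum(p * np.log(p / q)))

def chi2(p, q):
    """Chi-squared divergence chi^2(P||Q) = sum_i (p_i - q_i)^2 / q_i."""
    return float(np.sum((p - q) ** 2 / q))

# Two strictly positive distributions on a 4-letter alphabet (arbitrary example).
p = np.array([0.4, 0.3, 0.2, 0.1])
q = np.array([0.25, 0.25, 0.25, 0.25])

# Integral relation of the KL / chi^2 type:
#   D(P||Q) = int_0^1 chi^2(P || (1-s) P + s Q) / s ds
# The integrand vanishes linearly as s -> 0, so a plain trapezoid rule
# on (0, 1] is enough for a sanity check.
s_grid = np.linspace(1e-6, 1.0, 20001)
integrand = np.array([chi2(p, (1 - s) * p + s * q) / s for s in s_grid])

print("D(P||Q)       =", kl(p, q))
print("integral form =", np.trapz(integrand, s_grid))  # should match closely
```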

    Concentration of Measure Inequalities in Information Theory, Communications and Coding (Second Edition)

    During the last two decades, concentration inequalities have been the subject of exciting developments in various areas, including convex geometry, functional analysis, statistical physics, high-dimensional statistics, pure and applied probability theory, information theory, theoretical computer science, and learning theory. This monograph focuses on some of the key modern mathematical tools that are used for the derivation of concentration inequalities, on their links to information theory, and on their various applications to communications and coding. In addition to being a survey, this monograph also includes various recent results derived by the authors. The first part of the monograph introduces classical concentration inequalities for martingales, as well as some recent refinements and extensions. The power and versatility of the martingale approach are exemplified in the context of codes defined on graphs and iterative decoding algorithms, as well as codes for wireless communication. The second part of the monograph introduces the entropy method, an information-theoretic technique for deriving concentration inequalities. The basic ingredients of the entropy method are discussed first in the context of logarithmic Sobolev inequalities, which underlie the so-called functional approach to concentration of measure, and then from a complementary information-theoretic viewpoint based on transportation-cost inequalities and probability in metric spaces. Some representative results on concentration for dependent random variables are briefly summarized, with emphasis on their connections to the entropy method. Finally, we discuss several applications of the entropy method to problems in communications and coding, including strong converses, empirical distributions of good channel codes, and an information-theoretic converse for concentration of measure. Comment: Foundations and Trends in Communications and Information Theory, vol. 10, no. 1-2, pp. 1-248, 2013. Second edition was published in October 2014. ISBN of printed book: 978-1-60198-906-
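
As a concrete taste of the concentration phenomenon surveyed here, the sketch below illustrates the classical two-sided Hoeffding bound for i.i.d. $[0,1]$-valued variables, $P(|\bar X_n-\mathbb{E}\bar X_n|\ge t)\le 2e^{-2nt^2}$ (a textbook special case of the martingale approach, not a result from the monograph), by comparing an empirical tail probability with the bound; the sample size, threshold, and number of trials are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, t = 200, 20_000, 0.05

# Empirical mean of n i.i.d. Uniform[0,1] variables (mean 1/2), repeated many times.
sample_means = rng.random((trials, n)).mean(axis=1)
empirical_tail = np.mean(np.abs(sample_means - 0.5) >= t)

# Two-sided Hoeffding bound for [0,1]-valued independent variables:
#   P(|mean - E[mean]| >= t) <= 2 exp(-2 n t^2)
hoeffding_bound = 2 * np.exp(-2 * n * t ** 2)

print(f"empirical tail  P(|mean - 1/2| >= {t}) = {empirical_tail:.5f}")
print(f"Hoeffding bound 2 exp(-2 n t^2)        = {hoeffding_bound:.5f}")
```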

    Dissipation of information in channels with input constraints

    One of the basic tenets of information theory, the data-processing inequality, states that the output divergence does not exceed the input divergence for any channel. For channels without input constraints, various estimates on the amount of such contraction are known, Dobrushin's coefficient for the total variation distance being perhaps the best known. This work investigates channels with an average input cost constraint. It is found that while the contraction coefficient typically equals one (no contraction), the information nevertheless dissipates. A certain non-linear function, the Dobrushin curve of the channel, is proposed to quantify the amount of dissipation. Tools for evaluating the Dobrushin curve of additive-noise channels are developed based on coupling arguments. Some basic applications in stochastic control, uniqueness of Gibbs measures, and fundamental limits of noisy circuits are discussed. As an application, it is shown that in a chain of $n$ power-constrained relays and Gaussian channels the end-to-end mutual information and maximal squared correlation decay as $\Theta\!\left(\frac{\log\log n}{\log n}\right)$, which is in stark contrast with the exponential decay in chains of discrete channels. Similarly, the behavior of noisy circuits (composed of gates with bounded fan-in) and broadcasting of information on trees (of bounded degree) does not experience threshold behavior in the signal-to-noise ratio (SNR). Namely, unlike the case of discrete channels, the probability of bit error stays bounded away from $1/2$ regardless of the SNR. Comment: revised; includes Appendix B on the contraction coefficient for mutual information on general alphabets
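
For background on the unconstrained case mentioned above, the sketch below (an illustration only, not the paper's Dobrushin-curve machinery for cost-constrained channels) computes Dobrushin's contraction coefficient $\eta_{\mathrm{TV}}(K)=\max_{x,x'}\mathrm{TV}(K(\cdot|x),K(\cdot|x'))$ of a toy discrete channel and checks the contraction $\mathrm{TV}(PK,QK)\le \eta_{\mathrm{TV}}(K)\,\mathrm{TV}(P,Q)$ on random input pairs; the channel matrix is an arbitrary example.

```python
import numpy as np
from itertools import combinations

def tv(p, q):
    """Total variation distance between two distributions on a finite alphabet."""
    return 0.5 * np.abs(p - q).sum()

def dobrushin_coefficient(K):
    """Dobrushin's coefficient of a row-stochastic matrix K:
       eta_TV(K) = max over input pairs (x, x') of TV(K[x, :], K[x', :])."""
    return max(tv(K[x], K[y]) for x, y in combinations(range(K.shape[0]), 2))

# A noisy 3-input channel (rows = inputs, columns = output probabilities).
K = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])

eta = dobrushin_coefficient(K)
print("Dobrushin coefficient eta_TV(K) =", eta)

# Contraction of total variation: TV(PK, QK) <= eta * TV(P, Q) for all P, Q.
rng = np.random.default_rng(1)
for _ in range(5):
    p, q = rng.dirichlet(np.ones(3)), rng.dirichlet(np.ones(3))
    print(f"TV(PK,QK) = {tv(p @ K, q @ K):.4f}  <=  eta * TV(P,Q) = {eta * tv(p, q):.4f}")
```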

    Well posedness of Lagrangian flows and continuity equations in metric measure spaces

    We establish, in a rather general setting, an analogue of the DiPerna-Lions theory on well-posedness of flows of ODEs associated to Sobolev vector fields. Key results are a well-posedness result for the continuity equation associated to suitably defined Sobolev vector fields, via a commutator estimate, and an abstract superposition principle in (possibly extended) metric measure spaces, via an embedding into $\mathbb{R}^\infty$. When specialized to the setting of Euclidean or infinite-dimensional (e.g. Gaussian) spaces, large parts of previously known results are recovered at once. Moreover, the class of ${\sf RCD}(K,\infty)$ metric measure spaces, the object of extensive recent research, fits into our framework. Therefore we provide, for the first time, well-posedness results for ODEs under low regularity assumptions on the velocity and in a nonsmooth context. Comment: Slightly expanded some remarks on the technical assumption (7.11); journal reference inserted
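
For orientation, the two objects paired in this abstract, the flow of an ODE and the continuity equation it drives, read as follows in the classical Euclidean setting (standard background only; the paper's contribution is extending this correspondence, via the superposition principle, to metric measure spaces):

```latex
% Classical Euclidean form of the two objects (background, not the paper's
% metric-measure formulation): the Lagrangian flow X of a time-dependent
% vector field b_t, and the continuity equation solved by the pushed-forward
% measures mu_t = X(t, .)_# mu_0, understood in the sense of distributions.
\begin{align*}
  \frac{d}{dt}\, X(t,x) &= b_t\bigl(X(t,x)\bigr), & X(0,x) &= x, \\
  \partial_t \mu_t + \operatorname{div}\bigl(b_t\, \mu_t\bigr) &= 0, & \mu_t &= X(t,\cdot)_{\#}\, \mu_0 .
\end{align*}
```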

    Nested Inequalities Among Divergence Measures

    In this paper we consider a single inequality involving 11 known divergence measures. This inequality includes measures such as the Jeffreys-Kullback-Leibler J-divergence, the Jensen-Shannon divergence (Burbea-Rao, 1982), the arithmetic-geometric mean divergence (Taneja, 1995), the Hellinger discrimination, the symmetric chi-square divergence, the triangular discrimination, etc. All of these measures are well known in the literature on information theory and statistics. The sequence of 11 measures also includes measures due to Kumar and Johnson (2005) and Jain and Srivastava (2007). Three measures arising from certain mean divergences also appear in this inequality. Based on the non-negative differences arising from this single inequality of 11 measures, we put more than 40 divergence measures in nested or sequential form. The idea of reverse inequalities is also introduced.
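
For reference, here is a short sketch that evaluates several of the measures named above on a pair of discrete distributions, using their standard textbook definitions; it does not reproduce the paper's 11-term inequality or its constants, and the distributions `p`, `q` are arbitrary illustrative choices.

```python
import numpy as np

def js(p, q):
    """Jensen-Shannon divergence I(P,Q) = (D(P||M) + D(Q||M)) / 2, M = (P+Q)/2."""
    m = (p + q) / 2
    return 0.5 * np.sum(p * np.log(p / m)) + 0.5 * np.sum(q * np.log(q / m))

def j_divergence(p, q):
    """Jeffreys (symmetrized Kullback-Leibler) J-divergence: sum (p-q) log(p/q)."""
    return np.sum((p - q) * np.log(p / q))

def hellinger(p, q):
    """Hellinger discrimination h(P,Q) = 1 - sum sqrt(p*q)."""
    return 1.0 - np.sum(np.sqrt(p * q))

def triangular(p, q):
    """Triangular discrimination Delta(P,Q) = sum (p-q)^2 / (p+q)."""
    return np.sum((p - q) ** 2 / (p + q))

def sym_chi2(p, q):
    """Symmetric chi-square divergence Psi(P,Q) = sum (p-q)^2 (p+q) / (p*q)."""
    return np.sum((p - q) ** 2 * (p + q) / (p * q))

def ag_mean(p, q):
    """Arithmetic-geometric mean divergence T(P,Q) = sum ((p+q)/2) log((p+q)/(2 sqrt(p*q)))."""
    return np.sum((p + q) / 2 * np.log((p + q) / (2 * np.sqrt(p * q))))

p = np.array([0.4, 0.3, 0.2, 0.1])
q = np.array([0.25, 0.25, 0.25, 0.25])

for name, f in [("Delta (triangular)", triangular), ("I (Jensen-Shannon)", js),
                ("h (Hellinger)", hellinger), ("J (Jeffreys)", j_divergence),
                ("T (AG mean)", ag_mean), ("Psi (sym. chi-square)", sym_chi2)]:
    print(f"{name:22s} {f(p, q):.6f}")
```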