
    Sumset and Inverse Sumset Inequalities for Differential Entropy and Mutual Information

    The sumset and inverse sumset theories of Freiman, Plünnecke and Ruzsa give bounds connecting the cardinality of the sumset $A+B=\{a+b\;;\;a\in A,\,b\in B\}$ of two discrete sets $A,B$ to the cardinalities (or the finer structure) of the original sets $A,B$. For example, the sum-difference bound of Ruzsa states that $|A+B|\,|A|\,|B|\leq|A-B|^3$, where the difference set $A-B=\{a-b\;;\;a\in A,\,b\in B\}$. Interpreting the differential entropy $h(X)$ of a continuous random variable $X$ as (the logarithm of) the size of the effective support of $X$, the main contribution of this paper is a series of natural information-theoretic analogs of these results. For example, the Ruzsa sum-difference bound becomes the new inequality $h(X+Y)+h(X)+h(Y)\leq 3h(X-Y)$ for any pair of independent continuous random variables $X$ and $Y$. Our results include differential-entropy versions of Ruzsa's triangle inequality, the Plünnecke-Ruzsa inequality, and the Balog-Szemerédi-Gowers lemma. We also give a differential-entropy version of the Freiman-Green-Ruzsa inverse-sumset theorem, which can be seen as a quantitative converse to the entropy power inequality. Versions of most of these results for the discrete entropy $H(X)$ were recently proved by Tao, relying heavily on a strong, functional form of the submodularity property of $H(X)$. Since differential entropy is not functionally submodular, many of the corresponding discrete proofs fail in the continuous case, often requiring substantially new proof strategies. We find that the basic property that naturally replaces discrete functional submodularity is the data processing property of mutual information. Comment: 23 pages
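    As a quick sanity check of the entropy sum-difference inequality quoted above, the following sketch (a hypothetical numerical illustration, not code from the paper) evaluates both sides for independent Gaussian $X$ and $Y$, where every differential entropy has the closed form $h(\mathcal{N}(0,\sigma^2))=\tfrac12\log(2\pi e\sigma^2)$ and both $X+Y$ and $X-Y$ have variance $\sigma_X^2+\sigma_Y^2$.

```python
# Hypothetical sketch: check  h(X+Y) + h(X) + h(Y) <= 3 h(X-Y)
# for independent Gaussians, using the closed-form entropy
#   h(N(0, s^2)) = 0.5 * log(2*pi*e*s^2)   (nats).
import numpy as np

def gaussian_h(var):
    """Differential entropy (nats) of a Gaussian with variance `var`."""
    return 0.5 * np.log(2 * np.pi * np.e * var)

rng = np.random.default_rng(0)
for _ in range(5):
    vx, vy = rng.uniform(0.1, 10.0, size=2)   # toy variances of X and Y
    # For independent X, Y, both X+Y and X-Y have variance vx + vy.
    lhs = gaussian_h(vx + vy) + gaussian_h(vx) + gaussian_h(vy)
    rhs = 3 * gaussian_h(vx + vy)
    assert lhs <= rhs + 1e-12
    print(f"vx={vx:.2f} vy={vy:.2f}  lhs={lhs:.4f} <= rhs={rhs:.4f}")
```

    In the Gaussian case the inequality reduces to $h(X)+h(Y)\leq 2h(X-Y)$, which holds because differential entropy is increasing in the variance; the general statement of the paper is of course much stronger.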

    A Simple Proof of the Entropy-Power Inequality via Properties of Mutual Information

    While most useful information-theoretic inequalities can be deduced from the basic properties of entropy or mutual information, Shannon's entropy power inequality (EPI) seems to be an exception: available information-theoretic proofs of the EPI hinge on integral representations of differential entropy using either Fisher's information (FI) or the minimum mean-square error (MMSE). In this paper, we first present a unified view of the proofs via FI and MMSE, showing that they are essentially dual versions of the same proof, and then fill the gap by providing a new, simple proof of the EPI, which is based solely on the properties of mutual information and sidesteps both the FI and MMSE representations. Comment: 5 pages, accepted for presentation at the IEEE International Symposium on Information Theory 200
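    For context, the EPI itself states that $N(X+Y)\geq N(X)+N(Y)$ for independent $X,Y$, where $N(X)=e^{2h(X)}/(2\pi e)$ is the entropy power. The sketch below (a hypothetical numerical illustration, not taken from the paper) checks this for two independent uniform random variables, computing $h(X+Y)$ by numerically convolving the densities on a grid.

```python
# Hypothetical numerical illustration of the EPI,
#   N(X+Y) >= N(X) + N(Y),  with  N(X) = exp(2*h(X)) / (2*pi*e),
# for independent X ~ Uniform[0,a] and Y ~ Uniform[0,b].
import numpy as np

def entropy_from_density(p, dx):
    """Differential entropy (nats) of a density sampled on a uniform grid."""
    p = np.clip(p, 1e-300, None)
    return -np.trapz(p * np.log(p), dx=dx)

a, b = 1.0, 2.0                      # toy support lengths (assumed example)
dx = 1e-4
x = np.arange(0.0, a + b + dx, dx)
px = np.where(x <= a, 1.0 / a, 0.0)
py = np.where(x <= b, 1.0 / b, 0.0)
pz = np.convolve(px, py) * dx        # density of X + Y on the same step

h_x, h_y = np.log(a), np.log(b)      # closed-form uniform entropies
h_z = entropy_from_density(pz, dx)

N = lambda h: np.exp(2 * h) / (2 * np.pi * np.e)
print(f"N(X+Y) = {N(h_z):.4f} >= N(X) + N(Y) = {N(h_x) + N(h_y):.4f}")
```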

    The conditional entropy power inequality for quantum additive noise channels

    We prove the quantum conditional Entropy Power Inequality for quantum additive noise channels. This inequality lower bounds the quantum conditional entropy of the output of an additive noise channel in terms of the quantum conditional entropies of the input state and the noise when they are conditionally independent given the memory. We also show that this conditional Entropy Power Inequality is optimal, in the sense that equality can be achieved asymptotically by choosing a suitable sequence of Gaussian input states. We apply the conditional Entropy Power Inequality to obtain an array of information-theoretic inequalities for conditional entropies that are the analogues of inequalities already established in the unconditioned setting. Furthermore, we give a simple proof of the convergence rate of the quantum Ornstein-Uhlenbeck semigroup based on Entropy Power Inequalities. Comment: 26 pages; updated to match published version
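    For orientation, the classical counterpart of such a statement reads $e^{2h(X+Y\mid Z)}\geq e^{2h(X\mid Z)}+e^{2h(Y\mid Z)}$ whenever $X$ and $Y$ are conditionally independent given $Z$. The sketch below (a hypothetical classical illustration, not code or notation from the paper) evaluates both sides on a Gaussian toy model, where, mirroring the asymptotic Gaussian equality case discussed above, the bound is tight.

```python
# Hypothetical classical analogue of the conditional EPI:
#   exp(2 h(X+Y|Z)) >= exp(2 h(X|Z)) + exp(2 h(Y|Z))
# for X = Z + N1, Y = Z + N2 with Z, N1, N2 independent Gaussians,
# so that X and Y are conditionally independent given Z and every
# conditional entropy has the closed form 0.5*log(2*pi*e*var).
import numpy as np

def cond_gaussian_h(var):
    """Conditional differential entropy (nats) of a Gaussian of variance `var`."""
    return 0.5 * np.log(2 * np.pi * np.e * var)

v_n1, v_n2 = 1.5, 0.7                 # assumed toy noise variances (Z's variance drops out)
h_x_given_z = cond_gaussian_h(v_n1)            # h(X|Z) = h(N1)
h_y_given_z = cond_gaussian_h(v_n2)            # h(Y|Z) = h(N2)
h_sum_given_z = cond_gaussian_h(v_n1 + v_n2)   # h(X+Y|Z) = h(N1+N2)

lhs = np.exp(2 * h_sum_given_z)
rhs = np.exp(2 * h_x_given_z) + np.exp(2 * h_y_given_z)
print(f"{lhs:.4f} >= {rhs:.4f}  (equality for this Gaussian model)")
```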

    Information Theoretic Proofs of Entropy Power Inequalities

    While most useful information-theoretic inequalities can be deduced from the basic properties of entropy or mutual information, up to now Shannon's entropy power inequality (EPI) has been an exception: existing information-theoretic proofs of the EPI hinge on representations of differential entropy using either Fisher information or minimum mean-square error (MMSE), which are derived from de Bruijn's identity. In this paper, we first present a unified view of these proofs, showing that they share two essential ingredients: 1) a data processing argument applied to a covariance-preserving linear transformation; 2) an integration over a path of a continuous Gaussian perturbation. Using these ingredients, we develop a new and brief proof of the EPI through a mutual information inequality, which replaces Stam and Blachman's Fisher information inequality (FII) and an inequality for MMSE by Guo, Shamai and Verdú used in earlier proofs. The result has the advantage of being very simple, in that it relies only on the basic properties of mutual information. These ideas are then generalized to various extended versions of the EPI: Zamir and Feder's generalized EPI for linear transformations of the random variables, Takano and Johnson's EPI for dependent variables, Liu and Viswanath's covariance-constrained EPI, and Costa's concavity inequality for the entropy power. Comment: submitted for publication in the IEEE Transactions on Information Theory, revised version
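    The integral representations mentioned above rest on de Bruijn's identity, $\frac{d}{dt}h(X+\sqrt{t}\,Z)=\tfrac12 J(X+\sqrt{t}\,Z)$ for a standard Gaussian $Z$ independent of $X$, where $J$ denotes Fisher information. The following sketch (a hypothetical numerical check under an assumed Gaussian-mixture $X$, not taken from the paper) compares a finite-difference derivative of the perturbed entropy against half the Fisher information.

```python
# Hypothetical numerical check of de Bruijn's identity
#   d/dt h(X + sqrt(t) Z) = 0.5 * J(X + sqrt(t) Z),  Z ~ N(0,1) independent of X,
# for X a two-component Gaussian mixture: adding sqrt(t) Z simply inflates
# each component variance by t, so the perturbed density stays in closed form.
import numpy as np

w   = np.array([0.3, 0.7])     # assumed toy mixture weights
mu  = np.array([-2.0, 1.0])    # component means
var = np.array([0.5, 1.2])     # component variances

x = np.linspace(-15, 15, 200001)

def density(t):
    """Density of X + sqrt(t) Z evaluated on the grid."""
    v = var + t
    comps = w / np.sqrt(2 * np.pi * v) * np.exp(-(x[:, None] - mu) ** 2 / (2 * v))
    return comps.sum(axis=1)

def entropy(p):
    p = np.clip(p, 1e-300, None)
    return -np.trapz(p * np.log(p), x)

def fisher(p):
    dp = np.gradient(p, x)
    return np.trapz(dp ** 2 / np.clip(p, 1e-300, None), x)

t, dt = 1.0, 1e-3
dh_dt = (entropy(density(t + dt)) - entropy(density(t - dt))) / (2 * dt)
print(f"dh/dt ~ {dh_dt:.6f}   0.5*J = {0.5 * fisher(density(t)):.6f}")
```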