
    Estimating Mutual Information

    We present two classes of improved estimators for mutual information $M(X,Y)$, from samples of random points distributed according to some joint probability density $\mu(x,y)$. In contrast to conventional estimators based on binnings, they are based on entropy estimates from $k$-nearest neighbour distances. This means that they are data efficient (with $k=1$ we resolve structures down to the smallest possible scales), adaptive (the resolution is higher where data are more numerous), and have minimal bias. Indeed, the bias of the underlying entropy estimates is mainly due to non-uniformity of the density at the smallest resolved scale, giving typically systematic errors which scale as functions of $k/N$ for $N$ points. Numerically, we find that both families become {\it exact} for independent distributions, i.e. the estimator $\hat M(X,Y)$ vanishes (up to statistical fluctuations) if $\mu(x,y) = \mu(x)\mu(y)$. This holds for all tested marginal distributions and for all dimensions of $x$ and $y$. In addition, we give estimators for redundancies between more than 2 random variables. We compare our algorithms in detail with existing algorithms. Finally, we demonstrate the usefulness of our estimators for assessing the actual independence of components obtained from independent component analysis (ICA), for improving ICA, and for estimating the reliability of blind source separation.
    Comment: 16 pages, including 18 figures
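    A minimal sketch of the kind of $k$-nearest-neighbour estimator described here (the first of the two families, often written as $\hat M(X,Y) = \psi(k) + \psi(N) - \langle \psi(n_x+1) + \psi(n_y+1) \rangle$ with the max norm) can be assembled from standard SciPy pieces. The function name, the default $k=3$, and the small tolerance used to emulate a strict inequality are illustrative choices, not taken from the paper.

```python
# A sketch of a k-nearest-neighbour MI estimator of the first kind described
# above (digamma/"KSG-1" form, max norm). Function name, k=3 default, and the
# small tolerance used to emulate a strict inequality are illustrative choices.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mi(x, y, k=3):
    """Estimate M(X,Y) in nats from paired samples x, y (one row per point)."""
    x = np.asarray(x, float).reshape(len(x), -1)
    y = np.asarray(y, float).reshape(len(y), -1)
    n = len(x)
    xy = np.hstack([x, y])

    # Distance to the k-th neighbour of each point in the joint space (max norm);
    # k + 1 because the query point itself is returned at distance zero.
    eps = cKDTree(xy).query(xy, k=k + 1, p=np.inf)[0][:, -1]

    # Count neighbours strictly within eps in each marginal space (minus self).
    tree_x, tree_y = cKDTree(x), cKDTree(y)
    nx = np.array([len(tree_x.query_ball_point(x[i], eps[i] - 1e-12, p=np.inf)) - 1
                   for i in range(n)])
    ny = np.array([len(tree_y.query_ball_point(y[i], eps[i] - 1e-12, p=np.inf)) - 1
                   for i in range(n)])

    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))

# Sanity check: for independent samples the estimate fluctuates around zero.
rng = np.random.default_rng(0)
x, y = rng.normal(size=(2000, 1)), rng.normal(size=(2000, 1))
print(ksg_mi(x, y, k=3))
```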

    Distribution of Mutual Information

    The mutual information of two random variables i and j with joint probabilities t_ij is commonly used in learning Bayesian nets as well as in many other fields. The chances t_ij are usually estimated by the empirical sampling frequency n_ij/n, leading to a point estimate I(n_ij/n) for the mutual information. To answer questions like "is I(n_ij/n) consistent with zero?" or "what is the probability that the true mutual information is much larger than the point estimate?" one has to go beyond the point estimate. In the Bayesian framework one can answer these questions by utilizing a (second-order) prior distribution p(t) comprising prior information about t. From the prior p(t) one can compute the posterior p(t|n), from which the distribution p(I|n) of the mutual information can be calculated. We derive reliable and quickly computable approximations for p(I|n). We concentrate on the mean, variance, skewness, and kurtosis, and on non-informative priors. For the mean we also give an exact expression. Numerical issues and the range of validity are discussed.
    Comment: 8 pages
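    The analytic approximations derived in the paper can be cross-checked by brute force: draw joint chances from the posterior and look at the mutual information of each draw. The sketch below assumes a symmetric Dirichlet prior over the t_ij (one common non-informative choice); the helper names and the example table are illustrative.

```python
# Brute-force Monte Carlo view of the posterior p(I|n): sample joint chances
# from a Dirichlet posterior and evaluate the mutual information of each draw.
# The symmetric prior (alpha = 0.5 per cell) and the example table are
# illustrative, not taken from the paper.
import numpy as np

def mi_of_joint(t):
    """Mutual information (nats) of a joint probability table t (any scale)."""
    t = t / t.sum()
    tx, ty = t.sum(axis=1, keepdims=True), t.sum(axis=0, keepdims=True)
    mask = t > 0
    return float(np.sum(t[mask] * np.log(t[mask] / (tx @ ty)[mask])))

def posterior_mi_samples(counts, alpha=0.5, n_samples=10_000, rng=None):
    """Draw t ~ Dirichlet(counts + alpha) and return the MI of each draw."""
    rng = rng or np.random.default_rng(0)
    draws = rng.dirichlet(counts.ravel() + alpha, size=n_samples)
    return np.array([mi_of_joint(d.reshape(counts.shape)) for d in draws])

# Example: a 2x2 table of counts n_ij.
counts = np.array([[30.0, 10.0],
                   [12.0, 28.0]])
samples = posterior_mi_samples(counts)
print(f"point estimate I(n_ij/n) = {mi_of_joint(counts):.4f} nats")
print(f"posterior mean = {samples.mean():.4f}, sd = {samples.std():.4f}")
```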

    Uncertainty Relation for Mutual Information

    We postulate the existence of a universal uncertainty relation between the quantum and classical mutual informations between pairs of quantum systems. Specifically, we propose that the sum of the classical mutual informations, determined by two mutually unbiased pairs of observables, never exceeds the quantum mutual information. We call this the complementary-quantum correlation (CQC) relation and prove its validity for pure states, for states with one maximally mixed subsystem, and for all states when one measurement is minimally disturbing. We provide results of a Monte Carlo simulation suggesting that the CQC relation is generally valid. Importantly, we also show that the CQC relation represents an improvement over an entropic uncertainty principle in the presence of a quantum memory, and that it can be used to verify an achievable secret key rate in the quantum one-time pad cryptographic protocol.
    Comment: 6 pages, 2 figures
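    The Monte Carlo check mentioned above is easy to reproduce in miniature for random two-qubit pure states, for which the relation is proven: compare the quantum mutual information with the sum of the classical mutual informations from two mutually unbiased product measurements (Z⊗Z and X⊗X here). The basis choice, helper names, and use of bits (log base 2) in the sketch below are my own conventions.

```python
# Miniature version of the Monte Carlo check: for random two-qubit pure states,
# the classical mutual informations from the Z and X product measurements should
# sum to at most the quantum mutual information. Basis choice, helper names and
# log base 2 (bits) are illustrative conventions.
import numpy as np

def von_neumann_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def classical_mi(p):
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    mask = p > 1e-15
    return float(np.sum(p[mask] * np.log2(p[mask] / (px @ py)[mask])))

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)    # maps Z basis to X basis

rng = np.random.default_rng(1)
for _ in range(5):
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)  # random two-qubit pure state
    psi /= np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_a = np.trace(rho, axis1=1, axis2=3)             # reduced state of qubit A
    rho_b = np.trace(rho, axis1=0, axis2=2)             # reduced state of qubit B
    quantum_mi = von_neumann_entropy(rho_a) + von_neumann_entropy(rho_b)  # S(AB) = 0

    p_zz = np.abs(psi.reshape(2, 2)) ** 2                    # outcome probs, Z on A and B
    p_xx = np.abs((np.kron(H, H) @ psi).reshape(2, 2)) ** 2  # outcome probs, X on A and B
    c_sum = classical_mi(p_zz) + classical_mi(p_xx)
    print(f"classical sum {c_sum:.3f} bits <= quantum MI {quantum_mi:.3f} bits:",
          c_sum <= quantum_mi + 1e-9)
```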

    EMI: Exploration with Mutual Information

    Reinforcement learning algorithms struggle when the reward signal is very sparse. In these cases, naive random exploration methods essentially rely on a random walk to stumble onto a rewarding state. Recent works utilize intrinsic motivation to guide exploration via generative models, predictive forward models, or discriminative modeling of novelty. We propose EMI, an exploration method that constructs an embedding representation of states and actions which does not rely on generative decoding of the full observation, but instead extracts predictive signals that can be used to guide exploration based on forward prediction in the representation space. Our experiments show competitive results on challenging locomotion tasks with continuous control and on image-based exploration tasks with discrete actions on Atari. The source code is available at https://github.com/snu-mllab/EMI .
    Comment: Accepted and to appear at ICML 2019
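    As a rough illustration of the "forward prediction in the representation space" idea, the sketch below scores transitions by the prediction error of a forward model acting on learned state and action embeddings. It is not the EMI objective itself: the paper trains the embeddings with a mutual-information criterion (omitted here), which is what keeps them from collapsing; module names and sizes are arbitrary.

```python
# Generic sketch of "forward prediction in a learned representation space" as an
# exploration bonus, in the spirit described above. This is NOT the EMI objective
# itself: the paper trains the embeddings with a mutual-information criterion,
# which is omitted here; module names and sizes are arbitrary.
import torch
import torch.nn as nn

class EmbeddingForwardModel(nn.Module):
    def __init__(self, obs_dim, act_dim, emb_dim=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, emb_dim))       # state embedding
        self.psi = nn.Sequential(nn.Linear(act_dim, 64), nn.ReLU(),
                                 nn.Linear(64, emb_dim))        # action embedding
        self.dynamics = nn.Linear(2 * emb_dim, emb_dim)         # predicts phi(s')

    def intrinsic_reward(self, obs, act, next_obs):
        """Per-transition forward-prediction error in embedding space."""
        z, a, z_next = self.phi(obs), self.psi(act), self.phi(next_obs)
        pred = self.dynamics(torch.cat([z, a], dim=-1))
        return ((pred - z_next) ** 2).mean(dim=-1)

# Usage: detach the bonus when adding it to the environment reward, and use the
# mean error as a training signal for the forward model.
model = EmbeddingForwardModel(obs_dim=8, act_dim=2)
obs, act, next_obs = torch.randn(32, 8), torch.randn(32, 2), torch.randn(32, 8)
bonus = model.intrinsic_reward(obs, act, next_obs).detach()   # exploration bonus
model.intrinsic_reward(obs, act, next_obs).mean().backward()  # fit the predictor
```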

    Lower Bounds on Mutual Information

    We correct claims about lower bounds on mutual information (MI) between real-valued random variables made in A. Kraskov {\it et al.}, Phys. Rev. E {\bf 69}, 066138 (2004). We show that non-trivial lower bounds on MI in terms of linear correlations depend on the marginal (single-variable) distributions. This is so in spite of the invariance of MI under reparametrizations, because linear correlations are not invariant under them. The simplest bounds are obtained for Gaussians, but the most interesting ones for practical purposes are obtained for uniform marginal distributions. The latter can be enforced in general by using the ranks of the individual variables instead of their actual values, in which case one obtains bounds on MI in terms of Spearman correlation coefficients. We show with gene expression data that these bounds are in general non-trivial, and that the degree of their (non-)saturation yields valuable insight.
    Comment: 4 pages
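    The Gaussian case mentioned above gives the simplest concrete bound, $I(X,Y) \ge -\frac{1}{2}\log(1-\rho^2)$ for correlation coefficient $\rho$ between variables with Gaussian marginals. Since MI is invariant under monotonic reparametrizations, one can rank-transform each variable to (approximately) Gaussian marginals and apply this bound to the transformed data. The sketch below uses that route rather than the paper's Spearman-based bounds for uniform marginals, which are not reproduced here.

```python
# Sketch of the Gaussian-marginal lower bound on MI: rank-transform each variable
# to approximately Gaussian marginals (MI is invariant under monotonic maps) and
# apply I >= -0.5 * log(1 - rho^2) to the correlation of the transformed data.
# The paper's Spearman-based bounds for uniform marginals are not reproduced here.
import numpy as np
from scipy.stats import norm, rankdata, spearmanr

def gaussianized_mi_lower_bound(x, y):
    """Lower bound on I(X;Y) in nats via a rank -> Gaussian marginal transform."""
    n = len(x)
    gx = norm.ppf((rankdata(x) - 0.5) / n)    # empirical probability-integral transform
    gy = norm.ppf((rankdata(y) - 0.5) / n)
    rho = np.corrcoef(gx, gy)[0, 1]
    return -0.5 * np.log(1.0 - rho ** 2)

# Example with a monotone-plus-noise relation: the bound is clearly non-trivial.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.exp(x) + 0.5 * rng.normal(size=5000)
print("MI lower bound (nats):", gaussianized_mi_lower_bound(x, y))
print("Spearman correlation :", spearmanr(x, y)[0])
```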

    Mutual information challenges entropy bounds

    We consider some formulations of the entropy bounds at the semiclassical level. The entropy S(V) localized in a region V is divergent in quantum field theory (QFT). Instead of it we focus on the mutual information $I(V,W) = S(V) + S(W) - S(V\cup W)$ between two different non-intersecting sets V and W. This is a low-energy quantity, independent of the regularization scheme. In addition, the mutual information is bounded above by twice the entropy corresponding to the sets involved. Calculations of I(V,W) in QFT show that the entropy in empty space cannot be renormalized to zero, and must actually be very large. We find that this entropy due to the vacuum fluctuations violates the FMW bound in Minkowski space. The mutual information also gives a precise, cutoff-independent meaning to the statement that the number of degrees of freedom increases with the volume in QFT. If the holographic bound holds, this points to the essential non-locality of the physical cutoff. Violations of the Bousso bound would require conformal theories and large distances. We speculate that the presence of a small cosmological constant might prevent such a violation.
    Comment: 10 pages, 2 figures, minor changes
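    The upper bound quoted above, that the mutual information never exceeds twice the entropy of either set, follows from the Araki-Lieb inequality $S(V\cup W) \ge |S(V) - S(W)|$ (a standard result, not specific to this paper):

```latex
% Upper bound on the mutual information from the Araki-Lieb inequality.
\begin{align*}
  I(V,W) &= S(V) + S(W) - S(V\cup W) \\
         &\le S(V) + S(W) - \bigl|S(V) - S(W)\bigr| \\
         &= 2\,\min\bigl(S(V),\, S(W)\bigr).
\end{align*}
```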