
    Generalized Information Bottleneck for Gaussian Variables

    The information bottleneck (IB) method offers an attractive framework for understanding representation learning; however, its applications are often limited by its computational intractability. Analytical characterization of the IB method is not only of practical interest, but it can also lead to new insights into learning phenomena. Here we consider a generalized IB problem, in which the mutual information in the original IB method is replaced by correlation measures based on Rényi and Jeffreys divergences. We derive an exact analytical IB solution for the case of Gaussian correlated variables. Our analysis reveals a series of structural transitions, similar to those previously observed in the original IB case. We find further that although solving the original, Rényi and Jeffreys IB problems yields different representations in general, the structural transitions occur at the same critical tradeoff parameters, and the Rényi and Jeffreys IB solutions perform well under the original IB objective. Our results suggest that formulating the IB method with alternative correlation measures could offer a strategy for obtaining an approximate solution to the original IB problem.
    Comment: 7 pages, 3 figures
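
    For reference, a minimal sketch (in LaTeX) of the objective being generalized: the standard IB Lagrangian, followed by one common way to define a Rényi-based correlation measure by replacing mutual information with a divergence between the joint distribution and the product of marginals. The exact parameterization of the Rényi and Jeffreys measures used in the paper may differ.

```latex
% Standard IB Lagrangian: compress X into a representation M that remains
% predictive of Y; beta sets the compression/prediction tradeoff.
\[
  \min_{p(m \mid x)} \; I(X;M) - \beta\, I(M;Y)
\]
% Generalized variant sketched here (one common definition of a Renyi-based
% correlation measure; the paper's exact choice may be parameterized differently):
\[
  I_{\alpha}(X;M) = D_{\alpha}\bigl(p(x,m)\,\|\,p(x)\,p(m)\bigr),
  \qquad
  D_{\alpha}(p\,\|\,q) = \tfrac{1}{\alpha-1}\log \textstyle\sum_{z} p(z)^{\alpha}\, q(z)^{1-\alpha}.
\]
```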

    Nonlinear Information Bottleneck

    Information bottleneck (IB) is a technique for extracting the information in one random variable X that is relevant for predicting another random variable Y. IB works by encoding X in a compressed "bottleneck" random variable M from which Y can be accurately decoded. However, finding the optimal bottleneck variable involves a difficult optimization problem, which until recently has been considered for only two limited cases: discrete X and Y with small state spaces, and continuous X and Y with a Gaussian joint distribution (in which case the optimal encoding and decoding maps are linear). We propose a method for performing IB on arbitrarily distributed discrete and/or continuous X and Y, while allowing for nonlinear encoding and decoding maps. Our approach relies on a novel non-parametric upper bound for mutual information. We describe how to implement our method using neural networks. We then show that it achieves better performance than the recently proposed "variational IB" method on several real-world datasets.
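
    As a rough illustration of this kind of objective, the following hypothetical PyTorch sketch (not the authors' implementation) pairs a nonlinear encoder producing a noisy bottleneck M with a nonlinear decoder for Y, and uses a simple kernel-based batch estimate as a stand-in for the paper's non-parametric upper bound on I(X;M). The class and function names, layer widths, and the beta and noise_std values are illustrative assumptions.

```python
# Hypothetical sketch of a nonlinear-IB-style objective (not the authors'
# reference implementation). Assumes a classification task with inputs x, labels y.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonlinearIB(nn.Module):
    def __init__(self, x_dim, y_classes, m_dim=2, noise_std=0.1):
        super().__init__()
        # Nonlinear encoder: maps x to the mean of a Gaussian bottleneck variable M
        self.encoder = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(),
                                     nn.Linear(128, m_dim))
        # Nonlinear decoder: predicts Y from the bottleneck M
        self.decoder = nn.Sequential(nn.Linear(m_dim, 128), nn.ReLU(),
                                     nn.Linear(128, y_classes))
        self.noise_std = noise_std

    def forward(self, x):
        mu = self.encoder(x)
        m = mu + self.noise_std * torch.randn_like(mu)  # stochastic bottleneck sample
        return mu, self.decoder(m)

def compression_estimate(mu, noise_std):
    """Kernel (mixture-of-Gaussians) style batch estimate of I(X;M) computed from
    the bottleneck means -- a stand-in for the paper's non-parametric upper bound."""
    d2 = torch.cdist(mu, mu).pow(2)                     # pairwise squared distances
    log_n = torch.log(torch.tensor(float(mu.shape[0])))
    log_mix = torch.logsumexp(-d2 / (4 * noise_std ** 2), dim=1) - log_n
    return -log_mix.mean()

def ib_loss(model, x, y, beta=0.1):
    mu, logits = model(x)
    prediction_term = F.cross_entropy(logits, y)        # proxy for -I(M;Y)
    compression_term = compression_estimate(mu, model.noise_std)  # proxy for I(X;M)
    return prediction_term + beta * compression_term
```

    Training would then minimize ib_loss over mini-batches with any standard optimizer; sweeping beta traces out the compression/prediction tradeoff.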

    Asymptotic Sum-Capacity of Random Gaussian Interference Networks Using Interference Alignment

    We consider a dense n-user Gaussian interference network formed by paired transmitters and receivers placed independently at random in Euclidean space. Under natural conditions on the node position distributions and signal attenuation, we prove convergence in probability of the average per-user capacity C_Sigma/n to (1/2) E log(1 + 2 SNR). The achievability result follows directly from results based on an interference alignment scheme presented in recent work of Nazer et al. Our main contribution comes through the converse result, motivated by ideas of 'bottleneck links' developed in recent work of Jafar. An information-theoretic argument gives a capacity bound on such bottleneck links, and probabilistic counting arguments show there are sufficiently many such links to tightly bound the sum-capacity of the whole network.
    Comment: 5 pages; to appear at ISIT 201
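
    As a quick numerical illustration of the limiting expression (1/2) E log(1 + 2 SNR), the short Python sketch below averages the formula over an arbitrary, assumed SNR distribution; in the paper the expectation is induced by the random node placement and signal attenuation, not by this toy choice.

```python
# Monte Carlo illustration of the asymptotic per-user rate (1/2) E[log(1 + 2*SNR)].
# The log-normal SNR distribution is an arbitrary assumption made for this demo only.
import numpy as np

rng = np.random.default_rng(0)
snr = rng.lognormal(mean=1.0, sigma=0.5, size=100_000)   # assumed SNR samples
per_user_rate = 0.5 * np.mean(np.log2(1.0 + 2.0 * snr))  # bits per channel use
print(f"Estimated asymptotic per-user capacity: {per_user_rate:.3f} bit/use")
```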