
    Information Bottleneck on General Alphabets

    We prove rigorously a source coding theorem that can probably be considered folklore, a generalization to arbitrary alphabets of a problem motivated by the Information Bottleneck method. For general random variables $(Y, X)$, we show essentially that for some $n \in \mathbb{N}$, a function $f$ with rate limit $\log|f| \le nR$ and $I(Y^n; f(X^n)) \ge nS$ exists if and only if there is a random variable $U$ such that the Markov chain $Y - X - U$ holds, $I(U; X) \le R$, and $I(U; Y) \ge S$. The proof relies on the well-established discrete case and showcases a technique for lifting discrete coding theorems to arbitrary alphabets. (Comment: extended version, presented at ISIT 2018, Vail, CO.)
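    To make the single-letter condition concrete, here is a minimal sketch (my own illustration, not code from the paper, and discrete alphabets only) that checks whether a candidate deterministic quantizer $U = f(X)$ meets targets $R$ and $S$; the joint pmf, the map f, and the targets below are assumptions. Since U is a function of X, the Markov chain Y - X - U holds automatically.

```python
# Minimal sketch (illustrative assumptions throughout): for discrete (X, Y) and a
# deterministic quantizer U = f(X), check I(U;X) <= R and I(U;Y) >= S.
import numpy as np

def mutual_information(p_ab):
    """I(A;B) in bits for a joint pmf given as a 2-D array."""
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float((p_ab[mask] * np.log2(p_ab[mask] / (p_a @ p_b)[mask])).sum())

# illustrative joint pmf p(x, y) on a 4 x 2 alphabet (rows: x, columns: y)
p_xy = np.array([[0.30, 0.05],
                 [0.20, 0.05],
                 [0.05, 0.15],
                 [0.05, 0.15]])

f = np.array([0, 0, 1, 1])   # candidate quantizer u = f(x) (assumption)
R, S = 1.0, 0.15             # hypothetical rate / relevance targets

# joint pmfs p(u, x) and p(u, y) induced by U = f(X); Y - X - U holds by construction
n_u = f.max() + 1
p_ux = np.zeros((n_u, p_xy.shape[0]))
p_ux[f, np.arange(p_xy.shape[0])] = p_xy.sum(axis=1)
p_uy = np.zeros((n_u, p_xy.shape[1]))
for x, u in enumerate(f):
    p_uy[u] += p_xy[x]

print("I(U;X) =", mutual_information(p_ux), "<= R:", mutual_information(p_ux) <= R)
print("I(U;Y) =", mutual_information(p_uy), ">= S:", mutual_information(p_uy) >= S)
```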

    "Graph Entropy, Network Coding and Guessing games"

    We introduce the (private) entropy of a directed graph (in a new network coding sense) as well as a number of related concepts. We show that the entropy of a directed graph is identical to its guessing number and can be bounded from below by the number of vertices minus the size of the graph's shortest index code. We show that the network coding solvability of each specific multiple-unicast network is completely determined by the entropy (as well as by the shortest index code) of the directed graph obtained by identifying each source node with its corresponding target node. Shannon's information inequalities can be used to calculate upper bounds on a graph's entropy as well as to calculate the size of the minimal index code. Recently, a number of new families of so-called non-Shannon-type information inequalities have been discovered. It has been shown that there exist communication networks with a capacity strictly less than required for solvability, but where this fact cannot be derived using Shannon's classical information inequalities. Based on this result, we show that there exist graphs whose entropy cannot be calculated using only Shannon's classical information inequalities, and that better estimates can be obtained by use of certain non-Shannon-type information inequalities.
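    As a concrete illustration of the guessing-game quantity that the graph entropy is identified with, the following brute-force sketch (my own illustration, not code from the paper, feasible only for tiny graphs and alphabets) enumerates all guessing strategies, where each vertex guesses its own value from the values of its in-neighbors, and returns log base s of the largest number of assignments a single strategy guesses entirely correctly.

```python
# Brute-force guessing number of a small directed graph (illustrative sketch).
import itertools
import math

def guessing_number(n_vertices, edges, s=2):
    # in-neighbors of each vertex w: sources u of edges (u, w)
    in_nbrs = [sorted(u for (u, v) in edges if v == w) for w in range(n_vertices)]
    # a local strategy for vertex w maps each in-neighbor configuration to a guess,
    # represented as a tuple with one entry per configuration
    local_strategies = [
        list(itertools.product(range(s), repeat=s ** len(in_nbrs[w])))
        for w in range(n_vertices)
    ]
    configs = list(itertools.product(range(s), repeat=n_vertices))
    best = 0
    for strategy in itertools.product(*local_strategies):
        correct = 0
        for x in configs:
            ok = True
            for w in range(n_vertices):
                idx = 0
                for u in in_nbrs[w]:       # encode the in-neighbor values as an index
                    idx = idx * s + x[u]
                if strategy[w][idx] != x[w]:
                    ok = False
                    break
            if ok:
                correct += 1
        best = max(best, correct)
    return math.log(best, s)

# directed 3-cycle: its guessing number is 1 over the binary alphabet
print(guessing_number(3, [(0, 1), (1, 2), (2, 0)], s=2))  # -> 1.0
```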

    Capacity of Sum-networks for Different Message Alphabets

    A sum-network is a directed acyclic network in which all terminal nodes demand the 'sum' of the independent information observed at the source nodes. Many characteristics of the well-studied multiple-unicast network communication problem also hold for sum-networks due to a known reduction between instances of these two problems. Our main result is that unlike a multiple-unicast network, the coding capacity of a sum-network is dependent on the message alphabet. We demonstrate this using a construction procedure and show that the choice of a message alphabet can reduce the coding capacity of a sum-network from 1 to close to 0.
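    For readers new to the model, the toy sketch below (my own illustration, not a construction from the paper) propagates a scalar linear code over a prime field through a minimal sum-network and checks that both terminals recover the sum; the alphabet-dependent networks constructed in the paper are larger, but the mechanics of "solving" a sum-network are the same.

```python
# Toy sum-network sketch (illustrative only): two sources, a relay node, and two
# terminals that both demand the sum of the source messages modulo p.  The
# topology, coefficients, and field size are assumptions for demonstration.
import itertools

p = 3  # prime field size (assumption)

def run_network(messages):
    """Return what each terminal decodes in one use of the toy network."""
    x1, x2 = messages
    relay_out = (x1 + x2) % p      # the relay adds its two incoming symbols
    t1 = relay_out                 # terminal 1 receives only the relay's symbol
    t2 = relay_out                 # terminal 2 receives only the relay's symbol
    return t1, t2

# exhaustive check: every terminal recovers the sum for every message pair
assert all(
    run_network((x1, x2)) == ((x1 + x2) % p,) * 2
    for x1, x2 in itertools.product(range(p), repeat=2)
)
print("toy sum-network solved at rate 1 over GF(%d)" % p)
```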

    Nonasymptotic noisy lossy source coding

    This paper shows new general nonasymptotic achievability and converse bounds and performs their dispersion analysis for the lossy compression problem in which the compressor observes the source through a noisy channel. While this problem is asymptotically equivalent to a noiseless lossy source coding problem with a modified distortion function, nonasymptotically there is a noticeable gap in how fast their minimum achievable coding rates approach the common rate-distortion function, as evidenced both by the refined asymptotic analysis (dispersion) and the numerical results. The size of the gap between the dispersions of the noisy problem and the asymptotically equivalent noiseless problem depends on the stochastic variability of the channel through which the compressor observes the source. (Comment: IEEE Transactions on Information Theory, 201)
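    To illustrate what a dispersion gap means operationally, the following hedged sketch (not the paper's bounds; the rate-distortion value and both dispersion constants are placeholder assumptions) evaluates the standard Gaussian approximation R(n, d, eps) ≈ R(d) + sqrt(V/n) · Q⁻¹(eps) for two dispersion values, showing how a larger dispersion slows the approach to the common rate-distortion function.

```python
# Gaussian (dispersion) approximation sketch: R(n) ~ R(d) + sqrt(V/n) * Qinv(eps).
# R(d) and the two dispersion values are placeholder assumptions, meant only to show
# how a larger dispersion (as in the noisy problem) slows convergence to R(d).
import math
from statistics import NormalDist

Rd = 0.5               # common rate-distortion function value, bits (assumption)
V_noiseless = 0.10     # dispersion of the noiseless surrogate problem (assumption)
V_noisy = 0.25         # larger dispersion of the noisy problem (assumption)
eps = 1e-3             # excess-distortion probability
Qinv = NormalDist().inv_cdf(1 - eps)   # Q^{-1}(eps)

for n in (100, 1000, 10000):
    r_nl = Rd + math.sqrt(V_noiseless / n) * Qinv
    r_ny = Rd + math.sqrt(V_noisy / n) * Qinv
    print(f"n={n:6d}  noiseless ~ {r_nl:.4f}  noisy ~ {r_ny:.4f}  gap ~ {r_ny - r_nl:.4f}")
```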

    Information-Distilling Quantizers

    Let $X$ and $Y$ be dependent random variables. This paper considers the problem of designing a scalar quantizer for $Y$ to maximize the mutual information between the quantizer's output and $X$, and develops fundamental properties and bounds for this form of quantization, which is connected to the log-loss distortion criterion. The main focus is the regime of low $I(X;Y)$, where it is shown that, if $X$ is binary, a constant fraction of the mutual information can always be preserved using $\mathcal{O}(\log(1/I(X;Y)))$ quantization levels, and there exist distributions for which this many quantization levels are necessary. Furthermore, for larger finite alphabets $2 < |\mathcal{X}| < \infty$, it is established that an $\eta$-fraction of the mutual information can be preserved using roughly $(\log(|\mathcal{X}|/I(X;Y)))^{\eta\cdot(|\mathcal{X}| - 1)}$ quantization levels.
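    The design problem is easy to state concretely for small alphabets. The sketch below (an illustration with an assumed joint pmf and number of levels, not the paper's algorithm) enumerates all K-level scalar quantizers of Y and keeps the one maximizing I(X; q(Y)).

```python
# Brute-force information-distilling quantizer (illustrative sketch for small |Y|):
# enumerate all maps q: Y -> {0, ..., K-1} and keep the one maximizing I(X; q(Y)).
# The joint pmf and K below are assumptions for demonstration.
import itertools
import numpy as np

def mutual_information(p_ab):
    """I(A;B) in bits for a joint pmf given as a 2-D array."""
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float((p_ab[mask] * np.log2(p_ab[mask] / (p_a @ p_b)[mask])).sum())

# assumed joint pmf p(x, y): binary X (rows), |Y| = 5 (columns)
p_xy = np.array([[0.02, 0.08, 0.10, 0.15, 0.15],
                 [0.18, 0.12, 0.10, 0.05, 0.05]])
K = 2  # number of quantization levels (assumption)

best_q, best_mi = None, -1.0
for q in itertools.product(range(K), repeat=p_xy.shape[1]):
    p_xu = np.zeros((p_xy.shape[0], K))
    for y, u in enumerate(q):        # merge the columns of p(x, y) mapped to level u
        p_xu[:, u] += p_xy[:, y]
    mi = mutual_information(p_xu)
    if mi > best_mi:
        best_q, best_mi = q, mi

print("I(X;Y)    =", mutual_information(p_xy))
print("best q(Y) =", best_q, " I(X;q(Y)) =", best_mi)
```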

    Deep Information Networks

    We describe a novel classifier with a tree structure, designed using information theory concepts. This Information Network is made of information nodes, which compress the input data, and multiplexers, which connect two or more input nodes to an output node. Each information node is trained, independently of the others, to minimize a local cost function: it minimizes the mutual information between its input and its output under the constraint of keeping a given mutual information between its output and the target (the information bottleneck principle). We show that the system provides good results in terms of accuracy, while offering many advantages in terms of modularity and reduced complexity.
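    As a rough sketch of what a single information node does, the following implements the standard iterative Information Bottleneck update for a discrete input (my own illustration of the generic IB equations, not the authors' training procedure): it fits a stochastic encoder p(u|x) at a fixed trade-off parameter beta, given an assumed joint pmf p(x, y).

```python
# One "information node" as a standard iterative Information Bottleneck update
# (illustrative sketch, not the authors' training code): given p(x, y), fit p(u|x)
# by iterating the self-consistent IB equations at trade-off parameter beta.
import numpy as np

rng = np.random.default_rng(0)

def ib_node(p_xy, n_u=2, beta=5.0, iters=200):
    p_x = p_xy.sum(axis=1)                        # p(x)
    p_y_x = p_xy / p_x[:, None]                   # p(y|x)
    p_u_x = rng.dirichlet(np.ones(n_u), size=p_xy.shape[0])   # random init of p(u|x)
    for _ in range(iters):
        p_u = p_u_x.T @ p_x                       # p(u)
        p_y_u = (p_u_x * p_x[:, None]).T @ p_y_x / p_u[:, None]   # p(y|u)
        # KL(p(y|x) || p(y|u)) for every (x, u) pair
        kl = np.array([[np.sum(p_y_x[x] * np.log(p_y_x[x] / p_y_u[u] + 1e-12))
                        for u in range(n_u)] for x in range(p_xy.shape[0])])
        # IB update: p(u|x) proportional to p(u) * exp(-beta * KL), normalized per x
        logits = np.log(p_u)[None, :] - beta * kl
        p_u_x = np.exp(logits - logits.max(axis=1, keepdims=True))
        p_u_x /= p_u_x.sum(axis=1, keepdims=True)
    return p_u_x

# assumed joint pmf p(x, y) for a 4-valued input and binary target
p_xy = np.array([[0.25, 0.05],
                 [0.20, 0.10],
                 [0.05, 0.15],
                 [0.02, 0.18]])
print(np.round(ib_node(p_xy), 3))   # learned encoder p(u|x), one row per x
```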