Information Bottleneck on General Alphabets
We rigorously prove a source coding theorem that can probably be considered
folklore, a generalization to arbitrary alphabets of a problem motivated by the
Information Bottleneck method. For general random variables $(Y, X)$, we show
essentially that, for some $n \in \mathbb{N}$, a function $f$ with rate limit
$\log\lVert f\rVert \le nR$ and $I(Y^n; f(X^n)) \ge nS$ exists if and only if there is a
random variable $U$ such that the Markov chain $Y - X - U$ holds, $I(U;X) \le R$, and
$I(U;Y) \ge S$. The proof relies on the well-established discrete case
and showcases a technique for lifting discrete coding theorems to arbitrary
alphabets.
Comment: extended version, presented at ISIT 2018, Vail, CO
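In symbols, and using the reconstruction above (the labels $R$, $S$ and the auxiliary variable $U$ follow the usual Information Bottleneck convention and are not necessarily the paper's exact notation), the claimed equivalence reads:
\[
\exists\, n,\ f \ \text{with}\ \log\lVert f\rVert \le nR,\ \ I\bigl(Y^n; f(X^n)\bigr) \ge nS
\quad\Longleftrightarrow\quad
\exists\, U \ \text{with}\ Y - X - U,\ \ I(U;X) \le R,\ \ I(U;Y) \ge S .
\]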
"Graph Entropy, Network Coding and Guessing games"
We introduce the (private) entropy of a directed graph (in a new network coding sense) as well as a number of related concepts. We show that the entropy of a directed graph is identical to its guessing number and can be bounded from below by the number of vertices minus the size of the graph's shortest index code. We show that the network coding solvability of each specific multiple-unicast network is completely determined by the entropy (as well as by the shortest index code) of the directed graph obtained by identifying each source node with its corresponding target node. Shannon's information inequalities can be used to calculate upper bounds on a graph's entropy as well as the size of the minimal index code. Recently, a number of new families of so-called non-Shannon-type information inequalities have been discovered. It has been shown that there exist communication networks with a capacity strictly less than required for solvability, but where this fact cannot be derived using Shannon's classical information inequalities. Based on this result, we show that there exist graphs with an entropy that cannot be calculated using only Shannon's classical information inequalities, and that better estimates can be obtained by use of certain non-Shannon-type information inequalities.
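The guessing-game connection can be made concrete on a toy instance. The following is a minimal Python sketch (the function names and the 3-cycle example are my own illustration, not code from the paper): it counts the fixed points of a guessing strategy on a small directed graph, and the guessing number over an alphabet of size s is log_s of the maximum number of fixed points over all strategies.

```python
from itertools import product

def count_fixed_points(n, in_neighbours, strategy, alphabet):
    """Count assignments x in which every vertex v 'guesses' its own value
    correctly, i.e. x[v] == strategy[v](values of v's in-neighbours)."""
    count = 0
    for x in product(alphabet, repeat=n):
        if all(x[v] == strategy[v](tuple(x[u] for u in in_neighbours[v]))
               for v in range(n)):
            count += 1
    return count

# Directed 3-cycle 0 -> 1 -> 2 -> 0: each vertex sees exactly one in-neighbour.
in_neighbours = {0: [2], 1: [0], 2: [1]}
# Strategy: guess the value you see (copy your in-neighbour).
strategy = {v: (lambda seen: seen[0]) for v in range(3)}

fixed = count_fixed_points(3, in_neighbours, strategy, alphabet=(0, 1))
print(fixed)  # 2: the two constant assignments, certifying guessing number >= log2(2) = 1
```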
Capacity of Sum-networks for Different Message Alphabets
A sum-network is a directed acyclic network in which all terminal nodes
demand the `sum' of the independent information observed at the source nodes.
Many characteristics of the well-studied multiple-unicast network communication
problem also hold for sum-networks due to a known reduction between instances
of these two problems. Our main result is that, unlike for a multiple-unicast
network, the coding capacity of a sum-network depends on the message
alphabet. We demonstrate this using a construction procedure and show that the
choice of a message alphabet can reduce the coding capacity of a sum-network
from $1$ to close to $0$.
Nonasymptotic noisy lossy source coding
This paper shows new general nonasymptotic achievability and converse bounds
and performs their dispersion analysis for the lossy compression problem in
which the compressor observes the source through a noisy channel. While this
problem is asymptotically equivalent to a noiseless lossy source coding problem
with a modified distortion function, nonasymptotically there is a noticeable
gap in how fast their minimum achievable coding rates approach the common
rate-distortion function, as evidenced both by the refined asymptotic analysis
(dispersion) and the numerical results. The size of the gap between the
dispersions of the noisy problem and the asymptotically equivalent noiseless
problem depends on the stochastic variability of the channel through which the
compressor observes the source.
Comment: IEEE Transactions on Information Theory, 201
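For orientation, the dispersion mentioned here is the constant governing the second-order (Gaussian) approximation of the minimum achievable coding rate; in generic notation (mine, not necessarily the paper's), with rate-distortion function $R(d)$, dispersion $V(d)$, blocklength $n$, and excess-distortion probability $\epsilon$,
\[
R(n, d, \epsilon) \;=\; R(d) \;+\; \sqrt{\frac{V(d)}{n}}\, Q^{-1}(\epsilon) \;+\; O\!\left(\frac{\log n}{n}\right),
\]
where $Q^{-1}$ is the inverse Gaussian complementary distribution function. The abstract's point is that the noisy and noiseless problems share $R(d)$ but differ in $V(d)$.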
Information-Distilling Quantizers
Let $X$ and $Y$ be dependent random variables. This paper considers the
problem of designing a scalar quantizer for $Y$ to maximize the mutual
information between the quantizer's output and $X$, and develops fundamental
properties and bounds for this form of quantization, which is connected to the
log-loss distortion criterion. The main focus is the regime of low $I(X;Y)$,
where it is shown that, if $X$ is binary, a constant fraction of the mutual
information can always be preserved using $\mathcal{O}\bigl(\log(1/I(X;Y))\bigr)$
quantization levels, and there exist distributions for which this many
quantization levels are necessary.
quantization levels are necessary. Furthermore, for larger finite alphabets , it is established that an -fraction of the
mutual information can be preserved using roughly quantization levels
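As a concrete illustration of the design problem (not the paper's algorithm), the sketch below brute-forces a $K$-level quantizer of a discrete $Y$ and reports how much of $I(X;Y)$ the quantized output retains. The toy joint distribution and function names are hypothetical; restricting the search to cells that are contiguous after sorting $Y$ by the posterior $P(X{=}1 \mid Y{=}y)$ is a standard simplification for binary $X$ and is assumed here.

```python
import numpy as np
from itertools import combinations

def mutual_information(p_xy):
    """I(X;Y) in bits for a joint pmf given as a 2-D numpy array p_xy[x, y]."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (px @ py)[mask])))

def best_quantizer(p_xy, K):
    """Brute-force a K-cell quantizer of Y maximizing I(X; Q(Y))."""
    n_y = p_xy.shape[1]
    posterior = p_xy[1] / p_xy.sum(axis=0)            # P(X=1 | Y=y)
    order = np.argsort(posterior)                     # sort Y-values by posterior
    best_mi, best_cells = -1.0, None
    for cuts in combinations(range(1, n_y), K - 1):   # K-1 cut points => K cells
        bounds = (0,) + cuts + (n_y,)
        # Collapse the Y-values of each cell into a single quantizer output.
        p_xq = np.stack([p_xy[:, order[a:b]].sum(axis=1)
                         for a, b in zip(bounds, bounds[1:])], axis=1)
        mi = mutual_information(p_xq)
        if mi > best_mi:
            best_mi, best_cells = mi, bounds
    return best_mi, best_cells

# Toy joint pmf: binary X, Y taking 6 values.
p_xy = np.array([[0.10, 0.08, 0.07, 0.05, 0.12, 0.08],
                 [0.02, 0.04, 0.06, 0.10, 0.13, 0.15]])
print("I(X;Y)              =", mutual_information(p_xy))
print("best 2-level I(X;Q) =", best_quantizer(p_xy, K=2)[0])
```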
Deep Information Networks
We describe a novel classifier with a tree structure, designed using
information-theoretic concepts. This Information Network is made of information
nodes, which compress the input data, and multiplexers, which connect two or more
input nodes to an output node. Each information node is trained, independently
of the others, to minimize a local cost: the mutual information between its
input and its output, subject to the constraint of keeping a given mutual
information between its output and the target (the information bottleneck principle).
We show that the system achieves good accuracy, while offering many advantages
in terms of modularity and reduced complexity.
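To make the per-node objective concrete, here is a minimal sketch of the Lagrangian-relaxed information-bottleneck cost for a single information node. The variable names, the empirical joint pmf, and the trade-off weight beta are my own illustrative assumptions, not the paper's actual training procedure.

```python
import numpy as np

def mutual_information(p_ab):
    """I(A;B) in bits for a joint pmf given as a 2-D numpy array."""
    pa = p_ab.sum(axis=1, keepdims=True)
    pb = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float(np.sum(p_ab[mask] * np.log2(p_ab[mask] / (pa @ pb)[mask])))

def ib_node_cost(p_x_t, q_z_given_x, beta):
    """Lagrangian IB cost of one information node: I(X;Z) - beta * I(Z;T).

    p_x_t[x, t]       : empirical joint pmf of the node's input X and target T
    q_z_given_x[x, z] : the node's (stochastic) compression map P(Z | X)
    beta              : trade-off weight; larger beta favours keeping I(Z;T)

    This relaxes the constrained form in the abstract ("minimize I(X;Z) while
    keeping I(Z;T) at a given level") into a single scalar cost.
    """
    p_x = p_x_t.sum(axis=1)                  # P(X)
    p_x_z = p_x[:, None] * q_z_given_x       # joint P(X, Z)
    p_z_t = q_z_given_x.T @ p_x_t            # P(Z, T), using the chain T - X - Z
    return mutual_information(p_x_z) - beta * mutual_information(p_z_t)

# Tiny example: input X with 4 values, binary target T, node output Z with 2 values.
p_x_t = np.array([[0.15, 0.05],
                  [0.10, 0.10],
                  [0.05, 0.20],
                  [0.05, 0.30]])
q_z_given_x = np.array([[1.0, 0.0],
                        [1.0, 0.0],
                        [0.0, 1.0],
                        [0.0, 1.0]])          # deterministic 2-level compression
print(ib_node_cost(p_x_t, q_z_given_x, beta=3.0))
```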