A Rate-Distortion Approach to Index Coding
We approach index coding as a special case of rate-distortion with multiple
receivers, each with some side information about the source. Specifically,
using techniques developed for the rate-distortion problem, we provide two
upper bounds and one lower bound on the optimal index coding rate. The upper
bounds involve specific choices of the auxiliary random variables in the best
existing scheme for the rate-distortion problem. The lower bound is based on a
new lower bound for the general rate-distortion problem. The bounds are shown
to coincide for a number of (groupcast) index coding instances, including all
instances for which the number of decoders does not exceed three.
Comment: Substantially extended version. Submitted to the IEEE Transactions on Information Theory.
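As a toy illustration of the index coding setup (a hypothetical two-receiver instance, not drawn from the paper): when each receiver already holds the other receiver's message as side information, a single XOR broadcast replaces two separate transmissions.

```python
# Toy index coding instance (hypothetical, for illustration only):
# receiver 1 wants bit x1 and knows x2; receiver 2 wants x2 and knows x1.
# Broadcasting the single bit x1 XOR x2 lets both decode.

def encode(x1: int, x2: int) -> int:
    """Sender broadcasts one bit instead of two."""
    return x1 ^ x2

def decode(broadcast: int, side_info: int) -> int:
    """Each receiver XORs the broadcast with its side information."""
    return broadcast ^ side_info

x1, x2 = 1, 0
b = encode(x1, x2)
assert decode(b, side_info=x2) == x1  # receiver 1 recovers x1
assert decode(b, side_info=x1) == x2  # receiver 2 recovers x2
```

The optimal rate for this instance is one transmitted bit per pair of messages, half the naive rate; the paper's bounds address when such savings are optimal in general groupcast instances.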
Equivalent models for multi-terminal channels
The recently introduced network equivalence results are used to create bit-pipe models that can replace multi-terminal channels within a discrete memoryless network. The goal is to create a set of simple “components” or “blocks” that can be substituted for the channel in such a way that the resulting network is capable of emulating the operation of the original one. We develop general upper and lower bounding models for the multiple access channel and for a class of broadcast channels. These bounds are sharp in the sense that there exist networks where the original channel can achieve the maximum sum rate permissible through the upper or lower bounding models. This approach provides a simple method for analyzing the capacity of large networks, which we illustrate with an example.
A Theory of Network Equivalence, Parts I and II
A family of equivalence tools for bounding network capacities is introduced.
Part I treats networks of point-to-point channels. The main result is roughly
as follows. Given a network of noisy, independent, memoryless point-to-point
channels, a collection of communication demands can be met on the given network
if and only if it can be met on another network where each noisy channel is
replaced by a noiseless bit pipe with throughput equal to the noisy channel
capacity. This result was known previously for the case of a single-source
multicast demand. The result given here treats general demands -- including,
for example, multiple unicast demands -- and applies even when the achievable
rate region for the corresponding demands is unknown in the noiseless network.
In part II, definitions of upper and lower bounding channel models for general
channels are introduced. By these definitions, a collection of communication
demands can be met on a network of independent channels if it can be met on a
network where each channel is replaced by its lower bounding model andonly if
it can be met on a network where each channel is replaced by its upper bounding
model. This work derives general conditions under which a network of noiseless
bit pipes is an upper or lower bounding model for a multiterminal channel.
Example upper and lower bounding models for broadcast, multiple access, and
interference channels are given. It is then shown that bounding the difference
between the upper and lower bounding models for a given channel yields bounds
on the accuracy of network capacity bounds derived using those models. By
bounding the capacity of a network of independent noisy channels by the network
coding capacity of a network of noiseless bit pipes, this approach represents
one step towards the goal of building computational tools for bounding network
capacities.
Comment: 91 pages, 18 figures. Submitted to the IEEE Transactions on Information Theory on April 14, 2010. Draft
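A minimal numeric sketch of the Part I substitution (assuming a binary symmetric channel as the point-to-point example; the specific channel and parameter are illustrative, not from the paper): the noisy channel is replaced by a noiseless bit pipe whose throughput equals the channel capacity, C = 1 - H2(p) for a BSC with crossover probability p.

```python
import math

def binary_entropy(p: float) -> float:
    """Binary entropy H2(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

# Under the equivalence result, a BSC with p = 0.11 can be replaced by a
# noiseless bit pipe carrying about 0.5 bits per channel use.
print(round(bsc_capacity(0.11), 3))  # -> 0.5
```

The equivalence theorem says that any collection of demands (not just single-source multicast) is achievable on the noisy network if and only if it is achievable after every such substitution.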
Information Flow in Computational Systems
We develop a theoretical framework for defining and identifying flows of
information in computational systems. Here, a computational system is assumed
to be a directed graph, with "clocked" nodes that send transmissions to each
other along the edges of the graph at discrete points in time. We are
interested in a definition that captures the dynamic flow of information about
a specific message, and which guarantees an unbroken "information path" between
appropriately defined inputs and outputs in the directed graph. Prior measures,
including those based on Granger Causality and Directed Information, fail to
provide clear assumptions and guarantees about when they correctly reflect
information flow about a message. We take a systematic approach---iterating
through candidate definitions and counterexamples---to arrive at a definition
for information flow that is based on conditional mutual information, and which
satisfies desirable properties, including the existence of information paths.
Finally, we describe how information flow might be detected in a noiseless
setting, and provide an algorithm to identify information paths on the
time-unrolled graph of a computational system.
Comment: Significantly revised version which was accepted for publication at the IEEE Transactions on Information Theory.
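A simplified sketch of the kind of quantity involved (a plug-in estimate of unconditional mutual information on a toy system; the relay setup and estimator are illustrative assumptions, not the paper's full conditional definition): if a node relays the message verbatim, the transmission on that edge carries the full bit of information about the message.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        mi += p * math.log2(p * n * n / (px[x] * py[y]))
    return mi

# Toy "computational system": node A relays message M verbatim to node B,
# so the edge A->B carries one full bit of information about M.
samples = [(m, m) for m in (0, 1) for _ in range(50)]  # (message, transmission)
print(round(mutual_information(samples), 3))  # -> 1.0
```

The paper's definition conditions on past transmissions to capture the *dynamic* flow along edges of the time-unrolled graph; this unconditional estimate only illustrates the basic measurement step.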