Network Coding for Computing: Cut-Set Bounds
The following \textit{network computing} problem is considered. Source nodes
in a directed acyclic network generate independent messages and a single
receiver node computes a target function of the messages. The objective is
to maximize the average number of times the target function can be computed per network usage,
i.e., the ``computing capacity''. The \textit{network coding} problem for a
single-receiver network is a special case of the network computing problem in
which all of the source messages must be reproduced at the receiver. For
network coding with a single receiver, routing is known to achieve the capacity
by achieving the network \textit{min-cut} upper bound. We extend the definition
of min-cut to the network computing problem and show that the min-cut is still
an upper bound on the maximum achievable rate and is tight for computing (using
coding) any target function in multi-edge tree networks and for computing
linear target functions in any network. We also study the bound's tightness for
different classes of target functions. In particular, we give a lower bound on
the computing capacity in terms of the Steiner tree packing number and a
different bound for symmetric functions. We also show that for certain networks
and target functions, the computing capacity can be less than an arbitrarily
small fraction of the min-cut bound.
Comment: Submitted to the IEEE Transactions on Information Theory (Special
Issue on Facets of Coding Theory: from Algorithms to Networks); Revised on
Aug 9, 201
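As a toy illustration of the min-cut upper bound in the network-coding special case (all source messages reproduced at the receiver), the sketch below computes a minimum source-receiver cut via max-flow on a small unit-capacity DAG. The network topology and all names are invented for illustration, not taken from the paper.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow; by max-flow/min-cut duality the returned
    value equals the capacity of a minimum s-t cut, which upper-bounds
    the achievable rate for a single receiver."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        # find the bottleneck residual capacity along the path
        bottleneck = float("inf")
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # augment flow along the path
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Hypothetical DAG: node 0 = source, node 5 = receiver, unit-capacity edges.
cap = [[0] * 6 for _ in range(6)]
for u, v in [(0, 1), (0, 2), (1, 3), (2, 3), (1, 4), (3, 4), (2, 5), (4, 5)]:
    cap[u][v] = 1
print(max_flow(cap, 0, 5))  # → 2: the min-cut value bounding the rate
```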
Zero-Error Coding for Computing with Encoder Side-Information
We study the zero-error source coding problem in which an encoder with Side
Information (SI) transmits source symbols to a decoder. The decoder also has
SI and wants to recover a deterministic function of the source and the SI. We
exhibit a condition on the source distribution and the function, which we call
"pairwise shared side information", such that the optimal rate has a
single-letter expression. This condition is satisfied if every pair of source
symbols "shares" at least one SI symbol for every output of the function. It
has a practical interpretation: the SI models a request made by the encoder on
an image, and the function corresponds to the type of request. It also has a graph-theoretical
interpretation: under "pairwise shared side information" the characteristic
graph can be written as a disjoint union of OR products. In the case where the
source distribution is full-support, we provide an analytic expression for the
optimal rate. We develop an example under "pairwise shared side information",
and we show that the optimal coding scheme outperforms several strategies from
the literature.
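The characteristic-graph view admits a small sketch: two source symbols are confusable (adjacent) when some SI symbol is jointly possible with both yet the function values differ, and a proper coloring of this graph yields a scalar zero-error code. The alphabets, pmf, and function below are invented for illustration.

```python
from itertools import product
from math import log2

# Hypothetical setup: X in {0..3}, SI Y in {0,1}, decoder wants f(x, y).
X, Y = range(4), range(2)
p = {(x, y): 1 / 8 for x, y in product(X, Y)}   # full-support joint pmf
f = lambda x, y: (x + y) % 2                    # function to recover

# Characteristic graph: x ~ x' iff some y is jointly possible with both
# and the function values differ, so one shared codeword would confuse them.
adj = {x: set() for x in X}
for x1, x2 in product(X, X):
    if x1 != x2 and any(p[x1, y] > 0 and p[x2, y] > 0 and f(x1, y) != f(x2, y)
                        for y in Y):
        adj[x1].add(x2)
        adj[x2].add(x1)

# Greedy proper coloring: symbols with the same color share one codeword.
color = {}
for x in X:
    used = {color[n] for n in adj[x] if n in color}
    color[x] = next(c for c in range(len(X)) if c not in used)

rate = log2(len(set(color.values())))  # bits per symbol for scalar coding
print(color, rate)
```

Here the graph is bipartite (symbols of equal parity are never confusable), so two colors, i.e., one bit per symbol, suffice.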
Hypergraph-based Source Codes for Function Computation Under Maximal Distortion
This work investigates functional source coding problems with maximal
distortion, motivated by approximate function computation in many modern
applications. The maximal distortion treats an imprecise reconstruction of a
function value as equivalent to perfect computation when it deviates from the
true value by less than a tolerance level, while treating a reconstruction that
differs by more than that level as a failure. Using a geometric understanding of the maximal distortion,
we propose a hypergraph-based source coding scheme for function computation
that is constructive in the sense that it gives an explicit procedure for
defining auxiliary random variables. Moreover, we find that the
hypergraph-based coding scheme achieves the optimal rate-distortion function in
the setting of coding for computing with side information and the Berger-Tung
sum-rate inner bound in the setting of distributed source coding for computing.
It also achieves the El Gamal-Cover inner bound for multiple description coding
for computing and is optimal for successive refinement and cascade multiple
description problems for computing. Lastly, the benefit of complexity reduction
of finding a forward test channel is shown for a class of Markov sources.
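A minimal sketch of the maximal-distortion criterion and its geometric (covering) reading, with an invented tolerance and function values: a reproduction point covers every function value within the tolerance, and a valid code is a cover of the function's range by such sets.

```python
# Maximal distortion: reconstruction within tolerance DELTA counts as
# perfect (distortion 0); anything farther is a failure (distortion 1).
DELTA = 0.5
d_max = lambda v, v_hat: 0 if abs(v - v_hat) <= DELTA else 1

def greedy_cover(values, delta):
    """Cover the function values with tolerance balls: place a
    reproduction point at each value not already covered by an
    earlier point. Returns the reproduction points."""
    points = []
    for v in sorted(values):
        if not points or d_max(v, points[-1]) == 1:
            points.append(v)  # new point covering [v - delta, v + delta]
    return points

f_values = [0.0, 0.3, 0.9, 1.5, 2.8]     # hypothetical function outputs
reps = greedy_cover(f_values, DELTA)
# every value is within tolerance of some reproduction point
print(reps, [min(d_max(v, r) for r in reps) for v in f_values])
```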
Computation Over Gaussian Networks With Orthogonal Components
Function computation of arbitrarily correlated discrete sources over Gaussian
networks with orthogonal components is studied. Two classes of functions are
considered: the arithmetic sum function and the type function. The arithmetic
sum function in this paper is defined as a set of multiple weighted arithmetic
sums, which includes averaging of the sources and estimating each of the
sources as special cases. The type or frequency histogram function counts the
number of occurrences of each argument, which yields many important statistics
such as mean, variance, maximum, minimum, median, and so on. The proposed
computation coding first abstracts Gaussian networks into the corresponding
modulo sum multiple-access channels via nested lattice codes and linear network
coding and then computes the desired function by using linear Slepian-Wolf
source coding. For orthogonal Gaussian networks (with no broadcast and
multiple-access components), the computation capacity is characterized for a
class of networks. For Gaussian networks with multiple-access components (but
no broadcast), an approximate computation capacity is characterized for a class
of networks.
Comment: 30 pages, 12 figures, submitted to IEEE Transactions on Information
Theory
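The claim that the type (frequency histogram) determines statistics such as the mean, extremes, and median can be sketched directly; the observation list below is illustrative, not from the paper.

```python
from collections import Counter

def type_function(sources):
    """Frequency histogram (the 'type') of the source observations."""
    return Counter(sources)

def stats_from_type(t):
    """Mean, min, max, and median recovered from the type alone."""
    n = sum(t.values())
    mean = sum(v * c for v, c in t.items()) / n
    lo, hi = min(t), max(t)
    # median: walk the sorted support until half the mass is passed
    cum, median = 0, None
    for v in sorted(t):
        cum += t[v]
        if cum * 2 >= n:
            median = v
            break
    return mean, lo, hi, median

obs = [3, 1, 4, 1, 5, 9, 2, 6]           # hypothetical observations
t = type_function(obs)
print(stats_from_type(t))  # → (3.875, 1, 9, 3)
```

Any two observation sequences with the same type yield identical statistics, which is why computing the type suffices.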
A Unified Approach for Network Information Theory
In this paper, we take a unified approach for network information theory and
prove a coding theorem, which can recover most of the achievability results in
network information theory that are based on random coding. The final
single-letter expression has a very simple form, which was made possible by
many novel elements such as a unified framework that represents various network
problems in a simple and unified way, a unified coding strategy that consists
of a few basic ingredients but can emulate many known coding techniques if
needed, and new proof techniques beyond the use of standard covering and
packing lemmas. For example, in our framework, sources, channels, states and
side information are treated in a unified way and various constraints such as
cost and distortion constraints are unified as a single joint-typicality
constraint.
Our theorem can be useful in proving many new achievability results easily
and in some cases gives simpler rate expressions than those obtained using
conventional approaches. Furthermore, our unified coding can strictly
outperform existing schemes. For example, we obtain a generalized
decode-compress-amplify-and-forward bound as a simple corollary of our main
theorem and show it strictly outperforms previously known coding schemes. Using
our unified framework, we formally define and characterize three types of
network duality based on channel input-output reversal and network flow
reversal combined with packing-covering duality.
Comment: 52 pages, 7 figures, submitted to IEEE Transactions on Information
Theory; a shorter version will appear in Proc. IEEE ISIT 201
Infinite-message Interactive Function Computation in Collocated Networks
An interactive function computation problem in a collocated network is
studied in a distributed block source coding framework. With the goal of
computing a desired function at the sink, the source nodes exchange messages
through a sequence of error-free broadcasts. The infinite-message minimum
sum-rate is viewed as a functional of the joint source pmf and is characterized
as the least element in a partially ordered family of functionals having
certain convex-geometric properties. This characterization leads to a family of
lower bounds for the infinite-message minimum sum-rate and a simple optimality
test for any achievable infinite-message sum-rate. An iterative algorithm for
evaluating the infinite-message minimum sum-rate functional is proposed and is
demonstrated through an example of computing the minimum function of three
sources.
Comment: 5 pages, 2 figures. This draft has been submitted to IEEE
International Symposium on Information Theory (ISIT) 201
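As a toy sketch (not the paper's scheme), the following simulates one simple interactive strategy for computing the minimum in a collocated network, where every broadcast is overheard by all nodes and the sink; the values and the per-round bit accounting are invented for illustration.

```python
def interactive_min(values, max_val):
    """Toy interactive scheme: in round t each node broadcasts one bit
    saying whether its value equals t. Since all broadcasts are
    overheard, the first round containing a '1' reveals the minimum."""
    bits_sent = 0
    for t in range(max_val + 1):
        round_bits = [1 if v == t else 0 for v in values]
        bits_sent += len(values)          # one bit per node per round
        if any(round_bits):
            return t, bits_sent
    raise ValueError("values exceed max_val")

print(interactive_min([5, 2, 7], max_val=7))  # → (2, 9)
```

The communication cost grows with the minimum itself, illustrating how interaction lets the sum rate adapt to the data rather than to the worst case.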