The Capacity of Linear Computation Broadcast
The two-user computation broadcast problem is introduced as the setting where user 1 wants message W1 and has side information W1', user 2 wants message W2 and has side information W2', and (W1, W1', W2, W2') may have arbitrary dependencies. The goal is to minimize the entropy H(S) of the broadcast information S that simultaneously satisfies both users' demands. It is shown that H(S) ≥ H(W1|W1') + H(W2|W2') − min{ I(W1; W2, W2' | W1'), I(W2; W1, W1' | W2') }. Furthermore, for the linear computation broadcast problem, where W1, W1', W2, W2' are comprised of arbitrary linear combinations of a basis set of independent symbols, the bound is shown to be tight.
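As an illustration of the bound, consider a minimal toy instance (my own example, not one from the abstract): two independent uniform bits a and b, with W1 = a, W1' = b, W2 = b, W2' = a. The bound then evaluates to 1 bit, which is achieved by broadcasting S = a XOR b. The sketch below verifies this numerically from the joint distribution.

```python
import itertools
import math
from collections import defaultdict

# Toy instance (hypothetical, for illustration): a, b are independent uniform
# bits; user 1 wants W1 = a with side information W1' = b, and user 2 wants
# W2 = b with side information W2' = a.

def H(pmf):
    """Shannon entropy in bits of a pmf given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def joint(variables):
    """Joint pmf of a list of functions of (a, b) under independent uniform bits."""
    pmf = defaultdict(float)
    for a, b in itertools.product([0, 1], repeat=2):
        pmf[tuple(f(a, b) for f in variables)] += 0.25
    return pmf

W1, W1p = (lambda a, b: a), (lambda a, b: b)
W2, W2p = (lambda a, b: b), (lambda a, b: a)

def cond_H(X, Y):
    """H(X | Y) = H(X, Y) - H(Y)."""
    return H(joint(X + Y)) - H(joint(Y))

def cond_I(X, Y, Z):
    """I(X ; Y | Z) = H(X | Z) - H(X | Y, Z)."""
    return cond_H(X, Z) - cond_H(X, Y + Z)

bound = (cond_H([W1], [W1p]) + cond_H([W2], [W2p])
         - min(cond_I([W1], [W2, W2p], [W1p]),
               cond_I([W2], [W1, W1p], [W2p])))
print(bound)  # 1.0 bit; achieved by broadcasting S = a XOR b
```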
Computation Alignment: Capacity Approximation without Noise Accumulation
Consider several source nodes communicating across a wireless network to a
destination node with the help of several layers of relay nodes. Recent work by
Avestimehr et al. has approximated the capacity of this network up to an
additive gap. The communication scheme achieving this capacity approximation is
based on compress-and-forward, resulting in noise accumulation as the messages
traverse the network. As a consequence, the approximation gap increases
linearly with the network depth.
This paper develops a computation alignment strategy that can approach the
capacity of a class of layered, time-varying wireless relay networks up to an
approximation gap that is independent of the network depth. This strategy is
based on the compute-and-forward framework, which enables relays to decode
deterministic functions of the transmitted messages. Alone, compute-and-forward
is insufficient to approach the capacity as it incurs a penalty for
approximating the wireless channel with complex-valued coefficients by a
channel with integer coefficients. Here, this penalty is circumvented by
carefully matching channel realizations across time slots to create
integer-valued effective channels that are well-suited to compute-and-forward.
Unlike prior constant gap results, the approximation gap obtained in this paper
also depends closely on the fading statistics, which are assumed to be i.i.d.
Rayleigh.
Comment: 36 pages, to appear in IEEE Transactions on Information Theory
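The integer-approximation penalty mentioned above can be seen numerically from the real-valued compute-and-forward computation rate of Nazer and Gastpar, which this paper builds on. The sketch below uses hypothetical channel gains and a brute-force search over small integer coefficient vectors (not the paper's alignment scheme): the best compute-and-forward rate saturates at high SNR while a point-to-point benchmark keeps growing.

```python
import itertools
import numpy as np

def cf_rate(h, a, snr):
    """Real-valued compute-and-forward computation rate (Nazer-Gastpar form):
    R(h, a) = 1/2 * log2( 1 / (||a||^2 - snr*(h.a)^2 / (1 + snr*||h||^2)) ), clipped at 0."""
    denom = a @ a - snr * (h @ a) ** 2 / (1 + snr * (h @ h))
    if denom <= 0:
        return float("inf")
    return max(0.0, 0.5 * np.log2(1.0 / denom))

h = np.array([1.0, 1.7])  # hypothetical non-integer channel gains
for snr_db in (10, 20, 30, 40):
    snr = 10.0 ** (snr_db / 10)
    # best rate over small nonzero integer coefficient vectors
    best = max(cf_rate(h, np.array(a, dtype=float), snr)
               for a in itertools.product(range(-3, 4), repeat=2) if any(a))
    benchmark = 0.5 * np.log2(1 + snr * (h @ h))  # point-to-point benchmark
    print(f"{snr_db} dB: compute-and-forward {best:.2f} bits, benchmark {benchmark:.2f} bits")
```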
Classical capacity of a qubit depolarizing channel with memory
The classical product state capacity of a noisy quantum channel with memory
is investigated. A forgetful noise-memory channel is constructed by Markov
switching between two depolarizing channels which introduces non-Markovian
noise correlations between successive channel uses. The computation of the
capacity is reduced to an entropy computation for a function of a Markov
process. A reformulation in terms of algebraic measures then enables its
calculation. The effects of the hidden-Markovian memory on the capacity are
explored. An increase in noise correlations is found to increase the capacity.
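The numerical core here, the entropy of a function of a Markov process, has no closed form in general, which is why the algebraic-measure reformulation matters. As a minimal classical stand-in (hypothetical parameters, not the paper's method), the sketch below estimates the entropy rate of a Markov-modulated error process with the forward algorithm, relying on the Shannon-McMillan-Breiman theorem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: a hidden two-state Markov chain switches between a
# low-noise and a high-noise regime; the observation is an error flag.
q = 0.1                       # switching probability of the memory chain
p = np.array([0.05, 0.30])    # error probability in each memory state
T = np.array([[1 - q, q],
              [q, 1 - q]])    # transition matrix of the memory chain

def entropy_rate_estimate(n=100_000):
    # sample a state path and the resulting observation sequence
    states = np.empty(n, dtype=int)
    states[0] = 0
    for t in range(1, n):
        states[t] = rng.choice(2, p=T[states[t - 1]])
    obs = (rng.random(n) < p[states]).astype(int)

    # forward algorithm: accumulate -log2 P(obs) one symbol at a time
    alpha = np.array([0.5, 0.5])           # stationary distribution of T
    log_prob = 0.0
    for x in obs:
        emit = p if x == 1 else 1 - p      # P(x | state)
        alpha = alpha * emit
        scale = alpha.sum()                # P(x_t | x_1, ..., x_{t-1})
        log_prob += np.log2(scale)
        alpha = (alpha / scale) @ T        # predict the next state
    return -log_prob / n                   # bits per channel use

print(entropy_rate_estimate())
```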
The Capacity of Private Computation
We introduce the problem of private computation, comprised of multiple distributed
and non-colluding servers, independent datasets, and a user who wants to
compute a function of the datasets privately, i.e., without revealing which
function he wants to compute, to any individual server. This private
computation problem is a strict generalization of the private information
retrieval (PIR) problem, obtained by expanding the PIR message set (which
consists of only independent messages) to also include functions of those
messages. The capacity of private computation is defined as the maximum
number of bits of the desired function that can be retrieved per bit of total
download from all servers. We characterize the capacity of private computation
for any number of servers and independent datasets that are replicated at each
server, when the functions to be computed are arbitrary linear combinations of
the datasets. Surprisingly, this capacity matches the capacity of PIR with the
same number of servers and messages. Thus, allowing arbitrary linear
computations does not reduce the communication rate compared to pure dataset
retrieval. The same insight is shown to hold even for arbitrary non-linear
computations when the number of datasets grows to infinity.
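For concreteness, writing N for the number of servers and K for the number of datasets (my own notation), the PIR capacity with N replicated servers and K messages is (1 + 1/N + ... + 1/N^(K-1))^(-1) by the Sun-Jafar PIR result; per the abstract, the private-computation capacity matches it. A minimal sketch:

```python
from fractions import Fraction

def pir_capacity(N: int, K: int) -> Fraction:
    """Capacity of PIR with N replicated servers and K messages (Sun-Jafar):
    (1 + 1/N + ... + 1/N^(K-1))^(-1). Per the abstract above, this also equals
    the private-computation capacity for arbitrary linear combinations."""
    return 1 / sum(Fraction(1, N ** k) for k in range(K))

print(pir_capacity(2, 3))  # 4/7
```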
Computation Over Gaussian Networks With Orthogonal Components
Function computation of arbitrarily correlated discrete sources over Gaussian
networks with orthogonal components is studied. Two classes of functions are
considered: the arithmetic sum function and the type function. The arithmetic
sum function in this paper is defined as a set of multiple weighted arithmetic
sums, which includes averaging of the sources and estimating each of the
sources as special cases. The type or frequency histogram function counts the
number of occurrences of each argument, which yields many important statistics
such as mean, variance, maximum, minimum, median, and so on. The proposed
computation coding first abstracts Gaussian networks into the corresponding
modulo sum multiple-access channels via nested lattice codes and linear network
coding and then computes the desired function by using linear Slepian-Wolf
source coding. For orthogonal Gaussian networks (with no broadcast and
multiple-access components), the computation capacity is characterized for a
class of networks. For Gaussian networks with multiple-access components (but
no broadcast), an approximate computation capacity is characterized for a class
of networks.
Comment: 30 pages, 12 figures, submitted to IEEE Transactions on Information Theory
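To make the modulo-sum abstraction concrete, here is a minimal sketch with hypothetical alphabet sizes (not the paper's lattice construction): once the Gaussian multiple-access channel has been turned into a modulo-sum channel, the desired arithmetic sum is recovered exactly whenever the modulus exceeds the largest possible sum.

```python
import numpy as np

rng = np.random.default_rng(1)
K, q = 4, 8                       # number of sources, source alphabet {0, ..., q-1}
p = K * (q - 1) + 1               # modulus large enough to rule out wrap-around
x = rng.integers(0, q, size=K)    # source symbols

mod_sum = int(x.sum()) % p        # what the modulo-sum multiple-access channel delivers
assert mod_sum == int(x.sum())    # the arithmetic sum is recovered exactly
print(x, "->", mod_sum)
```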
