Computation Over Gaussian Networks With Orthogonal Components
Function computation of arbitrarily correlated discrete sources over Gaussian
networks with orthogonal components is studied. Two classes of functions are
considered: the arithmetic sum function and the type function. The arithmetic
sum function in this paper is defined as a set of multiple weighted arithmetic
sums, which includes averaging of the sources and estimating each of the
sources as special cases. The type or frequency histogram function counts the
number of occurrences of each argument, which yields many important statistics
such as mean, variance, maximum, minimum, median, and so on. The proposed
computation coding first abstracts Gaussian networks into the corresponding
modulo sum multiple-access channels via nested lattice codes and linear network
coding and then computes the desired function by using linear Slepian-Wolf
source coding. For orthogonal Gaussian networks (with no broadcast and
multiple-access components), the computation capacity is characterized for a
class of networks. For Gaussian networks with multiple-access components (but
no broadcast), an approximate computation capacity is characterized for a class
of networks.
Comment: 30 pages, 12 figures, submitted to IEEE Transactions on Information Theory
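The type (frequency histogram) function described above is sufficient for recovering many order and moment statistics. A minimal illustrative sketch (the alphabet and sample values are hypothetical, not from the paper) of computing a type and deriving mean, minimum, maximum, and median from it alone:

```python
# Illustrative sketch: the type (frequency histogram) of discrete source
# observations, and statistics recoverable from the type alone.
# Alphabet and samples are hypothetical examples.
from collections import Counter

def type_function(samples, alphabet):
    """Count occurrences of each alphabet symbol (the type/histogram)."""
    counts = Counter(samples)
    return {a: counts.get(a, 0) for a in alphabet}

def stats_from_type(hist):
    """Recover mean, min, max, and (lower) median from the type alone."""
    n = sum(hist.values())
    mean = sum(a * c for a, c in hist.items()) / n
    support = [a for a, c in hist.items() if c > 0]
    lo, hi = min(support), max(support)
    # Median: walk the cumulative counts in alphabet order.
    cum, median = 0, None
    for a in sorted(hist):
        cum += hist[a]
        if cum * 2 >= n:
            median = a
            break
    return mean, lo, hi, median

hist = type_function([1, 2, 2, 3, 3, 3], alphabet=range(5))
print(stats_from_type(hist))  # mean 14/6, min 1, max 3, median 2
```

The point of the sketch is that the decoder never needs the raw sequence: once the type is computed over the network, all such statistics follow locally.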
Secure Network Function Computation for Linear Functions -- Part I: Source Security
In this paper, we put forward the problem of secure network function computation
over a directed acyclic network. In such a network, a sink node is required to compute
with zero error a target function of which the inputs are generated as source
messages at multiple source nodes, while a wiretapper, who can access any one
but not more than one wiretap set in a given collection of wiretap sets, is not
allowed to obtain any information about a security function of the source
messages. The secure computing capacity for the above model is defined as the
maximum average number of times that the target function can be securely
computed with zero error at the sink node with the given collection of wiretap
sets and security function for one use of the network. The characterization of
this capacity is in general overwhelmingly difficult. In the current paper, we
consider securely computing linear functions with a wiretapper who can
eavesdrop on any subset of edges of size at most r, referred to as the
security level, with the security function being the identity function. We
first prove an upper bound on the secure computing capacity, which is
applicable to arbitrary network topologies and arbitrary security levels. When
the security level r is equal to 0, our upper bound reduces to the computing
capacity without security consideration. We discover the surprising fact that
for some models, there is no penalty on the secure computing capacity compared
with the computing capacity without security consideration. We further obtain
an equivalent expression of the upper bound by using a graph-theoretic
approach, and accordingly we develop an efficient approach for computing this
bound. Furthermore, we present a construction of linear function-computing
secure network codes and obtain a lower bound on the secure computing capacity.
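To make the security model concrete, here is a toy sketch (the field size, two-edge topology, and key-sharing arrangement are assumptions for illustration, not the paper's construction): the sink computes the linear function s1 + s2 over GF(p), while a wiretapper observing any single edge (security level r = 1) learns nothing about the sources, because a shared uniform key acts as a one-time pad on every edge message.

```python
# Toy sketch of secure linear function computation (assumed topology:
# source 1 -> relay -> sink, with source 2 feeding the relay).
# The key is shared by source 1 and the sink; each edge message is
# uniformly distributed given the sources, so any single wiretapped
# edge reveals nothing (security function = identity, r = 1).
import random

P = 257  # prime field size (assumption)

def encode(s1, s2, key):
    """Edge messages: running sums padded by the shared key."""
    e1 = (s1 + key) % P   # edge from source 1 to the relay
    e2 = (e1 + s2) % P    # edge from the relay to the sink
    return e1, e2

def decode(e2, key):
    """Sink removes the pad to recover the target function s1 + s2."""
    return (e2 - key) % P

key = random.randrange(P)
e1, e2 = encode(10, 20, key)
assert decode(e2, key) == 30
```

This toy case also hints at the paper's "no penalty" phenomenon: the sink still recovers one function value per network use despite the security constraint.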
A Distributed Computationally Aware Quantizer Design via Hyper Binning
We design a distributed, function-aware quantization scheme for distributed
functional compression. We consider correlated sources and a destination
that seeks the outcome of a continuous function of those sources.
We develop a compression scheme called hyper binning that quantizes the
function outcome by minimizing the entropy of the joint source partitioning. Hyper binning is a natural
generalization of Cover's random code construction for the asymptotically
optimal Slepian-Wolf encoding scheme that makes use of orthogonal binning. The
key idea behind this approach is to use linear discriminant analysis in order
to characterize different source feature combinations. This scheme captures the
correlation between the sources and the function's structure as a means of
dimensionality reduction. We investigate the performance of hyper binning for
different source distributions, and identify which classes of sources entail
more partitioning to achieve better function approximation. Our approach brings
an information theory perspective to the traditional vector quantization
technique from signal processing.
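A rough illustrative sketch of the hyper binning idea (the source distribution, target function, boundary directions, and per-bin representatives are all assumptions for illustration): partition the joint source space with linear ("hyperplane") boundaries, index each source pair by its sign pattern, and approximate the continuous function by a representative value per bin.

```python
# Rough sketch of hyper binning: linear boundaries partition the joint
# source space; the decoder maps each bin to a representative function
# value. Distribution, function, and boundaries are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
# Correlated sources (assumed jointly Gaussian-like).
x1 = rng.normal(size=2000)
x2 = 0.8 * x1 + 0.2 * rng.normal(size=2000)
f = x1 + x2  # target continuous function (assumption)

# Two linear boundaries; the sign pattern of the projections gives
# each source pair a bin index in {0, 1, 2, 3}.
directions = np.array([[1.0, 1.0], [1.0, -1.0]])
pts = np.stack([x1, x2], axis=1)
bits = (pts @ directions.T > 0.0).astype(int)
bins = bits[:, 0] * 2 + bits[:, 1]

# Decoder: represent each bin by its mean function value.
reps = {b: f[bins == b].mean() for b in np.unique(bins)}
f_hat = np.array([reps[b] for b in bins])
mse = np.mean((f - f_hat) ** 2)
print(f"MSE with 2 hyperplanes: {mse:.3f}")
```

Because one boundary here is aligned with the level sets of f, even two hyperplanes capture most of the function's variance; boundaries ignoring the function's structure would need far more bins for the same approximation quality.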