Computation Over Gaussian Networks With Orthogonal Components
Function computation of arbitrarily correlated discrete sources over Gaussian
networks with orthogonal components is studied. Two classes of functions are
considered: the arithmetic sum function and the type function. The arithmetic
sum function in this paper is defined as a set of multiple weighted arithmetic
sums, which includes averaging of the sources and estimating each of the
sources as special cases. The type or frequency histogram function counts the
number of occurrences of each argument, which yields many important statistics
such as mean, variance, maximum, minimum, median, and so on. The proposed
computation coding first abstracts Gaussian networks into the corresponding
modulo sum multiple-access channels via nested lattice codes and linear network
coding and then computes the desired function by using linear Slepian-Wolf
source coding. For orthogonal Gaussian networks (with no broadcast and
multiple-access components), the computation capacity is characterized for a
class of networks. For Gaussian networks with multiple-access components (but
no broadcast), an approximate computation capacity is characterized for a class
of networks.
Comment: 30 pages, 12 figures, submitted to IEEE Transactions on Information Theory
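As a toy illustration of the type (frequency histogram) function described above, the sketch below, which is my own and not from the paper, computes the histogram of a block of discrete source symbols and then derives symmetric statistics (mean, min, max, median) from the histogram alone; the function names `type_function` and `stats_from_type` are hypothetical.

```python
from collections import Counter

def type_function(symbols):
    """Frequency histogram (the 'type') of a block of discrete source symbols."""
    return Counter(symbols)

def stats_from_type(hist):
    """Derive symmetric statistics from the type alone, without the raw data."""
    n = sum(hist.values())
    mean = sum(v * c for v, c in hist.items()) / n
    # Median via cumulative counts over the sorted support.
    cum, median = 0, None
    for v in sorted(hist):
        cum += hist[v]
        if cum >= (n + 1) // 2:
            median = v
            break
    return {"mean": mean, "min": min(hist), "max": max(hist), "median": median}

hist = type_function([2, 1, 3, 2, 2])
print(stats_from_type(hist))  # {'mean': 2.0, 'min': 1, 'max': 3, 'median': 2}
```

This mirrors the point made in the abstract: once the type is known, any statistic that is invariant to reordering of the arguments can be computed from it.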
Analyzing the Performance of Multilayer Neural Networks for Object Recognition
In the last two years, convolutional neural networks (CNNs) have achieved an
impressive suite of results on standard recognition datasets and tasks.
CNN-based features seem poised to quickly replace engineered representations,
such as SIFT and HOG. However, compared to SIFT and HOG, we understand much
less about the nature of the features learned by large CNNs. In this paper, we
experimentally probe several aspects of CNN feature learning in an attempt to
help practitioners gain useful, evidence-backed intuitions about how to apply
CNNs to computer vision problems.
Comment: Published in European Conference on Computer Vision 2014 (ECCV-2014)
Computation in Multicast Networks: Function Alignment and Converse Theorems
The classical problem in network coding theory considers communication over
multicast networks. Multiple transmitters send independent messages to multiple
receivers which decode the same set of messages. In this work, computation over
multicast networks is considered: each receiver decodes an identical function
of the original messages. For a countably infinite class of two-transmitter
two-receiver single-hop linear deterministic networks, the computing capacity
is characterized for a linear function (modulo-2 sum) of Bernoulli sources.
Inspired by the geometric concept of interference alignment in networks, a new
achievable coding scheme called function alignment is introduced. A new
converse theorem is established that is tighter than cut-set based and
genie-aided bounds. Computation (vs. communication) over multicast networks
requires additional analysis to account for multiple receivers sharing a
network's computational resources. We also develop a network decomposition
theorem which identifies elementary parallel subnetworks that can constitute an
original network without loss of optimality. The decomposition theorem provides
a conceptually-simpler algebraic proof of achievability that generalizes to
multi-transmitter multi-receiver networks.
Comment: to appear in the IEEE Transactions on Information Theory
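The target function in the abstract above is the modulo-2 sum of Bernoulli sources. The following sketch, which is my own hypothetical example and not the paper's scheme, shows the basic idea of in-network computation in a toy noiseless setting: a shared node computes the XOR of the two transmitters' bits and forwards that single symbol, so each receiver recovers the function without receiving both source messages.

```python
def mod2(*bits):
    """Modulo-2 sum (XOR) of the source bits -- the target function."""
    acc = 0
    for b in bits:
        acc ^= b
    return acc

# Hypothetical two-transmitter, two-receiver toy network with noiseless links:
# a relay computes x1 XOR x2 in-network and sends one symbol to both receivers.
x1, x2 = 1, 0
relay_out = mod2(x1, x2)   # in-network computation at the relay
rx1 = rx2 = relay_out      # each receiver decodes the same function
assert rx1 == mod2(x1, x2) and rx2 == mod2(x1, x2)
```

In the paper's setting the interesting question is how to achieve this efficiently over many channel uses when links are shared; the abstract's "function alignment" scheme addresses that regime, which this toy example does not capture.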
A Relation Between Network Computation and Functional Index Coding Problems
In contrast to the network coding problem wherein the sinks in a network
demand subsets of the source messages, in a network computation problem the
sinks demand functions of the source messages. Similarly, in the functional
index coding problem, the side information and demands of the clients include
disjoint sets of functions of the information messages held by the transmitter
instead of disjoint subsets of the messages, as is the case in the conventional
index coding problem. It is known that any network coding problem can be
transformed into an index coding problem and vice versa. In this work, we
establish a similar relationship between network computation problems and a
class of functional index coding problems, viz., those in which only the
demands of the clients include functions of messages. We show that any network
computation problem can be converted into a functional index coding problem
wherein some clients demand functions of messages and vice versa. We prove that
a solution for a network computation problem exists if and only if a functional
index code (of a specific length determined by the network computation problem)
for a suitably constructed functional index coding problem exists. Conversely, a functional index coding problem admits a solution of a specified length if and only if a suitably constructed network computation problem admits a solution.
Comment: 3 figures, 7 tables, and 9 pages
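To make the functional index coding setting concrete, here is a small hypothetical instance of my own (not taken from the paper), in which clients demand XOR functions of the transmitter's messages rather than the messages themselves. A single broadcast bit, the XOR of all three messages, satisfies both functional demands.

```python
def xor(*bits):
    """Modulo-2 sum of bits."""
    acc = 0
    for b in bits:
        acc ^= b
    return acc

# Hypothetical instance: the transmitter holds bits m1, m2, m3.
# Client A has side information m1 and demands the function m2 XOR m3;
# client B has side information m2 and demands the function m1 XOR m3.
m1, m2, m3 = 1, 0, 1
broadcast = xor(m1, m2, m3)     # a functional index code of length 1

demand_A = xor(broadcast, m1)   # client A: cancels m1, leaving m2 XOR m3
demand_B = xor(broadcast, m2)   # client B: cancels m2, leaving m1 XOR m3
assert demand_A == xor(m2, m3)
assert demand_B == xor(m1, m3)
```

Both demands are met with one transmitted bit, whereas satisfying message demands (each client recovering individual messages) could require more; this gap is what makes functional demands a genuinely different regime.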