Fundamentals of Large Sensor Networks: Connectivity, Capacity, Clocks and Computation
Sensor networks potentially feature large numbers of nodes that can sense
their environment over time, communicate with each other over a wireless
network, and process information. They differ from data networks in that the
network as a whole may be designed for a specific application. We study the
theoretical foundations of such large scale sensor networks, addressing four
fundamental issues: connectivity, capacity, clocks, and function computation.
To begin with, a sensor network must be connected so that information can
indeed be exchanged between nodes. We model the connectivity graph of an ad-hoc
network as a random graph and determine the critical transmission range for
asymptotic connectivity, as well as the critical number of neighbors to which
each node must connect. Next, given connectivity, we address the issue of how much data can
be transported over the sensor network. We present fundamental bounds on
capacity under several models, as well as architectural implications for how
wireless communication should be organized.
Temporal information is important both for the applications of sensor
networks and for their operation. We present fundamental bounds on the
synchronizability of clocks in networks, and also present and analyze
algorithms for clock synchronization. Finally, we turn to the issue of gathering
relevant information, which is what sensor networks are designed to do. To
reliably compute a composite function of sensor measurements, one needs to
study optimal strategies for in-network aggregation of data, as well as the
complexity of doing so. For some classes of functions, we address how such
computation can be performed efficiently in a sensor network and present
algorithms for doing so.
Comment: 10 pages, 3 figures, submitted to the Proceedings of the IEEE
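The critical-range result for connectivity can be probed numerically. The sketch below is a Monte Carlo illustration (ours, not the paper's analysis; function names are illustrative): it places n uniform points in the unit square and checks connectivity at multiples of the reference radius sqrt(log n / (pi n)), the standard scaling at which the connectivity threshold occurs in random geometric graphs.

```python
import math
import random

def connected(points, r):
    """Check via union-find whether the geometric graph on `points`
    with connection radius `r` is connected."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    r2 = r * r
    for i in range(n):
        xi, yi = points[i]
        for j in range(i + 1, n):
            xj, yj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= r2:
                parent[find(i)] = find(j)  # merge the two components
    return len({find(i) for i in range(n)}) == 1

def connectivity_prob(n, scale, trials=30, seed=0):
    """Fraction of trials in which n uniform points in the unit square
    form a connected graph at radius scale * sqrt(log n / (pi * n)),
    the reference scaling for the connectivity threshold."""
    rng = random.Random(seed)
    r = scale * math.sqrt(math.log(n) / (math.pi * n))
    return sum(connected([(rng.random(), rng.random()) for _ in range(n)], r)
               for _ in range(trials)) / trials
```

Radii well above the reference scaling come out connected in almost every trial, while radii below it almost never do, reflecting the sharp threshold behavior the abstract refers to.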
Information-Theoretic Bounds for Multiround Function Computation in Collocated Networks
We study the limits of communication efficiency for function computation in
collocated networks within the framework of multi-terminal block source coding
theory. With the goal of computing a desired function of sources at a sink,
nodes interact with each other through a sequence of error-free, network-wide
broadcasts of finite-rate messages. For any function of independent sources, we
derive a computable characterization of the set of all feasible message coding
rates - the rate region - in terms of single-letter information measures. We
show that when computing symmetric functions of binary sources, the sink
inevitably learns certain additional information that is not needed to compute
the function. This conceptual understanding leads to new, improved bounds for
the minimum sum-rate, which are shown to be orderwise
better than those based on cut-sets as the network scales. The scaling law of
the minimum sum-rate is explored for different classes of symmetric functions
and source parameters.
Comment: 9 pages. A 5-page version without appendices was submitted to the IEEE International Symposium on Information Theory (ISIT), 2009. This version contains complete proofs as appendices.
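As a concrete (if naive) instance of function computation by network-wide broadcasts, the sketch below computes the parity of binary sources with one one-bit broadcast per node. It is not the paper's rate-optimal coding scheme, just an illustrative baseline; note that the broadcast transcript reveals every prefix parity, strictly more than the function value itself, a toy version of the "additional information" phenomenon described above.

```python
def compute_parity_collocated(bits):
    """Naive multiround scheme in a collocated network: nodes take turns
    making a one-bit network-wide broadcast, each XOR-ing its own source
    bit into the last value it overheard. The final broadcast equals the
    parity of all sources; the sum-rate is exactly len(bits) bits."""
    transcript = []
    running = 0
    for b in bits:
        running ^= b                # node folds its bit into the running parity
        transcript.append(running)  # error-free broadcast heard by everyone
    # The transcript contains every prefix parity, so the sink learns
    # more than the function value under this scheme.
    return transcript[-1], len(transcript)
```

For example, `compute_parity_collocated([1, 0, 1, 1])` returns `(1, 4)`: parity 1 at a sum-rate of 4 bits.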
Towards a Queueing-Based Framework for In-Network Function Computation
We seek to develop network algorithms for function computation in sensor
networks. Specifically, we want dynamic joint aggregation, routing, and
scheduling algorithms that have analytically provable performance benefits due
to in-network computation as compared to simple data forwarding. To this end,
we define a class of functions, the Fully-Multiplexible functions, which
includes parity, MAX, and the k-th order statistic. For
such functions we exactly characterize the maximum achievable refresh rate of
the network in terms of an underlying graph primitive, the min-mincut. In
acyclic wireline networks, we show that the maximum refresh rate is achievable
by a simple algorithm that is dynamic, distributed, and only dependent on local
information. In the case of wireless networks, we provide a MaxWeight-like
algorithm with dynamic flow splitting, which is shown to be throughput-optimal.
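The min-mincut primitive can be evaluated with standard max-flow machinery. A minimal sketch under our reading of the term (the minimum, over source nodes, of the source-to-sink min-cut value), using a textbook Edmonds-Karp max-flow; the graph encoding and function names are ours:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a dict-of-dicts capacity map;
    by max-flow/min-cut, the return value is the s-t min-cut."""
    res = {u: dict(vs) for u, vs in cap.items()}   # residual capacities
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)  # reverse residual edges
    flow = 0
    while True:
        parent = {s: None}                          # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path = []                                   # walk back from t to s
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)       # bottleneck capacity
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

def min_mincut(cap, sources, sink):
    """Minimum over source nodes of the source-to-sink min-cut, the
    graph primitive that bounds the achievable refresh rate."""
    return min(max_flow(cap, s, sink) for s in sources)
```

For instance, with capacities `{'a': {'t': 2, 'b': 1}, 'b': {'t': 1}}`, node `a` has min-cut 3 to the sink `t` but node `b` only 1, so the min-mincut is 1.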
Networked Computing in Wireless Sensor Networks for Structural Health Monitoring
This paper studies the problem of distributed computation over a network of
wireless sensors. While this problem applies to many emerging applications, to
keep our discussion concrete we will focus on sensor networks used for
structural health monitoring. Within this context, the heaviest computation is
to determine the singular value decomposition (SVD) to extract mode shapes
(eigenvectors) of a structure. Compared to collecting raw vibration data and
performing SVD at a central location, computing SVD within the network can
result in significantly lower energy consumption and delay. Using recent
results on decomposing SVD, a well-known centralized operation, into
components, we seek to determine a near-optimal communication structure that
enables the distribution of this computation and the reassembly of the final
results, with the objective of minimizing energy consumption subject to a
computational delay constraint. We show that this reduces to a generalized
clustering problem; a cluster forms a unit on which a component of the overall
computation is performed. We establish that this problem is NP-hard. By
relaxing the delay constraint, we derive a lower bound to this problem. We then
propose an integer linear program (ILP) to solve the constrained problem
exactly as well as an approximate algorithm with a proven approximation ratio.
We further present a distributed version of the approximate algorithm. We
present both simulation and experimentation results to demonstrate the
effectiveness of these algorithms.
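The algebraic fact that makes in-network SVD possible can be sketched in a few lines (a toy illustration on synthetic data, not the paper's communication structure): row blocks of the data matrix held by different clusters can each be compressed to a small S·Vt summary, and stacking the summaries preserves the singular values of the full matrix, since the summaries have the same Gram matrix sum as the original blocks.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 6))   # synthetic vibration data: 40 samples x 6 sensors
A1, A2 = A[:25], A[25:]            # two clusters each hold a block of rows

def summary(block):
    """Compress a row block to its (k x 6) summary S @ Vt. Because
    (S Vt)^T (S Vt) = block^T block, summing over blocks recovers A^T A,
    so stacked summaries have the same singular values as A."""
    _, s, vt = np.linalg.svd(block, full_matrices=False)
    return s[:, None] * vt

merged = np.vstack([summary(A1), summary(A2)])          # 12 x 6 instead of 40 x 6
sv_merged = np.linalg.svd(merged, compute_uv=False)     # spectrum from summaries
sv_direct = np.linalg.svd(A, compute_uv=False)          # spectrum from raw data
```

The two spectra agree to machine precision, which is why each cluster can ship a small summary instead of its raw rows, the source of the energy savings the abstract describes.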
Communication Cost for Updating Linear Functions when Message Updates are Sparse: Connections to Maximally Recoverable Codes
We consider a communication problem in which an update of the source message
needs to be conveyed to one or more distant receivers that are interested in
maintaining specific linear functions of the source message. The setting is one
in which the updates are sparse in nature, and where neither the source nor the
receiver(s) knows the exact difference vector, only the amount of sparsity
present in it. Under this
setting, we are interested in devising linear encoding and decoding schemes
that minimize the communication cost involved. We show that the optimal
solution to this problem is closely related to the notion of maximally
recoverable codes (MRCs), which were originally introduced in the context of
coding for storage systems. In the context of storage, MRCs guarantee optimal
erasure protection when the system is partially constrained to have local
parity relations among the storage nodes. In our problem, we show that optimal
solutions exist if and only if MRCs of certain kind (identified by the desired
linear functions) exist. We consider point-to-point and broadcast versions of
the problem, and identify connections to MRCs under both these settings. For
the point-to-point setting, we show that our linear-encoder based achievable
scheme is optimal even when non-linear encoding is permitted. The theory is
illustrated in the context of updating erasure coded storage nodes. We present
examples based on modern storage codes such as the minimum bandwidth
regenerating codes.
Comment: To appear in IEEE Transactions on Information Theory.
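A toy sketch of the point-to-point setting (ours, not the paper's MRC-based construction; the measurement matrix `H` and the brute-force decoder are illustrative): the source transmits 2s generic linear measurements of its updated message, and the receiver subtracts the corresponding measurements of its stale copy to isolate the s-sparse difference, which it recovers by searching over small supports.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, s = 8, 2
A = rng.standard_normal((3, n))      # linear function the receiver maintains
H = rng.standard_normal((2 * s, n))  # 2s generic rows suffice for s-sparse recovery

def recover_sparse(y, H, s, tol=1e-8):
    """Brute-force recovery of an s-sparse e from y = H e (generic H)."""
    n = H.shape[1]
    if np.linalg.norm(y) < tol:
        return np.zeros(n)
    for k in range(1, s + 1):
        for supp in itertools.combinations(range(n), k):
            cols = H[:, list(supp)]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            if np.linalg.norm(cols @ coef - y) < tol:   # exact fit found
                e = np.zeros(n)
                e[list(supp)] = coef
                return e
    raise ValueError("no s-sparse explanation of the measurements")

x = rng.standard_normal(n)           # receiver's stale copy of the message
e = np.zeros(n)
e[[1, 5]] = [0.7, -1.3]              # unknown 2-sparse update
x_new = x + e                        # source's updated message

y = H @ x_new                        # source sends only 2s real symbols
e_hat = recover_sparse(y - H @ x, H, s)  # receiver isolates H e and inverts it
f_new = A @ (x + e_hat)              # receiver's updated function value A x'
```

The communication cost here is 2s symbols rather than n, which is the kind of saving the sparse-update setting is after; the paper's contribution is to characterize when such schemes are optimal via maximally recoverable codes.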