Fundamentals of Large Sensor Networks: Connectivity, Capacity, Clocks and Computation
Sensor networks potentially feature large numbers of nodes that can sense
their environment over time, communicate with each other over a wireless
network, and process information. They differ from data networks in that the
network as a whole may be designed for a specific application. We study the
theoretical foundations of such large-scale sensor networks, addressing four
fundamental issues: connectivity, capacity, clocks, and function computation.
To begin with, a sensor network must be connected so that information can
indeed be exchanged between nodes. The connectivity graph of an ad-hoc network
is modeled as a random graph and the critical range for asymptotic connectivity
is determined, as well as the critical number of neighbors that a node needs to
connect to. Next, given connectivity, we address the issue of how much data can
be transported over the sensor network. We present fundamental bounds on
capacity under several models, as well as architectural implications for how
wireless communication should be organized.
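As a rough, self-contained illustration of the connectivity threshold (not code from the paper, and with illustrative parameter choices), the following Python sketch places n uniformly random nodes in the unit square, connects pairs within range r(n) = sqrt((log n + c) / (pi n)), the Gupta-Kumar critical scaling, and estimates the probability that the resulting graph is connected for several values of c.

import math
import random

def connected(points, r):
    """Check whether the geometric graph with connection radius r is connected,
    using a simple union-find over all point pairs (O(n^2), fine for a sketch)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    r2 = r * r
    for i in range(n):
        xi, yi = points[i]
        for j in range(i + 1, n):
            xj, yj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= r2:
                union(i, j)
    return len({find(i) for i in range(n)}) == 1

def empirical_connectivity(n=300, c=2.0, trials=30):
    """Estimate P(connected) at the critical scaling pi * r^2 * n = log(n) + c,
    i.e. r(n) = sqrt((log n + c) / (pi * n))."""
    r = math.sqrt((math.log(n) + c) / (math.pi * n))
    hits = sum(
        connected([(random.random(), random.random()) for _ in range(n)], r)
        for _ in range(trials)
    )
    return hits / trials

if __name__ == "__main__":
    # The probability of connectivity should rise toward 1 as c grows and drop
    # as c becomes negative, illustrating the sharpness of the threshold.
    for c in (-2.0, 0.0, 2.0, 4.0):
        print(f"c = {c:+.1f}: P(connected) ~ {empirical_connectivity(c=c):.2f}")

The estimated probability climbing toward 1 as c grows is the sharp-threshold behavior referred to above.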
Temporal information is important both for the applications of sensor
networks and for their operation. We present fundamental bounds on the
synchronizability of clocks in networks, and we present and analyze
algorithms for clock synchronization. Finally, we turn to the task that
sensor networks are designed for: gathering relevant information. This
requires studying optimal strategies for in-network aggregation of data, so
that a composite function of the sensor measurements can be computed
reliably, as well as the complexity of doing so. We address how such
computation can be performed efficiently in a sensor network, and give
algorithms for doing so, for some classes of functions.
Comment: 10 pages, 3 figures, Submitted to the Proceedings of the IEEE
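To make in-network function computation concrete, here is a minimal, hypothetical sketch (the node names and tree layout are invented for illustration, not taken from the paper) in which each node of a spanning tree combines its own measurement with its children's partial aggregates and forwards a single (sum, count) summary, so the sink recovers the network-wide average with constant traffic per link.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    """A sensor node in a spanning tree rooted at the sink."""
    measurement: float
    children: List["Node"] = field(default_factory=list)

def aggregate(node: Node) -> Tuple[float, int]:
    """Return (sum, count) of all measurements in this subtree.

    Each node forwards a constant-size summary to its parent, so traffic per
    link is O(1) regardless of subtree size -- the essence of in-network
    aggregation for divisible functions such as the average."""
    total, count = node.measurement, 1
    for child in node.children:
        s, c = aggregate(child)
        total += s
        count += c
    return total, count

if __name__ == "__main__":
    # A toy 3-level tree; the sink computes the network-wide average from the
    # single summary produced by each of its children.
    leaves = [Node(m) for m in (21.0, 22.5, 19.8, 20.4)]
    relays = [Node(20.1, leaves[:2]), Node(23.3, leaves[2:])]
    sink = Node(22.0, relays)
    s, c = aggregate(sink)
    print(f"network average = {s / c:.2f} from {c} measurements")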
Decoding communities in networks
According to a recent information-theoretical proposal, the problem of
defining and identifying communities in networks can be interpreted as a
classical communication task over a noisy channel: memberships of nodes are
information bits erased by the channel, edges and non-edges in the network are
parity bits introduced by the encoder but degraded through the channel, and a
community identification algorithm is a decoder. The interpretation is
perfectly equivalent to the one at the basis of well-known statistical
inference algorithms for community detection. The only difference in the
interpretation is that a noisy channel replaces a stochastic network model.
However, the different perspective gives the opportunity to take advantage of
the rich set of tools of coding theory to generate novel insights on the
problem of community detection. In this paper, we illustrate two main
applications of standard coding-theoretical methods to community detection.
First, we leverage a state-of-the-art decoding technique to generate a family
of quasi-optimal community detection algorithms. Second, and more
importantly, we show that Shannon's noisy-channel coding theorem can be
invoked to establish a lower bound, here named the decodability bound, on
the maximum amount of noise tolerable by an ideal decoder that still
achieves perfect detection of communities. When computed for
well-established synthetic benchmarks, the decodability bound accurately
explains the performance achieved by the best existing community detection
algorithms, indicating that little room remains for their improvement.
Comment: 9 pages, 5 figures + Appendix
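The decodability bound itself is derived in the paper; as a loosely related illustration only, the sketch below checks the standard Kesten-Stigum detectability condition |c_in - c_out| > q * sqrt(c) for the symmetric q-block stochastic block model (Decelle et al.), which plays an analogous role: beyond a critical level of "noise" (between-community edges), no algorithm can recover the planted communities better than chance.

import math

def detectable(c_in: float, c_out: float, q: int = 2) -> bool:
    """Kesten-Stigum style detectability check for the symmetric q-block SBM.

    c_in / c_out are the expected within- / between-community degrees
    (c_ab = N * p_ab).  Detection better than chance is possible when
    |c_in - c_out| > q * sqrt(c), with c the average degree."""
    c = (c_in + (q - 1) * c_out) / q
    return abs(c_in - c_out) > q * math.sqrt(c)

if __name__ == "__main__":
    # Sweep the "noise" (between-community degree) at fixed average degree 8:
    # past some c_out the planted structure becomes undetectable, much like a
    # channel used above capacity.
    for c_out in (1.0, 3.0, 5.0, 7.0):
        c_in = 16.0 - c_out   # keeps average degree (c_in + c_out) / 2 = 8
        print(f"c_in={c_in:.0f}, c_out={c_out:.0f} -> detectable: {detectable(c_in, c_out)}")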
Signal propagation and noisy circuits
The information carried by a signal decays when the signal is corrupted by random noise. This occurs when a message is transmitted over a noisy channel, as well as when a noisy component performs computation. We first study this signal decay in the context of communication and obtain a tight bound on the rate at which information decreases as a signal crosses a noisy channel. We then use this information-theoretic result to obtain depth lower bounds in the noisy circuit model of computation defined by von Neumann. In this model, each component fails (produces 1 instead of 0 or vice versa) independently with a fixed probability, and yet the output of the circuit is required to be correct with high probability. Von Neumann showed how to construct circuits in this model that reliably compute a function and are no more than a constant factor deeper than noiseless circuits for the function. We provide a lower bound on the multiplicative increase in circuit depth necessary for reliable computation, and an upper bound on the maximum level of noise at which reliable computation is possible.
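A standard quantitative form of this decay, for a binary symmetric channel with crossover probability eps, is that the mutual information after a crossing is at most (1 - 2*eps)^2 times the information before it. The sketch below (a numerical illustration written for this listing, not code from the paper) pushes a Bernoulli(p) bit through a cascade of BSCs and compares the measured information with that per-stage bound.

import math

def h(p: float) -> float:
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_info_after_k_bscs(p: float, eps: float, k: int) -> float:
    """I(X; Y_k) for X ~ Bernoulli(p) passed through k cascaded BSC(eps).

    A cascade of k BSC(eps) channels is itself a BSC with crossover
    probability (1 - (1 - 2*eps)**k) / 2."""
    eps_k = (1 - (1 - 2 * eps) ** k) / 2
    py1 = p * (1 - eps_k) + (1 - p) * eps_k   # P(Y_k = 1)
    return h(py1) - h(eps_k)                  # I = H(Y_k) - H(Y_k | X)

if __name__ == "__main__":
    p, eps = 0.5, 0.1
    contraction = (1 - 2 * eps) ** 2
    prev = h(p)                               # I(X; X) = H(X)
    for k in range(1, 6):
        cur = mutual_info_after_k_bscs(p, eps, k)
        # Each crossing shrinks the information to at most (1 - 2*eps)^2 of
        # its previous value; the printed bound is that per-stage ceiling.
        print(f"k={k}: I={cur:.4f} bits, bound={contraction * prev:.4f}")
        prev = cur

Each printed information value stays below the corresponding bound, and the widening gap over successive stages shows how quickly the signal fades as it crosses more noisy components.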