Energy Harvesting Wireless Sensor Networks: Delay Analysis Considering Energy Costs of Sensing and Transmission
Energy harvesting (EH) provides a means of greatly enhancing the lifetime of
wireless sensor nodes. However, the randomness inherent in the EH process may
cause significant delay for performing sensing operation and transmitting the
sensed information to the sink. Unlike most existing studies on the delay
performance of EH sensor networks, where only the energy consumption of
transmission is considered, we consider the energy costs of both sensing and
transmission. Specifically, we consider an EH sensor that monitors some
environmental property and adopts a harvest-then-use protocol to perform
sensing and transmission. To comprehensively study the delay performance, we
consider two complementary metrics and analytically derive their statistics:
(i) update age - measuring the time taken from when information is obtained by
the sensor to when the sensed information is successfully transmitted to the
sink, i.e., how timely the updated information at the sink is, and (ii) update
cycle - measuring the time duration between two consecutive successful
transmissions, i.e., how frequently the information at the sink is updated. Our
results show that the consideration of sensing energy cost leads to an
important tradeoff between the two metrics: more frequent updates result in
less timely information available at the sink.
(Comment: submitted for possible journal publication)
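The harvest-then-use behaviour described in this abstract can be illustrated with a minimal discrete-time simulation. All parameters below (harvest probability, sensing and transmission costs, attempt success probability) are hypothetical choices for illustration, not the paper's model:

```python
import random

def simulate(slots=100_000, harvest_p=0.3, cost_sense=2, cost_tx=3,
             tx_success=0.8, seed=0):
    """Toy harvest-then-use sketch (illustrative parameters): one energy
    unit arrives per slot with prob. harvest_p; sensing costs cost_sense
    units; each transmission attempt costs cost_tx and succeeds with
    prob. tx_success."""
    rng = random.Random(seed)
    energy = 0
    sensed_at = None            # slot at which the current sample was sensed
    last_delivery = None
    ages, cycles = [], []
    for t in range(slots):
        energy += rng.random() < harvest_p
        if sensed_at is None and energy >= cost_sense:
            energy -= cost_sense       # spend energy on sensing
            sensed_at = t
        elif sensed_at is not None and energy >= cost_tx:
            energy -= cost_tx          # spend energy on one transmission attempt
            if rng.random() < tx_success:
                ages.append(t - sensed_at)            # update age
                if last_delivery is not None:
                    cycles.append(t - last_delivery)  # update cycle
                last_delivery = t
                sensed_at = None
    return sum(ages) / len(ages), sum(cycles) / len(cycles)

age, cycle = simulate()
print(f"mean update age {age:.2f} slots, mean update cycle {cycle:.2f} slots")
```

In this toy model each update cycle exceeds the corresponding update age by the time spent harvesting the sensing energy, which is precisely the coupling between the two metrics that the sensing cost introduces.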
Superprocesses as models for information dissemination in the Future Internet
The Future Internet will be composed of a tremendous number of potentially
interconnected people and devices, offering a variety of services, applications
and communication opportunities. In particular, short-range wireless
communications, which are available on almost all portable devices, will enable
the formation of the largest cloud of interconnected, smart computing devices
mankind has ever dreamed about: the Proximate Internet. In this paper, we
consider superprocesses, more specifically super Brownian motion, as a suitable
mathematical model to analyse a basic problem of information dissemination
arising in the context of Proximate Internet. The proposed model provides a
promising analytical framework to both study theoretical properties related to
the information dissemination process and to devise efficient and reliable
simulation schemes for very large systems.
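A discrete caricature of the model can be obtained from branching Brownian motion, of which super Brownian motion is a scaling limit. The step size, critical binary branching rule, and initial particle count below are illustrative assumptions, not the paper's exact construction:

```python
import random

def branching_brownian(steps=50, sigma=1.0, n0=200, seed=1):
    """Toy 1-D branching Brownian motion (a discrete caricature of the
    super-Brownian limit): each particle takes a Gaussian step, then either
    dies or splits into two with equal probability (critical branching)."""
    rng = random.Random(seed)
    particles = [0.0] * n0          # initial mass concentrated at the origin
    for _ in range(steps):
        nxt = []
        for x in particles:
            x += rng.gauss(0.0, sigma)
            if rng.random() < 0.5:  # split into two offspring
                nxt += [x, x]
            # otherwise the particle dies
        particles = nxt
        if not particles:           # population extinct
            break
    return particles

cloud = branching_brownian()
print(len(cloud), "surviving particles")
```

The spatial spread of the surviving cloud is the quantity that stands in for how far the disseminated information has reached.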
Information Transmission under Random Emission Constraints
We model the transmission of a message on the complete graph with n vertices
and limited resources. The vertices of the graph represent servers that may
broadcast the message at random. Each server has a random emission capital that
decreases at each emission. Quantities of interest are the number of servers
that receive the information before the capital of all the informed servers is
exhausted and the exhaustion time. We establish limit theorems (law of large
numbers, central limit theorem and large deviation principle), as n tends to
infinity, for the proportion of visited vertices before exhaustion and for the
total duration. The analysis relies on a construction of the transmission
procedure as a dynamical selection of successful nodes in a Galton-Watson tree
with respect to the success epochs of the coupon collector problem.
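A short simulation conveys the emission dynamics. The capital distribution (uniform on {1, ..., capital_max}) and the uniform scheduling of which informed server emits next are assumptions for illustration, not the paper's exact model:

```python
import random

def transmit(n=10_000, capital_max=3, seed=2):
    """Toy run of the random-emission model on the complete graph with n
    vertices (illustrative): each informed server draws an i.i.d. emission
    capital; while any capital remains, a randomly chosen informed server
    emits to a uniformly chosen vertex, spending one capital unit."""
    rng = random.Random(seed)
    informed = {0}
    capital = {0: rng.randint(1, capital_max)}
    active = [0]                     # informed servers with capital left
    emissions = 0
    while active:
        s = active[rng.randrange(len(active))]
        target = rng.randrange(n)    # broadcast lands on a uniform vertex
        emissions += 1
        capital[s] -= 1
        if capital[s] == 0:
            active.remove(s)         # this server's capital is exhausted
        if target not in informed:
            informed.add(target)
            capital[target] = rng.randint(1, capital_max)
            active.append(target)
    return len(informed) / n, emissions

frac, total = transmit()
print(f"fraction informed before exhaustion: {frac:.3f}")
```

The returned fraction of visited vertices and the emission count are the two quantities for which the abstract's limit theorems are stated.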
Uncovering the Temporal Dynamics of Diffusion Networks
Time plays an essential role in the diffusion of information, influence and
disease over networks. In many cases we only observe when a node copies
information, makes a decision or becomes infected -- but the connectivity,
transmission rates between nodes and transmission sources are unknown.
Inferring the underlying dynamics is of outstanding interest since it enables
forecasting, influencing and retarding infections, broadly construed. To this
end, we model diffusion processes as discrete networks of continuous temporal
processes occurring at different rates. Given cascade data -- observed
infection times of nodes -- we infer the edges of the global diffusion network
and estimate the transmission rates of each edge that best explain the observed
data. The optimization problem is convex. The model naturally (without
heuristics) imposes sparse solutions and requires no parameter tuning. The
problem decouples into a collection of independent smaller problems, thus
scaling easily to networks on the order of hundreds of thousands of nodes.
Experiments on real and synthetic data show that our algorithm both recovers
the edges of diffusion networks and accurately estimates their transmission
rates from cascade data.
(Comment: To appear in the 28th International Conference on Machine Learning (ICML), 2011. Website: http://www.stanford.edu/~manuelgr/netrate)
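A toy version of the inference problem, restricted to a single target node under an exponential transmission model, can be solved by projected gradient descent on the convex negative log-likelihood. The helper `fit_rates` and the synthetic cascades below are illustrative assumptions, not the authors' NetRate implementation:

```python
def fit_rates(cascades, parents, T, lr=0.01, iters=2000):
    """Projected gradient descent on the (convex) negative log-likelihood
    of an exponential transmission model for one target node v. Each
    cascade is (parent_times, t_v); t_v is None if v stayed uninfected by
    the observation horizon T. alpha[j] is the rate on edge j -> v."""
    alpha = {j: 0.1 for j in parents}
    for _ in range(iters):
        grad = {j: 0.0 for j in parents}
        for par_times, t_v in cascades:
            end = T if t_v is None else t_v
            # Hazard at the infection time: sum of the active edge rates.
            hz = sum(alpha[j] for j, t in par_times.items() if t < end)
            for j, t_j in par_times.items():
                if t_j < end:
                    grad[j] += end - t_j              # survival term
                    if t_v is not None and hz > 0:
                        grad[j] -= 1.0 / hz           # hazard term
        for j in parents:                             # project onto alpha >= 0
            alpha[j] = max(0.0, alpha[j] - lr * grad[j])
    return alpha

# Synthetic cascades: edge A->v is real (v is infected soon after A), while
# edge B->v is spurious (v is never infected when only B is active).
cascades = ([({'A': 0.0}, t) for t in (0.5, 1.2, 0.8, 1.0, 0.6)]
            + [({'B': 0.0}, None)] * 5)
alpha = fit_rates(cascades, parents=('A', 'B'), T=10.0)
print(alpha)   # alpha['A'] converges near 5/4.1 ~ 1.22; alpha['B'] is driven to 0
```

The nonnegativity projection is what makes the recovered network sparse without any explicit heuristic: spurious edges are pushed to rate zero by their survival terms.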
Networked Slepian-Wolf: theory, algorithms, and scaling laws
Consider a set of correlated sources located at the nodes of a network, and a set of sinks that are the destinations for some of the sources. The minimization of cost functions which are the product of a function of the rate and a function of the path weight is considered, for both the data-gathering scenario, which is relevant in sensor networks, and general traffic matrices, relevant for general networks. The minimization is achieved by jointly optimizing a) the transmission structure, which is shown to consist in general of a superposition of trees, and b) the rate allocation across the source nodes, which is done by Slepian-Wolf coding. The overall minimization can be achieved in two concatenated steps. First, the optimal transmission structure is found, which in general amounts to finding a Steiner tree, and second, the optimal rate allocation is obtained by solving an optimization problem with cost weights determined by the given optimal transmission structure, and with linear constraints given by the Slepian-Wolf rate region. For the case of data gathering, the optimal transmission structure is fully characterized and a closed-form solution for the optimal rate allocation is provided. For the general case of an arbitrary traffic matrix, the problem of finding the optimal transmission structure is NP-complete. For large networks, in some simplified scenarios, the total costs associated with Slepian-Wolf coding and explicit communication (conditional encoding based on explicitly communicated side information) are compared. Finally, the design of decentralized algorithms for the optimal rate allocation is analyzed.
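The Slepian-Wolf rate region that supplies the linear constraints in the rate-allocation step can be checked directly in the two-source case. The joint distribution below is a hypothetical example of two correlated binary sources:

```python
import math

def H(probs):
    """Entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint pmf of two correlated binary sources (illustrative).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

H12 = H(joint.values())                                   # joint entropy
H1 = H([joint[0, 0] + joint[0, 1], joint[1, 0] + joint[1, 1]])
H2 = H([joint[0, 0] + joint[1, 0], joint[0, 1] + joint[1, 1]])
H1_given_2 = H12 - H2                                     # conditional entropies
H2_given_1 = H12 - H1

def in_sw_region(r1, r2):
    """Membership test for the two-source Slepian-Wolf rate region."""
    return r1 >= H1_given_2 and r2 >= H2_given_1 and r1 + r2 >= H12

# Corner point: source 1 codes alone at H1, source 2 conditions on it.
print(in_sw_region(H1, H2_given_1))          # True
# Both sources at their conditional entropies: sum falls below H12.
print(in_sw_region(H1_given_2, H2_given_1))  # False
```

The corner points of this region are exactly the allocations that arise when sources are ordered along the transmission structure and each conditions on the ones already decoded.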
On the Performance of Short Block Codes over Finite-State Channels in the Rare-Transition Regime
As the mobile application landscape expands, wireless networks are tasked
with supporting different connection profiles, including real-time traffic and
delay-sensitive communications. Among many ensuing engineering challenges is
the need to better understand the fundamental limits of forward error
correction in non-asymptotic regimes. This article characterizes the
performance of random block codes over finite-state channels and evaluates
their queueing performance under maximum-likelihood decoding. In particular,
classical results from information theory are revisited in the context of
channels with rare transitions, and bounds on the probabilities of decoding
failure are derived for random codes. This creates an analysis framework where
channel dependencies within and across codewords are preserved. Such results
are subsequently integrated into a queueing problem formulation. For instance,
it is shown that, for random coding on the Gilbert-Elliott channel, the
performance analysis based on upper bounds on error probability provides very
good estimates of system performance and optimum code parameters. Overall, this
study offers new insights about the impact of channel correlation on the
performance of delay-aware, point-to-point communication links. It also
provides novel guidelines on how to select code rates and block lengths for
real-time traffic over wireless communication infrastructures.
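The Gilbert-Elliott channel underlying this analysis is straightforward to simulate. The transition and crossover probabilities below are illustrative picks for the rare-transition regime (small state-transition probabilities relative to the block length), not values from the article:

```python
import random

def gilbert_elliott(nbits, p_gb=0.01, p_bg=0.1, eps_g=0.001, eps_b=0.2, seed=3):
    """Simulate bit errors on a Gilbert-Elliott channel (illustrative
    parameters): a two-state Markov chain (Good/Bad) with slow transitions,
    and a state-dependent bit-flip probability (eps_g in Good, eps_b in Bad)."""
    rng = random.Random(seed)
    state = 'G'
    errors = []
    for _ in range(nbits):
        if state == 'G':
            if rng.random() < p_gb:   # rare Good -> Bad transition
                state = 'B'
        else:
            if rng.random() < p_bg:   # Bad -> Good recovery
                state = 'G'
        eps = eps_g if state == 'G' else eps_b
        errors.append(rng.random() < eps)
    return errors

errs = gilbert_elliott(100_000)
print("overall bit-error rate:", sum(errs) / len(errs))
```

Because the state persists across many bits, errors arrive in bursts, which is why channel dependencies within and across codewords matter for the block-code and queueing analysis.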