4 research outputs found
Impromptu Deployment of Wireless Relay Networks: Experiences Along a Forest Trail
We are motivated by the problem of impromptu or as-you-go deployment of
wireless sensor networks. As an application example, a person, starting from a
sink node, walks along a forest trail, makes link quality measurements (with
the previously placed nodes) at equally spaced locations, and deploys relays at
some of these locations, so as to connect a sensor placed at some a priori
unknown point on the trail with the sink node. In this paper, we report our
experimental experiences with some as-you-go deployment algorithms. Two
algorithms are based on Markov decision process (MDP) formulations; these
require a radio propagation model. We also study purely measurement based
strategies: one heuristic that is motivated by our MDP formulations, one
asymptotically optimal learning algorithm, and one inspired by a popular
heuristic. We extract a statistical model of the propagation along a forest
trail from raw measurement data, implement the algorithms experimentally in the
forest, and compare them. The results provide useful insights regarding the
choice of the deployment algorithm and its parameters, and also demonstrate the
necessity of a proper theoretical formulation.
Comment: 7 pages, accepted in IEEE MASS 201
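The as-you-go procedure described in the abstract (walk in fixed steps, measure the link to the previously placed node, decide whether to place a relay) can be sketched with a toy threshold rule. The log-distance propagation model, the RSSI threshold, and all parameter values below are illustrative assumptions, not the paper's fitted forest-trail model or its MDP-based policies:

```python
import math
import random

def simulate_as_you_go(trail_length_steps, rssi_threshold_dbm=-85.0,
                       path_loss_exponent=4.0, shadowing_sigma_db=8.0,
                       step_m=20, tx_power_dbm=0.0, ref_loss_db=40.0):
    """Toy as-you-go deployment: at each step along the trail, measure the
    link to the previously placed node (log-distance path loss plus
    log-normal shadowing) and place a relay when the RSSI drops below a
    threshold.  Returns the list of relay positions (in metres)."""
    placements = []
    last_node_pos = 0  # the sink sits at the start of the trail
    for step in range(1, trail_length_steps + 1):
        pos = step * step_m
        d = pos - last_node_pos
        # log-distance path loss with log-normal shadowing, in dB
        path_loss = (ref_loss_db
                     + 10 * path_loss_exponent * math.log10(d / step_m)
                     + random.gauss(0, shadowing_sigma_db))
        rssi = tx_power_dbm - path_loss
        if rssi < rssi_threshold_dbm:
            placements.append(pos)
            last_node_pos = pos
    return placements
```

A pure threshold rule like this corresponds to the simple measurement-based heuristics the paper compares against; the MDP-derived policies adapt the decision to the measured link quality and the distance walked since the last placement.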
Sequential Decision Algorithms for Measurement-Based Impromptu Deployment of a Wireless Relay Network along a Line
We are motivated by the need, in some applications, for impromptu or
as-you-go deployment of wireless sensor networks. A person walks along a line,
starting from a sink node (e.g., a base-station), and proceeds towards a source
node (e.g., a sensor) which is at an a priori unknown location. At equally
spaced locations, he makes link quality measurements to the previous relay, and
deploys relays at some of these locations, with the aim of connecting the source
to the sink by a multihop wireless path. In this paper, we consider two
approaches for impromptu deployment: (i) the deployment agent can only move
forward (which we call a pure as-you-go approach), and (ii) the deployment
agent can make measurements over several consecutive steps before selecting a
placement location among them (which we call an explore-forward approach). We
consider a light traffic regime, and formulate the problem as a Markov decision
process, where the trade-off is among the power used by the nodes, the outage
probabilities in the links, and the number of relays placed per unit distance.
We obtain the structures of the optimal policies for the pure as-you-go
approach as well as for the explore-forward approach. We also consider natural
heuristic algorithms, for comparison. Numerical examples show that the
explore-forward approach significantly outperforms the pure as-you-go approach.
Next, we propose two learning algorithms for the explore-forward approach,
based on Stochastic Approximation, which asymptotically converge to the set of
optimal policies, without using any knowledge of the radio propagation model.
We demonstrate numerically that the learning algorithms can converge (as
deployment progresses) to the set of optimal policies reasonably fast and,
hence, can be practical, model-free algorithms for deployment over large
regions.
Comment: 29 pages. arXiv admin note: text overlap with arXiv:1308.068
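The explore-forward idea, measuring over several consecutive candidate locations before committing to one, can be illustrated with a minimal greedy sketch. The callback `measure_link_cost` and the fixed window size are hypothetical simplifications; the paper's explore-forward policies are derived from an MDP and learned via stochastic approximation, not from this greedy rule:

```python
def explore_forward(measure_link_cost, num_steps, window=5):
    """Toy explore-forward selection: at each stage, evaluate `window`
    consecutive candidate locations ahead of the last placed node and
    place the next relay at the one with the lowest measured link cost.
    `measure_link_cost(pos)` is a hypothetical callback returning, e.g.,
    the transmit power needed for an acceptable link from position `pos`
    back to the previous node."""
    placements = []
    pos = 0  # position (in steps) of the last placed node; sink at 0
    while pos + window <= num_steps:
        candidates = range(pos + 1, pos + window + 1)
        best = min(candidates, key=measure_link_cost)
        placements.append(best)
        pos = best
    return placements
```

The contrast with pure as-you-go is visible in the structure: here a placement decision is deferred until all candidates in the window have been measured, which is why the abstract reports that explore-forward significantly outperforms the purely forward approach.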
Measurement Based Impromptu Deployment of a Multi-Hop Wireless Relay Network
We study the problem of optimal sequential ("as-you-go") deployment of wireless relay nodes, as a person walks along a line of random length (with a known distribution). The objective is to create an impromptu multihop wireless network for connecting a packet source to be placed at the end of the line with a sink node located at the starting point, to operate in the light traffic regime. In walking from the sink towards the source, at every step, measurements yield the transmit powers required to establish links to one or more previously placed nodes. Based on these measurements, at every step, a decision is made to place a relay node, the overall system objective being to minimize a linear combination of the expected sum power (or the expected maximum power) required to deliver a packet from the source to the sink node and the expected number of relay nodes deployed. For each of these two objectives, two different relay selection strategies are considered: (i) each relay communicates with the sink via its immediate previous relay, (ii) the communication path can skip some of the deployed relays. With appropriate modeling assumptions, we formulate each of these problems as a Markov decision process (MDP). We provide the optimal policy structures for all these cases, and provide illustrations of the policies and their performance, via numerical results, for some typical parameters.
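The objective stated in this abstract, a linear combination of the expected sum (or maximum) transmit power and the expected number of relays, reduces for a single realized deployment to a simple cost expression. The trade-off weight `xi` below is a hypothetical parameter name; the abstract does not fix a symbol for it:

```python
def deployment_cost(link_powers_mw, num_relays, xi=1.0, use_max=False):
    """Cost of one realized deployment: the sum (or, with use_max=True,
    the maximum) of the per-link transmit powers, plus xi times the
    number of relays placed.  The MDP minimizes the expectation of this
    quantity over the random trail length and link measurements."""
    power_term = max(link_powers_mw) if use_max else sum(link_powers_mw)
    return power_term + xi * num_relays
```

Increasing `xi` penalizes relay placements more heavily, so the optimal policy tolerates longer (higher-power or higher-outage) links; this is the trade-off the optimal policy structures in the paper characterize.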