
    Impromptu Deployment of Wireless Relay Networks: Experiences Along a Forest Trail

    We are motivated by the problem of impromptu or as-you-go deployment of wireless sensor networks. As an application example, a person, starting from a sink node, walks along a forest trail, makes link quality measurements (with the previously placed nodes) at equally spaced locations, and deploys relays at some of these locations, so as to connect a sensor placed at some a priori unknown point on the trail with the sink node. In this paper, we report our experimental experiences with some as-you-go deployment algorithms. Two algorithms are based on Markov decision process (MDP) formulations; these require a radio propagation model. We also study purely measurement-based strategies: one heuristic motivated by our MDP formulations, one asymptotically optimal learning algorithm, and one inspired by a popular heuristic. We extract a statistical model of the propagation along a forest trail from raw measurement data, implement the algorithms experimentally in the forest, and compare them. The results provide useful insights regarding the choice of the deployment algorithm and its parameters, and also demonstrate the necessity of a proper theoretical formulation. Comment: 7 pages, accepted in IEEE MASS 201
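The "statistical model of the propagation" mentioned above could, for instance, be a log-distance path-loss fit to the raw measurements; the abstract does not specify the model, so the parameters, data, and fitting code below are purely illustrative assumptions:

```python
import math
import random

# Synthetic (distance_m, received_power_dBm) pairs. In a real deployment these
# come from field measurements along the trail; here they are simulated from an
# assumed log-distance path-loss model with log-normal shadowing. The
# parameters (eta=2.5, sigma=4 dB, P0=-40 dBm) are illustrative only.
random.seed(1)
TRUE_ETA, SIGMA, P0 = 2.5, 4.0, -40.0
samples = [(d, P0 - 10 * TRUE_ETA * math.log10(d) + random.gauss(0, SIGMA))
           for d in range(10, 200, 10)]

def fit_log_distance(data):
    """Least-squares fit of P(d) = P0 - 10*eta*log10(d), a standard way to
    turn raw RSSI measurements into a statistical propagation model."""
    xs = [math.log10(d) for d, _ in data]
    ys = [p for _, p in data]
    n = len(data)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept, -slope / 10.0   # estimated P0 and eta

p0_hat, eta_hat = fit_log_distance(samples)
print(round(p0_hat, 1), round(eta_hat, 2))
```

With enough measurement points, the fitted exponent and intercept recover the assumed model closely; the residual spread estimates the shadowing variance.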

    Sequential Decision Algorithms for Measurement-Based Impromptu Deployment of a Wireless Relay Network along a Line

    We are motivated by the need, in some applications, for impromptu or as-you-go deployment of wireless sensor networks. A person walks along a line, starting from a sink node (e.g., a base-station), and proceeds towards a source node (e.g., a sensor) which is at an a priori unknown location. At equally spaced locations, he makes link quality measurements to the previously placed relay, and deploys relays at some of these locations, with the aim of connecting the source to the sink by a multihop wireless path. In this paper, we consider two approaches for impromptu deployment: (i) the deployment agent can only move forward (which we call a pure as-you-go approach), and (ii) the deployment agent can make measurements over several consecutive steps before selecting a placement location among them (which we call an explore-forward approach). We consider a light traffic regime, and formulate the problem as a Markov decision process, where the trade-off is among the power used by the nodes, the outage probabilities in the links, and the number of relays placed per unit distance. We obtain the structures of the optimal policies for the pure as-you-go approach as well as for the explore-forward approach. We also consider natural heuristic algorithms, for comparison. Numerical examples show that the explore-forward approach significantly outperforms the pure as-you-go approach. Next, we propose two learning algorithms for the explore-forward approach, based on Stochastic Approximation, which asymptotically converge to the set of optimal policies, without using any knowledge of the radio propagation model. We demonstrate numerically that the learning algorithms can converge (as deployment progresses) to the set of optimal policies reasonably fast and, hence, can be practical, model-free algorithms for deployment over large regions. Comment: 29 pages. arXiv admin note: text overlap with arXiv:1308.068
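The explore-forward trade-off described above (power vs. link outage vs. relays per unit distance) can be illustrated with a toy cost model; the weights, step size, window length, and the synthetic measurement function below are illustrative assumptions, not the paper's actual formulation:

```python
import random

# Illustrative cost weights (assumptions, not the paper's values):
# XI_P weights transmit power, XI_O weights link outage probability,
# and XI_R charges for each relay placed.
XI_P, XI_O, XI_R = 1.0, 10.0, 2.0
STEP = 20.0          # meters between candidate placement locations
WINDOW = 4           # explore-forward: measure this many steps ahead

def link_cost(power, outage_prob):
    """Per-link cost combining power and outage, plus the relay charge."""
    return XI_P * power + XI_O * outage_prob + XI_R

def measure(distance):
    """Stand-in for a real link-quality measurement: outage grows with
    distance (purely synthetic model for illustration)."""
    return min(1.0, 0.01 * (distance / STEP) ** 2 + random.uniform(0, 0.05))

def explore_forward_placement(prev_relay_pos):
    """Measure WINDOW consecutive candidate locations past the previous
    relay, then place the next relay at the one with the lowest sampled
    cost -- the essence of the explore-forward approach."""
    candidates = []
    for k in range(1, WINDOW + 1):
        pos = prev_relay_pos + k * STEP
        outage = measure(pos - prev_relay_pos)
        candidates.append((link_cost(power=1.0, outage_prob=outage), pos))
    return min(candidates)[1]

random.seed(0)
print(explore_forward_placement(0.0))
```

A pure as-you-go agent, by contrast, would have to commit to placing (or not placing) a relay at each step before seeing the measurements further ahead, which is why it is outperformed in the paper's numerical examples.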

    Decision-centric resource-efficient semantic information management

    For the past few decades, we have put significant effort into building tools that extend our senses and enhance our perception, be it traditional sensor networks or the more recent Internet-of-Things. With such systems, the lasting drive for efficiency and effectiveness has pushed the research community to keep seeking smarter ways to manage bigger data with lower resource consumption, especially in resource-poor environments such as post-disaster response and recovery scenarios. In this dissertation, we build on state-of-the-art studies and develop a set of techniques, as well as a holistic information management system, that not only account for data-level characteristics but, more importantly, exploit higher-level semantic features of the information, and the still-higher-level decision logic structures, to achieve effective and efficient data acquisition and dissemination. We first introduce a data prioritization algorithm that accounts for overlaps among data sources to maximize information delivery. We then build a set of techniques that directly optimize the efficiency of decision making, as opposed to focusing only on traditional, lower-level communication metrics, such as total network throughput or average latency. In developing these algorithms, we view decisions as choices of a course of action, based on several logical predicates. Making a decision is thus reduced to evaluating a Boolean expression on these predicates; for example, "if it is raining, I will carry an umbrella." To evaluate a predicate, evidence is needed (e.g., a picture of the weather). Data objects, retrieved from sensors, supply the needed evidence for predicate evaluation.
By using a decision-making model, our retrieval algorithms can take into consideration historical/domain knowledge, logical dependencies among data items, and information freshness decay, in order to prioritize data transmission so as to minimize the overhead of transferring the information needed by a variety of decision makers, while coping with query-level timeliness requirements, environment dynamics, and system resource limitations. Finally, we present the architecture of a distributed semantic-aware information management system, which we call Athena. We discuss its key design choices and how we incorporate various techniques, such as interest book-keeping and label sharing, to improve information dissemination efficiency in realistic scenarios. For each component, as well as for the whole Athena system, we discuss our implementation and evaluation under realistic settings. Results show that our techniques improve the efficiency of information gathering and delivery in support of post-disaster situation assessment and decision making in the face of various environmental and system constraints.
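The decision model sketched above (decisions as Boolean expressions over evidence-backed predicates) might be rendered, in a deliberately minimal form, as follows; the rain/umbrella rule mirrors the example in the abstract, while the predicate names, the evidence encoding, and the second rule are illustrative assumptions:

```python
# Evidence is a dict mapping data-object names to their retrieved values;
# in the dissertation's setting, such data objects come from sensors.

def is_raining(evidence):
    # A picture of the weather (here reduced to a label) supplies evidence.
    return evidence.get("weather_photo_label") == "rain"

def road_flooded(evidence):
    # Hypothetical second predicate, backed by a water-level sensor reading.
    return evidence.get("water_level_cm", 0) > 30

def carry_umbrella(evidence):
    # A decision is a Boolean expression over predicates:
    # "if it is raining, I will carry an umbrella."
    return is_raining(evidence)

def reroute_convoy(evidence):
    # Decisions may combine several predicates with Boolean connectives.
    return is_raining(evidence) and road_flooded(evidence)

evidence = {"weather_photo_label": "rain", "water_level_cm": 45}
print(carry_umbrella(evidence), reroute_convoy(evidence))
```

Framing decisions this way is what lets the retrieval algorithms prioritize exactly the data objects whose predicates are still unresolved, rather than maximizing raw throughput.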