Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey
Wireless sensor networks (WSNs) consist of autonomous and resource-limited
devices. The devices cooperate to monitor one or more physical phenomena within
an area of interest. WSNs operate as stochastic systems because of randomness
in the monitored environments. For long service time and low maintenance cost,
WSNs require adaptive and robust methods to address data exchange, topology
formulation, resource and power optimization, sensing coverage and object
detection, and security challenges. In these problems, sensor nodes must make
optimized decisions from a set of available strategies to achieve design
goals. This survey reviews numerous applications of the Markov decision process
(MDP) framework, a powerful decision-making tool to develop adaptive algorithms
and protocols for WSNs. Furthermore, various solution methods are discussed and
compared to serve as a guide for using MDPs in WSNs.
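The decision framework the survey reviews can be made concrete with a toy example. The sketch below, with made-up states, transitions, and rewards (none taken from the survey), solves a tiny MDP for a sensor node choosing between sleeping and transmitting, using value iteration:

```python
# Toy MDP for a sensor node: states are battery levels 0 (empty) to 2 (full);
# actions are "sleep" (may harvest energy) and "transmit" (earns data utility,
# drains energy). All numbers are illustrative assumptions.

STATES = [0, 1, 2]
ACTIONS = ["sleep", "transmit"]
GAMMA = 0.9  # discount factor

# P[(state, action)] -> list of (next_state, probability)
P = {
    (0, "sleep"):    [(0, 1.0)],
    (0, "transmit"): [(0, 1.0)],            # no energy: transmission fails
    (1, "sleep"):    [(2, 0.5), (1, 0.5)],  # may harvest energy
    (1, "transmit"): [(0, 1.0)],
    (2, "sleep"):    [(2, 1.0)],
    (2, "transmit"): [(1, 1.0)],
}

def reward(s, a):
    """Transmitting with available energy yields unit data utility."""
    return 1.0 if a == "transmit" and s > 0 else 0.0

def q_value(V, s, a):
    return reward(s, a) + GAMMA * sum(p * V[s2] for s2, p in P[(s, a)])

def value_iteration(tol=1e-9):
    V = {s: 0.0 for s in STATES}
    while True:
        V_new = {s: max(q_value(V, s, a) for a in ACTIONS) for s in STATES}
        if max(abs(V_new[s] - V[s]) for s in STATES) < tol:
            return V_new
        V = V_new

V = value_iteration()
policy = {s: max(ACTIONS, key=lambda a: q_value(V, s, a)) for s in STATES}
```

Under these numbers the optimal policy sleeps at the middle battery level (banking energy) and transmits only when the battery is full, illustrating the kind of adaptive tradeoff the survey discusses.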
Expectation Optimization with Probabilistic Guarantees in POMDPs with Discounted-sum Objectives
Partially-observable Markov decision processes (POMDPs) with discounted-sum
payoff are a standard framework to model a wide range of problems related to
decision making under uncertainty. Traditionally, the goal has been to obtain
policies that optimize the expectation of the discounted-sum payoff. A key
drawback of the expectation measure is that even low probability events with
extreme payoff can significantly affect the expectation, and thus the obtained
policies are not necessarily risk-averse. An alternate approach is to optimize
the probability that the payoff is above a certain threshold, which allows
obtaining risk-averse policies, but ignores optimization of the expectation. We
consider the expectation optimization with probabilistic guarantee (EOPG)
problem, where the goal is to optimize the expectation ensuring that the payoff
is above a given threshold with at least a specified probability. We present
several results on the EOPG problem, including the first algorithm to solve it.
Comment: Full version of a paper published at IJCAI/ECAI 201
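The EOPG objective itself is easy to state on toy data, independent of the algorithmic contribution. In the sketch below, each candidate policy is summarized by a finite payoff distribution (the names and numbers are made up for illustration); EOPG filters out policies violating the probabilistic guarantee and then maximizes expectation among the rest:

```python
# Each policy is a list of (payoff, probability) pairs summing to 1.
# Numbers are illustrative, not from the paper.
policies = {
    "risky": [(200.0, 0.1), (0.0, 0.9)],  # best expectation, rare windfall
    "safe":  [(8.0, 1.0)],
    "mixed": [(20.0, 0.5), (5.0, 0.5)],
}

def expectation(dist):
    return sum(v * p for v, p in dist)

def prob_at_least(dist, threshold):
    return sum(p for v, p in dist if v >= threshold)

def eopg(policies, threshold, min_prob):
    """Maximize expectation subject to P(payoff >= threshold) >= min_prob."""
    feasible = {name: d for name, d in policies.items()
                if prob_at_least(d, threshold) >= min_prob}
    return max(feasible, key=lambda name: expectation(feasible[name]))
```

With no guarantee, pure expectation optimization picks "risky" (expectation 20); requiring the payoff to be at least 5 with probability 0.9 rules it out and selects "mixed" instead, which is exactly the risk-averse behavior the abstract motivates.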
Sensor Scheduling for Energy-Efficient Target Tracking in Sensor Networks
In this paper we study the problem of tracking an object moving randomly
through a network of wireless sensors. Our objective is to devise strategies
for scheduling the sensors to optimize the tradeoff between tracking
performance and energy consumption. We cast the scheduling problem as a
Partially Observable Markov Decision Process (POMDP), where the control actions
correspond to the set of sensors to activate at each time step. Using a
bottom-up approach, we consider different sensing, motion and cost models with
increasing levels of difficulty. At the first level, the sensing regions of the
different sensors do not overlap and the target is only observed within the
sensing range of an active sensor. Then, we consider sensors with overlapping
sensing range such that the tracking error, and hence the actions of the
different sensors, are tightly coupled. Finally, we consider scenarios wherein
the target locations and sensors' observations take values in continuous
spaces. Exact solutions are generally intractable even for the simplest models
due to the dimensionality of the information and action spaces. Hence, we
devise approximate solution techniques, and in some cases derive lower bounds
on the optimal tradeoff curves. The generated scheduling policies, albeit
suboptimal, often provide close-to-optimal energy-tracking tradeoffs.
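The paper's first-level model (non-overlapping sensing regions) can be illustrated with a minimal sketch: the target moves on discrete cells, each cell is covered by one sensor, and a greedy approximation activates a sensor only when its predicted detection probability outweighs a fixed energy cost. All model parameters below are illustrative assumptions, not the paper's.

```python
N_CELLS = 4
ENERGY_COST = 0.3        # cost of activating one sensor
TRANSITION_STAY = 0.6    # target stays; otherwise moves to a neighbor

def predict(belief):
    """One-step belief prediction under a random walk on a ring of cells."""
    new = [0.0] * N_CELLS
    for i, b in enumerate(belief):
        new[i] += TRANSITION_STAY * b
        for j in ((i - 1) % N_CELLS, (i + 1) % N_CELLS):
            new[j] += (1 - TRANSITION_STAY) / 2 * b
    return new

def choose_sensor(belief):
    """Greedy rule: activate the sensor covering the most likely cell,
    but only if its detection probability exceeds the energy cost."""
    pred = predict(belief)
    best = max(range(N_CELLS), key=lambda i: pred[i])
    return best if pred[best] > ENERGY_COST else None  # None = all sleep

def update(belief, sensor, detected):
    """Bayes update after activating `sensor` (perfect in-cell sensing)."""
    pred = predict(belief)
    if sensor is None:
        return pred
    if detected:
        return [1.0 if i == sensor else 0.0 for i in range(N_CELLS)]
    post = [0.0 if i == sensor else p for i, p in enumerate(pred)]
    total = sum(post)
    return [p / total for p in post]
```

A greedy one-step rule like this sidesteps the intractability of exact POMDP solutions noted in the abstract, at the price of ignoring the long-term value of information.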
Sensor Management for Tracking in Sensor Networks
We study the problem of tracking an object moving through a network of
wireless sensors. In order to conserve energy, the sensors may be put into a
sleep mode with a timer that determines their sleep duration. It is assumed
that an asleep sensor cannot be communicated with or woken up, and hence the
sleep duration needs to be determined at the time the sensor goes to sleep
based on all the information available to the sensor. Having sleeping sensors
in the network could result in degraded tracking performance; therefore, there
is a tradeoff between energy usage and tracking performance. We design sleeping
policies that attempt to optimize this tradeoff and characterize their
performance. As an extension to our previous work in this area [1], we consider
generalized models for object movement, object sensing, and tracking cost. For
discrete state spaces and continuous Gaussian observations, we derive a lower
bound on the optimal energy-tracking tradeoff. It is shown that in the low
tracking error regime, the generated policies approach the derived lower bound.
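The key constraint in this abstract is that a sleep duration is committed irrevocably at the moment the sensor goes to sleep. A minimal sketch of such a rule, under a hypothetical stay/leave target model and a risk threshold that are not taken from the paper:

```python
TRANSITION_STAY = 0.7
RISK = 0.2          # max acceptable probability of missing the target

def step_probability(p_here):
    """One-step prediction of the probability that the target is in this
    sensor's range, under a hypothetical stay/leave random-walk model."""
    return TRANSITION_STAY * p_here + (1 - TRANSITION_STAY) * (1 - p_here) * 0.1

def sleep_duration(p_here, max_sleep=10):
    """Longest sleep timer such that the predicted probability of the
    target being in range never exceeds RISK while the sensor is asleep.
    The duration is fixed now; the sensor cannot be woken up early."""
    t, p = 0, p_here
    while t < max_sleep:
        p = step_probability(p)
        if p > RISK:
            break
        t += 1
    return t
```

A sensor that currently deems the target likely nearby sleeps briefly or not at all, while a sensor far from the predicted track sleeps for the maximum duration, which is the energy-tracking tradeoff the paper's policies optimize.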