Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey
Wireless sensor networks (WSNs) consist of autonomous and resource-limited
devices. The devices cooperate to monitor one or more physical phenomena within
an area of interest. WSNs operate as stochastic systems because of randomness
in the monitored environments. For long service time and low maintenance cost,
WSNs require adaptive and robust methods to address data exchange, topology
formulation, resource and power optimization, sensing coverage and object
detection, and security challenges. In these problems, sensor nodes must make
optimized decisions from a set of available strategies to achieve design
goals. This survey reviews numerous applications of the Markov decision process
(MDP) framework, a powerful decision-making tool to develop adaptive algorithms
and protocols for WSNs. Furthermore, various solution methods are discussed and
compared to serve as a guide for using MDPs in WSNs.
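The solution methods the survey compares build on the basic MDP formalism. As a minimal illustration (the two-state model below is hypothetical, not taken from the survey), value iteration computes the optimal value function by repeatedly applying the Bellman backup:

```python
# Value iteration on a tiny hypothetical MDP. P[s][a] is a list of
# (probability, next_state) pairs and R[s][a] is the immediate reward;
# the backup is V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ].

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    n = len(P)
    V = [0.0] * n
    while True:
        delta = 0.0
        for s in range(n):
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in range(len(P[s]))
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy sensor-node model: action 0 = "sense" (rewarded), action 1 = "sleep".
P = [
    [[(0.8, 0), (0.2, 1)], [(1.0, 1)]],   # transitions from state 0
    [[(0.5, 0), (0.5, 1)], [(1.0, 1)]],   # transitions from state 1
]
R = [[1.0, 0.0], [0.5, 0.0]]
V = value_iteration(P, R)
```

The sweep uses in-place (Gauss-Seidel) updates, which converge to the same fixed point as synchronous backups and typically a little faster.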
Expectation Optimization with Probabilistic Guarantees in POMDPs with Discounted-sum Objectives
Partially-observable Markov decision processes (POMDPs) with discounted-sum
payoff are a standard framework to model a wide range of problems related to
decision making under uncertainty. Traditionally, the goal has been to obtain
policies that optimize the expectation of the discounted-sum payoff. A key
drawback of the expectation measure is that even low probability events with
extreme payoff can significantly affect the expectation, and thus the obtained
policies are not necessarily risk-averse. An alternate approach is to optimize
the probability that the payoff is above a certain threshold, which allows
obtaining risk-averse policies, but ignores optimization of the expectation. We
consider the expectation optimization with probabilistic guarantee (EOPG)
problem, where the goal is to optimize the expectation ensuring that the payoff
is above a given threshold with at least a specified probability. We present
several results on the EOPG problem, including the first algorithm to solve it.
Comment: Full version of a paper published at IJCAI/ECAI 201
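The tension between the two objectives can be seen on a toy example (the two payoff distributions below are illustrative, not from the paper): a rare high payoff inflates the expectation, while an EOPG-style constraint filters on the probability of clearing a threshold first and only then optimizes expectation among the surviving policies.

```python
# Two hypothetical policies, each given as a discrete distribution of the
# discounted-sum payoff: a list of (payoff, probability) pairs.
risky = [(100.0, 0.1), (0.0, 0.9)]   # expectation driven by a rare extreme event
safe  = [(8.0, 0.95), (2.0, 0.05)]   # lower expectation, but reliable

def expectation(dist):
    return sum(x * p for x, p in dist)

def prob_above(dist, threshold):
    return sum(p for x, p in dist if x >= threshold)

# Optimizing expectation alone prefers the risky policy ...
assert expectation(risky) > expectation(safe)
# ... but requiring P(payoff >= 5) >= 0.9 (an EOPG-style guarantee)
# rules it out, leaving the safe policy as the best feasible one.
feasible = [d for d in (risky, safe) if prob_above(d, 5.0) >= 0.9]
```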
Sensor Scheduling for Energy-Efficient Target Tracking in Sensor Networks
In this paper we study the problem of tracking an object moving randomly
through a network of wireless sensors. Our objective is to devise strategies
for scheduling the sensors to optimize the tradeoff between tracking
performance and energy consumption. We cast the scheduling problem as a
Partially Observable Markov Decision Process (POMDP), where the control actions
correspond to the set of sensors to activate at each time step. Using a
bottom-up approach, we consider different sensing, motion and cost models with
increasing levels of difficulty. At the first level, the sensing regions of the
different sensors do not overlap and the target is only observed within the
sensing range of an active sensor. Then, we consider sensors with overlapping
sensing range such that the tracking error, and hence the actions of the
different sensors, are tightly coupled. Finally, we consider scenarios wherein
the target locations and sensors' observations assume values on continuous
spaces. Exact solutions are generally intractable even for the simplest models
due to the dimensionality of the information and action spaces. Hence, we
devise approximate solution techniques, and in some cases derive lower bounds
on the optimal tradeoff curves. The generated scheduling policies, albeit
suboptimal, often provide close-to-optimal energy-tracking tradeoffs.
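Any such POMDP controller maintains a belief over the target's location, updated after each sensor activation. A minimal sketch (the grid, motion model, and detection probabilities below are illustrative assumptions, not the paper's models) of the predict-correct belief update when a single sensor is activated:

```python
# Belief update for target tracking over discrete cells 0..n-1. An active
# sensor at cell k detects the target there with probability p_d and raises
# a false alarm elsewhere with probability p_f (illustrative values).

def belief_update(belief, transition, active_cell, detected, p_d=0.9, p_f=0.05):
    n = len(belief)
    # Predict: push the belief through the target's motion model.
    pred = [sum(belief[i] * transition[i][j] for i in range(n)) for j in range(n)]
    # Correct: weight each cell by the likelihood of the sensor's reading.
    lik = [(p_d if j == active_cell else p_f) if detected
           else ((1 - p_d) if j == active_cell else (1 - p_f))
           for j in range(n)]
    post = [pred[j] * lik[j] for j in range(n)]
    z = sum(post)
    return [x / z for x in post]

# Target on 3 cells with a lazy random walk; sensor at cell 1 fires and detects.
T = [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]]
b = belief_update([1/3, 1/3, 1/3], T, active_cell=1, detected=True)
```

A detection concentrates the posterior mass on the active sensor's cell, which is exactly the information a scheduler trades off against the energy cost of activating that sensor.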
A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains
Partially observable Markov decision processes (POMDPs) are a natural model
for planning problems where effects of actions are nondeterministic and the
state of the world is not completely observable. It is difficult to solve
POMDPs exactly. This paper proposes a new approximation scheme. The basic idea
is to transform a POMDP into another one where additional information is
provided by an oracle. The oracle informs the planning agent that the current
state of the world is in a certain region. The transformed POMDP is
consequently said to be region observable. It is easier to solve than the
original POMDP. We propose to solve the transformed POMDP and use its optimal
policy to construct an approximate policy for the original POMDP. By
controlling the amount of additional information that the oracle provides, it
is possible to find a proper tradeoff between computational time and
approximation quality. In terms of algorithmic contributions, we study in
detail how to exploit region observability in solving the transformed POMDP.
To facilitate the study, we also propose a new exact algorithm for general
POMDPs. The algorithm is conceptually simple and yet is significantly more
efficient than all previous exact algorithms.
Comment: See http://www.jair.org/ for any accompanying file
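The effect of the oracle on the agent's belief can be sketched very simply (the states, regions, and belief below are illustrative; the paper's construction involves more than this projection step): once the oracle reports a region, any belief mass outside it is removed and the remainder renormalized, so the transformed POMDP only reasons over beliefs confined to a single region.

```python
# Restricting a belief to an oracle-reported region (hypothetical example).

def restrict_to_region(belief, region):
    """region is the set of state indices the oracle says contains the true state."""
    masked = [p if s in region else 0.0 for s, p in enumerate(belief)]
    z = sum(masked)
    return [m / z for m in masked]

# Four states; the oracle reports that the true state lies in {0, 1}.
b = restrict_to_region([0.4, 0.3, 0.2, 0.1], region={0, 1})
```

Coarser regions reveal less and leave the problem closer to the original POMDP; finer regions make it easier to solve, which is the tradeoff knob the paper describes.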
Stochastic Shortest Path with Energy Constraints in POMDPs
We consider partially observable Markov decision processes (POMDPs) with a
set of target states and positive integer costs associated with every
transition. The traditional optimization objective (stochastic shortest path)
asks to minimize the expected total cost until the target set is reached. We
extend the traditional framework of POMDPs to model energy consumption, which
represents a hard constraint. The energy levels may increase and decrease with
transitions, and the hard constraint requires that the energy level must remain
positive in all steps till the target is reached. First, we present a novel
algorithm for solving POMDPs with energy levels, developing on existing POMDP
solvers and using RTDP as its main method. Our second contribution is related
to policy representation. For larger POMDP instances the policies computed by
existing solvers are too large to be understandable. We present an automated
procedure based on machine learning techniques that extracts the important
decisions of a policy, allowing us to compute succinct, human-readable
policies. Finally, we show experimentally that our algorithm performs
well and computes succinct policies on a number of POMDP instances from the
literature that were naturally enhanced with energy levels.
Comment: Technical report accompanying a paper published in proceedings of
AAMAS 201
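The hard energy constraint can be pictured as state augmentation (a minimal sketch under assumed names; the paper's RTDP-based solver does considerably more than this): each state carries its current energy level, and any transition that would drive the level to zero or below is simply not offered to the planner.

```python
# Enumerating the successors that respect the hard energy constraint.
# moves maps state -> list of (next_state, energy_delta) pairs (hypothetical).

def successors(state, energy, moves):
    out = []
    for nxt, delta in moves[state]:
        new_energy = energy + delta
        # Hard constraint: the energy level must remain positive after the step.
        if new_energy > 0:
            out.append((nxt, new_energy))
    return out

# With 3 units of energy, the move costing 2 survives; the move costing 3
# would exhaust the budget and is pruned.
moves = {"s": [("t", -2), ("u", -3)]}
succ = successors("s", 3, moves)
```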
Anytime Point-Based Approximations for Large POMDPs
The Partially Observable Markov Decision Process has long been recognized as
a rich framework for real-world planning and control problems, especially in
robotics. However, exact solutions in this framework are typically
computationally intractable for all but the smallest problems. A well-known
technique for speeding up POMDP solving involves performing value backups at
specific belief points, rather than over the entire belief simplex. The
efficiency of this approach, however, depends greatly on the selection of
points. This paper presents a set of novel techniques for selecting informative
belief points which work well in practice. The point selection procedure is
combined with point-based value backups to form an effective anytime POMDP
algorithm called Point-Based Value Iteration (PBVI). The first aim of this
paper is to introduce this algorithm and present a theoretical analysis
justifying the choice of belief selection technique. The second aim of this
paper is to provide a thorough empirical comparison between PBVI and other
state-of-the-art POMDP methods, in particular the Perseus algorithm, in an
effort to highlight their similarities and differences. Evaluation is performed
using both standard POMDP domains and realistic robotic tasks.
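The core operation shared by PBVI and Perseus is the point-based Bellman backup at a single belief: back-project each alpha vector through every (action, observation) pair, keep the best combination for that belief, and return a new alpha vector. A sketch of one such backup (the toy model at the bottom is illustrative, not a benchmark domain):

```python
# One point-based backup at belief b. Gamma is the current set of alpha
# vectors; P[a][s][s2] is the transition model, O[a][s2][o] the observation
# model, R[a][s] the reward (all hypothetical toy inputs below).

def point_backup(b, Gamma, P, O, R, gamma=0.95):
    nS, nA, nO = len(b), len(R), len(O[0][0])
    best_alpha, best_val = None, float("-inf")
    for a in range(nA):
        alpha_a = list(R[a])
        for o in range(nO):
            # Back-project each candidate alpha through (a, o) and keep the
            # one with the highest value at this particular belief point.
            def back(alpha):
                return [sum(P[a][s][s2] * O[a][s2][o] * alpha[s2]
                            for s2 in range(nS)) for s in range(nS)]
            g = max((back(alpha) for alpha in Gamma),
                    key=lambda v: sum(bi * vi for bi, vi in zip(b, v)))
            alpha_a = [x + gamma * y for x, y in zip(alpha_a, g)]
        val = sum(bi * ai for bi, ai in zip(b, alpha_a))
        if val > best_val:
            best_alpha, best_val = alpha_a, val
    return best_alpha, best_val

# Degenerate toy POMDP: 2 states, 1 action, 1 observation, zero initial Gamma.
P = [[[1.0, 0.0], [0.0, 1.0]]]
O = [[[1.0], [1.0]]]
R = [[1.0, 0.0]]
alpha, val = point_backup([0.5, 0.5], [[0.0, 0.0]], P, O, R)
```

Running this backup only at a chosen set of belief points, rather than over the whole belief simplex, is what makes the approach anytime: more points give a better value function at more cost, and the two algorithms differ mainly in how those points are selected.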