Sensor Synthesis for POMDPs with Reachability Objectives
Partially observable Markov decision processes (POMDPs) are widely used in
probabilistic planning problems in which an agent interacts with an environment
using noisy and imprecise sensors. We study a setting in which the sensors are
only partially defined and the goal is to synthesize "weakest" additional
sensors, such that in the resulting POMDP, there is a small-memory policy for
the agent that almost surely (with probability 1) satisfies a reachability
objective. We show that the problem is NP-complete and present a symbolic
algorithm that encodes the problem into SAT instances. We illustrate trade-offs
between the amount of memory of the policy and the number of additional sensors
on a simple example. We have implemented our approach and evaluated it on
three classical POMDP examples from the literature, showing that in all of
them the number of sensors can be significantly decreased, compared to
existing solutions, without increasing the complexity of the policies.
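
The abstract's search problem can be made concrete with a small sketch. The following is not the paper's SAT encoding: it is a naive brute-force search over sensor subsets and memoryless observation-based policies on a toy corridor POMDP, where the model, the candidate sensors, and the supports-only almost-sure reachability check are all illustrative assumptions.

    from itertools import combinations, product

    # Toy POMDP (an assumption for illustration): a 5-cell corridor with the
    # goal at cell 2, a dead-end trap at cell 4, and the agent starting at 0.
    STATES = range(5)
    GOAL, TRAP, INIT = 2, 4, 0
    ACTIONS = ("left", "right")

    def supports(s, a):
        # Successor-state supports; for almost-sure reachability only which
        # transitions are possible matters, not their exact probabilities.
        if s in (GOAL, TRAP):
            return {s}                              # absorbing states
        if a == "left":
            return {max(0, s - 1), s}               # may stay put
        return {min(4, s + 1), min(4, s + 2)}       # may overshoot

    SENSORS = {                                     # candidate sensors (assumed)
        "parity":  lambda s: s % 2,
        "in_left": lambda s: s < GOAL,
    }

    def almost_surely_reaches_goal(policy, obs):
        # In a finite Markov chain, the goal is reached with probability 1
        # iff it is reachable from every state reachable from INIT.
        post = lambda s: supports(s, policy[obs(s)])
        reach, stack = {INIT}, [INIT]
        while stack:
            for t in post(stack.pop()):
                if t not in reach:
                    reach.add(t)
                    stack.append(t)
        def can_reach_goal(s):
            seen, stack = {s}, [s]
            while stack:
                u = stack.pop()
                if u == GOAL:
                    return True
                for t in post(u):
                    if t not in seen:
                        seen.add(t)
                        stack.append(t)
            return False
        return all(can_reach_goal(s) for s in reach)

    def smallest_sensor_set():
        # Try sensor subsets smallest-first, then every memoryless policy.
        for k in range(len(SENSORS) + 1):
            for names in combinations(SENSORS, k):
                obs = lambda s, ns=names: tuple(SENSORS[n](s) for n in ns)
                obs_values = sorted({obs(s) for s in STATES})
                for choice in product(ACTIONS, repeat=len(obs_values)):
                    policy = dict(zip(obs_values, choice))
                    if almost_surely_reaches_goal(policy, obs):
                        return names, policy
        return None

    print(smallest_sensor_set())  # e.g. (('parity',), {(0,): 'right', (1,): 'left'})

The paper's symbolic algorithm would replace the two nested enumerations with SAT queries; the brute force above only mirrors the problem statement.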
Gambler's Ruin Bandit Problem
In this paper, we propose a new multi-armed bandit problem called the
Gambler's Ruin Bandit Problem (GRBP). In the GRBP, the learner proceeds in a
sequence of rounds, where each round is a Markov Decision Process (MDP) with
two actions (arms): a continuation action that moves the learner randomly
through the state space around its current state, and a terminal action that
moves the learner directly into one of two terminal states (the goal state or
the dead-end state).
A round ends when a terminal state is reached, and the learner receives a
positive reward only if that state is the goal. The learner's objective is to
maximize its long-term reward (the expected number of times the goal state is
reached) without any prior knowledge of the state transition probabilities.
We first prove a result on the form of the optimal policy for
the GRBP. Then, we define the regret of the learner with respect to an
omnipotent oracle that acts optimally in each round, and prove that this
regret grows logarithmically in the number of rounds. We also identify a
condition under which
the learner's regret is bounded. A potential application of the GRBP is optimal
medical treatment assignment, in which the continuation action corresponds to a
conservative treatment and the terminal action corresponds to a risky treatment
such as surgery.
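
For intuition, the round dynamics are easy to simulate. The sketch below is a toy instantiation only: the walk bias, the terminal action's per-state success probabilities, and the threshold-shaped policies it compares are all assumptions, and the learning algorithm with logarithmic regret is not reproduced.

    import random

    # Toy GRBP round (all constants are assumptions): interior states 1..N-1,
    # dead-end state 0, goal state N.
    N, START, P_UP = 10, 5, 0.45
    P_TERM = {s: s / N for s in range(1, N)}   # terminal action's success prob.

    def play_round(policy):
        # One round: 'continue' takes a gambler's-ruin step; 'terminate'
        # jumps straight to the goal (reward 1) or the dead-end (reward 0).
        s = START
        while 0 < s < N:
            if policy(s) == "continue":
                s += 1 if random.random() < P_UP else -1
            else:
                return 1 if random.random() < P_TERM[s] else 0
        return 1 if s == N else 0

    def threshold_policy(tau):
        # Take the risky terminal action once the state reaches tau;
        # tau = N means the terminal action is never taken.
        return lambda s: "terminate" if s >= tau else "continue"

    random.seed(0)
    ROUNDS = 20000
    for tau in (5, 7, 9, N):
        wins = sum(play_round(threshold_policy(tau)) for _ in range(ROUNDS))
        print(f"tau={tau:2d}: empirical goal rate {wins / ROUNDS:.3f}")

An oracle that knew P_UP and P_TERM could pick the best policy in advance; the regret studied in the paper measures how much reward a learner loses while estimating these quantities from the rounds it plays.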
Point-Based Methods for Model Checking in Partially Observable Markov Decision Processes
Autonomous systems are often required to operate in partially observable
environments. They must reliably execute a specified objective even with
incomplete information about the state of the environment. We propose a
methodology to synthesize policies that satisfy a linear temporal logic formula
in a partially observable Markov decision process (POMDP). By recasting the
model-checking question as a planning problem, we show how to use point-based
value iteration methods to
efficiently approximate the maximum probability of satisfying a desired logical
formula and compute the associated belief-state policy. We demonstrate that our
method scales to large POMDP domains and provides strong bounds on the
performance of the resulting policy.
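
To make the approach concrete, the sketch below runs point-based value-iteration backups on a toy reachability POMDP. The model, belief set, and horizon are assumptions, and the sketch handles only reachability (a special case of the temporal logic objectives treated in the paper): encoding the event of entering the goal as a one-time reward makes accumulated reward equal the probability of satisfaction, so standard alpha-vector backups approximate the maximum satisfaction probability.

    import numpy as np

    # Toy reachability POMDP (an assumption, not the paper's benchmarks):
    # hidden state s0 or s1 plus an absorbing goal; action a_i moves to the
    # goal with probability 0.9, but only from state s_i.
    S, A, Z, GOAL = 3, 2, 2, 2
    T = np.zeros((A, S, S))                     # T[a, s, s'] = P(s' | s, a)
    T[0] = [[0.1, 0.0, 0.9], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    T[1] = [[1.0, 0.0, 0.0], [0.0, 0.1, 0.9], [0.0, 0.0, 1.0]]
    O = np.zeros((A, S, Z))                     # O[a, s', z] = P(z | s', a)
    O[:] = [[0.8, 0.2], [0.2, 0.8], [0.5, 0.5]] # noisy hint about the state
    # Reward 1 exactly when the goal is first entered (undiscounted), so the
    # accumulated reward equals the probability of reaching the goal.
    R = np.array([T[a, :, GOAL] for a in range(A)])
    R[:, GOAL] = 0.0

    def backup(b, Gamma):
        # Point-based Bellman backup at belief b; returns one alpha-vector.
        best, best_val = None, -np.inf
        for a in range(A):
            g = R[a].copy()
            for z in range(Z):
                # candidates g(s) = sum_{s'} T[a,s,s'] O[a,s',z] alpha(s')
                cands = [T[a] @ (O[a, :, z] * alpha) for alpha in Gamma]
                g = g + max(cands, key=lambda v: float(v @ b))
            if float(g @ b) > best_val:
                best, best_val = g, float(g @ b)
        return best

    # PBVI's key idea: back up only at a fixed, finite set of belief points.
    beliefs = [np.array(v, dtype=float) for v in
               ([1, 0, 0], [0, 1, 0], [0.5, 0.5, 0], [0.3, 0.3, 0.4])]
    Gamma = [np.zeros(S)]
    for _ in range(50):                         # horizon-50 approximation
        Gamma = [backup(b, Gamma) for b in beliefs]

    for b in beliefs:
        print(b, "-> reach prob >=", round(max(float(al @ b) for al in Gamma), 3))

Because each alpha-vector produced by a point-based backup corresponds to an executable conditional plan, the reported values are lower bounds on the maximum reachability probability at each belief; full point-based solvers additionally expand the belief set and prune dominated vectors.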