Stochastic Shortest Path with Energy Constraints in POMDPs
We consider partially observable Markov decision processes (POMDPs) with a
set of target states and positive integer costs associated with every
transition. The traditional optimization objective (stochastic shortest path)
asks to minimize the expected total cost until the target set is reached. We
extend the traditional framework of POMDPs to model energy consumption, which
represents a hard constraint. The energy levels may increase and decrease with
transitions, and the hard constraint requires that the energy level remain
positive at every step until the target is reached. First, we present a novel
algorithm for solving POMDPs with energy levels, building on existing POMDP
solvers and using real-time dynamic programming (RTDP) as its core method.
Our second contribution concerns policy representation: for larger POMDP
instances, the policies computed by existing solvers are too large to be
understandable. We present an automated procedure, based on machine learning
techniques, that extracts the important decisions of a policy, allowing us to
compute succinct, human-readable policies. Finally, we show experimentally that our algorithm performs
well and computes succinct policies on a number of POMDP instances from the
literature that were naturally enhanced with energy levels.
Comment: Technical report accompanying a paper published in the proceedings of AAMAS 201
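One standard way to realize the hard constraint described above is to fold the energy level into the state, so that dropping to zero before the target becomes an absorbing failure state. The following is a minimal Python sketch of that construction, not the paper's solver; the `step` and `energy_delta` functions are hypothetical stand-ins for a concrete POMDP's transition and energy models.

```python
# Sketch: augmenting a POMDP state with an energy level so the hard
# constraint "energy stays positive until the target" is encoded as
# an absorbing failure state. `step` and `energy_delta` are assumed,
# hypothetical models supplied by the caller.

FAIL = ("FAIL", 0)  # absorbing state: energy hit zero before the target

class EnergyPOMDP:
    def __init__(self, step, energy_delta, targets, max_energy):
        self.step = step                  # step(s, a) -> (s', cost, obs)
        self.energy_delta = energy_delta  # energy_delta(s, a, s') -> int
        self.targets = targets            # set of target states
        self.max_energy = max_energy      # energy is capped at this level

    def transition(self, state, action):
        s, e = state
        if s in self.targets or state == FAIL:
            return state, 0, None         # target and failure are absorbing
        s2, cost, obs = self.step(s, action)
        e2 = min(self.max_energy, e + self.energy_delta(s, action, s2))
        if e2 <= 0 and s2 not in self.targets:
            return FAIL, cost, obs        # constraint violated: no recovery
        return (s2, e2), cost, obs
```

Under this construction, an RTDP-style solver can be run on the augmented model unchanged, since satisfying the energy constraint is now equivalent to never reaching FAIL.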
Efficient Model Learning for Human-Robot Collaborative Tasks
We present a framework for learning human user models from joint-action
demonstrations that enables the robot to compute a robust policy for a
collaborative task with a human. The learning takes place completely
automatically, without any human intervention. First, we describe the
clustering of demonstrated action sequences into different human types using an
unsupervised learning algorithm. These demonstrated sequences are also used by
the robot to learn a reward function representative of each type, via an
inverse reinforcement learning algorithm. The
learned model is then used as part of a Mixed Observability Markov Decision
Process formulation, wherein the human type is a partially observable variable.
With this framework, we can infer, either offline or online, the human type of
a new user that was not included in the training set, and can compute a policy
for the robot that is aligned with the preferences of this new user and is
robust to deviations of the human's actions from prior demonstrations. Finally,
we validate the approach using data collected in human subject experiments, and
conduct proof-of-concept demonstrations in which a person performs a
collaborative task with a small industrial robot.
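Since the human type is the partially observable variable in the mixed observability formulation, online inference reduces to a Bayes update of a belief over discrete types from observed human actions. Below is a minimal sketch of that update; the type-conditioned action likelihoods are hypothetical stand-ins for the policies learned via inverse reinforcement learning.

```python
import numpy as np

def update_type_belief(belief, action, likelihood):
    """One Bayes step: P(type | action) is proportional to
    P(action | type) * P(type). `likelihood[t]` maps actions to
    probabilities under (hypothetical) type t."""
    posterior = np.array([belief[t] * likelihood[t].get(action, 1e-9)
                          for t in range(len(belief))])
    return posterior / posterior.sum()

# Two hypothetical human types with different action preferences.
likelihood = [{"reach_left": 0.8, "reach_right": 0.2},
              {"reach_left": 0.1, "reach_right": 0.9}]
belief = np.array([0.5, 0.5])
for observed in ["reach_right", "reach_right", "reach_left"]:
    belief = update_type_belief(belief, observed, likelihood)
print(belief)  # the belief now favors the type matching the observations
```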
On-Robot Bayesian Reinforcement Learning for POMDPs
Robot learning is often difficult due to the expense of gathering data. The
need for large amounts of data can, and should, be tackled with effective
algorithms and by leveraging expert information on robot dynamics. Bayesian
reinforcement learning (BRL), thanks to its sample efficiency and ability to
exploit prior knowledge, is uniquely positioned as such a solution method.
Unfortunately, the application of BRL has been limited due to the difficulties
of representing expert knowledge as well as solving the subsequent inference
problem. This paper advances BRL for robotics by proposing a specialized
framework for physical systems. In particular, we capture this knowledge in a
factored representation, then show that the posterior factorizes with the same
structure, and ultimately formalize the model in a Bayesian framework. We then
introduce a sample-based online solution method, based on Monte-Carlo tree
search and particle filtering, specialized to solve the resulting model. This
approach can, for example, utilize typical low-level robot simulators and
handle uncertainty over unknown dynamics of the environment. We empirically
demonstrate its efficiency by performing on-robot learning in two human-robot
interaction tasks with uncertainty about human behavior, achieving near-optimal
performance after only a handful of real-world episodes. A video of learned
policies is at https://youtu.be/H9xp60ngOes.
Comment: Accepted at IROS-2023 (Detroit, USA).
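The filtering half of the solution method can be sketched as a standard particle filter over dynamics hypotheses, which a Monte-Carlo tree search planner can then sample from at the root (as in Bayes-adaptive POMCP-style methods). This is a minimal sketch under assumptions, not the paper's code; `simulate` and `likelihood` are hypothetical stand-ins for a low-level robot simulator and an observation model.

```python
import random

def particle_filter_step(particles, action, obs, simulate, likelihood):
    """Reweight each dynamics hypothesis `theta` by how well it explains
    the new observation, then resample to avoid weight degeneracy.
    simulate(theta, action) -> predicted observation (hypothetical)
    likelihood(obs, predicted) -> float (hypothetical observation model)"""
    weights = [likelihood(obs, simulate(theta, action)) for theta in particles]
    total = sum(weights)
    if total == 0:          # every hypothesis failed: keep the current set
        return particles
    weights = [w / total for w in weights]
    # The resampled set approximates the posterior over dynamics; a planner
    # can draw one theta per simulation from it when building its tree.
    return random.choices(particles, weights=weights, k=len(particles))
```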
Hierarchical Reinforcement Learning under Mixed Observability
The framework of mixed observability Markov decision processes (MOMDPs) models
many robotic domains in which some state variables are fully observable while
others are not. In this work, we identify a significant subclass of MOMDPs
defined by how actions influence the fully observable components of the state
and how those, in turn, influence the partially observable components and the
rewards. This unique property allows for a two-level hierarchical approach we
call HIerarchical Reinforcement Learning under Mixed Observability (HILMO),
which restricts partial observability to the top level while the bottom level
remains fully observable, enabling higher learning efficiency. The top level
produces desired goals to be reached by the bottom level until the task is
solved. We further develop theoretical guarantees to show that our approach can
achieve optimal and quasi-optimal behavior under mild assumptions. Empirical
results on long-horizon continuous control tasks demonstrate the efficacy and
efficiency of our approach in terms of improved success rate, sample
efficiency, and wall-clock training time. We also deploy policies learned in
simulation on a real robot.
Comment: Accepted at the 15th International Workshop on the Algorithmic Foundations of Robotics (WAFR) 2022, University of Maryland, College Park. The first two authors contributed equally.
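The two-level decomposition described above reads naturally as a nested control loop: the partially observable top level selects goals from its belief, and the fully observable bottom level pursues each goal. The sketch below uses entirely hypothetical interfaces (`top_policy`, `bottom_policy`, `env`) to illustrate the control flow, not the authors' implementation.

```python
def run_episode(top_policy, bottom_policy, env, max_goal_steps=50):
    """Hypothetical HILMO-style loop: the top level plans over a belief,
    the bottom level acts on the fully observable part of the state."""
    obs = env.reset()
    belief = top_policy.initial_belief(obs)
    done = False
    while not done:
        goal = top_policy.select_goal(belief)              # POMDP level
        for _ in range(max_goal_steps):                    # MDP level
            action = bottom_policy.act(env.full_state(), goal)
            obs, reward, done, reached_goal = env.step(action)
            if done or reached_goal:
                break
        belief = top_policy.update_belief(belief, goal, obs)
```

Restricting partial observability to the slower, goal-level loop is what yields the learning-efficiency gain: the inner loop can be trained with ordinary fully observable RL.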