9 research outputs found
Learning environment properties in Partially Observable Monte Carlo Planning
We tackle the problem of learning state-variable relationships in Partially Observable Markov Decision Processes to improve planning performance on mobile robots. The proposed approach extends Partially Observable Monte Carlo Planning (POMCP) and represents state-variable relationships with Markov Random Fields. A ROS-based implementation of the approach is proposed and evaluated on rocksample, a standard benchmark for probabilistic planning under uncertainty. Experiments were performed in simulation with Gazebo. Results show that the proposed approach effectively learns state-variable probabilistic constraints on ROS-based robotic platforms and uses them in subsequent episodes to outperform standard POMCP.
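A minimal sketch of the core idea of representing state-variable relationships with a pairwise Markov Random Field: learned potentials over pairs of variables score joint state assignments, which a planner can then use to bias its belief. The variable names (`rock0`, `rock1`) and the potential values are illustrative only, not taken from the paper:

```python
import itertools

def mrf_score(assignment, potentials):
    """Unnormalized probability of a joint assignment under pairwise potentials.

    assignment: dict mapping variable name -> value
    potentials: dict mapping (var_i, var_j) -> {(val_i, val_j): weight}
    """
    score = 1.0
    for (vi, vj), table in potentials.items():
        score *= table[(assignment[vi], assignment[vj])]
    return score

# Hypothetical rocksample-style constraint: two rock values tend to agree.
potentials = {("rock0", "rock1"): {(0, 0): 0.4, (0, 1): 0.1,
                                   (1, 0): 0.1, (1, 1): 0.4}}

# Normalize over all joint assignments to obtain a proper distribution.
states = [dict(zip(("rock0", "rock1"), vals))
          for vals in itertools.product((0, 1), repeat=2)]
z = sum(mrf_score(s, potentials) for s in states)
probs = {tuple(s.values()): mrf_score(s, potentials) / z for s in states}
```

In a POMCP-style planner, such a distribution could reweight or filter the particles representing the belief so that sampled states respect the learned constraints.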
On-Robot Bayesian Reinforcement Learning for POMDPs
Robot learning is often difficult due to the expense of gathering data. The
need for large amounts of data can, and should, be tackled with effective
algorithms and leveraging expert information on robot dynamics. Bayesian
reinforcement learning (BRL), thanks to its sample efficiency and ability to
exploit prior knowledge, is uniquely positioned as such a solution method.
Unfortunately, the application of BRL has been limited due to the difficulties
of representing expert knowledge as well as solving the subsequent inference
problem. This paper advances BRL for robotics by proposing a specialized
framework for physical systems. In particular, we capture this knowledge in a
factored representation, demonstrate that the posterior factorizes into a
similar shape, and ultimately formalize the model in a Bayesian framework. We then
introduce a sample-based online solution method, based on Monte-Carlo tree
search and particle filtering, specialized to solve the resulting model. This
approach can, for example, utilize typical low-level robot simulators and
handle uncertainty over unknown dynamics of the environment. We empirically
demonstrate its efficiency by performing on-robot learning in two human-robot
interaction tasks with uncertainty about human behavior, achieving near-optimal
performance after only a handful of real-world episodes. A video of learned
policies is at https://youtu.be/H9xp60ngOes.
Comment: Accepted at IROS-2023 (Detroit, USA).
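The belief-tracking step described above (particle filtering over joint state-and-model hypotheses) can be sketched as a bootstrap filter: each particle carries a candidate state together with a candidate dynamics model, and particles whose predicted observation matches the real one are resampled preferentially. The function signature and callbacks here are illustrative assumptions, not the paper's API:

```python
import random

def particle_filter_step(particles, action, observation, transition, obs_lik, rng):
    """One bootstrap-filter update over joint (state, model) particles.

    particles:  list of (state, model) tuples
    transition: transition(state, model, action, rng) -> next_state
    obs_lik:    obs_lik(next_state, model, action, observation) -> weight
    """
    propagated, weights = [], []
    for state, model in particles:
        nxt = transition(state, model, action, rng)
        propagated.append((nxt, model))
        weights.append(obs_lik(nxt, model, action, observation))
    if sum(weights) == 0:
        # Degenerate case: no particle explains the observation.
        return [rng.choice(propagated) for _ in propagated]
    # Resample proportionally to the observation likelihood.
    return rng.choices(propagated, weights=weights, k=len(particles))
```

After repeated updates, particles concentrate on the dynamics model consistent with experience, which is what lets the tree search exploit a low-level simulator while remaining uncertain about the true environment.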
Toward data-driven solutions to interactive dynamic influence diagrams
With the availability of a significant amount of data, data-driven decision making becomes an alternative way of solving complex multiagent decision problems. Instead of using domain knowledge to explicitly build decision models, the data-driven approach learns decisions (probably optimal ones) from available data. This removes the knowledge bottleneck of traditional knowledge-driven decision making, which requires strong support from domain experts. In this paper, we study data-driven decision making in the context of interactive dynamic influence diagrams (I-DIDs), a general framework for multiagent sequential decision making under uncertainty. We propose a data-driven framework to solve I-DID models and focus on learning the behavior of other agents in problem domains. The challenge lies in learning, from limited data, a complete policy tree that will be embedded in the I-DID models. We propose two new methods to develop complete policy trees for the other agents in the I-DIDs. The first method uses a simple clustering process, while the second employs sophisticated statistical checks. We analyze the proposed algorithms theoretically and evaluate them experimentally on two problem domains.
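To illustrate the shape of the learning problem, here is a toy sketch of deriving an agent's policy tree from observed trajectories: at every observation history, the action the agent was seen to take most often becomes the tree's prescription. This majority-vote aggregation is a simplified stand-in for the paper's clustering and statistical-check methods, and all names are hypothetical:

```python
from collections import Counter, defaultdict

def build_policy_tree(trajectories, horizon):
    """Derive a policy tree from trajectories of the form (a0, o0, a1, o1, ...).

    Maps each observation history (a tuple of past observations) to the
    majority action taken at that history across the trajectories.
    """
    by_history = defaultdict(Counter)
    for traj in trajectories:
        history = ()
        for t in range(horizon):
            action, obs = traj[2 * t], traj[2 * t + 1]
            by_history[history][action] += 1
            history += (obs,)
    return {h: c.most_common(1)[0][0] for h, c in by_history.items()}
```

The "limited data" challenge shows up here directly: histories never observed in the data have no entry, so a complete tree requires filling those gaps, which is what the two proposed methods address.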
Bayesian Reinforcement Learning in Factored POMDPs.
Bayesian approaches provide a principled solution to the
exploration-exploitation trade-off in Reinforcement Learning. Typical
approaches, however, either assume a fully observable environment or scale
poorly. This work introduces the Factored Bayes-Adaptive POMDP model, a
framework that is able to exploit the underlying structure while learning the
dynamics in partially observable systems. We also present a belief tracking
method to approximate the joint posterior over state and model variables, and
an adaptation of the Monte-Carlo Tree Search solution method, which together
are capable of solving the underlying problem near-optimally. Our method is
able to learn efficiently given a known factorization, or to learn the
factorization and the model parameters simultaneously. We demonstrate that
this approach outperforms current methods and tackles problems that
were previously infeasible.
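A common way to learn the dynamics of a factored model, as described above, is to keep Dirichlet counts per state variable conditioned on its parents, updating the counts from experience and sampling transitions from them. The class below is a minimal sketch of that bookkeeping under a known factorization; the interface is an illustrative assumption, not the paper's implementation:

```python
import random

class FactoredDirichletModel:
    """Dirichlet counts over each state variable's transition, assuming the
    factorization is known: each variable's next value depends only on the
    current values of its listed parent variables."""

    def __init__(self, parents, values, prior=1.0):
        self.parents = parents            # var -> tuple of parent variables
        self.values = values              # var -> tuple of possible values
        self.prior = prior                # symmetric Dirichlet prior count
        self.counts = {v: {} for v in parents}

    def _row(self, var, parent_vals):
        return self.counts[var].setdefault(
            parent_vals, {val: self.prior for val in self.values[var]})

    def update(self, state, next_state):
        """Increment the count for each variable's observed transition."""
        for var, pars in self.parents.items():
            key = tuple(state[p] for p in pars)
            self._row(var, key)[next_state[var]] += 1.0

    def sample_next(self, state, rng):
        """Sample a successor state, one variable at a time."""
        nxt = {}
        for var, pars in self.parents.items():
            row = self._row(var, tuple(state[p] for p in pars))
            vals, weights = zip(*row.items())
            nxt[var] = rng.choices(vals, weights=weights)[0]
        return nxt
```

Because counts are kept per variable rather than per full joint state, the number of parameters grows with the sizes of the parent sets instead of exponentially in the number of variables, which is the structural advantage the abstract refers to.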