Scalable Planning and Learning for Multiagent POMDPs: Extended Version
Online, sample-based planning algorithms for POMDPs have shown great promise
in scaling to problems with large state spaces, but they become intractable for
large action and observation spaces. This is particularly problematic in
multiagent POMDPs, where the action and observation spaces grow exponentially
with the number of agents. To combat this intractability, we propose a novel
scalable approach based on sample-based planning and factored value functions
that exploits structure present in many multiagent settings. This approach
applies not only in the planning case, but also in the Bayesian reinforcement
learning setting. Experimental results show that we are able to provide
high-quality solutions to large multiagent planning and learning problems.
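As an illustration of the factored value function idea, the sketch below estimates each factor's local Q-values from sampled returns and, assuming the factors cover disjoint sets of agents, picks a greedy joint action factor by factor instead of enumerating the exponential joint action space. All names (FactoredQ, local_actions, and so on) are hypothetical, not the paper's API.

import itertools
from collections import defaultdict

# Minimal sketch of a factored value function for a multiagent setting:
# Q_hat(a) ~= sum_e Q_e(a_e), so each factor scores only the sub-action of
# the agents it covers and the joint action space is never enumerated.
class FactoredQ:
    def __init__(self, factors, local_actions):
        # factors: list of tuples of agent indices, assumed disjoint here
        # local_actions: per-agent list of available actions
        self.factors = factors
        self.local_actions = local_actions
        self.q = [defaultdict(float) for _ in factors]   # running mean returns
        self.n = [defaultdict(int) for _ in factors]     # visit counts

    def update(self, joint_action, sampled_return):
        # Credit the sampled return to every factor's local sub-action.
        for e, agents in enumerate(self.factors):
            a_e = tuple(joint_action[i] for i in agents)
            self.n[e][a_e] += 1
            self.q[e][a_e] += (sampled_return - self.q[e][a_e]) / self.n[e][a_e]

    def greedy_joint_action(self):
        # With disjoint factors, the argmax over the joint action decomposes
        # into independent argmaxes over each factor's small local action set.
        joint = {}
        for e, agents in enumerate(self.factors):
            local_sets = [self.local_actions[i] for i in agents]
            best = max(itertools.product(*local_sets),
                       key=lambda a_e: self.q[e][a_e])
            joint.update(dict(zip(agents, best)))
        return tuple(joint[i] for i in sorted(joint))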
Cost Adaptation for Robust Decentralized Swarm Behaviour
Decentralized receding horizon control (D-RHC) provides a mechanism for
coordination in multi-agent settings without a centralized command center.
However, combining a set of different goals, costs, and constraints to form an
efficient optimization objective for D-RHC can be difficult. To alleviate this
problem, we use a meta-learning process -- cost adaptation -- which generates
the optimization objective for D-RHC to solve based on a set of human-generated
priors (cost and constraint functions) and an auxiliary heuristic. We use this
adaptive D-RHC method for control of mesh-networked swarm agents. This
formulation allows a wide range of tasks to be encoded and can account for
network delays, heterogeneous capabilities, and increasingly large swarms
through the adaptation mechanism. We leverage the Unity3D game engine to build
a simulator capable of introducing artificial networking failures and delays in
the swarm. Using the simulator we validate our method on an example coordinated
exploration task. We demonstrate that cost adaptation allows for more efficient
and safer task completion under varying environment conditions and increasingly
large swarm sizes. We release our simulator and code to the community for
future work.
Comment: Accepted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 201
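A minimal sketch of what such a cost-adaptation loop could look like, under assumed simplifications: the priors are combined by a linear weighting, and the auxiliary heuristic returns a scalar score for the executed trajectory. CostAdapter and its methods are illustrative names, not the released simulator's API.

# Sketch of cost adaptation for decentralized receding horizon control (D-RHC):
# a library of human-provided cost/constraint priors is combined into one
# weighted objective, and the weights are adapted between planning rounds.
class CostAdapter:
    def __init__(self, cost_priors, learning_rate=0.1):
        # cost_priors: list of callables cost_i(state, action) -> float
        self.cost_priors = cost_priors
        self.weights = [1.0] * len(cost_priors)
        self.lr = learning_rate

    def objective(self, state, action):
        # Weighted combination handed to each agent's local RHC optimizer.
        return sum(w * c(state, action)
                   for w, c in zip(self.weights, self.cost_priors))

    def adapt(self, trajectory, auxiliary_heuristic):
        # trajectory: list of (state, action) pairs from the last plan.
        # Down-weight priors whose accumulated cost coincides with a poor
        # heuristic score; the sign convention is an assumption of this sketch.
        score = auxiliary_heuristic(trajectory)
        for i, c in enumerate(self.cost_priors):
            accumulated = sum(c(s, a) for s, a in trajectory)
            self.weights[i] = max(0.0, self.weights[i] - self.lr * score * accumulated)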
Better Optimism By Bayes: Adaptive Planning with Rich Models
The computational costs of inference and planning have confined Bayesian
model-based reinforcement learning to one of two dismal fates: powerful
Bayes-adaptive planning but only for simplistic models, or powerful, Bayesian
non-parametric models but using simple, myopic planning strategies such as
Thompson sampling. We ask whether it is feasible and truly beneficial to
combine rich probabilistic models with a closer approximation to fully Bayesian
planning. First, we use a collection of counterexamples to show formal problems
with the over-optimism inherent in Thompson sampling. Then we leverage
state-of-the-art techniques in efficient Bayes-adaptive planning and
non-parametric Bayesian methods to perform qualitatively better than both
existing conventional algorithms and Thompson sampling on two contextual
bandit-like problems.
Comment: 11 pages, 11 figures
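To make the contrast concrete, here is a small illustrative sketch (not the paper's algorithm) on a Beta-Bernoulli bandit: Thompson sampling acts greedily on a single posterior draw, while a shallow Bayes-adaptive lookahead scores each arm by its expected reward plus the value of replanning in the updated posterior.

import random

def thompson_action(alpha, beta):
    # Draw one success probability per arm from its Beta posterior and act
    # greedily on that single sample; an arm whose sample overshoots its
    # posterior mean is chosen optimistically.
    samples = [random.betavariate(a, b) for a, b in zip(alpha, beta)]
    return max(range(len(samples)), key=lambda k: samples[k])

def bayes_adaptive_action(alpha, beta, horizon=2):
    # Shallow Bayes-adaptive lookahead: score each arm by its expected
    # immediate reward plus the value of acting again in the updated belief.
    def value(a, b, h):
        if h == 0:
            return 0.0
        return max(q(a, b, k, h) for k in range(len(a)))

    def q(a, b, k, h):
        p = a[k] / (a[k] + b[k])            # posterior mean of arm k
        a_win = list(a); a_win[k] += 1      # belief if arm k pays off
        b_lose = list(b); b_lose[k] += 1    # belief if it does not
        return p * (1.0 + value(a_win, b, h - 1)) + (1.0 - p) * value(a, b_lose, h - 1)

    return max(range(len(alpha)), key=lambda k: q(alpha, beta, k, horizon))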