Scalable Planning and Learning for Multiagent POMDPs: Extended Version
Online, sample-based planning algorithms for POMDPs have shown great promise
in scaling to problems with large state spaces, but they become intractable for
large action and observation spaces. This is particularly problematic in
multiagent POMDPs, where the action and observation spaces grow exponentially
with the number of agents. To combat this intractability, we propose a novel
scalable approach based on sample-based planning and factored value functions
that exploits structure present in many multiagent settings. This approach
applies not only in the planning case, but also in the Bayesian reinforcement
learning setting. Experimental results show that we are able to provide
high-quality solutions to large multiagent planning and learning problems.
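The factored value functions mentioned in the abstract can be sketched in a few lines. This is a hedged toy illustration, not the paper's algorithm: the agent count, factor scopes, and Q-tables below are invented, and a brute-force argmax stands in for the variable-elimination / max-plus step a real planner would use so that cost scales with factor sizes rather than with the full joint action space.

```python
import itertools

# Illustrative sketch: the global value of a joint action is assumed to
# decompose into a sum of local Q-factors over small subsets of agents.
# All numbers and scopes here are made up for demonstration.

n_agents = 4
actions = [0, 1]  # per-agent action set

# Local Q-factors over pairs of neighboring agents.
# scope (tuple of agent indices) -> {local joint action: value}
factors = {
    (0, 1): {(a, b): a ^ b for a in actions for b in actions},
    (1, 2): {(a, b): a & b for a in actions for b in actions},
    (2, 3): {(a, b): a | b for a in actions for b in actions},
}

def joint_value(joint_action):
    """Global value = sum of local factor values (the factored assumption)."""
    return sum(q[tuple(joint_action[i] for i in scope)]
               for scope, q in factors.items())

# Brute force for clarity only; a factored planner would exploit the graph
# structure so it never enumerates all |A|^n joint actions.
best = max(itertools.product(actions, repeat=n_agents), key=joint_value)
print(best, joint_value(best))
```

Because each factor touches only two agents, its table has 4 entries regardless of the total number of agents, which is the structure the paper's sample-based planner exploits.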
Scale-free memory model for multiagent reinforcement learning. Mean field approximation and rock-paper-scissors dynamics
A continuous time model for multiagent systems governed by reinforcement
learning with scale-free memory is developed. The agents are assumed to act
independently of one another in optimizing their choice of possible actions via
trial-and-error search. To estimate action values, the agents accumulate in
memory the rewards obtained from taking a specific action at each moment of
time. The contribution of past rewards to the agent's current perception of an
action's value is described by an integral operator with a power-law kernel.
Finally, a fractional differential equation governing
the system dynamics is obtained. The agents are considered to interact with one
another implicitly via the reward of one agent depending on the choice of the
other agents. The pairwise interaction model is adopted to describe this
effect. As specific examples of systems with non-transitive interactions,
two-agent and three-agent systems of the rock-paper-scissors type are analyzed
in detail, including stability analysis and numerical simulation.
Scale-free memory is demonstrated to cause complex dynamics of the systems at
hand. In particular, it is shown that two modes of system instability,
undergoing subcritical and supercritical bifurcations, can exist
simultaneously, with the latter exhibiting anomalous oscillations whose
amplitude and period grow with time. Moreover, the onset of instability via
this supercritical mode may be regarded as "altruism self-organization". For
the three-agent system, the instability dynamics is found to be rather
irregular and can be composed of alternating fragments of oscillations with
differing properties.
Comment: 17 pages, 7 figures
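A discrete-time caricature of the scale-free memory described above can be simulated directly: each agent's perceived value of an action is a power-law-weighted sum of the rewards that action earned in the past, so old rewards decay slowly rather than exponentially. This is a hedged sketch, not the paper's continuous-time model: the exponent, horizon, softmax choice rule, and zero-sum rock-paper-scissors payoffs are illustrative assumptions.

```python
import numpy as np

alpha = 0.5   # power-law memory exponent (illustrative)
T = 200       # simulation horizon
rng = np.random.default_rng(0)

# Rock-paper-scissors payoff for agent 0 vs agent 1 (rows/cols: R, P, S).
payoff = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]])

def perceived_value(history, t, alpha=0.5):
    """Power-law weighted action values from a (time, action, reward) history."""
    q = np.zeros(3)
    for s, a, r in history:
        q[a] += r * (t - s) ** (-alpha)  # kernel ~ (t - s)^(-alpha)
    return q

hist = [[], []]  # per-agent reward histories
for t in range(1, T + 1):
    choices = []
    for i in (0, 1):
        q = perceived_value(hist[i], t, alpha)
        p = np.exp(q - q.max())          # softmax trial-and-error choice
        p /= p.sum()
        choices.append(int(rng.choice(3, p=p)))
    r0 = payoff[choices[0], choices[1]]  # zero-sum interaction
    hist[0].append((t, choices[0], r0))
    hist[1].append((t, choices[1], -r0))

print(perceived_value(hist[0], T + 1, alpha))
```

With an exponential kernel the perceived values would settle quickly; the slowly decaying power-law weights are what allow the long-lived, growing oscillations the abstract describes.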
Influence-Optimistic Local Values for Multiagent Planning --- Extended Version
Recent years have seen the development of methods for multiagent planning
under uncertainty that scale to tens or even hundreds of agents. However, most
of these methods either make restrictive assumptions on the problem domain, or
provide approximate solutions without any guarantees on quality. Methods in the
former category typically build on heuristic search using upper bounds on the
value function. Unfortunately, no techniques exist to compute such upper bounds
for problems with non-factored value functions. To allow for meaningful
benchmarking through measurable quality guarantees on a very general class of
problems, this paper introduces a family of influence-optimistic upper bounds
for factored decentralized partially observable Markov decision processes
(Dec-POMDPs) that do not have factored value functions. Intuitively, we derive
bounds on very large multiagent planning problems by subdividing them in
sub-problems, and at each of these sub-problems making optimistic assumptions
with respect to the influence that will be exerted by the rest of the system.
We numerically compare the different upper bounds and demonstrate how we can
achieve a non-trivial guarantee that a heuristic solution for problems with
hundreds of agents is close to optimal. Furthermore, we provide evidence that
the upper bounds may improve the effectiveness of heuristic influence search,
and discuss further potential applications to multiagent planning.
Comment: Long version of IJCAI 2015 paper (and extended abstract at AAMAS 2015)
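The influence-optimistic intuition in the abstract admits a very small sketch: split the problem into sub-problems, and upper-bound each one by letting the rest of the system exert the best possible influence on it. Everything below is a hypothetical toy, not the Dec-POMDP machinery: the sub-problem names, influence labels, and value tables are invented for illustration.

```python
# Each sub-problem's achievable value depends on its own policy choice and
# on an "influence" exerted by the rest of the system (both illustrative).
sub_values = {
    "left":  {("a", "good"): 5, ("a", "bad"): 2,
              ("b", "good"): 4, ("b", "bad"): 3},
    "right": {("a", "good"): 6, ("a", "bad"): 1,
              ("b", "good"): 5, ("b", "bad"): 4},
}

def optimistic_upper_bound(values):
    # Optimize each sub-problem jointly over its own policy AND the external
    # influence. Since no real joint policy can give every sub-problem its
    # most favorable influence at once, the sum upper-bounds the true value.
    return sum(max(v.values()) for v in values.values())

print(optimistic_upper_bound(sub_values))  # 5 + 6 = 11
```

The bound is loose exactly when the sub-problems' favorite influences conflict, which is why the paper compares a family of such bounds of varying tightness.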
Q-CP: Learning Action Values for Cooperative Planning
Research on multi-robot systems has demonstrated promising results in manifold applications and domains. Still, efficiently learning effective robot behaviors is very difficult, due to unstructured scenarios, high uncertainties, and large state dimensionality (e.g., hyper-redundant robots and groups of robots). To alleviate this problem, we present Q-CP, a cooperative model-based reinforcement learning algorithm, which exploits action values to both (1) guide the exploration of the state space and (2) generate effective policies. Specifically, we exploit Q-learning to attack the curse of dimensionality in the iterations of a Monte-Carlo Tree Search. We implement and evaluate Q-CP on different stochastic cooperative (general-sum) games: (1) a simple cooperative navigation problem among 3 robots, (2) a cooperation scenario between a pair of KUKA YouBots performing hand-overs, and (3) a coordination task between two mobile robots entering a door. The obtained results show the effectiveness of Q-CP in the chosen applications, where action values drive the exploration and reduce the computational demand of the planning process while achieving good performance.
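The core mechanism the abstract describes, using learned action values to guide tree-search exploration, can be sketched with a single-node UCB selection rule seeded from a Q-table. This is a hedged illustration, not the Q-CP implementation: the action names, Q-values, reward model, and exploration constant are all assumptions made up for the example.

```python
import math
import random

random.seed(0)

ACTIONS = ["left", "right", "wait"]
# Hypothetical values, e.g. obtained beforehand via Q-learning.
learned_q = {"left": 0.2, "right": 0.8, "wait": 0.1}

class Node:
    def __init__(self):
        # Seed search statistics from the learned Q-values instead of zeros,
        # so early selection is biased toward promising actions.
        self.visits = {a: 1 for a in ACTIONS}
        self.value = {a: learned_q[a] for a in ACTIONS}

    def select(self, c=1.0):
        # UCB1-style rule: exploit high values, explore rarely tried actions.
        total = sum(self.visits.values())
        return max(ACTIONS, key=lambda a: self.value[a]
                   + c * math.sqrt(math.log(total) / self.visits[a]))

    def update(self, a, reward):
        self.visits[a] += 1
        self.value[a] += (reward - self.value[a]) / self.visits[a]

root = Node()
for _ in range(100):
    a = root.select()
    # Stand-in for a simulation/rollout: noisy reward favoring "right".
    reward = (1.0 if a == "right" else 0.3) + random.uniform(-0.1, 0.1)
    root.update(a, reward)

print(max(ACTIONS, key=lambda a: root.visits[a]))  # likely "right"
```

In a full MCTS this selection rule would be applied at every tree node; the learned prior concentrates simulations on a small subset of actions, which is how the abstract's reduction in computational demand arises.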