Multi Type Mean Field Reinforcement Learning
Mean field theory provides an effective way of scaling multiagent
reinforcement learning algorithms to environments with many agents that can be
abstracted by a virtual mean agent. In this paper, we extend mean field
multiagent algorithms to multiple types. The types enable the relaxation of a
core assumption of mean field games: that all agents in the environment play
broadly similar strategies and share the same goal. We
conduct experiments on three different testbeds for many-agent
reinforcement learning, built on the standard MAgent framework. We consider
two different kinds of mean field games: a) Games where agents belong to
predefined types that are known a priori and b) Games where the type of each
agent is unknown and therefore must be learned based on observations. We
introduce new algorithms for each type of game and demonstrate their superior
performance over state-of-the-art algorithms that assume all agents belong
to the same type, as well as over other baseline algorithms in the MAgent framework.

Comment: Paper to appear in the Proceedings of the International Conference on
Autonomous Agents and Multi-Agent Systems (AAMAS) 2020. The revised version
corrects some typos.
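As a rough illustration of the idea (not the paper's actual algorithm), the sketch below computes a separate empirical mean action per agent type, so that a policy can condition on one mean action for each neighbour type rather than a single global mean; the function name and toy setup are my own assumptions.

```python
import numpy as np

def mean_actions_by_type(actions, types, n_types, n_actions):
    """Average one-hot neighbour actions separately per type.

    Returns an (n_types, n_actions) array: row t is the empirical
    mean action of neighbours of type t (zeros if no such neighbours).
    """
    means = np.zeros((n_types, n_actions))
    for a, t in zip(actions, types):
        means[t, a] += 1.0
    counts = np.bincount(types, minlength=n_types).reshape(-1, 1)
    return means / np.maximum(counts, 1)  # avoid division by zero

# toy usage: 5 neighbours, 2 types, 3 discrete actions
m = mean_actions_by_type([0, 2, 2, 1, 0], [0, 0, 1, 1, 1], 2, 3)
print(m)  # one mean-action row per type
```

In a single-type mean-field scheme all five neighbours would be pooled into one row; keeping one row per type is what lets heterogeneous groups of agents be modelled separately.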
Deep learning for video game playing
In this article, we review recent Deep Learning advances in the context of
how they have been applied to play different types of video games such as
first-person shooters, arcade games, and real-time strategy games. We analyze
the unique requirements that different game genres pose to a deep learning
system and highlight important open challenges in the context of applying these
machine learning methods to video games, such as general game playing, dealing
with extremely large decision spaces, and coping with sparse rewards.
Deep Reinforcement Learning for Swarm Systems
Recently, deep reinforcement learning (RL) methods have been applied
successfully to multi-agent scenarios. Typically, these methods rely on a
concatenation of agent states to represent the information content required for
decentralized decision making. However, concatenation scales poorly to swarm
systems with a large number of homogeneous agents as it does not exploit the
fundamental properties inherent to these systems: (i) the agents in the swarm
are interchangeable and (ii) the exact number of agents in the swarm is
irrelevant. Therefore, we propose a new state representation for deep
multi-agent RL based on mean embeddings of distributions. We treat the agents
as samples of a distribution and use the empirical mean embedding as input for
a decentralized policy. We define different feature spaces of the mean
embedding using histograms, radial basis functions and a neural network learned
end-to-end. We evaluate the representation on two well known problems from the
swarm literature (rendezvous and pursuit evasion), in a globally and locally
observable setup. For the local setup we furthermore introduce simple
communication protocols. Of all approaches, the mean embedding representation
using neural network features enables the richest information exchange between
neighboring agents facilitating the development of more complex collective
strategies.

Comment: 31 pages, 12 figures, version 3 (published in JMLR Volume 20).
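A minimal sketch of the mean-embedding idea, assuming RBF features (one of the feature spaces the abstract mentions); the function name and toy data are hypothetical. Treating neighbour states as samples and averaging their features yields a fixed-size, permutation-invariant input for a decentralized policy, regardless of swarm size.

```python
import numpy as np

def rbf_mean_embedding(agent_states, centers, gamma=1.0):
    """Empirical mean embedding of neighbour states under RBF features.

    agent_states: (n_agents, d) array of observed neighbour states.
    centers:      (n_centers, d) array of fixed RBF centres.
    Returns an (n_centers,) vector: permutation-invariant and
    independent of the number of agents.
    """
    # pairwise squared distances between agents and centres
    d2 = ((agent_states[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-gamma * d2)   # (n_agents, n_centers) feature matrix
    return phi.mean(axis=0)     # average over agents -> fixed-size input

# toy usage: 4 neighbours in 2-D, 3 RBF centres
states = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
centers = np.array([[0., 0.], [0.5, 0.5], [1., 1.]])
emb = rbf_mean_embedding(states, centers)  # shape (3,) for any swarm size
```

Because the embedding averages over agents, shuffling or duplicating interchangeable neighbours leaves the policy input unchanged, which is exactly the property that plain state concatenation lacks.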
Factorized Q-Learning for Large-Scale Multi-Agent Systems
Deep Q-learning has achieved significant success in single-agent decision
making tasks. However, it is challenging to extend Q-learning to large-scale
multi-agent scenarios, due to the explosion of action space resulting from the
complex dynamics between the environment and the agents. In this paper, we
propose to make the computation of multi-agent Q-learning tractable by treating
the Q-function (w.r.t. state and joint action) as a high-order, high-dimensional
tensor and then approximating it with factorized pairwise interactions.
Furthermore, we utilize a composite deep neural network architecture for
computing the factorized Q-function, share the model parameters among all the
agents within the same group, and estimate the agents' optimal joint actions
through a coordinate descent type algorithm. All these simplifications greatly
reduce the model complexity and accelerate the learning process. Extensive
experiments on two different multi-agent problems demonstrate the performance
gain of our proposed approach in comparison with strong baselines, particularly
when there are a large number of agents.

Comment: 7 pages, 5 figures, DAI 201
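A toy sketch of the two ingredients the abstract describes, under my own assumptions (tabular pairwise weights rather than the paper's shared neural network): a joint Q-value factorized into pairwise terms, and a coordinate-descent-style search that improves one agent's action at a time with the others held fixed.

```python
import numpy as np

def pairwise_q(joint_action, w):
    """Factorized joint Q-value: sum of pairwise terms w[i, j, a_i, a_j].

    w has shape (n_agents, n_agents, n_actions, n_actions); only the
    upper triangle (i < j) is used.
    """
    n = len(joint_action)
    return sum(w[i, j, joint_action[i], joint_action[j]]
               for i in range(n) for j in range(i + 1, n))

def coordinate_ascent(w, n_agents, n_actions, sweeps=10):
    """Greedy coordinate updates: each agent picks its best action
    while all other agents' actions are held fixed."""
    a = np.zeros(n_agents, dtype=int)
    for _ in range(sweeps):
        for i in range(n_agents):
            a[i] = max(range(n_actions),
                       key=lambda ai: pairwise_q(
                           np.concatenate([a[:i], [ai], a[i + 1:]]), w))
    return a
```

The pairwise factorization replaces a table exponential in the number of agents with O(n^2) pairwise tables, which is the source of the tractability gain; coordinate updates find a locally optimal joint action without enumerating the joint action space.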