Arena: A General Evaluation Platform and Building Toolkit for Multi-Agent Intelligence
Learning agents that are not only capable of taking tests but also of
innovating is becoming a hot topic in AI. One of the most promising paths
towards this vision is multi-agent learning, where agents act as the
environment for each other, and improving each agent means proposing new
problems for the others. However, existing evaluation platforms are either
incompatible with multi-agent settings or limited to a specific game; that is,
there is not yet a general evaluation platform for research on multi-agent
intelligence. To this end, we introduce Arena, a general evaluation platform
for multi-agent intelligence with 35 games of diverse logics and
representations. Furthermore, since multi-agent intelligence is still at a
stage where many problems remain unexplored, we provide a building toolkit
that lets researchers easily invent and build novel multi-agent problems from
the provided game set, based on a GUI-configurable social tree and five basic
multi-agent reward schemes. Finally, we provide Python implementations of five
state-of-the-art deep multi-agent reinforcement learning baselines. Along with
the baseline implementations, we release a set of 100 best agents/teams
trained with different training schemes for each game, as a basis for
evaluating agents against a population. As such, the research community can
perform comparisons under a stable and uniform standard. All the
implementations and accompanying tutorials have been open-sourced for the
community at https://sites.google.com/view/arena-unity/
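
To make the social-tree idea concrete, here is a minimal Python sketch of how a tree of teams combined with two basic reward schemes could yield per-agent rewards. The names (TeamNode, collaborative, competitive) and the exact formulas are illustrative assumptions, not the actual Arena API.

# Hypothetical sketch, not the Arena API: a social tree groups agents into
# teams, and a reward scheme maps raw per-agent rewards to shaped rewards.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TeamNode:
    """A node in the social tree: a team of sub-teams and/or agents."""
    name: str
    children: List["TeamNode"] = field(default_factory=list)
    agent_ids: List[int] = field(default_factory=list)

    def members(self) -> List[int]:
        ids = list(self.agent_ids)
        for child in self.children:
            ids.extend(child.members())
        return ids

def collaborative(team: TeamNode, raw: Dict[int, float]) -> Dict[int, float]:
    """Collaborative scheme: each member receives the team's mean reward."""
    ids = team.members()
    mean = sum(raw[i] for i in ids) / len(ids)
    return {i: mean for i in ids}

def competitive(teams: List[TeamNode], raw: Dict[int, float]) -> Dict[int, float]:
    """Competitive scheme: a team is rewarded by its margin over the opponents' mean."""
    means = {t.name: sum(raw[i] for i in t.members()) / len(t.members()) for t in teams}
    out: Dict[int, float] = {}
    for t in teams:
        others = [m for n, m in means.items() if n != t.name]
        margin = means[t.name] - sum(others) / len(others)
        out.update({i: margin for i in t.members()})
    return out

# Two teams of two agents, with raw per-agent rewards from the environment.
red = TeamNode("red", agent_ids=[0, 1])
blue = TeamNode("blue", agent_ids=[2, 3])
raw = {0: 1.0, 1: 0.0, 2: 0.5, 3: 0.5}
print(collaborative(red, raw))        # {0: 0.5, 1: 0.5}
print(competitive([red, blue], raw))  # all zeros here: the team means are tied

In this toy setup the collaborative scheme shares a team's mean reward among its members, while the competitive scheme rewards a team by its margin over opponents; Arena's five schemes, configured through the GUI social tree, would generalize choices of this kind.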
The Dreaming Variational Autoencoder for Reinforcement Learning Environments
Reinforcement learning has shown great potential in generalizing over raw
sensory data using only a single neural network for value optimization.
Several challenges in current state-of-the-art reinforcement learning
algorithms prevent them from converging towards the global optimum. It is
likely that the solution to these problems lies in short- and long-term
planning, exploration, and memory management for reinforcement learning
algorithms. Games are often used to benchmark reinforcement learning
algorithms, as they provide a flexible, reproducible, and easy-to-control
environment. Nevertheless, few games feature a state-space in which results
in exploration, memory, and planning are easily perceived. This paper presents
the Dreaming Variational Autoencoder (DVAE), a neural-network-based generative
modeling architecture for exploration in environments with sparse feedback. We
further present Deep Maze, a novel and flexible maze engine that challenges
DVAE with partially and fully observable state-spaces, long-horizon tasks, and
deterministic and stochastic problems. We show initial findings and encourage
further work in reinforcement learning driven by generative exploration.

Comment: Best Student Paper Award, Proceedings of the 38th SGAI International
Conference on Artificial Intelligence, Cambridge, UK, 2018, Artificial
Intelligence XXXV, 201
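
As a rough illustration of the underlying idea, a generative model that "dreams" future observations to drive exploration, here is a minimal PyTorch sketch of a VAE conditioned on the action; the layer sizes, names, and loss are assumptions for illustration, not the paper's exact DVAE architecture.

# Illustrative sketch only: encode the current observation into a latent code,
# then decode a predicted next observation conditioned on the action taken.
import torch
import torch.nn as nn

class DreamingVAE(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        # The decoder "dreams" the next observation from (latent, action).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + act_dim, 128), nn.ReLU(),
            nn.Linear(128, obs_dim),
        )

    def forward(self, obs, act):
        h = self.encoder(obs)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        next_obs_hat = self.decoder(torch.cat([z, act], dim=-1))
        return next_obs_hat, mu, logvar

def loss_fn(next_obs_hat, next_obs, mu, logvar):
    # Reconstruction of the *next* observation plus the standard VAE KL term.
    recon = ((next_obs_hat - next_obs) ** 2).sum(dim=-1).mean()
    kl = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1)).mean()
    return recon + kl

A model of this shape can be rolled out on imagined transitions in sparse-feedback environments, which is the kind of generative exploration the abstract describes.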
Deep Reinforcement Learning on a Budget: 3D Control and Reasoning Without a Supercomputer
An important goal of research in Deep Reinforcement Learning in mobile
robotics is to train agents capable of solving complex tasks that require a
high level of scene understanding and reasoning from an egocentric perspective.
When training from simulation, an optimal environment should satisfy a
currently unobtainable combination of high-fidelity photographic observations,
massive numbers of different environment configurations, and fast simulation
speeds. In this paper, we argue that research on training agents capable of
complex reasoning can be simplified by decoupling it from the requirement of
high-fidelity photographic observations. We present a suite of tasks requiring
complex reasoning and exploration in continuous, partially observable 3D
environments. The objective is to provide challenging scenarios and a robust
baseline agent architecture that can be trained on mid-range consumer hardware
in under 24 hours. Our scenarios combine two key advantages: (i) they are based
on a simple but highly efficient 3D environment (ViZDoom), which allows
high-speed simulation (12,000 fps); (ii) they provide the user with a range of
difficulty settings, in order to identify the limitations of current
state-of-the-art algorithms and network architectures. We aim to increase
accessibility to the field of Deep-RL by providing baselines for challenging
scenarios where new ideas can be iterated on quickly. We argue that the
community should be able to address challenging problems in the reasoning of
mobile agents without the need for a large compute infrastructure.
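
For readers who want to try a similar low-cost setup, here is a minimal sketch of running a ViZDoom scenario headlessly with a random agent; the scenario config path is a placeholder, and the paper's actual task suite and training code are not shown.

# Minimal ViZDoom sketch: headless simulation with a random policy.
import random
import vizdoom as vzd

game = vzd.DoomGame()
game.load_config("scenarios/basic.cfg")  # placeholder: point at your scenario
game.set_window_visible(False)           # headless rendering for speed
game.init()

actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # one-hot button combinations
for _ in range(3):
    game.new_episode()
    while not game.is_episode_finished():
        state = game.get_state()                   # screen buffer + game variables
        game.make_action(random.choice(actions))   # returns the step reward
    print("episode return:", game.get_total_reward())
game.close()

Disabling the window and keeping the scenario's screen resolution low is what makes simulation speeds in the thousands of frames per second attainable on consumer hardware.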