Arena: A General Evaluation Platform and Building Toolkit for Multi-Agent Intelligence
Learning agents that are capable not only of taking tests but also of
innovating is becoming a hot topic in AI. One of the most promising paths
towards this vision is multi-agent learning, where agents act as the
environment for each other, and improving each agent means proposing new
problems for others. However, existing evaluation platforms are either not
compatible with multi-agent settings, or limited to a specific game. That is,
there is not yet a general evaluation platform for research on multi-agent
intelligence. To this end, we introduce Arena, a general evaluation platform
for multi-agent intelligence with 35 games of diverse logics and
representations. Furthermore, research on multi-agent intelligence is still at
an early stage, and many problems remain unexplored. Therefore, we provide a building toolkit
for researchers to easily invent and build novel multi-agent problems from the
provided game set based on a GUI-configurable social tree and five basic
multi-agent reward schemes. Finally, we provide Python implementations of five
state-of-the-art deep multi-agent reinforcement learning baselines. Along with
the baseline implementations, we release the set of the 100 best agents/teams
trained with different training schemes for each game, as a basis for
evaluating agents by population performance. As such, the research community
can perform comparisons under a stable and uniform standard. All the
implementations and accompanying tutorials have been open-sourced for the
community at https://sites.google.com/view/arena-unity/
Generating Levels That Teach Mechanics
The automatic generation of game tutorials is a challenging AI problem. While
it is possible to generate annotations and instructions that explain to the
player how the game is played, this paper focuses on generating a gameplay
experience that introduces the player to a game mechanic. The method evolves
small levels for the Mario AI Framework that can only be beaten by an agent that
knows how to perform specific actions in the game. It uses variations of a
perfect A* agent that are limited in various ways, such as not being able to
jump high or see enemies, to test how failing to do certain actions can stop
the player from beating the level.
Comment: 8 pages, 7 figures, PCG Workshop at FDG 2018, 9th International Workshop on Procedural Content Generation (PCG 2018)
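To make the evaluation idea concrete, here is a toy, self-contained sketch of evolving levels that a full-capability agent can beat but a deliberately limited agent cannot, so that beating the level requires exactly the missing skill. This is not the Mario AI Framework or the paper's actual agents; the 1-D level encoding, jump heights, and all names are illustrative assumptions.

```python
# Toy sketch: evolve levels beatable by a "perfect" agent (jump height 3)
# but not by a limited agent (jump height 1), so the level necessarily
# teaches the high-jump mechanic. Encoding and parameters are assumptions.

import random

LEVEL_LEN = 10
MAX_OBSTACLE = 3


def can_beat(level, jump_height):
    """An agent beats the level iff it can clear every obstacle."""
    return all(h <= jump_height for h in level)


def fitness(level):
    """Score levels the perfect agent beats but the limited agent does not."""
    perfect = can_beat(level, jump_height=3)
    limited = can_beat(level, jump_height=1)
    if perfect and not limited:
        # Prefer exactly one hard obstacle: a focused tutorial level.
        hard = sum(1 for h in level if h > 1)
        return 10 - abs(hard - 1)
    return 0


def mutate(level):
    """Randomly change one obstacle height."""
    child = list(level)
    child[random.randrange(LEVEL_LEN)] = random.randint(0, MAX_OBSTACLE)
    return child


def evolve(generations=200, pop_size=20):
    pop = [[random.randint(0, MAX_OBSTACLE) for _ in range(LEVEL_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        pop = pop[: pop_size // 2]  # keep the best half
        pop += [mutate(random.choice(pop)) for _ in range(pop_size - len(pop))]
    return max(pop, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("level:", best, "fitness:", fitness(best))
```

In the paper's setting the limited agents are restricted variants of a perfect A* agent (e.g., unable to jump high or see enemies) playing real Mario levels; the sketch above only mirrors the pass/fail evaluation structure.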
Using a high-level language to build a poker playing agent
Integrated Master's thesis. Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 200