103 research outputs found

    Ms. Pac-Man Versus Ghost Team CIG 2016 competition

    This paper introduces the revival of the popular Ms. Pac-Man Versus Ghost Team competition. We present an updated game engine with Partial Observability constraints, a new Multi-Agent Systems approach to developing Ghost agents, and several sample controllers to ease the development of entries. A restricted communication protocol is provided for the Ghosts, yielding a more challenging environment than before. The competition will debut at the IEEE Computational Intelligence and Games Conference 2016. Some preliminary results showing the effects of Partial Observability and the benefits of simple communication are also presented.
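    To make the communication constraint concrete, here is a minimal, hypothetical sketch (in Python, not the competition's actual Java framework) of ghosts that only observe Ms. Pac-Man within a limited sight radius and may broadcast at most one short message per game tick:

```python
# Hypothetical sketch of ghosts sharing limited information under
# partial observability; names and message format are illustrative,
# not the competition's real interface.
import math
from dataclasses import dataclass

SIGHT_RADIUS = 5           # how far a ghost can "see"
MAX_MESSAGES_PER_TICK = 1  # restricted communication budget per ghost

@dataclass
class Message:
    sender: str
    pacman_pos: tuple  # (x, y) where the sender last saw Ms. Pac-Man

class Ghost:
    def __init__(self, name, pos):
        self.name = name
        self.pos = pos
        self.belief = None  # last known Ms. Pac-Man position (possibly stale)

    def observe(self, pacman_pos):
        """Return Ms. Pac-Man's position only if she is within sight."""
        if math.dist(self.pos, pacman_pos) <= SIGHT_RADIUS:
            return pacman_pos
        return None

    def act(self, pacman_pos, inbox):
        # Update belief from own observation first, then from teammates.
        seen = self.observe(pacman_pos)
        if seen is not None:
            self.belief = seen
        else:
            for msg in inbox:
                self.belief = msg.pacman_pos
        # Respect the communication budget: send at most one message.
        outbox = []
        if seen is not None and MAX_MESSAGES_PER_TICK > 0:
            outbox.append(Message(self.name, seen))
        return self.belief, outbox

# Usage: one ghost sees Ms. Pac-Man and informs a teammate who cannot.
blinky, pinky = Ghost("Blinky", (2, 2)), Ghost("Pinky", (20, 20))
pacman = (4, 3)
_, out = blinky.act(pacman, inbox=[])
belief, _ = pinky.act(pacman, inbox=out)
print(belief)  # (4, 3), learned via the restricted message channel
```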

    Artificial intelligence in co-operative games with partial observability

    This thesis investigates Artificial Intelligence in co-operative games that feature Partial Observability. Most video games combine both co-operation and Partial Observability. Co-operative games are games in which a team of at least two agents must achieve a shared goal of some kind. Partial Observability is a restriction on how much of the environment an agent can observe. The research in this thesis examines the challenge of creating Artificial Intelligence for co-operative games that feature Partial Observability. The main contributions are: showing that Monte-Carlo Tree Search outperforms Genetic Algorithm based agents in solving co-operative problems without communication; the creation of a co-operative Partial Observability competition promoting Artificial Intelligence research; an investigation of the effect of varying Partial Observability on Artificial Intelligence; and the creation of a high-performing Monte-Carlo Tree Search agent for the game Hanabi that uses agent modelling to reason about the other players.
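    As a rough illustration of the search technique the thesis builds on, the sketch below shows a generic UCT-style Monte-Carlo Tree Search loop with a naive determinization step for hidden information; the game interface (`sample_state`, `legal_moves`, `apply`, `is_terminal`, `reward`) is assumed for exposition and is not the thesis framework:

```python
# Minimal UCT-style MCTS sketch; the game interface is a stand-in.
# Hidden information is handled naively by sampling ("determinizing")
# a full state consistent with the agent's observation each iteration.
import math, random

class Node:
    def __init__(self, parent=None, move=None):
        self.parent, self.move = parent, move
        self.children, self.visits, self.value = [], 0, 0.0

def uct(node, c=1.41):
    if node.visits == 0:
        return float("inf")  # always try unvisited children first
    return (node.value / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def mcts(observation, sample_state, iterations=1000):
    """Return the most-visited root move after `iterations` simulations."""
    root = Node()
    for _ in range(iterations):
        state = sample_state(observation)  # determinization step
        node = root
        # 1. Selection: descend the tree by the UCT rule.
        while node.children and not state.is_terminal():
            node = max(node.children, key=uct)
            state.apply(node.move)
        # 2. Expansion: add one child per legal move at the reached leaf.
        if not state.is_terminal() and not node.children:
            node.children = [Node(node, m) for m in state.legal_moves()]
            node = random.choice(node.children)
            state.apply(node.move)
        # 3. Simulation: random rollout to the end of the game.
        while not state.is_terminal():
            state.apply(random.choice(state.legal_moves()))
        # 4. Backpropagation: one shared reward, since the game is cooperative.
        reward = state.reward()
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda n: n.visits).move
```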

    Pac-Man Conquers Academia: Two Decades of Research Using a Classic Arcade Game


    The 2018 Hanabi competition

    This paper outlines the Hanabi competition, first run at CIG 2018 and returning for COG 2019. Hanabi presents a useful domain for game agents that must function in a cooperative environment. The paper presents the results of the two tracks which formed the 2018 competition and introduces the learning track, a new track for 2019 which allows agents to collect statistics across multiple games.

    The 2016 Two-Player GVGAI Competition

    This paper showcases the setting and results of the first Two-Player General Video Game AI competition, which ran in 2016 at the IEEE World Congress on Computational Intelligence and the IEEE Conference on Computational Intelligence and Games. This track expands the challenges of the single-player version, looking at direct player interaction in both competitive and cooperative environments of various types and degrees of difficulty. The focus is on agents not only handling multiple problems, but also having to account for another intelligent entity in the game, which is expected to work towards its own goal (winning the game) and will likely interact with the first agent in a more engaging way than the environment or any non-player character. The top competition entries are analyzed in detail and the performance of all agents is compared across the four sets of games. The results validate the competition system in assessing generality, and show Monte Carlo Tree Search continuing to dominate by winning the overall championship. However, this approach is closely followed by Rolling Horizon Evolutionary Algorithms, employed by the winner of the second leg of the contest.
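    For readers unfamiliar with the runner-up approach, the sketch below illustrates the core Rolling Horizon Evolutionary Algorithm idea under an assumed forward-model interface (`copy`, `apply`, `score`) standing in for the actual GVGAI API: evolve a short action sequence every tick, execute only its first action, then re-plan.

```python
# Rolling Horizon Evolution sketch; the forward-model interface
# (copy/apply/score) is a stand-in, not the real GVGAI API.
import random

HORIZON = 10        # length of each evolved action sequence
POPULATION = 20
GENERATIONS = 30
MUTATION_RATE = 0.2

def evaluate(state, plan):
    """Roll the plan forward on a copy of the state and return its score."""
    sim = state.copy()
    for action in plan:
        sim.apply(action)
    return sim.score()

def mutate(plan, actions):
    return [random.choice(actions) if random.random() < MUTATION_RATE else a
            for a in plan]

def rhea_next_action(state, actions):
    """Evolve fixed-length plans, then return the first action of the best."""
    population = [[random.choice(actions) for _ in range(HORIZON)]
                  for _ in range(POPULATION)]
    for _ in range(GENERATIONS):
        ranked = sorted(population, key=lambda p: evaluate(state, p),
                        reverse=True)
        elite = ranked[: POPULATION // 2]
        population = elite + [mutate(random.choice(elite), actions)
                              for _ in range(POPULATION - len(elite))]
    best = max(population, key=lambda p: evaluate(state, p))
    return best[0]   # execute one action, then re-plan on the next tick
```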

    Study of artificial intelligence algorithms applied to the generation of non-playable characters in arcade games

    Nowadays, the rise of Artificial Intelligence in different fields is leading to an increase in the research being carried out on it. One of these fields is video games. Since the beginning of video games, the user experience in terms of gameplay and, above all, graphics has taken priority, with less attention paid to Artificial Intelligence. Now, thanks to the availability of better machines that can perform computationally expensive operations with less difficulty, more complex Artificial Intelligence techniques can be applied, giving games better performance and greater realism. This is the case, for example, of creating intelligent agents that mimic human behaviour in a more realistic way. In recent years, different competitions have been created to develop and analyze AI techniques through video games. Some of the techniques under study are level generation, as in the Angry Birds AI Competition; data mining of MMORPG (massively multiplayer online role-playing game) game logs to predict players' economic engagement, in the Game Data Mining Competition; the development of RTS (Real-Time Strategy) game AI to solve challenging issues such as uncertainty, real-time processing and unit management, in the StarCraft AI Competition; and research into PO (Partial Observability) in the Ms. Pac-Man Vs Ghost Team Competition, by designing controllers for Ms. Pac-Man and the Ghost Team.
    This work focuses on this last competition and aims to develop a hybrid technique consisting of a genetic algorithm and case-based reasoning. The genetic algorithm is used to generate and optimize a set of rules that the Ghosts use to play against Ms. Pac-Man. We then study the parameters involved in the execution of the genetic algorithm to see how they affect the fitness values obtained by the generated agents.
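    A minimal sketch of the genetic-algorithm half of such a hybrid is shown below; the condition/action rule encoding and the fitness signature are assumptions chosen for illustration, not the actual design studied in this work.

```python
# Illustrative GA evolving a small rule set for the Ghost team; the rule
# encoding and fitness function are assumptions, not the work's design.
import random

CONDITIONS = ["pacman_visible", "pacman_close", "power_pill_active", "near_lair"]
ACTIONS = ["chase", "flee", "ambush", "patrol"]
RULES_PER_AGENT = 6

def random_rule():
    return (random.choice(CONDITIONS), random.choice(ACTIONS))

def random_individual():
    return [random_rule() for _ in range(RULES_PER_AGENT)]

def crossover(a, b):
    cut = random.randrange(1, RULES_PER_AGENT)
    return a[:cut] + b[cut:]

def mutate(rules, rate=0.1):
    return [random_rule() if random.random() < rate else r for r in rules]

def evolve(fitness, pop_size=50, generations=100):
    """fitness(rules) -> score for the Ghost team, e.g. obtained by
    simulating several games against Ms. Pac-Man (higher is better)."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 4]
        population = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
    return max(population, key=fitness)
```

    The parameter study described above would then vary values such as `pop_size`, `generations` and the mutation rate, and compare the fitness reached by the resulting agents.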

    AI Researchers, Video Games Are Your Friends!

    If you are an artificial intelligence researcher, you should look to video games as ideal testbeds for the work you do. If you are a video game developer, you should look to AI for the technology that makes completely new types of games possible. This chapter lays out the case for both of these propositions. It asks the question "what can video games do for AI", and discusses how in particular general video game playing is the ideal testbed for artificial general intelligence research. It then asks the question "what can AI do for video games", and lays out a vision for what video games might look like if we had significantly more advanced AI at our disposal. The chapter is based on my keynote at IJCCI 2015, and is written in an attempt to be accessible to a broad audience. Comment: in Studies in Computational Intelligence, Volume 669, Springer, 2017.

    Arena: A General Evaluation Platform and Building Toolkit for Multi-Agent Intelligence

    Learning agents that not only take tests but also innovate are becoming a hot topic in AI. One of the most promising paths towards this vision is multi-agent learning, where agents act as the environment for each other, and improving each agent means proposing new problems for the others. However, existing evaluation platforms are either not compatible with multi-agent settings or limited to a specific game. That is, there is not yet a general evaluation platform for research on multi-agent intelligence. To this end, we introduce Arena, a general evaluation platform for multi-agent intelligence with 35 games of diverse logics and representations. Furthermore, multi-agent intelligence is still at a stage where many problems remain unexplored. Therefore, we provide a building toolkit for researchers to easily invent and build novel multi-agent problems from the provided game set, based on a GUI-configurable social tree and five basic multi-agent reward schemes. Finally, we provide Python implementations of five state-of-the-art deep multi-agent reinforcement learning baselines. Along with the baseline implementations, we release a set of 100 best agents/teams, trained with different training schemes for each game, as a basis for evaluating agents against population performance. As such, the research community can perform comparisons under a stable and uniform standard. All the implementations and accompanying tutorials have been open-sourced for the community at https://sites.google.com/view/arena-unity/
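    The abstract does not spell out the five reward schemes; purely as an illustration of what composing per-agent rewards into multi-agent schemes can look like (and not Arena's actual toolkit or scheme names), a sketch:

```python
# Illustrative reward-scheme composition for multi-agent settings; the
# scheme names and formulas are assumptions for exposition only.
from statistics import mean

def team_rewards(agent_rewards, scheme):
    """Map raw per-agent rewards to the rewards each agent actually receives."""
    if scheme == "independent":      # every agent keeps its own reward
        return list(agent_rewards)
    if scheme == "collaborative":    # everyone receives the shared team reward
        shared = mean(agent_rewards)
        return [shared] * len(agent_rewards)
    if scheme == "competitive":      # gain comes at the other agents' expense
        total = sum(agent_rewards)
        n = len(agent_rewards)
        return [r - (total - r) / (n - 1) for r in agent_rewards]
    raise ValueError(f"unknown scheme: {scheme}")

print(team_rewards([1.0, 0.0, 0.5], "collaborative"))  # [0.5, 0.5, 0.5]
```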