    Fifth Aeon – A.I Competition and Balancer

    Collectible card games (CCGs) are among the most popular types of games in both digital and physical form. Despite their popularity, there is considerable room to explore applications of artificial intelligence that enhance CCG gameplay and development. This paper presents Fifth Aeon, a novel open-source CCG built to run in browsers, along with two AI applications built upon it. The first is an artificial intelligence competition run on the Fifth Aeon game. The second is an automatic balancing system capable of helping a designer create new cards that do not upset the balance of an existing collectible card game. The submissions to the AI competition include one that plays substantially better than the existing Fifth Aeon AI, achieving a higher win rate across multiple game formats. The balancer also demonstrates an ability to automatically balance several types of cards against a wide variety of parameters. These results help pave the way toward cheaper CCG development with more compelling AI opponents.
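    The balancing idea described above can be sketched as a simple search loop: estimate a candidate card's win rate by simulation, then nudge a tunable parameter (here, its cost) until the win rate sits near 50%. Everything below is an illustrative assumption, not the paper's actual system: the real balancer plays full Fifth Aeon games with AI agents, whereas `simulate_win_rate` here is a toy stand-in with a hypothetical "fair" cost of 4.

    ```python
    import random

    def simulate_win_rate(card_cost: int, games: int = 2000, seed: int = 0) -> float:
        """Toy stand-in for game simulation: cheaper cards win more often.

        Assumes a hypothetical 'fair' cost of 4; each point under it
        adds 5% to the card's true win probability.
        """
        rng = random.Random(seed)
        p = min(max(0.5 + 0.05 * (4 - card_cost), 0.0), 1.0)
        return sum(rng.random() < p for _ in range(games)) / games

    def balance_cost(initial_cost: int, target: float = 0.5, tol: float = 0.02) -> int:
        """Raise or lower the card's cost until its win rate is near the target."""
        cost = initial_cost
        for _ in range(10):  # bounded search so a noisy estimate cannot loop forever
            rate = simulate_win_rate(cost)
            if abs(rate - target) <= tol:
                break
            cost += 1 if rate > target else -1  # too strong -> make it pricier
        return cost

    # An undercosted card (cost 1) gets pushed back toward the fair cost.
    balanced = balance_cost(1)
    ```

    A real system would balance against many parameters at once (cost, stats, abilities) rather than a single integer, but the estimate-then-adjust loop is the core shape.
    
    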

    Opponent awareness at all levels of the multiagent reinforcement learning stack

    Multiagent reinforcement learning (MARL) has seen numerous high-profile successes in recent years, producing superhuman game-playing agents for a wide variety of video games. Despite these successes, MARL techniques have not been adopted by game developers as a practical tool for building their games; developers often cite the high computational cost of training agents, alongside the difficulty of understanding and evaluating MARL methods, as the two main obstacles. This thesis attempts to close this gap by introducing an informative modular abstraction under which any reinforcement learning (RL) training pipeline can be studied. This abstraction, the MARL stack, expresses any MARL pipeline as an environment in which agents equipped with learning algorithms train on simulated experience, as orchestrated by a training scheme. Within the context of 2-player zero-sum games, different approaches to granting opponent awareness at every level of the proposed MARL stack are explored in a broad study of the field. At the level of training schemes, a grouping generalization over many modern MARL training schemes is introduced under a unified framework; empirical results demonstrate that the choice of which sequence of opponents a learning agent faces during training greatly affects learning dynamics. At the agent level, introducing opponent modelling into state-of-the-art algorithms is explored as a way of generating targeted best responses to opponents encountered during training, improving the sample efficiency of these methods. At the environment level, the use of MARL as a game-design tool is explored by employing MARL-trained agents as metagame evaluators inside an automated game-balancing process.
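    The three levels of the MARL stack named above (environment, agent, training scheme) can be sketched as separate components. This is an illustrative toy, not the thesis's actual code: the rock-paper-scissors environment, the best-response learner, and the uniform opponent-sampling scheme are all assumptions chosen to keep the example self-contained.

    ```python
    import random

    ACTIONS = ("rock", "paper", "scissors")
    BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

    class Agent:
        """Agent level: a learning rule. This toy agent counts the opponent's
        past actions and best-responds to the most frequent one."""
        def __init__(self, rng):
            self.rng = rng
            self.opp_counts = {a: 0 for a in ACTIONS}
        def act(self):
            if sum(self.opp_counts.values()) == 0:
                return self.rng.choice(ACTIONS)
            most_common = max(self.opp_counts, key=self.opp_counts.get)
            return next(a for a in ACTIONS if BEATS[a] == most_common)
        def learn(self, opp_action):
            self.opp_counts[opp_action] += 1

    class Environment:
        """Environment level: resolves one round of a 2-player zero-sum game
        and feeds the resulting experience back to both agents."""
        def play(self, a, b):
            x, y = a.act(), b.act()
            a.learn(y)
            b.learn(x)
            return 1 if BEATS[x] == y else (-1 if BEATS[y] == x else 0)

    class UniformScheme:
        """Training-scheme level: decides the sequence of opponents the
        learner faces; here, uniform sampling from a fixed population."""
        def __init__(self, population, rng):
            self.population, self.rng = population, rng
        def next_opponent(self):
            return self.rng.choice(self.population)

    rng = random.Random(0)
    learner = Agent(rng)
    scheme = UniformScheme([Agent(rng) for _ in range(3)], rng)
    env = Environment()
    total = sum(env.play(learner, scheme.next_opponent()) for _ in range(100))
    ```

    Swapping in a different `next_opponent` policy (latest checkpoint only, prioritized sampling, league play) changes the learning dynamics without touching the environment or the agents, which is exactly the modularity the abstraction is meant to expose.
    
    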