7 research outputs found

    Analyzing the robustness of general video game playing agents

    This paper presents a study of the robustness and variability of performance of general video game-playing agents. The agents analyzed include those that won the different legs of the 2014 and 2015 General Video Game AI Competitions, plus two sample agents distributed with the framework. Initially, these agents are run on four games and ranked according to the rules of the competition. Then, different modifications to the reward signal of the games are proposed, and noise is introduced into either the actions executed by the controller, the forward model, or both. Results show that it is possible to produce a significant change in the rankings by introducing the modifications proposed here. This is an important result, because it enables the set of human-authored games to be expanded automatically by adding parameter-varied versions that add information and insight into the relative strengths of the agents under test. Results also show that some controllers perform well under almost all conditions, a testament to the robustness of the GVGAI benchmark.
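    The action-noise probe described above can be sketched as a thin wrapper around a controller's chosen move. This is a minimal illustration under assumed names; it is not the GVGAI framework's actual API:

```python
import random

def noisy_action(chosen_action, action_space, noise_prob=0.1, rng=random):
    """With probability noise_prob, replace the controller's chosen
    action with a uniformly random legal action (action-noise probe).
    All names here are illustrative, not part of the GVGAI framework."""
    if rng.random() < noise_prob:
        return rng.choice(action_space)
    return chosen_action
```

    The same pattern applies to the forward-model condition: the perturbation is injected between the agent's decision and the environment, so the controller under test needs no modification.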

    Finding Game Levels with the Right Difficulty in a Few Trials through Intelligent Trial-and-Error

    Methods for dynamic difficulty adjustment allow games to be tailored to particular players to maximize their engagement. However, current methods often modify only a limited set of game features, such as the difficulty of the opponents or the availability of resources. Other approaches, such as experience-driven Procedural Content Generation (PCG), can generate complete levels with desired properties, such as levels that are neither too hard nor too easy, but require many iterations. This paper presents a method that can generate and search for complete levels with a specific target difficulty in only a few trials. This advance is enabled by an Intelligent Trial-and-Error algorithm, originally developed to allow robots to adapt quickly. Our algorithm first creates a large variety of different levels that vary across predefined dimensions such as leniency or map coverage. The performance of an AI playing agent on these maps gives a proxy for how difficult the level would be for another AI agent (e.g. one that employs Monte Carlo Tree Search instead of Greedy Tree Search); using this information, a Bayesian Optimization procedure is deployed, updating the prior difficulty of each map to reflect the ability of the agent. The approach can reliably find levels with a specific target difficulty for a variety of planning agents in only a few trials, while maintaining an understanding of their skill landscape. Comment: To be presented in the Conference on Games 202
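    The trial-and-error loop above can be roughly sketched as follows. This is a deliberate simplification: the paper uses Bayesian Optimization over a prior map of level difficulties, whereas this hypothetical version uses a plain closest-estimate selection with an incremental update:

```python
def find_target_level(levels, prior, evaluate, target, trials=5, lr=0.5):
    """Simplified Intelligent Trial-and-Error loop (hypothetical sketch).
    levels   -- list of level ids, varying across predefined dimensions
    prior    -- dict level -> predicted difficulty (e.g. from a cheap agent)
    evaluate -- callable level -> observed difficulty for the target agent
    Repeatedly plays the level whose current estimate is closest to the
    target difficulty and nudges the estimate toward the observation."""
    est = dict(prior)
    best, best_gap = None, float("inf")
    for _ in range(trials):
        lvl = min(levels, key=lambda l: abs(est[l] - target))
        obs = evaluate(lvl)
        est[lvl] += lr * (obs - est[lvl])  # move estimate toward observation
        gap = abs(obs - target)
        if gap < best_gap:
            best, best_gap = lvl, gap
    return best, est
```

    The key idea survives the simplification: a cheap prior narrows the search so that only a handful of expensive evaluations with the actual target agent are needed.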

    A continuous information gain measure to find the most discriminatory problems for AI benchmarking

    This paper introduces an information-theoretic method for selecting the subset of problems that gives the most information about a group of problem-solving algorithms. The method was tested on the games in the General Video Game AI (GVGAI) framework, allowing us to identify a smaller set of games that still gives a large amount of information about the abilities of different game-playing agents. This approach can be used to make agent testing more efficient: we can achieve almost as good discriminatory accuracy when testing on only a handful of games as when testing on more than a hundred, which is often computationally infeasible. Furthermore, the method can be extended to study the dimensions of the effective variance in game design between these games, allowing us to identify which games differentiate between agents in the most complementary ways.
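    A much-simplified stand-in for the selection idea: rank games by how far they spread a fixed set of agents apart. The paper's actual measure is a continuous information gain; plain score standard deviation is used here only as a crude proxy for discriminatory power:

```python
from statistics import pstdev

def most_discriminatory(game_scores, k):
    """Pick the k games that best separate a group of agents.
    game_scores: dict game -> list of per-agent scores (same agent order
    in every list). Standard deviation across agents stands in for the
    paper's continuous information gain measure."""
    ranked = sorted(game_scores, key=lambda g: pstdev(game_scores[g]),
                    reverse=True)
    return ranked[:k]
```

    A game on which every agent scores the same carries no information about their relative abilities, so it is ranked last; the real measure refines this intuition information-theoretically.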

    Collaborative agent gameplay in the Pandemic board game

    While artificial intelligence has been applied to control players’ decisions in board games for over half a century, little attention has been given to games without player competition. Pandemic is an exemplar collaborative board game in which all players coordinate to overcome challenges posed by events occurring as the game progresses. This paper proposes an artificial agent that controls all players’ actions and balances the chance of winning against the risk of losing in this highly stochastic environment. The agent applies a Rolling Horizon Evolutionary Algorithm to an abstraction of the game state that lowers the branching factor and simulates the game’s stochasticity. Results show that the proposed algorithm can find winning strategies more consistently across games of varying difficulty. The impact of a number of state evaluation metrics is explored, balancing between optimistic strategies that favor winning and pessimistic strategies that guard against losing.
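    A minimal Rolling Horizon Evolutionary Algorithm sketch, assuming a generic forward model simulate(state, seq) and heuristic score(state); both are hypothetical placeholders for the game-state abstraction described above:

```python
import random

def rhea_first_action(state, actions, simulate, score, horizon=5,
                      pop=8, gens=10, mut=0.2, rng=random):
    """Evolve fixed-length action sequences against a forward model and
    return the first action of the best sequence found (the 'rolling'
    part: only one action is executed before replanning)."""
    population = [[rng.choice(actions) for _ in range(horizon)]
                  for _ in range(pop)]
    for _ in range(gens):
        # Keep the better half, refill with mutated copies of it.
        population.sort(key=lambda seq: score(simulate(state, seq)),
                        reverse=True)
        elite = population[: pop // 2]
        children = [[a if rng.random() > mut else rng.choice(actions)
                     for a in parent] for parent in elite]
        population = elite + children
    best = max(population, key=lambda seq: score(simulate(state, seq)))
    return best[0]
```

    In a stochastic game like Pandemic, score would average several simulations per sequence; the optimistic-versus-pessimistic balance the abstract mentions corresponds to how that evaluation weights win chances against loss risk.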

    Generation and Analysis of Content for Physics-Based Video Games

    The development of artificial intelligence (AI) techniques that can assist with the creation and analysis of digital content is a broad and challenging task for researchers. This topic has been most prevalent in the field of game AI research, where games are used as a testbed for solving more complex real-world problems. One of the major issues with prior AI-assisted content creation methods for games has been a lack of direct comparability to real-world environments, particularly those with realistic physical properties to consider. Creating content for such environments typically requires physics-based reasoning, which imposes many additional complications and restrictions that must be considered. Addressing and developing methods that can deal with these physical constraints, even if they are only within simulated game environments, is an important and challenging task for AI techniques that intend to be used in real-world situations. The research presented in this thesis describes several approaches to creating and analysing levels for the physics-based puzzle game Angry Birds, which features a realistic 2D environment. This research is multidisciplinary in nature and covers a wide variety of AI fields, leading to this thesis being presented as a compilation of published work. The central part of this thesis consists of procedurally generating levels for physics-based games similar to those in Angry Birds. This predominantly involves creating and placing stable structures made up of many smaller blocks, as well as other level elements. Multiple approaches are presented, including both fully autonomous and human-AI collaborative methodologies. In addition, several analyses of Angry Birds levels were carried out using current state-of-the-art agents. A hyper-agent was developed that uses machine learning to estimate the performance of each agent in a portfolio for an unknown level, allowing it to select the one most likely to succeed.
    Agent performance on levels that contain deceptive or creative properties was also investigated, allowing determination of the current strengths and weaknesses of different AI techniques. The observed variability in performance across levels for different AI techniques led to the development of an adaptive level generation system, allowing for the dynamic creation of increasingly challenging levels over time based on agent performance analysis. An additional study also investigated the theoretical complexity of Angry Birds levels from a computational perspective. While this research is predominantly applied to video games with physics-based simulated environments, the challenges and problems solved by the proposed methods also have significant real-world potential and applications.
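    The hyper-agent's selection step can be illustrated as picking the portfolio member whose learned performance model is most confident on an unseen level. The interface here is an assumption for illustration, not the thesis's actual implementation:

```python
def select_agent(level_features, predictors):
    """Hyper-agent portfolio selection sketch.
    level_features -- dict of features extracted from an unseen level
    predictors     -- dict agent name -> callable(features) returning an
                      estimated probability of that agent solving the level
    Returns the name of the agent predicted most likely to succeed."""
    return max(predictors, key=lambda name: predictors[name](level_features))
```

    The predictors would in practice be models trained on past agent runs; the hyper-agent itself never plays, it only routes each level to the most promising specialist.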

    Automated iterative game design

    Computational systems to model aspects of iterative game design were proposed, encompassing game generation, sampling behaviors in a game, analyzing game behaviors for patterns, and iteratively altering a game design. Modeling the actions in games explicitly as planning operators allowed an intelligent system to reason about how actions and action sequences affect gameplay and to create new mechanics. Metrics to analyze differences in player strategies were presented and were able to identify flaws in game designs. An intelligent system learned design knowledge about gameplay and was able to reduce the number of design iterations needed when playtesting a game to achieve a design goal. Implications for how intelligent systems augment and automate human game design practices are discussed.
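    Modelling mechanics as planning operators, as described above, might look like the following STRIPS-style sketch. This is a generic illustration of the representation, not the dissertation's actual system:

```python
from dataclasses import dataclass

@dataclass
class Operator:
    """A game mechanic as a STRIPS-style planning operator: a design
    system can reason about which states enable the action and what
    the action makes true or false afterwards."""
    name: str
    preconditions: frozenset  # facts that must hold to act
    add: frozenset            # facts the action makes true
    delete: frozenset = frozenset()  # facts the action makes false

    def applicable(self, state: frozenset) -> bool:
        return self.preconditions <= state

    def apply(self, state: frozenset) -> frozenset:
        return (state - self.delete) | self.add

# A hypothetical 'jump' mechanic in this encoding:
jump = Operator("jump", frozenset({"on_ground"}),
                frozenset({"airborne"}), frozenset({"on_ground"}))
```

    Representing mechanics this way lets the system chain operators into action sequences and ask, for instance, which new mechanic would make a currently unreachable goal state reachable.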

    Beyond Playing to Win: Elicit General Gameplaying Agents with Distinct Behaviours to Assist Game Development and Testing

    General Video Game Playing (GVGP) creates agents capable of playing several different games while maintaining competitive performance. Even though the generality of these agents has evident potential, there is a lack of research looking for applications for them. This work explores filling that void by advocating the integration of GVGP agents into the game development process. Additionally, it proposes studying GVGP agents from a Player Experience perspective to facilitate their use in games as an alternative AI approach. GVGP agents are essentially designed to win and achieve a high score. However, players' actions are driven by different motivations, resulting in diverse behaviours. These motivations may ultimately involve winning, but that is not necessarily their primary goal. So why are agents designed with only this purpose in mind? This work considers that the path to finding applications for the agents starts with eliciting differentiated behaviours by providing the agents with objectives beyond winning. It introduces the concept of heuristic diversification, which, in the scope of search algorithms, refers to isolating the evaluation function of the controllers and providing their goals externally, without affecting their underlying search. This work proposes that a team of GVGP agents with differentiated behaviours can assist in the game development and testing processes. The solution applies heuristic diversification and describes the behaviour of an agent simply and in a form that is easy to evolve. Diverse behaviours can be generated and used to assemble the team independently of the game's characteristics. Based on their stats, the resulting agents are placed in a behavioural space, which is used to identify behaviour-type agents. The agents are portable between levels and facilitate diverse automated gameplay.
    They can detect design flaws and bugs when modifications are introduced to the game, or trigger external development tools, without having to play the game manually.
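    Heuristic diversification, keeping the search core fixed while injecting the evaluation function, can be sketched with a hypothetical one-step-lookahead controller (all names here are illustrative):

```python
def act(state, actions, forward, heuristic):
    """Pick the action whose predicted successor state the injected
    heuristic values most. The search core (here a trivial one-step
    lookahead) never changes; only the heuristic does, so the same
    controller can pursue different objectives."""
    return max(actions, key=lambda a: heuristic(forward(state, a)))

# Hypothetical objectives beyond winning, expressed as heuristics:
win_heuristic     = lambda s: s["score"]      # play to score
explore_heuristic = lambda s: s["new_tiles"]  # play to cover the map
```

    Swapping the heuristic yields a differently behaving agent without touching the search, which is what makes assembling a behaviour-diverse testing team cheap.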