44 research outputs found

    Generating Levels That Teach Mechanics

    Get PDF
    The automatic generation of game tutorials is a challenging AI problem. While it is possible to generate annotations and instructions that explain to the player how the game is played, this paper focuses on generating a gameplay experience that introduces the player to a game mechanic. It evolves small levels for the Mario AI Framework that can only be beaten by an agent that knows how to perform specific actions in the game. It uses variations of a perfect A* agent that are limited in various ways, such as not being able to jump high or see enemies, to test how failing to perform certain actions can stop the player from beating the level. Comment: 8 pages, 7 figures, PCG Workshop at FDG 2018, 9th International Workshop on Procedural Content Generation (PCG2018)
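
    As an illustration only, here is a minimal Python sketch of the kind of fitness function such a level evolver could use: a level scores highest when the unrestricted agent can finish it but every capability-limited agent fails. The playthrough functions and names below are hypothetical stand-ins, not the paper's actual (Java-based) Mario AI Framework code.

        from typing import Callable, Dict

        def teaching_fitness(
            level: str,
            full_agent: Callable[[str], bool],                  # returns True if the agent beats the level
            limited_agents: Dict[str, Callable[[str], bool]],   # capability removed -> limited agent
        ) -> float:
            """Reward levels the perfect agent beats but the limited agents cannot."""
            if not full_agent(level):
                return 0.0  # an unbeatable level teaches nothing
            # Fraction of limited agents that get stuck: 1.0 means every removed
            # capability is genuinely required somewhere in the level.
            failures = sum(1 for agent in limited_agents.values() if not agent(level))
            return failures / max(len(limited_agents), 1)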

    Ensemble decision systems for general video game playing

    Get PDF
    Ensemble Decision Systems offer a unique form of decision making that allows a collection of algorithms to reason together about a problem. Each individual algorithm has its own inherent strengths and weaknesses, and it is often difficult to overcome the weaknesses while retaining the strengths. Instead of altering the properties of the algorithm, the Ensemble Decision System augments its performance with other algorithms that have complementary strengths. This work outlines different options for building an Ensemble Decision System and analyses its performance against the individual components of the system, with interesting results showing an increase in the generality of the algorithms without significantly impeding performance. Comment: 8 pages, accepted at COG 2019
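
    A minimal, hedged Python sketch of the core idea: several component agents each propose an action for the current state and the ensemble arbitrates between them. Plain majority voting is used here purely as an assumed arbitration scheme; the paper explores several ways of building the ensemble.

        from collections import Counter
        from typing import Callable, Sequence, TypeVar

        State = TypeVar("State")
        Action = TypeVar("Action")

        def ensemble_decide(
            state: State,
            components: Sequence[Callable[[State], Action]],
        ) -> Action:
            """Ask every component algorithm for an action and return the majority vote."""
            votes = Counter(component(state) for component in components)
            action, _count = votes.most_common(1)[0]
            return action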

    Shallow decision-making analysis in General Video Game Playing

    Full text link
    The General Video Game AI competitions have been the testing ground for several game-playing techniques, such as evolutionary computation, tree search, hyper-heuristic-based and knowledge-based algorithms. So far the metrics used to evaluate the performance of agents have been win ratio, game score and length of games. In this paper we provide a wider set of metrics and a comparison method for evaluating and comparing agents. The metrics and the comparison method give shallow introspection into the agent's decision-making process and can be applied to any agent regardless of its algorithmic nature. In this work, the metrics and the comparison method are used to measure the impact of the terms that compose the tree policy of an MCTS-based agent, compared against several baseline agents. The results show how promising such a general approach is and how useful it can be for understanding the behaviour of an AI agent; in particular, the comparison with baseline agents helps reveal the shape of the agent's decision landscape. The presented metrics and comparison method represent a step toward more descriptive ways of logging and analysing agents' behaviours.
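
    To make the idea concrete, here is a small Python sketch of one algorithm-agnostic metric in the spirit of the paper: the fraction of recorded game states in which an agent chooses the same action as a baseline agent. The paper defines a richer set of metrics; this agreement ratio and its function names are illustrative assumptions rather than the paper's API.

        from typing import Callable, Sequence, TypeVar

        State = TypeVar("State")
        Action = TypeVar("Action")

        def action_agreement(
            states: Sequence[State],
            agent: Callable[[State], Action],
            baseline: Callable[[State], Action],
        ) -> float:
            """Share of states on which `agent` and `baseline` pick the same action."""
            if not states:
                return 0.0
            matches = sum(1 for s in states if agent(s) == baseline(s))
            return matches / len(states)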

    AI Researchers, Video Games Are Your Friends!

    Full text link
    If you are an artificial intelligence researcher, you should look to video games as ideal testbeds for the work you do. If you are a video game developer, you should look to AI for the technology that makes completely new types of games possible. This chapter lays out the case for both of these propositions. It asks the question "what can video games do for AI?", and discusses how general video game playing in particular is the ideal testbed for artificial general intelligence research. It then asks the question "what can AI do for video games?", and lays out a vision for what video games might look like if we had significantly more advanced AI at our disposal. The chapter is based on my keynote at IJCCI 2015, and is written in an attempt to be accessible to a broad audience. Comment: in Studies in Computational Intelligence, Volume 669, 2017. Springer

    Characteristics of generatable games

    Get PDF
    We address the problem of generating complete games, rather than content for existing games. In particular, we try to answer the question of which types of games it would be realistic or even feasible to generate. To begin to answer the question, we first list the different ways we see that games could be generated, and then discuss what characterises games that would be comparatively easy or hard to generate. The discussion is structured according to a subset of the characteristics discussed in the book Characteristics of Games by Elias, Garfield and Gutschera. Peer-reviewed

    Evaluation of a Recommender System for Assisting Novice Game Designers

    Full text link
    Game development is a complex task involving multiple disciplines and technologies. Developers and researchers alike have suggested that AI-driven game design assistants may improve developer workflow. We present a recommender system for assisting humans in game design, together with a rigorous human subjects study to validate it. The AI-driven game design assistance system suggests game mechanics to designers based on characteristics of the game being developed. We believe this method can bring creative insights and increase users' productivity. We conducted quantitative studies showing that the recommender system increases users' levels of accuracy and computational affect, and decreases their levels of workload. Comment: The 15th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE 19)
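
    As a rough illustration of how such a recommender could work, the Python sketch below scores each candidate mechanic by the cosine similarity between the new game's feature vector and the average feature vector of existing games that use that mechanic. The feature encoding and scoring are assumptions made for illustration; the paper's system may rank mechanics differently.

        import math
        from typing import Dict, List, Sequence

        def cosine(a: Sequence[float], b: Sequence[float]) -> float:
            """Cosine similarity between two equal-length feature vectors."""
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm if norm else 0.0

        def recommend_mechanics(
            game_features: Sequence[float],
            mechanic_profiles: Dict[str, Sequence[float]],  # mechanic -> mean features of games using it
            top_k: int = 3,
        ) -> List[str]:
            """Return the top-k mechanics whose profiles best match the game being designed."""
            ranked = sorted(
                mechanic_profiles,
                key=lambda m: cosine(game_features, mechanic_profiles[m]),
                reverse=True,
            )
            return ranked[:top_k]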