
    Generating Levels That Teach Mechanics

    The automatic generation of game tutorials is a challenging AI problem. While it is possible to generate annotations and instructions that explain to the player how the game is played, this paper focuses on generating a gameplay experience that introduces the player to a game mechanic. It evolves small levels for the Mario AI Framework that can only be beaten by an agent that knows how to perform specific actions in the game. It uses variations of a perfect A* agent that are limited in various ways, such as not being able to jump high or to see enemies, to test how failing to perform certain actions can stop the player from beating the level.
    Comment: 8 pages, 7 figures, PCG Workshop at FDG 2018, 9th International Workshop on Procedural Content Generation (PCG 2018)
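
    As a hedged illustration of the evaluation described above, the sketch below encodes the core test in Python: a candidate level counts as teaching a mechanic only if the fully capable A* agent can finish it while every restricted agent fails. All names here (`simulate`, `full_agent`, `limited_agents`, the `completed` result flag) are hypothetical placeholders, not the paper's actual API.

```python
# Hypothetical sketch of the fitness test described above: a level "teaches"
# a mechanic if a perfect A* agent can beat it while agents missing one
# capability (e.g. high jump, enemy awareness) cannot. All names are
# illustrative placeholders, not the Mario AI Framework's API.

def teaches_mechanic(level, full_agent, limited_agents, simulate):
    """Return True if only the fully capable agent can finish the level."""
    if not simulate(level, full_agent).completed:
        return False  # the level must be beatable at all
    # Every restricted agent must fail, so the missing action is essential.
    return all(not simulate(level, agent).completed for agent in limited_agents)

def fitness(level, full_agent, limited_agents, simulate):
    # A simple evolutionary fitness: reward levels that gate progress
    # behind the target mechanic, and nothing else.
    return 1.0 if teaches_mechanic(level, full_agent, limited_agents, simulate) else 0.0
```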

    AtDelfi: Automatically Designing Legible, Full Instructions For Games

    This paper introduces a fully automatic method for generating video game tutorials. The AtDELFI system (AuTomatically DEsigning Legible, Full Instructions for games) was created to investigate the procedural generation of instructions that teach players how to play video games. We present a representation of game rules and mechanics using a graph system, as well as a tutorial generation method that uses this graph representation. We demonstrate the concept by testing it on games within the General Video Game Artificial Intelligence (GVG-AI) framework; the paper discusses tutorials generated for eight different games. Our findings suggest that a graph representation scheme works well for simple arcade-style games such as Space Invaders and Pacman, but tutorials for more complex games may require a higher-level understanding of the game than single mechanics alone can provide.
    Comment: 10 pages, 11 figures, published at the Foundations of Digital Games Conference 2018
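
    The abstract's graph representation of rules and mechanics can be pictured with a small sketch. The node kinds and the example mechanic chain below are illustrative assumptions rather than AtDELFI's exact schema; they only show how conditions and actions might be linked and then flattened into an instruction.

```python
# A minimal sketch of a mechanic graph in the spirit described above:
# nodes represent conditions and actions, and directed edges chain them
# into a mechanic. Node kinds and the example are assumptions, not
# AtDELFI's actual schema.

from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str          # e.g. "condition" or "action"
    label: str
    edges: list = field(default_factory=list)  # outgoing Node references

    def connect(self, other: "Node") -> "Node":
        self.edges.append(other)
        return other

# Build one mechanic chain for a Space Invaders-like game.
press = Node("condition", "player presses SPACE")
shoot = press.connect(Node("action", "avatar spawns missile"))
hit = shoot.connect(Node("condition", "missile collides with alien"))
score = hit.connect(Node("action", "alien destroyed, score +1"))

def describe(start: Node) -> str:
    """Walk a linear mechanic chain and render it as one instruction."""
    parts, node = [], start
    while node:
        parts.append(node.label)
        node = node.edges[0] if node.edges else None
    return " -> ".join(parts)

print(describe(press))
```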

    Game complexity vs strategic depth

    The notion of complexity and strategic depth within games has been a long-debated topic with many unanswered questions. How exactly do you measure the complexity of a game? How do you quantify its strategic depth objectively? This seminar answers neither of these questions, but instead presents the opinion that these properties are, for the most part, subjective to the human or agent playing the game. What is complex or deep for one player may be simple or shallow for another. Despite this, determining generally applicable measures for estimating the complexity and depth of a given game (either independently or comparatively), relative to the abilities of a given player or player type, can provide several benefits for game designers and researchers.

    Hyperstate space graphs for automated game analysis

    Automatically analysing games is an important challenge for automated game design, general game playing, and co-creative game design tools. However, understanding the nature of an unseen game is extremely difficult due to the lack of a priori design knowledge and heuristics. In this paper we formally define hyperstate space graphs, a compressed form of state space graph which can be constructed without any prior design knowledge about a game. We show how hyperstate space graphs produce compact representations of games which closely relate to the heuristics designed by hand for search-based AI agents; we show how they also relate to modern ideas about game design; and we point towards future applications for hyperstates across game AI research.
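
    The precise construction is given in the paper; as one hedged reading of the idea, the sketch below compresses a state space graph by merging states that are mutually reachable from one another (for instance, via reversible moves), i.e. the strongly connected components of the graph. The use of `networkx` and the toy graph are assumptions for illustration only.

```python
# A hedged sketch of one way to compress a state space graph into
# "hyperstates" without domain knowledge: merge states that are mutually
# reachable, so only irreversible progress separates hyperstates. This is
# an illustrative reading, not necessarily the paper's exact construction.

import networkx as nx

def hyperstate_graph(state_graph: nx.DiGraph) -> nx.DiGraph:
    """Condense a game's state graph: each SCC becomes one hyperstate."""
    # networkx.condensation maps every strongly connected component to a
    # single node and keeps only edges that cross component boundaries.
    return nx.condensation(state_graph)

# Tiny example: states 0 <-> 1 are linked by an undoable move, 1 -> 2 is not.
g = nx.DiGraph([(0, 1), (1, 0), (1, 2)])
h = hyperstate_graph(g)
print(h.number_of_nodes())  # 2 hyperstates: {0, 1} and {2}
```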

    Extracting tactics learned from self-play in general games

    Local, spatial state-action features can be used to effectively train linear policies from self-play in a wide variety of board games. Such policies can play games directly, or be used to bias tree-search agents. However, the resulting feature sets can be large, with significant overlap and redundancy between features. This is a problem for two reasons. Firstly, large feature sets can be computationally expensive, which reduces the playing strength of agents based on them. Secondly, redundancies and correlations between features impair humans' ability to analyse, interpret, or understand the tactics learned by the policies. We look towards decision trees for their ability to perform feature selection and to serve as interpretable models. Previous work on distilling policies into decision trees uses states as inputs, and distributions over the complete action space as outputs. In contrast, we propose and evaluate a variety of decision tree types which take state-action pairs as inputs and provide various types of outputs on a per-action basis. An empirical evaluation over 43 different board games is presented, and two of those games are used as case studies in which we attempt to interpret the discovered features.
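
    A minimal sketch of the state-action input scheme described above, assuming made-up feature layouts and labels: each training row concatenates one state's features with one candidate action's features, and the target records whether that action was the one actually played, so a single shallow (and hence readable) tree covers all actions.

```python
# An illustrative sketch of decision trees over state-action pairs:
# instead of mapping a state to a distribution over all actions, each row
# pairs one state's features with one candidate action's features, and the
# target says whether that action was chosen. Feature layout and data are
# random placeholders for demonstration only.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_rows, n_state_feats, n_action_feats = 500, 8, 4

states = rng.integers(0, 2, size=(n_rows, n_state_feats))    # binary spatial features
actions = rng.integers(0, 2, size=(n_rows, n_action_feats))  # per-action features
X = np.hstack([states, actions])      # one row per state-action pair
y = rng.integers(0, 2, size=n_rows)   # 1 if this action was the one played

# A shallow tree stays human-readable, which is the interpretability goal.
tree = DecisionTreeClassifier(max_depth=4).fit(X, y)
print(tree.score(X, y))
```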

    Uncertainty handling in surrogate assisted optimisation of games

    In this thesis, Uncertainty handling in surrogate assisted optimisation of games, we set out to investigate the uncertainty inherent in game optimisation problems, and to identify or develop suitable optimisation algorithms. To approach this problem systematically, we first created a benchmark consisting of suitable game optimisation functions (GBEA). The suitability of these functions was determined using a taxonomy built from the results of a literature survey of automatic game evaluation approaches. To improve the interpretability of the results, we also implemented an experimental framework that adds several features aiding the analysis of results, specifically for surrogate-assisted evolutionary algorithms.

    After describing potentially suitable algorithms, we proposed a promising one (SAPEO), to be tested on the benchmark alongside state-of-the-art optimisation algorithms. SAPEO exploits the observation that most evolutionary algorithms need fitness evaluations only for survival selection: if the individuals in a population can be distinguished reliably based on predicted values, the number of function evaluations can be reduced. After a theoretical analysis of the performance limits of SAPEO, which produced very promising insights, we conducted several sets of experiments to answer the three central hypotheses guiding this thesis. We find that SAPEO performs comparably to state-of-the-art surrogate-assisted algorithms, but that all of them are frequently outperformed by stand-alone evolutionary algorithms. From a more detailed analysis of SAPEO's behaviour, we identify a few pointers that could further improve its performance.

    Before running experiments on the developed benchmark, we first verified its suitability using a second set of experiments. We find that the GBEA is practical and contains interesting and challenging functions. However, we also discover that a set of baseline results is required before the benchmark can produce interpretable results; for this reason, we were not able to produce meaningful results with the GBEA at the time of writing. Once more experiments have been conducted with the benchmark, the resulting insights will most likely not only provide an assessment of optimisation algorithms, but can also be used to gain a deeper understanding of the characteristics of game optimisation problems.
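
    A hedged sketch of the selection idea attributed to SAPEO above: rank individuals by surrogate predictions, and spend a true fitness evaluation only when two candidates' prediction intervals overlap, i.e. when they cannot be distinguished reliably. The interval test and all names below are simplifications for illustration, not the algorithm's full definition.

```python
# Illustrative survival selection with a surrogate model: use predicted
# fitness where candidates are clearly separable, and fall back to real
# (expensive) evaluations only where prediction intervals overlap. This is
# a simplified reading of the idea, not SAPEO's full definition.

def select_survivors(population, k, predict, evaluate):
    """Pick the k best individuals, evaluating truly only when needed.

    predict(ind)  -> (mean, uncertainty) from a surrogate model
    evaluate(ind) -> exact fitness (expensive); lower is better
    """
    scored = [(ind, *predict(ind)) for ind in population]
    scored.sort(key=lambda t: t[1])  # rank by predicted mean fitness

    survivors = []
    for i, (ind, mean, err) in enumerate(scored[:k]):
        nxt = scored[i + 1] if i + 1 < len(scored) else None
        # If the next candidate's interval overlaps this one's, the
        # surrogate cannot separate them reliably: evaluate for real.
        if nxt is not None and mean + err >= nxt[1] - nxt[2]:
            survivors.append((ind, evaluate(ind)))
        else:
            survivors.append((ind, mean))
    return [ind for ind, _ in survivors]
```

    Under this scheme, the number of true evaluations saved depends directly on how well the surrogate's uncertainty estimates separate the population, which mirrors the thesis's observation that reliable distinguishability is the key requirement.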