
    Automated Video Game Testing Using Synthetic and Human-Like Agents

    In this paper, we present a new methodology that employs tester agents to automate video game testing. We introduce two types of agents, synthetic and human-like, and two distinct approaches to create them. Our agents are derived from Reinforcement Learning (RL) and Monte Carlo Tree Search (MCTS) agents, but focus on finding defects. The synthetic agent uses test goals generated from game scenarios, and these goals are further modified to examine the effects of unintended game transitions. The human-like agent uses test goals extracted from tester trajectories by our proposed multiple greedy-policy inverse reinforcement learning (MGP-IRL) algorithm. MGP-IRL captures the multiple policies executed by human testers, whose aim is to find defects by interacting with the game to break it, which is considerably different from playing the game. We present interaction states to model such interactions. We use our agents to produce test sequences, run the game with these sequences, and check each run with an automated test oracle. We analyze the proposed method in two parts: we compare the success of human-like and synthetic agents in bug finding, and we evaluate the similarity between human-like agents and human testers. We collected 427 trajectories from human testers using the General Video Game Artificial Intelligence (GVG-AI) framework and created three games with 12 levels that contain 45 bugs. Our experiments reveal that human-like and synthetic agents are competitive with human testers' bug-finding performance. Moreover, we show that MGP-IRL increases the human-likeness of agents while improving their bug-finding performance.
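
    To make the described test loop concrete, below is a minimal sketch of an agent-driven run checked by an automated oracle. The Game, Agent, and invariant interfaces are illustrative assumptions, not the authors' actual GVG-AI implementation.

```python
# Minimal sketch of the test loop: an agent proposes actions toward its
# test goal, the game executes them, and an automated oracle checks each
# resulting state. All names (game, agent, invariants) are hypothetical.

def run_test(game, agent, oracle, max_steps=200):
    """Execute one test sequence and return any bugs the oracle flags."""
    state = game.reset()
    bugs = []
    for _ in range(max_steps):
        action = agent.act(state)          # agent pursues its test goal
        state = game.step(action)          # apply the action to the game
        bugs.extend(oracle.check(state))   # oracle looks for defect evidence
        if game.is_over():
            break
    return bugs

class InvariantOracle:
    """Oracle that flags states violating scenario-derived invariants."""
    def __init__(self, invariants):
        self.invariants = invariants       # e.g. "avatar stays inside walls"

    def check(self, state):
        return [inv.name for inv in self.invariants if not inv.holds(state)]
```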

    Enhancing the Monte Carlo Tree Search Algorithm for Video Game Testing

    In this paper, we study the effects of several Monte Carlo Tree Search (MCTS) modifications for video game testing. Although MCTS modifications are well studied for game playing, their impact on finding bugs remains unexplored. We focused on bug finding in our previous study, where we introduced synthetic and human-like test goals and used these test goals in Sarsa and MCTS agents to find bugs. In this study, we extend the MCTS agent with several modifications for game-testing purposes. Furthermore, we present a novel tree reuse strategy. We experiment with these modifications on three testbed games, four levels each, that contain 45 bugs in total. We use the General Video Game Artificial Intelligence (GVG-AI) framework to create the testbed games and to collect 427 human tester trajectories. We analyze the proposed modifications in three parts: we evaluate their effects on the bug-finding performance of the agents, we measure their success under two different computational budgets, and we assess their effects on the human-likeness of the human-like agent. Our results show that the MCTS modifications improve the bug-finding performance of the agents.
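
    The abstract names a novel tree reuse strategy without detailing it, so the sketch below shows only the common baseline form of MCTS tree reuse: after an action is executed, the chosen child's subtree becomes the next search root, with its statistics decayed. The node fields and decay scheme are assumptions, not the paper's strategy.

```python
# Baseline MCTS tree reuse sketch: promote the executed action's subtree
# to be the new root and decay its visit statistics so stale value
# estimates fade across frames. Node layout is a hypothetical assumption.

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}    # maps action -> child Node
        self.visits = 0
        self.value = 0.0

def reuse_subtree(root, executed_action, decay=0.5):
    """Return the child reached by `executed_action` as the new root."""
    child = root.children.get(executed_action)
    if child is None:                 # unexplored action: start a fresh tree
        return None
    child.parent = None               # detach from the discarded old tree
    stack = [child]
    while stack:                      # decay stats throughout the subtree
        node = stack.pop()
        node.visits = int(node.visits * decay)
        node.value *= decay
        stack.extend(node.children.values())
    return child
```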

    Dynamic difficulty adjustment of serious-game based on synthetic fog using activity theory model

    This study used the activity theory model to determine the dynamic difficulty adjustment of a serious game based on synthetic fog. Differences in difficulty level were generated in a 3-dimensional game environment by applying varying fog thickness. The activity theory model in serious games aims to facilitate development analysis in terms of the learning content, the equipment used, and the resulting in-game action. The difficulty level varies with the player's ability, because the game is expected to reduce boredom and frustration. Furthermore, this study simulated scenarios with varying conditions: the score, remaining time, and lives of synthetic players. The experimental results showed that the system can change the game environment, applying different fog thicknesses according to the synthetic player's parameters.
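
    As an illustration of the adjustment rule described above, the following sketch combines a synthetic player's score, remaining time, and lives into a skill estimate and maps it to a fog density. The weights, ranges, and mapping direction are illustrative assumptions, not values from the study.

```python
# Sketch of a fog-based difficulty adjustment: estimate player skill from
# score, remaining time, and lives, then map skill to fog density so that
# stronger players face thicker fog. Weights below are assumptions.

def skill_estimate(score, max_score, time_left, time_limit, lives, max_lives):
    """Return a 0..1 proxy for how well the player is doing."""
    return (0.5 * (score / max_score)
            + 0.3 * (time_left / time_limit)
            + 0.2 * (lives / max_lives))

def fog_density(skill, min_fog=0.1, max_fog=0.9):
    """Stronger players get thicker fog (harder), weaker players thinner."""
    return min_fog + skill * (max_fog - min_fog)

# Example: a mid-performing synthetic player gets a mid-range fog density.
s = skill_estimate(score=60, max_score=100, time_left=40,
                   time_limit=120, lives=2, max_lives=3)
print(round(fog_density(s), 2))   # prints 0.53
```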

    Believability Assessment and Modelling in Video Games

    Artificial Intelligence remains one of the most sought-after subjects in computer science to this day. One of its applications, and the focus of this thesis, is believable agents in video games. This means implementing agents that behave like us rather than simply attempting to win, whether that means cooperating or competing the way we do. Success in building more human-like characters can enhance immersion and enjoyment in games, thus potentially increasing their gameplay value and ultimately benefiting both industry and academia. However, believability is a hard concept to define. It depends on how and what one considers "believable", which is often very subjective. Developing believable agents therefore remains a sought-after, albeit difficult, challenge. There are many approaches to development, ranging from finite state machines and imitation learning to emotional models, with no single solution for creating a human-like agent.

    The same problem arises when attempting to assess these solutions. Assessing the believability of agents, characters, and simulated actors is a core challenge for human-like behaviour; while numerous approaches are suggested in the literature, there is no dominant solution for evaluation either. In addition, assessment rarely receives as much attention as development or modelling: it mostly appears as a necessity of evaluating agents, with little focus on how the assessment process itself could affect the outcome of the evaluation.

    This thesis takes a different approach to developing believability and to its assessment, exploring assessment first. In previous years, several researchers have tried to assess human-like behaviour in games through adaptations of Turing Tests applied to their agents. Given the small diversity of parameters explored in believability assessment and the focus on programming the bots, this thesis starts by exploring different parameters for evaluating believability in video games. The objective of this work is to analyze the different ways believability can be assessed, for humans and for non-player characters (NPCs), by comparing how the results and scores of both are affected when the parameters change. The thesis also explores the concept of believability and its need in video games in general.

    Another aspect of assessment explored in this thesis is the overall representation of believability. Past research has limited its methodologies to discrete, low-granularity representations of believable behaviour. This work focuses, for the first time, on viewing believability as a time-continuous phenomenon and explores the suitability of two different affect annotation schemes for its assessment. These techniques are also compared to the previously used discrete methodologies, to understand how moment-to-moment assessment can contribute to them. In addition, this thesis studies the degree to which character believability can be predicted in a continuous fashion. This is achieved by training random forest models to predict believability from annotations of the context extracted from a game.

    The thesis then tackles development, combining the different solutions into one and in a different order: the time-continuous data based on people's assessments of believability is modelled and integrated into a game agent to affect its behaviour. This results in a final comparison between two agents, where one uses a believability-biased model and the other does not, showing that biasing an agent's behaviour with assessment data can increase its overall believability.
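
    As a concrete illustration of the prediction step described above, the sketch below trains a scikit-learn random forest regressor to map per-frame context features to a continuous believability score. The feature names, placeholder data, and train/test setup are assumptions for illustration, not the thesis's actual annotations.

```python
# Sketch of continuous believability prediction with a random forest:
# context features extracted per frame are regressed onto time-continuous
# believability annotations. The data here is random placeholder input.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 4))   # e.g. speed, turn rate, enemy proximity, firing
y = rng.random(500)        # moment-to-moment believability labels (0..1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)

# R^2 on held-out frames; near zero here because the placeholder data
# carries no signal, but real annotated gameplay features would.
print("R^2 on held-out frames:", model.score(X_te, y_te))
```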