
    Adapting In-Game Agent Behavior by Observation of Players Using Learning Behavior Trees

    In this paper we describe Learning Behavior Trees, an extension of the popular game-AI scripting technique. Behavior Trees provide an effective way for expert designers to describe complex, in-game agent behaviors. Scripted AI captures human intuition about the structure of behavioral decisions, but suffers from brittleness and a lack of the natural variation seen in human players. Learning Behavior Trees are designed by a human designer, but are then trained by observation of players performing the same role, to introduce human-like variation into the decision structure. We show that, using this model, a single hand-designed Behavior Tree can cover a wide variety of player behavior variations in a simplified Massively Multiplayer Online Role-Playing Game.
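
    The Behavior Tree structure the abstract refers to can be sketched with two classic composite nodes. The following minimal Python sketch is an illustration of the general technique, not the paper's implementation; the healer-style state and actions are hypothetical:

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2

class Sequence:
    """Runs children in order; fails on the first child that fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == Status.FAILURE:
                return Status.FAILURE
        return Status.SUCCESS

class Selector:
    """Tries children in order; succeeds on the first child that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == Status.SUCCESS:
                return Status.SUCCESS
        return Status.FAILURE

class Condition:
    """Leaf node: succeeds iff the predicate holds in the current state."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, state):
        return Status.SUCCESS if self.predicate(state) else Status.FAILURE

class Action:
    """Leaf node: applies an effect to the state and succeeds."""
    def __init__(self, effect):
        self.effect = effect
    def tick(self, state):
        self.effect(state)
        return Status.SUCCESS

# A hypothetical MMORPG-style behavior: heal when hurt, otherwise attack.
tree = Selector(
    Sequence(Condition(lambda s: s["hp"] < 50),
             Action(lambda s: s.__setitem__("last", "heal"))),
    Action(lambda s: s.__setitem__("last", "attack")),
)
```

    A learning variant would adjust the conditions or branch ordering from observed player traces rather than leaving them fixed at design time.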

    Automated Game Design Learning

    While general game playing is an active field of research, the learning of game design has tended to be either a secondary goal of such research or solely the domain of humans. We propose a field of research, Automated Game Design Learning (AGDL), with the direct purpose of learning game designs through interaction with games in the mode that most people experience games: via play. We detail existing work that touches the edges of this field, describe current successful projects in AGDL and the theoretical foundations that enable them, point to promising applications enabled by AGDL, and discuss next steps for this exciting area of study. The key moves of AGDL are to use game programs as the ultimate source of truth about their own design, and to make these design properties available to other systems and avenues of inquiry. Comment: 8 pages, 2 figures. Accepted for CIG 201

    Adding Neural Network Controllers to Behavior Trees without Destroying Performance Guarantees

    In this paper, we show how Behavior Trees that have performance guarantees, in terms of safety and goal convergence, can be extended with components that were designed using machine learning, without destroying those performance guarantees. Machine learning approaches such as reinforcement learning or learning from demonstration can be very appealing to AI designers who want efficient and realistic behaviors in their agents. However, those algorithms seldom provide guarantees for solving the given task in all situations while keeping the agent safe. Instead, such guarantees are often easier to find for manually designed, model-based approaches. In this paper we exploit the modularity of Behavior Trees to extend a given design with an efficient, but possibly unreliable, machine learning component in a way that preserves the guarantees. The approach is illustrated with an inverted pendulum example. Comment: Submitted to IEEE Transactions on Game
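
    The core idea, a guaranteed fallback guarding an unreliable learned component, can be illustrated in a few lines. This is a minimal sketch, not the paper's construction; the stand-in learned policy, the PD gains, and the safety envelope are all hypothetical:

```python
def learned_policy(angle, velocity):
    # Stand-in for an unreliable ML controller (hypothetical output).
    return 10.0  # may fall outside the verified envelope

def baseline_pd(angle, velocity, kp=5.0, kd=1.0):
    # Hand-designed PD controller with known guarantees.
    return -kp * angle - kd * velocity

def is_safe(torque, limit=2.0):
    # Guard condition: accept only torques inside a verified envelope.
    return abs(torque) <= limit

def guarded_controller(angle, velocity):
    """Fallback composition: the learned action runs only when the
    guard accepts it; otherwise the guaranteed baseline takes over,
    so the overall guarantee is preserved."""
    u = learned_policy(angle, velocity)
    if is_safe(u):
        return u
    return baseline_pd(angle, velocity)
```

    In Behavior Tree terms, this is a Selector whose first branch is the guarded learned action and whose last branch is the verified baseline, which is what keeps the modularity argument intact.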

    Agents for educational games and simulations

    This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulation (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.

    MIMICA: A GENERAL FRAMEWORK FOR SELF-LEARNING COMPANION AI BEHAVIOR

    Companion or support characters controlled by Artificial Intelligence (AI) have been a feature of video games for decades. Many Role Playing Games (RPGs) offer a cast of support characters in the player’s party that are AI-controlled to various degrees. Many First Person Shooter (FPS) games include semi-autonomous or fully autonomous AI-controlled companions. Real Time Strategy (RTS) games have traditionally featured large numbers of semi-autonomous characters that collectively help accomplish various tasks (build, attack, etc.) for the player. While RPGs tend to focus on one or a small number of well-developed character companions accompanying a player-controlled main character, RTS games tend to have anonymous and replaceable workers and soldiers to be micromanaged by the player. In this paper we present the MimicA framework, designed to govern AI companion behavior by mimicking that of the player. Several features set this system apart from existing practices in AI-managed companions in contemporary RPG or RTS games. First, the behavior generated is designed to be fully autonomous, not partially autonomous as in most RTS games. Second, the solution is general: no specific prior behavior specifications are modeled. As a result, little to no genre, story or technical assumptions are necessary to implement this solution. Even the list of possible actions required is generalized. The system is designed to work independently of game representation. We further demonstrate, analyze and discuss MimicA by using it in Lord of Towers, a novel tower defense game featuring a player avatar. Through our user study we show that a majority of participants found the companions useful and liked the idea of this type of framework.
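
    A companion that mimics observed player behavior could, in its simplest form, record the player's actions per situation and replay the most frequent one. The sketch below is a toy illustration of that general idea, not MimicA's actual algorithm; the situation and action names are hypothetical:

```python
from collections import Counter

class MimicCompanion:
    """Toy player-mimicking companion: tally the player's actions per
    observed situation, then reproduce the most frequent one."""
    def __init__(self):
        self.observed = {}

    def observe(self, situation, action):
        # Record one demonstrated (situation, action) pair from the player.
        self.observed.setdefault(situation, Counter())[action] += 1

    def act(self, situation):
        counts = self.observed.get(situation)
        if not counts:
            return "idle"  # no demonstration for this situation yet
        return counts.most_common(1)[0][0]
```

    Because the companion only needs an opaque list of situations and actions, nothing here depends on genre or game representation, which mirrors the generality claim in the abstract.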

    Programming Robosoccer agents by modelling human behavior

    The Robosoccer simulator is a challenging environment for artificial intelligence, where a human has to program a team of agents and introduce it into a virtual soccer environment. Usually, Robosoccer agents are programmed by hand. In some cases, agents make use of machine learning (ML) to adapt to and predict the behavior of the opposing team, but the bulk of the agent is preprogrammed. The main aim of this paper is to transform Robosoccer into an interactive game and let a human control a Robosoccer agent. ML techniques can then be used to model his/her behavior from training instances generated during play. This model is later used to control a Robosoccer agent, thus imitating the human behavior. We have focused our research on low-level behaviors, like looking for the ball, dribbling the ball towards the goal, or scoring in the presence of opponent players. Results show that Robosoccer agents can indeed be controlled by programs that model human play.
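
    Modeling a human's play from logged training instances is essentially behavioural cloning. A minimal nearest-neighbour sketch of the idea follows; the state features, action names, and demonstration log are hypothetical, and the paper's actual ML techniques may differ:

```python
import math

# Hypothetical log of (state, action) pairs captured while a human plays:
# state = (distance_to_ball, angle_to_ball), action = a low-level command.
demo = [
    ((5.0, 0.0), "dash"),
    ((5.0, 1.2), "turn"),
    ((0.5, 0.1), "kick"),
]

def imitate(state):
    """1-nearest-neighbour behavioural clone: return the action whose
    logged state is closest (Euclidean distance) to the current one."""
    _, action = min((math.dist(state, s), act) for s, act in demo)
    return action
```

    In practice one would train a classifier over many thousands of logged instances, but the control loop is the same: map the agent's current perception to the action the human took in the most similar situation.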

    QL-BT: Enhancing Behaviour Tree Design and Implementation with Q-Learning

    Artificial intelligence has become an increasingly important aspect of computer game technology, as designers attempt to deliver engaging experiences for players by creating characters with behavioural realism to match advances in graphics and physics. Recently, behaviour trees have come to the forefront of games AI technology, providing a more intuitive approach than previous techniques such as hierarchical state machines, which often required complex data structures that produced poorly structured code when scaled up. The design and creation of behaviour trees, however, requires experience and effort. This research introduces Q-learning behaviour trees (QL-BT), a method for applying reinforcement learning to behaviour tree design. The technique facilitates AI designers' use of behaviour trees by assisting them in identifying the most appropriate moment to execute each branch of AI logic, as well as providing an implementation that can be used to debug, analyse and optimize early behaviour tree prototypes. Initial experiments demonstrate that behaviour trees produced by the QL-BT algorithm effectively integrate RL, automate tree design, and are human-readable.
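
    The reinforcement-learning half of such a technique rests on the standard tabular Q-learning update. The sketch below (not the paper's code; the states, branch names, rewards, and hyperparameters are hypothetical) shows how per-branch values could be learned and then used to pick which branch of logic to execute:

```python
from collections import defaultdict

# Hypothetical setup: states are coarse game situations, actions are
# behaviour tree branches; rewards arrive after executing a branch.
branches = ["attack", "retreat", "heal"]
Q = defaultdict(float)  # Q[(state, branch)] -> estimated value

def update(state, branch, reward, next_state, alpha=0.5, gamma=0.9):
    """Standard tabular Q-learning update for (state, branch) values."""
    best_next = max(Q[(next_state, b)] for b in branches)
    Q[(state, branch)] += alpha * (reward + gamma * best_next - Q[(state, branch)])

def best_branch(state):
    # The branch ordering a designer would read off the learned values.
    return max(branches, key=lambda b: Q[(state, b)])

# Repeated positive reward for healing when hurt (and negative reward
# for attacking) makes "heal" the preferred branch in that situation.
for _ in range(20):
    update("low_hp", "heal", 1.0, "ok")
    update("low_hp", "attack", -1.0, "dead")
```

    The learned Q-values can then be translated back into condition ordering in a hand-readable tree, which is the debugging and optimization use the abstract describes.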