
    Evolutionary Behavior Tree Approaches for Navigating Platform Games

    Computer games are highly dynamic environments, where players are faced with a multitude of potentially unseen scenarios. In this article, AI controllers are applied to the Mario AI Benchmark platform, using the Grammatical Evolution system to evolve Behavior Tree structures. These controllers are either evolved to handle both navigation and reactions to game elements, or used in conjunction with a dynamic A* approach. The results obtained highlight the applicability of Behavior Trees as representations for evolutionary computation, and their flexibility for incorporating diverse algorithms to deal with specific aspects of bot control in game environments.
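
    The abstract does not include code; the sketch below only illustrates the kind of Behavior Tree structure such an evolutionary run could produce for a platformer controller. The composite nodes, the state dictionary, and the leaf behaviours (enemy_near, jump, move_right) are assumptions, not the paper's grammar or benchmark interface.

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Selector:
    """Tick children in order; succeed on the first child that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

class Sequence:
    """Tick children in order; fail on the first child that fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Leaf:
    """Wrap a function taking the game state and returning SUCCESS/FAILURE."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, state):
        return self.fn(state)

# Hypothetical condition and action leaves for a Mario-like state dictionary.
enemy_near = Leaf(lambda s: SUCCESS if s.get("enemy_near") else FAILURE)
jump       = Leaf(lambda s: (s["actions"].append("JUMP"), SUCCESS)[1])
move_right = Leaf(lambda s: (s["actions"].append("RIGHT"), SUCCESS)[1])

# One tree an evolutionary run might produce: jump over nearby enemies,
# otherwise keep running right.
controller = Selector(Sequence(enemy_near, jump), move_right)

state = {"enemy_near": True, "actions": []}
controller.tick(state)
print(state["actions"])  # ['JUMP']
```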

    Adding Neural Network Controllers to Behavior Trees without Destroying Performance Guarantees

    In this paper, we show how Behavior Trees that have performance guarantees, in terms of safety and goal convergence, can be extended with components that were designed using machine learning, without destroying those performance guarantees. Machine learning approaches such as reinforcement learning or learning from demonstration can be very appealing to AI designers who want efficient and realistic behaviors in their agents. However, those algorithms seldom provide guarantees for solving the given task in all situations while keeping the agent safe. Instead, such guarantees are often easier to find for manually designed, model-based approaches. In this paper we exploit the modularity of Behavior Trees to extend a given design with an efficient, but possibly unreliable, machine learning component in a way that preserves the guarantees. The approach is illustrated with an inverted pendulum example. (Comment: Submitted to IEEE Transactions on Games.)
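
    As a rough illustration of the idea described (not the paper's construction), the sketch below guards an unverified learned controller with a hand-written condition and falls back to a model-based controller otherwise, which is how a fallback node in a Behavior Tree can keep the original guarantees. The two policies and the safe-region test are invented placeholders for an inverted-pendulum-style state (angle, angular velocity).

```python
def learned_policy(theta, theta_dot):
    # Stand-in for a neural-network controller: efficient but unverified.
    return -8.0 * theta - 1.5 * theta_dot

def fallback_policy(theta, theta_dot):
    # Stand-in for the manually designed, model-based controller whose
    # safety and convergence properties are assumed to be known.
    return -5.0 * theta - 2.0 * theta_dot

def learned_branch_guard(theta, theta_dot, margin=0.3):
    # Guard condition: only trust the learned branch well inside a region
    # from which the fallback controller is assumed able to recover.
    return abs(theta) < margin and abs(theta_dot) < 1.0

def tick(theta, theta_dot):
    """Fallback-style node: learned branch if its guard holds, else safe branch."""
    if learned_branch_guard(theta, theta_dot):
        return learned_policy(theta, theta_dot)
    return fallback_policy(theta, theta_dot)

print(tick(0.1, 0.2))  # guard holds, learned branch is used
print(tick(0.8, 0.0))  # guard fails, safe fallback acts
```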

    Learning Behavior Trees with Genetic Programming in Unpredictable Environments

    Modern industrial applications require robots to be able to operate in unpredictable environments, and programs to be created with minimal effort, as there may be frequent changes to the task. In this paper, we show that genetic programming can be effectively used to learn the structure of a behavior tree (BT) to solve a robotic task in an unpredictable environment. Moreover, we propose to use a simple simulator for the learning, and demonstrate that the learned BTs can solve the same task in a realistic simulator, reaching convergence without the need for task-specific heuristics. The learned solution is tolerant to faults, making our method appealing for real robotic applications.
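
    A toy genetic-programming loop over behavior-tree genomes, under simple assumptions: genomes are nested lists, variation is random subtree mutation, and fitness comes from a stub standing in for the simple simulator. The node names are hypothetical and not taken from the paper.

```python
import random

LEAVES = ["pick_cube", "place_cube", "move_to_goal", "cube_placed?"]
COMPOSITES = ["Sequence", "Fallback"]

def random_tree(depth=2):
    # Grow a random BT genome as a nested list: [composite, child, child, ...].
    if depth <= 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    return [random.choice(COMPOSITES)] + [
        random_tree(depth - 1) for _ in range(random.randint(2, 3))
    ]

def mutate(tree, depth=2):
    # Replace a randomly chosen subtree with a freshly grown one.
    if not isinstance(tree, list) or random.random() < 0.2:
        return random_tree(depth)
    tree = tree[:]
    i = random.randrange(1, len(tree))
    tree[i] = mutate(tree[i], depth - 1)
    return tree

def fitness(tree):
    # Stub for the cheap simulator: reward genomes that mention both a pick
    # and a place action, with a small penalty for bloat.
    flat = str(tree)
    return ("pick_cube" in flat) + ("place_cube" in flat) - 0.01 * len(flat)

population = [random_tree() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(max(population, key=fitness))
```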

    QL-BT: Enhancing Behaviour Tree Design and Implementation with Q-Learning

    Artificial intelligence has become an increasingly important aspect of computer game technology, as designers attempt to deliver engaging experiences for players by creating characters with behavioural realism to match advances in graphics and physics. Recently, behaviour trees have come to the forefront of games AI technology, providing a more intuitive approach than previous techniques such as hierarchical state machines, which often required complex data structures and produced poorly structured code when scaled up. The design and creation of behaviour trees, however, requires experience and effort. This research introduces Q-learning behaviour trees (QL-BT), a method for the application of reinforcement learning to behaviour tree design. The technique facilitates AI designers' use of behaviour trees by assisting them in identifying the most appropriate moment to execute each branch of AI logic, as well as providing an implementation that can be used to debug, analyse and optimize early behaviour tree prototypes. Initial experiments demonstrate that behaviour trees produced by the QL-BT algorithm effectively integrate reinforcement learning, automate tree design, and are human-readable.
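
    One way to read the QL-BT idea from the abstract is that tabular Q-learning estimates how valuable each branch is in each state, and the learned values then inform when (or in what order) a selector executes its branches. The sketch below is only that reading, with invented states, branches, and rewards rather than the paper's algorithm.

```python
import random
from collections import defaultdict

BRANCHES = ["attack", "take_cover", "patrol"]      # hypothetical BT branches
STATES = ["enemy_visible", "low_health", "idle"]   # hypothetical game states

Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def reward(state, branch):
    # Toy environment: taking cover pays off at low health, attacking when
    # the enemy is visible, patrolling otherwise.
    table = {("enemy_visible", "attack"): 1.0,
             ("low_health", "take_cover"): 1.0,
             ("idle", "patrol"): 1.0}
    return table.get((state, branch), -0.1)

for episode in range(5000):
    s = random.choice(STATES)
    a = (random.choice(BRANCHES) if random.random() < epsilon
         else max(BRANCHES, key=lambda b: Q[(s, b)]))
    r = reward(s, a)
    s_next = random.choice(STATES)
    best_next = max(Q[(s_next, b)] for b in BRANCHES)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Read out a per-state branch ordering, which a designer could bake back
# into the selector's child order when refining the tree.
for s in STATES:
    order = sorted(BRANCHES, key=lambda b: Q[(s, b)], reverse=True)
    print(s, "->", order)
```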

    The Mario AI Benchmark and Competitions
