
    Deep learning for video game playing

    In this article, we review recent deep learning advances in the context of how they have been applied to play different types of video games, such as first-person shooters, arcade games, and real-time strategy games. We analyze the unique requirements that different game genres pose to a deep learning system and highlight important open challenges in applying these machine learning methods to video games, such as general game playing, dealing with extremely large decision spaces, and coping with sparse rewards.
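
    As a point of reference for the value-based methods the survey covers, below is a minimal sketch (in NumPy, not taken from the article) of the pattern most of them share: a network maps raw game observations to per-action values, and an epsilon-greedy policy handles exploration. All names, sizes, and parameters are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def q_network(obs, w1, w2):
        # Toy two-layer value network: observation -> per-action Q-values.
        h = np.tanh(obs @ w1)
        return h @ w2

    obs_dim, n_actions = 16, 4                      # hypothetical sizes for an arcade game
    w1 = rng.normal(0.0, 0.1, (obs_dim, 32))
    w2 = rng.normal(0.0, 0.1, (32, n_actions))

    def act(obs, epsilon=0.1):
        # Epsilon-greedy: with sparse rewards pure exploitation fails, so a
        # fraction of actions is sampled uniformly at random.
        if rng.random() < epsilon:
            return int(rng.integers(n_actions))
        return int(np.argmax(q_network(obs, w1, w2)))

    print(act(rng.normal(size=obs_dim)))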

    Neuroevolution in Games: State of the Art and Open Challenges

    This paper surveys research on applying neuroevolution (NE) to games. In neuroevolution, artificial neural networks are trained through evolutionary algorithms, taking inspiration from the way biological brains evolved. We analyse the application of NE in games along five axes: the role NE is chosen to play in a game, the types of neural networks used, the way these networks are evolved, how fitness is determined, and what type of input the network receives. The article also highlights important open research challenges in the field.
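
    To make the survey's axes concrete, here is a hedged sketch of neuroevolution in its simplest form: a fixed-topology network whose weight vector is evolved with truncation selection and Gaussian mutation against a task-specific fitness. The network sizes, the toy fitness, and the hyperparameters are assumptions for illustration, not from the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    IN, HID, OUT, POP = 8, 16, 2, 50                # illustrative sizes
    N_PARAMS = IN * HID + HID * OUT
    x = rng.normal(size=IN)                         # fixed toy input
    target = rng.normal(size=OUT)                   # stand-in for a game objective

    def forward(params, x):
        # Fixed-topology network: only the weights are evolved.
        w1 = params[:IN * HID].reshape(IN, HID)
        w2 = params[IN * HID:].reshape(HID, OUT)
        return np.tanh(x @ w1) @ w2

    def fitness(params):
        # Placeholder for a game rollout; a real setup would score
        # in-game performance instead of distance to a fixed target.
        return -np.sum((forward(params, x) - target) ** 2)

    pop = rng.normal(0.0, 0.5, (POP, N_PARAMS))
    for generation in range(100):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-POP // 5:]]          # truncation selection
        parents = elite[rng.integers(len(elite), size=POP)]
        pop = parents + rng.normal(0.0, 0.1, parents.shape)  # Gaussian mutation
    print("best fitness:", scores.max())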

    Efficient Evolution of Neural Networks

    This thesis addresses the study of evolutionary methods for the synthesis of neural network controllers. Chapter 1 introduces the research area, reviews the state of the art, discusses promising research directions, and presents the two major scientific objectives of the thesis. The first objective, covered in Chapter 2, is to verify the efficacy of some of the most promising neuro-evolutionary methods proposed in the literature, including two new methods that I developed. This was done by designing an extended version of the double-pole balancing problem, which can be used to benchmark alternative algorithms more rigorously, by studying the effect of critical parameters, and by conducting several series of comparative experiments. The results obtained indicate that some methods perform better with respect to all the considered criteria, i.e. performance, robustness to environmental variations, and capability to scale up to more complex problems. The second objective, targeted in Chapter 3, is the design of a new hybrid algorithm that combines evolution and learning by demonstration. The combination of these two processes is appealing because it potentially allows the adaptive agent to exploit a richer training feedback, constituted by both a scalar performance objective (reinforcement signal or fitness measure) and a detailed description of a suitable behaviour (demonstration). The proposed method has been successfully evaluated on two qualitatively different robotic problems. Chapter 4 summarizes the results obtained and describes the major contributions of the thesis.
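
    The hybrid idea of Chapter 3 can be pictured with a short sketch: selection scores blend a scalar performance objective with how closely an individual reproduces a demonstrated behaviour. This is an illustrative reconstruction from the abstract, not the thesis code; the toy task, the behaviour model, and the 50/50 weighting are all assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, np.pi, 20)
    demo = np.sin(t)                                # hypothetical demonstrated trajectory

    def behaviour(params):
        # Stand-in for a controller rollout returning the agent's trajectory.
        return np.tanh(params[0] * t + params[1])

    def combined_score(params, w=0.5):
        # Scalar performance objective (fitness) plus a demonstration term;
        # the weighting w is an arbitrary illustrative choice.
        performance = -abs(behaviour(params)[-1] - demo[-1])
        imitation = -np.mean((behaviour(params) - demo) ** 2)
        return w * performance + (1.0 - w) * imitation

    pop = rng.normal(0.0, 1.0, (30, 2))
    for generation in range(50):
        scores = np.array([combined_score(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-6:]]      # keep the six best
        pop = parents[rng.integers(6, size=30)] + rng.normal(0.0, 0.05, (30, 2))
    print("best combined score:", scores.max())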

    Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents

    Evolution strategies (ES) are a family of black-box optimization algorithms able to train deep neural networks roughly as well as Q-learning and policy gradient methods on challenging deep reinforcement learning (RL) problems, but much faster (e.g. hours vs. days) because they parallelize better. However, many RL problems require directed exploration because their reward functions are sparse or deceptive (i.e. contain local optima), and it is unknown how to encourage such exploration with ES. Here we show that algorithms invented to promote directed exploration in small-scale evolved neural networks via populations of exploring agents, specifically novelty search (NS) and quality diversity (QD) algorithms, can be hybridized with ES to improve its performance on sparse or deceptive deep RL tasks while retaining scalability. Our experiments confirm that the resulting new algorithms, NS-ES and two QD algorithms, NSR-ES and NSRA-ES, avoid local optima encountered by ES and achieve higher performance on Atari and on simulated robots learning to walk around a deceptive trap. This paper thus introduces a family of fast, scalable reinforcement learning algorithms capable of directed exploration. It also adds this new family of exploration algorithms to the RL toolbox and raises the interesting possibility that analogous algorithms with multiple simultaneous paths of exploration might also combine well with existing RL algorithms outside ES.
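
    A compact sketch of the NS-ES core loop as the abstract describes it: a standard ES parameter update in which each perturbation is scored by novelty, i.e. the distance of its behaviour characterization to an archive of past behaviours, rather than by task reward alone. The behaviour map, dimensions, and hyperparameters below are illustrative assumptions, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(3)
    dim, n_perturb, sigma, lr = 10, 40, 0.1, 0.05

    def behaviour(theta):
        # Stand-in for e.g. a robot's final (x, y) position after a rollout.
        return theta[:2].copy()

    def novelty(b, archive, k=5):
        # Mean distance to the k nearest neighbours in the behaviour archive.
        d = np.linalg.norm(archive - b, axis=1)
        return np.sort(d)[:k].mean()

    theta = np.zeros(dim)
    archive = [behaviour(theta)]
    for iteration in range(100):
        eps = rng.normal(size=(n_perturb, dim))
        nov = np.array([novelty(behaviour(theta + sigma * e), np.array(archive))
                        for e in eps])
        nov = (nov - nov.mean()) / (nov.std() + 1e-8)    # standardize scores
        theta += lr / (n_perturb * sigma) * eps.T @ nov  # ES-style novelty ascent
        archive.append(behaviour(theta))
    print("final behaviour:", behaviour(theta))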

    Evolving Intelligent Multimodal Gameplay Agents and Decision Makers with Neuroevolution

    'Super Mario Bros.' is a difficult platforming game that requires multiple behavioral modes to handle different gameplay elements such as collecting coins, dodging enemies, and reaching the end of the level. Previous methods for creating intelligent game-playing agents have relied on a human-designed behavior policy for each gameplay state or on combining all gameplay goals into a single task to be learned. This thesis examines methods for training neural network controllers that exhibit multiple modes of behavior. These controllers are evolved through multi-objective optimization on the 'MarioAI' benchmark platform. Artificial neural networks were evolved to exhibit complex, multimodal behavior using multiple sub-objectives of the game, thereby coping with the non-linear, noisy, and fractured game environment. Experiments were conducted to produce multiple high-quality Pareto-optimal solutions with differing behavioral characteristics. These solutions were then selected among by a Decision Maker Neural Network Ensemble evolved to pick the best solution for each game level. This Decision Maker Ensemble learned from minimal information and provided the highest overall game score. The results of this thesis show that it is possible to train agents on sub-objectives, teaching multiple forms of complex behavior that an evolved Decision Maker can then select among to achieve a better outcome than agents trained toward a single solution.
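
    An illustrative sketch (not the thesis code) of the two stages the abstract describes: score candidate controllers on several sub-objectives, keep the Pareto-optimal set, then let a decision maker pick one per level. The objectives and the weighted picking rule here are simplified assumptions; the thesis evolves a neural-network ensemble for the final step.

    import numpy as np

    rng = np.random.default_rng(4)
    # Each row: (coins collected, distance covered, enemies dodged) for one candidate.
    scores = rng.random((20, 3))

    def pareto_front(points):
        # Indices of points not dominated on all objectives by any other point.
        keep = []
        for i, p in enumerate(points):
            dominated = np.any(np.all(points >= p, axis=1) &
                               np.any(points > p, axis=1))
            if not dominated:
                keep.append(i)
        return keep

    front = pareto_front(scores)
    # A trivial decision maker: weight objectives by assumed relevance to the
    # current level and pick the best front member under that weighting.
    level_weights = np.array([0.2, 0.7, 0.1])
    choice = front[int(np.argmax(scores[front] @ level_weights))]
    print("Pareto front:", front, "chosen controller:", choice)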

    Boosting computational creativity with human interaction in mixed-initiative co-creation tasks

    Research in computational creativity often focuses on autonomously creative systems, which incorporate creative processes and result in creative outcomes. However, the integration of artificially intelligent processes into human-computer interaction tools necessitates that we identify how computational creativity can be shaped and ultimately enhanced by human intervention. This paper attempts to connect mixed-initiative design with established theories of computational creativity, and to adapt the latter to accommodate a human initiative impacting computationally creative processes and outcomes. Several case studies of mixed-initiative tools for design and play are used to corroborate the arguments in this paper.