
    Neuroevolution in Games: State of the Art and Open Challenges

    This paper surveys research on applying neuroevolution (NE) to games. In neuroevolution, artificial neural networks are trained through evolutionary algorithms, taking inspiration from the way biological brains evolved. We analyse the application of NE in games along five axes: the role NE is chosen to play in a game, the different types of neural networks used, the way these networks are evolved, how fitness is determined, and what type of input the networks receive. The article also highlights important open research challenges in the field.
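    As a rough illustration of the core idea the survey covers (not any specific algorithm from it), the sketch below evolves the weights of a tiny fixed-topology network with mutation and truncation selection on the XOR task; the network shape, population size, and mutation scale are all illustrative assumptions.

```python
import math
import random

random.seed(0)

def forward(weights, x):
    # Tiny 2-2-1 tanh network; `weights` is a flat list of 9 values:
    # two hidden neurons x (2 inputs + bias), then 2 hidden weights + bias.
    h = [math.tanh(weights[i * 3] * x[0] + weights[i * 3 + 1] * x[1] + weights[i * 3 + 2])
         for i in range(2)]
    return math.tanh(weights[6] * h[0] + weights[7] * h[1] + weights[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(weights):
    # Negative squared error on XOR: higher is better.
    return -sum((forward(weights, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=30, generations=200, sigma=0.4):
    pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 5]  # truncation selection: keep the top 20%
        # Refill the population with Gaussian-mutated copies of elite genomes.
        pop = elite + [[w + random.gauss(0, sigma) for w in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()
```

    In a game setting the fitness function would be an in-game score or survival time rather than XOR error, but the evolve-evaluate-select loop is the same.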

    Controller for TORCS created by imitation

    Proceedings of: IEEE Symposium on Computational Intelligence and Games (CIG 2009), September 7-10, 2009, Milano, Italy.
    This paper is an initial approach to creating a controller for the game TORCS by learning how another controller, or a human, plays the game. We used data obtained from two controllers and from one human player. The first controller is the winner of the WCCI 2008 Simulated Car Racing Competition, and the second is a hand-coded controller that completes a lap on all tracks. First, each kind of controller is imitated separately; then a mix of the data is used to create new controllers. The imitation is performed by training a feed-forward neural network on the data, using the backpropagation algorithm for learning.
    This work was supported in part by the University Carlos III of Madrid under grant PIF UC3M01-0809 and by the Ministry of Science and Innovation under project TRA2007-67374-C02-02.
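    A minimal sketch of the imitation step described above, assuming a one-hidden-layer feed-forward network trained by backpropagation; the sensor/action data here is a synthetic stand-in (the actual TORCS sensor set and recorded driving logs are not reproduced):

```python
import math
import random

random.seed(1)

# Hypothetical recorded pairs of (track-angle sensor reading, steering command),
# standing in for logs of a reference controller or a human driver.
DATA = [(a / 10.0, math.tanh(0.8 * a / 10.0)) for a in range(-10, 11)]

H = 4  # one hidden layer of 4 tanh units, linear output
w1 = [random.gauss(0, 0.5) for _ in range(H)]  # input -> hidden weights
b1 = [0.0] * H
w2 = [random.gauss(0, 0.5) for _ in range(H)]  # hidden -> output weights
b2 = 0.0

def predict(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    return sum(w2[i] * h[i] for i in range(H)) + b2, h

lr = 0.05
for epoch in range(500):
    for x, target in DATA:
        y, h = predict(x)
        err = y - target
        # Backpropagation: gradient through the output layer, then the hidden layer.
        for i in range(H):
            grad_pre = err * w2[i] * (1 - h[i] ** 2)  # uses pre-update w2[i]
            w2[i] -= lr * err * h[i]
            w1[i] -= lr * grad_pre * x
            b1[i] -= lr * grad_pre
        b2 -= lr * err
```

    Mixing data from several sources, as the paper does, would simply mean concatenating the recorded datasets before training.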

    Neuroevolutionary reinforcement learning for generalized control of simulated helicopters

    This article presents an extended case study in the application of neuroevolution to generalized simulated helicopter hovering, an important challenge problem for reinforcement learning. While neuroevolution is well suited to coping with the domain's complex transition dynamics and high-dimensional state and action spaces, the need to explore efficiently and learn on-line poses unusual challenges. We propose and evaluate several methods for three increasingly challenging variations of the task, including the method that won first place in the 2008 Reinforcement Learning Competition. The results demonstrate that (1) neuroevolution can be effective for complex on-line reinforcement learning tasks such as generalized helicopter hovering, (2) neuroevolution excels at finding effective helicopter hovering policies but not at learning helicopter models, (3) due to the difficulty of learning reliable models, model-based approaches to helicopter hovering are feasible only when domain expertise is available to aid the design of a suitable model representation, and (4) recent advances in efficient resampling can enable neuroevolution to tackle more aggressively generalized reinforcement learning tasks.
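    To illustrate why resampling matters when fitness is evaluated in a stochastic domain like helicopter hovering (a sketch of the general technique, not the paper's specific method; the noise model and policy scores are invented):

```python
import random
import statistics

random.seed(2)

def noisy_fitness(policy_quality):
    # Hypothetical noisy episode return: the policy's true quality plus
    # heavy evaluation noise, as in a stochastic control domain.
    return policy_quality + random.gauss(0, 2.0)

def rank(policies, samples):
    # Resampling: average several noisy evaluations per policy before ranking,
    # shrinking the standard error of each estimate by sqrt(samples).
    scores = {p: statistics.mean(noisy_fitness(p) for _ in range(samples))
              for p in policies}
    return sorted(policies, key=scores.get, reverse=True)

policies = [0.0, 0.5, 1.0, 1.5, 2.0]       # true qualities, unknown to the learner
noisy_order = rank(policies, samples=1)    # may be scrambled by evaluation noise
stable_order = rank(policies, samples=100) # much closer to the true ordering
```

    In a neuroevolution loop, the ranking step above replaces the single-episode fitness sort, trading evaluation time for selection reliability.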

    Gene regulated car driving: using a gene regulatory network to drive a virtual car

    This paper presents a virtual racing car controller based on an artificial gene regulatory network. Usually used to control virtual cells in developmental models, gene regulatory networks have recently been shown to be capable of controlling various kinds of agents, such as foraging agents, cart-pole balancers, and swarm robots. This paper details how a gene regulatory network is evolved to drive on any track through a three-stage incremental evolution. To do so, the inputs and outputs of the network are directly mapped to the car's sensors and actuators. To make this controller a competitive racer, we distort its inputs online to make it drive faster and to avoid opponents. Another interesting property emerges from this approach: the regulatory network is naturally resistant to noise. To evaluate this approach, we entered the 2013 Simulated Car Racing Championship against eight other evolutionary and scripted approaches. In its first participation, this approach finished third in the competition.
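    A toy sketch of the GRN-as-controller idea, under an assumed sigmoid production/decay dynamics and hypothetical sensor values; the paper's actual GRN model, evolved parameters, and online input distortion are not reproduced here:

```python
import math

def grn_step(conc, weights, inputs, dt=0.1):
    # One Euler step of a simple gene regulatory network model (an assumption;
    # real GRN controllers differ in detail). Each "protein" concentration is
    # produced at a sigmoid rate of its weighted regulation and decays linearly.
    n = len(conc)
    new = []
    for i in range(n):
        reg = sum(weights[i][j] * conc[j] for j in range(n)) + inputs[i]
        activation = 1.0 / (1.0 + math.exp(-reg))          # sigmoid regulation
        new.append(conc[i] + dt * (activation - conc[i]))  # production - decay
    return new

# Three proteins: two driven by (hypothetical) car sensors, one read as steering.
weights = [[0.0, -2.0, 0.0],
           [-2.0, 0.0, 0.0],
           [3.0, -3.0, 0.0]]
conc = [0.5, 0.5, 0.5]
sensors = [1.0, 0.2, 0.0]  # e.g. left/right track-edge distances, none for output
for _ in range(100):
    conc = grn_step(conc, weights, sensors)
steering = 2.0 * conc[2] - 1.0  # map the output protein from [0, 1] to [-1, 1]
```

    Because each update is a convex combination of the current concentration and a bounded sigmoid, concentrations stay in (0, 1), which is one reason such networks degrade gracefully under input noise.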

    Evolution of Neural Networks for Helicopter Control: Why Modularity Matters

    The problem of automatically developing controllers for vehicles whose exact characteristics are not known is considered in the context of miniature helicopter flocking. A methodology is proposed in which neural-network-based controllers are evolved in simulation using a dynamic model qualitatively similar to the physical helicopter. Several network architectures and evolutionary sequences are investigated, and two approaches are found that evolve very competitive controllers. Dividing the neural network into modules, and the task into incremental steps, seems to be a precondition for success, and we analyse why this might be so.

    A human-like TORCS controller for the Simulated Car Racing Championship

    Proceedings of: IEEE Conference on Computational Intelligence and Games (CIG'10), Copenhagen, Denmark, 18-21 August 2010.
    This paper presents a controller for the 2010 Simulated Car Racing Championship. The goal is not to create the fastest controller but a human-like one. To achieve this, we first build a model of the track while the car is running, and then use several neural networks to predict the trajectory the car should follow and the target speed. A scripted policy handles gear changes and follows the predicted trajectory at the predicted speed. The neural networks are trained with data retrieved from a human player and evaluated on a new track. The results show acceptable performance on unknown tracks: the controller is more than 20% slower than the human on the same tracks because of the mistakes it makes when trying to follow the trajectory.
    This work was supported in part by the University Carlos III of Madrid under grant PIF UC3M01-0809 and by the Ministry of Science and Innovation under project TRA2007-67374-C02-02.

    Deep learning for video game playing

    In this article, we review recent deep learning advances in the context of how they have been applied to play different types of video games, such as first-person shooters, arcade games, and real-time strategy games. We analyze the unique requirements that different game genres pose to a deep learning system and highlight important open challenges in applying these machine learning methods to video games, such as general game playing, dealing with extremely large decision spaces, and coping with sparse rewards.