674 research outputs found
Arms races and car races
Evolutionary car racing (ECR) is extended to the case of two cars racing on the same track. A sensor representation is devised, and various methods of evolving car controllers for competitive racing are explored. ECR can be combined with co-evolution in a wide variety of ways, and one aspect which is explored here is the relative-absolute fitness continuum. Systematic behavioural differences are found along this continuum; further, a tendency towards specialization and the reactive nature of the controller architecture are found to limit evolutionary progress.
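The relative-absolute fitness continuum described above can be sketched as a simple blend of two fitness signals. The genome encoding and the stand-in evaluation functions below are hypothetical illustrations, not the paper's actual racing simulation:

```python
def absolute_fitness(genome):
    # Hypothetical stand-in for solo performance on a fixed track
    # (e.g. distance covered); here just a toy score over the genome.
    return -sum((g - 0.5) ** 2 for g in genome)

def relative_fitness(genome, opponents):
    # Fraction of head-to-head matchups won; in this toy version the
    # higher absolute score "wins" the race.
    wins = sum(absolute_fitness(genome) > absolute_fitness(o) for o in opponents)
    return wins / max(len(opponents), 1)

def blended_fitness(genome, opponents, alpha):
    # alpha = 1.0 -> purely absolute fitness, alpha = 0.0 -> purely
    # relative (co-evolutionary) fitness; values in between sample
    # intermediate points on the continuum.
    return (alpha * absolute_fitness(genome)
            + (1 - alpha) * relative_fitness(genome, opponents))
```

Sweeping `alpha` between 0 and 1 is one way to probe where on the continuum the behavioural differences appear.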
Making Racing Fun Through Player Modeling and Track Evolution
This paper addresses the problem of automatically constructing tracks tailor-made to maximize the enjoyment of individual players in a simple car racing game. To this end, some approaches to player modeling are investigated, and a method of using evolutionary algorithms to construct racing tracks is presented. A simple player-dependent metric of entertainment is proposed and used as the fitness function when evolving tracks. We conclude that accurate player modeling poses some significant challenges, but track evolution works well given the right track representation.
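Using an entertainment metric as the fitness function for track evolution can be sketched as a minimal (1+1) hill climber. The curvature-list track representation and the variance-matching metric below are illustrative assumptions, not the paper's actual model:

```python
import random

random.seed(0)

def entertainment(track):
    # Hypothetical player-dependent metric: rewards tracks whose
    # curvature variance matches an assumed "challenge" level that a
    # player model would supply.
    mean = sum(track) / len(track)
    var = sum((c - mean) ** 2 for c in track) / len(track)
    target = 0.2  # assumed preference inferred from a player model
    return -abs(var - target)

def mutate(track, sigma=0.05):
    # Gaussian perturbation of every segment curvature.
    return [c + random.gauss(0, sigma) for c in track]

def evolve(track, generations=200):
    # (1+1) hill climber: keep the child whenever it is at least as fit.
    best, best_fit = track, entertainment(track)
    for _ in range(generations):
        child = mutate(best)
        fit = entertainment(child)
        if fit >= best_fit:
            best, best_fit = child, fit
    return best, best_fit

track = [0.0] * 20  # track encoded as a list of segment curvatures
evolved, fit = evolve(track)
```

A population-based evolutionary algorithm would replace the single-parent loop, but the fitness-function role of the entertainment metric is the same.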
Evolving a rule system controller for automatic driving in a car racing competition
IEEE Symposium on Computational Intelligence and Games, Perth, Australia, 15-18 December 2008. The techniques and technologies supporting Automatic Vehicle Guidance are an important research area. Automobile manufacturers view automatic driving as a very attractive product, with key features that improve car safety, reduce emissions and fuel consumption, and optimize driver comfort during long journeys. Car racing is an active research field in which new advances in aerodynamics, consumption and engine power are critical each season. Our proposal is to investigate how evolutionary computation techniques can help in this field. For this work we have designed an automatic controller that learns rules with a genetic algorithm. This paper reports the results obtained by this controller during the car racing competition held in Hong Kong during the IEEE World Congress on Computational Intelligence (WCCI 2008).
Gene regulated car driving: using a gene regulatory network to drive a virtual car
This paper presents a virtual racing car controller based on an artificial gene regulatory network. Usually used to control virtual cells in developmental models, gene regulatory networks have recently been shown to be capable of controlling various kinds of agents, such as foraging agents, pole carts and swarm robots. This paper details how a gene regulatory network is evolved to drive on any track through a three-stage incremental evolution. To do so, the inputs and outputs of the network are directly mapped to the car's sensors and actuators. To make this controller a competitive racer, we distort its inputs online so that it drives faster and avoids opponents. Another interesting property emerges from this approach: the regulatory network is naturally resistant to noise. To evaluate this approach, we participated in the 2013 simulated car racing competition against eight other evolutionary and scripted approaches. In its first participation, this approach finished in third place.
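The sensor-to-actuator mapping of a gene regulatory network controller can be sketched roughly as follows. This is a heavily simplified sigmoid-regulation model with made-up wiring, not the authors' GRN:

```python
import math

def grn_step(concentrations, weights, inputs, dt=0.1):
    """One update of a minimal gene regulatory network: each protein's
    concentration moves toward a sigmoid of the weighted sum of all
    regulatory signals (the other proteins plus the external inputs)."""
    signals = concentrations + inputs
    new = []
    for i, c in enumerate(concentrations):
        activation = sum(w * s for w, s in zip(weights[i], signals))
        target = 1.0 / (1.0 + math.exp(-activation))  # sigmoid regulation
        new.append(c + dt * (target - c))             # relax toward target
    return new

# Hypothetical wiring: 2 regulatory proteins, 3 sensor inputs
# (e.g. track-edge distances); the protein concentrations would be
# read out as steering/throttle commands.
weights = [[0.5, -0.3, 1.0, 0.2, -0.8],
           [-0.4, 0.6, 0.1, 0.9, 0.3]]
state = [0.5, 0.5]
sensors = [0.2, 0.8, 0.1]
state = grn_step(state, weights, sensors)
```

In the evolved controller the weight matrix would be the genome under selection; here it is fixed for illustration.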
The 2007 IEEE CEC simulated car racing competition
This paper describes the simulated car racing competition that was arranged as part of the 2007 IEEE Congress on Evolutionary Computation. The game that was used as the domain for the competition, the controllers submitted as entries, and the results are all presented. With this paper, we hope to provide some insight into the efficacy of various computational intelligence methods on a well-defined game task, as well as an example of one way of running a competition. In the process, we provide a set of reference results for those who wish to use the simplerace game to benchmark their own algorithms. The paper is co-authored by the organizers and participants of the competition.
Learning to Race through Coordinate Descent Bayesian Optimisation
In the automation of many kinds of processes, the observable outcome can often be described as the combined effect of an entire sequence of actions, or controls, applied throughout its execution. In these cases, strategies to optimise control policies for individual stages of the process might not be applicable, and instead the whole policy might have to be optimised at once. On the other hand, the cost of evaluating the policy's performance might also be high, so it is desirable that a solution can be found with as few interactions as possible with the real system. We consider the problem of optimising control policies to allow a robot to complete a given race track within a minimum amount of time. We assume that the robot has no prior information about the track or its own dynamical model, just an initial valid driving example. Localisation is only applied to monitor the robot and to provide an indication of its position along the track's centre axis. We propose a method for finding a policy that minimises the time per lap while keeping the vehicle on the track, using a Bayesian optimisation (BO) approach over a reproducing kernel Hilbert space. We apply an algorithm to search more efficiently over high-dimensional policy-parameter spaces with BO, by iterating over each dimension individually, in a sequential coordinate descent-like scheme. Experiments demonstrate the performance of the algorithm against other methods in a simulated car racing environment. Comment: Accepted as a conference paper for the 2018 IEEE International Conference on Robotics and Automation (ICRA).
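The sequential coordinate-descent scheme can be sketched as below. For brevity this substitutes an exhaustive 1-D candidate search for the paper's 1-D Bayesian optimisation inner loop, and the quadratic "lap time" objective is a toy stand-in for the real policy evaluation:

```python
def coordinate_descent(objective, x0, candidates, sweeps=3):
    """Minimise `objective` by optimising one coordinate at a time.
    The paper runs 1-D Bayesian optimisation for each inner step; this
    sketch replaces it with an exhaustive 1-D candidate search."""
    x = list(x0)
    for _ in range(sweeps):
        for d in range(len(x)):
            best_v, best_f = x[d], objective(x)
            for v in candidates:  # 1-D search over coordinate d only
                trial = x[:d] + [v] + x[d + 1:]
                f = objective(trial)
                if f < best_f:
                    best_v, best_f = v, f
            x[d] = best_v  # commit the best value before moving on
    return x

# Toy "lap time" surrogate: quadratic bowl with optimum at (0.3, -0.7, 0.1).
def lap_time(params):
    targets = (0.3, -0.7, 0.1)
    return sum((p - t) ** 2 for p, t in zip(params, targets))

grid = [i / 10 - 1 for i in range(21)]  # candidate values -1.0 ... 1.0
best = coordinate_descent(lap_time, [0.0, 0.0, 0.0], grid)
```

Because each inner step is one-dimensional, the surrogate model in the full method stays cheap even when the overall policy-parameter space is high-dimensional.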
A human-like TORCS controller for the Simulated Car Racing Championship
Proceedings of: IEEE Congress on Computational Intelligence and Games (CIG'10), Copenhagen (Denmark), 18-21 August 2010. This paper presents a controller for the 2010 Simulated Car Racing Championship. The idea is not to create the fastest controller but a human-like one. To achieve this, we first created a process to build a model of the track while the car is running, and then used several neural networks to predict the trajectory the car should follow and the target speed. A scripted policy is used for the gear change and to follow the predicted trajectory at the predicted speed. The neural networks are trained with data retrieved from a human player and are evaluated on a new track. The results show acceptable performance of the controller on unknown tracks, more than 20% slower than the human on the same tracks because of the mistakes made when the controller tries to follow the trajectory. This work was supported in part by the University Carlos III of Madrid under grant PIF UC3M01-0809 and by the Ministry of Science and Innovation under project TRA2007-67374-C02-02.
Enhancing player experience in computer games: A computational Intelligence approach.
Ph.D. thesis (Doctor of Philosophy).
Proceedings of the SAB'06 Workshop on Adaptive Approaches for Optimizing Player Satisfaction in Computer and Physical Games
These proceedings contain the papers presented at the Workshop on Adaptive Approaches for Optimizing Player Satisfaction in Computer and Physical Games, held at the Ninth International Conference on the Simulation of Adaptive Behavior (SAB'06): From Animals to Animats 9, in Rome, Italy on 1 October 2006.
We were motivated by the current state of the art in intelligent game design using adaptive approaches. Artificial Intelligence (AI) techniques are mainly focused on generating human-like and intelligent character behaviors, while there is generally little further analysis of whether these behaviors contribute to the satisfaction of the player. The implicit hypothesis motivating this research is that intelligent opponent behaviors enable the player to gain more satisfaction from the game. This hypothesis may well be true; however, since no notion of entertainment or enjoyment is explicitly defined, there is little evidence that a specific character behavior generates enjoyable games.
Our objective in holding this workshop was to encourage the study, development, integration, and evaluation of adaptive methodologies based on richer forms of human-machine interaction for augmenting the player's gameplay experience. We wanted to encourage a dialogue among researchers in AI, human-computer interaction and psychology who investigate dissimilar methodologies for improving gameplay experiences. We expected that this workshop would yield an understanding of state-of-the-art approaches for capturing and augmenting player satisfaction in interactive systems such as computer games.
Our invited speaker was Hakon Steinø, Technical Producer at IO-Interactive, who discussed applied AI research at IO-Interactive, portrayed future trends of AI in the computer game industry, and debated the use of academic-oriented methodologies for augmenting player satisfaction. The sessions of presentations and discussions were classified into three themes: Adaptive Learning, Examples of Adaptive Games, and Player Modeling.
The Workshop Committee did a great job in providing suggestions and informative reviews for the submissions; thank you! This workshop was supported in part by the Danish National Research Council (project no. 274-05-0511). Finally, thanks to all the participants; we hope you found the workshop useful!