EvoTanks: co-evolutionary development of game-playing agents
This paper describes the EvoTanks research project, a continuing attempt to develop strong AI players for a primitive 'Combat'-style video game using evolutionary computation with artificial neural networks. This is a small but challenging task, because an agent's actions must depend heavily on opponent behaviour. Previous investigation has shown that agents can develop high-performance behaviours by evolving against scripted opponents; however, these behaviours are local to the trained opponent. This paper presents results from applying co-evolution to the same population. Results show that agents no longer succumb to local maxima in the search space and can converge on high-fitness behaviours local to their population without the use of scripted opponents.
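The co-evolutionary setup the abstract describes, where each agent's fitness is measured against the rest of its own population rather than a fixed script, can be illustrated with a minimal sketch. Everything here is hypothetical: the "agent" is just a weight vector standing in for a neural controller, and `play` is a placeholder for an actual game match.

```python
import random

def make_agent():
    # A toy "agent": a list of weights standing in for a neural controller.
    return [random.uniform(-1, 1) for _ in range(4)]

def play(agent_a, agent_b):
    # Hypothetical match outcome: higher weighted sum wins.
    # In a real setting this would run a full game between the two agents.
    return 1 if sum(agent_a) > sum(agent_b) else 0

def coevolve(pop_size=10, generations=5):
    population = [make_agent() for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness is the number of wins against the rest of the population,
        # so the evaluation landscape shifts as the population itself improves.
        fitness = [sum(play(a, b) for b in population if b is not a)
                   for a in population]
        # Keep the better half, refill with mutated copies of survivors.
        ranked = [a for _, a in sorted(zip(fitness, population),
                                       key=lambda p: p[0], reverse=True)]
        survivors = ranked[:pop_size // 2]
        children = [[w + random.gauss(0, 0.1) for w in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return population
```

Because opponents improve alongside the agents being evaluated, there is no single scripted opponent for the population to overfit to, which is the mechanism the paper credits for escaping local maxima.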
Deep learning for video game playing
In this article, we review recent Deep Learning advances in the context of
how they have been applied to play different types of video games such as
first-person shooters, arcade games, and real-time strategy games. We analyze
the unique requirements that different game genres pose to a deep learning
system and highlight important open challenges in the context of applying these
machine learning methods to video games, such as general game playing, dealing
with extremely large decision spaces, and sparse rewards.
Neuroevolution in Games: State of the Art and Open Challenges
This paper surveys research on applying neuroevolution (NE) to games. In
neuroevolution, artificial neural networks are trained through evolutionary
algorithms, taking inspiration from the way biological brains evolved. We
analyse the application of NE in games along five different axes, which are the
role NE is chosen to play in a game, the different types of neural networks
used, the way these networks are evolved, how the fitness is determined and
what type of input the network receives. The article also highlights important
open research challenges in the field.
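The core idea the survey covers, training a neural network's weights with an evolutionary algorithm instead of gradient descent, can be sketched briefly. This is a toy illustration, not from the survey: a fixed two-hidden-unit network is evolved to fit a stand-in objective, where a game score would normally appear.

```python
import math
import random

def forward(weights, x):
    # Tiny fixed-topology network: 1 input -> 2 tanh hidden units -> 1 output.
    w1, w2, w3, w4, w5, w6 = weights
    h1 = math.tanh(w1 * x + w2)
    h2 = math.tanh(w3 * x + w4)
    return w5 * h1 + w6 * h2

def fitness(weights):
    # Negative squared error on a toy target function; in a game setting this
    # would be the score the network-controlled agent achieves.
    xs = [i / 10 for i in range(-10, 11)]
    return -sum((forward(weights, x) - math.sin(x)) ** 2 for x in xs)

def neuroevolve(pop_size=20, generations=30):
    pop = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 4]
        # Offspring are mutated copies of the elites -- no gradients involved,
        # so the fitness function need not be differentiable.
        pop = elite + [[w + random.gauss(0, 0.2) for w in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```

This sketch evolves only weights; methods such as NEAT, discussed in the survey, additionally evolve the network topology.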
Rolling Horizon NEAT for General Video Game Playing
This paper presents a new Statistical Forward Planning (SFP) method, Rolling
Horizon NeuroEvolution of Augmenting Topologies (rhNEAT). Unlike traditional
Rolling Horizon Evolution, where an evolutionary algorithm is in charge of
evolving a sequence of actions, rhNEAT evolves weights and connections of a
neural network in real-time, planning several steps ahead before returning an
action to execute in the game. Different versions of the algorithm are explored
in a collection of 20 GVGAI games, and compared with other SFP methods and
state-of-the-art results. Although results are overall not better than other
SFP methods, the ability of rhNEAT to adapt to changing game features has
allowed it to establish new state-of-the-art records in games that other methods
have traditionally struggled with. The algorithm proposed here is general and
introduces a new way of representing information within rolling horizon
evolution techniques.
Comment: 8 pages, 5 figures, accepted for publication in IEEE Conference on Games (CoG) 202
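The traditional Rolling Horizon Evolution that rhNEAT departs from, an evolutionary algorithm evolving a sequence of actions against a forward model and executing only the first action, can be sketched as follows. The action set, reward table, and `simulate` forward model are all hypothetical placeholders for a real game.

```python
import random

ACTIONS = ["left", "right", "shoot", "wait"]  # hypothetical action set

def simulate(state, action):
    # Hypothetical forward model: the planner only needs a next state and a
    # reward; a real GVGAI forward model would advance the full game state.
    reward = {"left": 0.0, "right": 0.1, "shoot": 0.5, "wait": -0.1}[action]
    return state + reward, reward

def rollout_value(state, plan):
    # Value of a plan: total reward accumulated over the simulated horizon.
    total = 0.0
    for action in plan:
        state, reward = simulate(state, action)
        total += reward
    return total

def rolling_horizon_action(state, horizon=5, pop_size=8, generations=10):
    # Evolve whole action sequences, then execute only the first action; the
    # process restarts from the next real game state ("rolling" the horizon).
    pop = [[random.choice(ACTIONS) for _ in range(horizon)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: rollout_value(state, p), reverse=True)
        elite = pop[:pop_size // 2]
        pop = elite + [
            [g if random.random() > 0.2 else random.choice(ACTIONS)
             for g in random.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    best = max(pop, key=lambda p: rollout_value(state, p))
    return best[0]
```

rhNEAT replaces the evolved action sequence with an evolved network (weights and connections), which is what lets the representation adapt to changing game features rather than re-planning from scratch each step.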