1,734 research outputs found
Balancing Selection Pressures, Multiple Objectives, and Neural Modularity to Coevolve Cooperative Agent Behavior
Previous research using evolutionary computation in Multi-Agent Systems
indicates that assigning fitness based on team vs. individual behavior has a
strong impact on the ability of evolved teams of artificial agents to exhibit
teamwork in challenging tasks. However, such research only made use of
single-objective evolution. In contrast, when a multiobjective evolutionary
algorithm is used, populations can be subject to individual-level objectives,
team-level objectives, or combinations of the two. This paper explores the
performance of cooperatively coevolved teams of agents controlled by artificial
neural networks subject to these types of objectives. Specifically, predator
agents are evolved to capture scripted prey agents in a torus-shaped grid
world. Because of the tension between individual and team behaviors, multiple
modes of behavior can be useful, and thus the effect of modular neural networks
is also explored. Results demonstrate that fitness rewarding individual
behavior is superior to fitness rewarding team behavior, despite being applied
to a cooperative task. However, the use of networks with multiple modules
allows predators to discover intelligent behavior, regardless of which type of
objective is used.
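The two fitness-assignment schemes contrasted above can be illustrated with a minimal Python sketch (the function names and the Pareto-dominance test are illustrative, not the paper's code): individual-level objectives give each predator its own objective, while a team-level scheme collapses them into one shared score.

```python
def individual_objectives(agent_scores):
    # One objective per agent: each predator is rewarded for its own captures.
    return list(agent_scores)

def team_objective(agent_scores):
    # A single team-level objective: one shared score summed over the team.
    return [sum(agent_scores)]

def dominates(a, b):
    # Standard Pareto dominance for maximization: a is at least as good as b
    # in every objective and strictly better in at least one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
```

Under the individual scheme a multiobjective algorithm such as NSGA-II compares vectors with `dominates`; under the team scheme the comparison degenerates to a scalar ranking.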
Neuroevolution in Games: State of the Art and Open Challenges
This paper surveys research on applying neuroevolution (NE) to games. In
neuroevolution, artificial neural networks are trained through evolutionary
algorithms, taking inspiration from the way biological brains evolved. We
analyse the application of NE in games along five different axes, which are the
role NE is chosen to play in a game, the different types of neural networks
used, the way these networks are evolved, how the fitness is determined and
what type of input the network receives. The article also highlights important
open research challenges in the field.
Comment: Added more references; corrected typos; added an overview table (Table 1).
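The basic loop the survey is concerned with, training a neural network's weights with an evolutionary algorithm instead of gradient descent, can be sketched as follows. The tiny 1-2-1 network, the toy regression target f(x) = x^2, and all parameter values are illustrative assumptions, not any specific method from the survey.

```python
import math
import random

def forward(w, x):
    # Fixed-topology 1-2-1 network with tanh hidden units;
    # w holds the 7 weights and biases as a flat list.
    h = [math.tanh(w[i] * x + w[2 + i]) for i in range(2)]
    return w[4] * h[0] + w[5] * h[1] + w[6]

def fitness(w):
    # Negative squared error on a toy regression target, f(x) = x^2.
    points = [-1.0, -0.5, 0.0, 0.5, 1.0]
    return -sum((forward(w, x) - x * x) ** 2 for x in points)

def evolve(pop_size=20, generations=200, seed=0):
    # (mu + lambda)-style neuroevolution: truncation selection over the
    # weight vectors plus Gaussian mutation; elitism keeps the parents.
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(7)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [[g + rng.gauss(0, 0.1) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)
```

Topology-evolving methods such as NEAT, discussed in the survey, additionally mutate the network structure rather than only the weights.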
Ms Pac-Man versus Ghost Team CEC 2011 competition
Games provide an ideal test bed for computational intelligence, and significant progress has been made in recent years, most notably in games such as Go, where the level of play is now competitive with expert human play on smaller boards. Recently, a significantly more complex class of games has received increasing attention: real-time video games. These games pose many new challenges, including strict time constraints, simultaneous moves and open-endedness. Unlike in traditional board games, computational play is generally unable to compete with human players. One driving force in improving the overall performance of artificial intelligence players is game competitions, where practitioners may evaluate and compare their methods against those submitted by others, and possibly against human players as well. In this paper we introduce a new competition based on the popular arcade video game Ms Pac-Man: Ms Pac-Man versus Ghost Team. The competition, to be held at the Congress on Evolutionary Computation 2011 for the first time, allows participants to develop controllers for either the Ms Pac-Man agent or for the Ghost Team; unlike previous Ms Pac-Man competitions that relied on screen capture, the players now interface directly with the game engine. In this paper we introduce the competition, including a review of previous work as well as a discussion of several aspects of setting up the game competition itself. © 2011 IEEE
Novelty-driven cooperative coevolution
Cooperative coevolutionary algorithms (CCEAs) rely on multiple coevolving populations for the evolution of solutions composed of coadapted components. CCEAs enable, for instance, the evolution of cooperative multiagent systems composed of heterogeneous agents, where each agent is modelled as a component of the solution. Previous works have, however, shown that CCEAs are biased toward stability: the evolutionary process tends to converge prematurely to stable states instead of (near-)optimal solutions. In this study, we show how novelty search can be used to avoid the counterproductive attraction to stable states in coevolution. Novelty search is an evolutionary technique that drives evolution toward behavioural novelty and diversity rather than exclusively pursuing a static objective. We evaluate three novelty-based approaches that rely on, respectively, (1) the novelty of the team as a whole, (2) the novelty of the agents' individual behaviour, and (3) the combination of the two. We compare the proposed approaches with traditional fitness-driven cooperative coevolution in three simulated multirobot tasks. Our results show that team-level novelty scoring is the most effective approach, significantly outperforming fitness-driven coevolution at multiple levels. Novelty-driven cooperative coevolution can substantially increase the potential of CCEAs while maintaining a computational complexity that scales well with the number of populations.
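The novelty scoring described above can be sketched in a few lines. This is a hypothetical minimal version: the Euclidean distance metric, the k-nearest-neighbour averaging, and the concatenation used for the team-level descriptor are common choices, not necessarily the paper's exact implementation.

```python
def novelty(behaviour, others, k=3):
    # Novelty = mean Euclidean distance to the k nearest neighbours in
    # behaviour space ("others" is the rest of the population plus an
    # archive of past behaviours, excluding the individual itself).
    dists = sorted(
        sum((a - b) ** 2 for a, b in zip(behaviour, other)) ** 0.5
        for other in others
    )
    return sum(dists[:k]) / min(k, len(dists))

def team_behaviour(agent_behaviours):
    # Team-level characterisation: concatenate the agents' individual
    # behaviour vectors into a single team descriptor.
    return [x for b in agent_behaviours for x in b]
```

Approach (1) scores `novelty(team_behaviour(...), ...)`, approach (2) scores each agent's vector separately, and approach (3) combines both scores.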
Genotype Dynamics in the Neuroevolution of Agents in Artificial Life Models
Cooperative behaviour is one of the most widely used and widespread features of multi-agent systems. In some cases, the emergence of this behaviour is linked to the division of the population into co-evolving subpopulations [1, 2]. Group interaction can take not only the form of antagonistic conflict, but can also be driven by genetic drift, which leads to competition between behavioural strategies and possible assimilation [3]. This work demonstrates different kinds of dependencies between groups of agents and their behavioural strategies. It uses the methodology of tracking agent genotype dynamics proposed in [2], according to which the evolving population can be represented in genotype space as a cloud of points, each point corresponding to one individual. The movement of the population centroid, the centre of the genotype cloud, is examined. Analysing such trajectories can shed light on the different regimes of population existence and their genesis.
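The centroid-tracking methodology can be sketched in a few lines, assuming real-vector genotypes (a hypothetical minimal version; the cited work may use a different genotype representation):

```python
def centroid(population):
    # Centre of the genotype "cloud": the componentwise mean of all
    # genotype vectors in the population.
    n = len(population)
    dims = len(population[0])
    return [sum(g[i] for g in population) / n for i in range(dims)]

def centroid_trajectory(history):
    # One centroid per generation; the resulting path through genotype
    # space can be inspected for drift, convergence, or switches
    # between regimes of population existence.
    return [centroid(pop) for pop in history]
```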
Coevolution of Generative Adversarial Networks
Generative adversarial networks (GANs) have become a hot topic, presenting
impressive results in the field of computer vision. However, there are still
open problems with the GAN model, such as training stability and the
hand-design of architectures. Neuroevolution is a technique that can be used to
provide the automatic design of network architectures, even in search spaces as
large as those of deep neural networks. Therefore, this project proposes
COEGAN, a model that combines neuroevolution and coevolution in the
coordination of the GAN training algorithm. The proposal uses the adversarial
relationship between the generator and discriminator components to design an
algorithm based on coevolution techniques. Our proposal was evaluated on the
MNIST dataset. The results suggest improved training stability and the
automatic discovery of efficient network architectures for GANs. Our model also
partially solves the mode collapse problem.
Comment: Published in EvoApplications 201
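The competitive two-population structure this abstract builds on can be sketched abstractly. This is a loose toy illustration only: COEGAN evolves actual network architectures trained with gradient descent, whereas here `score` is a stand-in for adversarial evaluation and the genomes are plain vectors.

```python
import random

def coevolve_step(generators, discriminators, score, rng):
    # One generation of two-population competitive coevolution: every
    # generator is evaluated against every discriminator, and each
    # side's fitness is the adversary's loss.
    g_fit = [sum(score(g, d) for d in discriminators) for g in generators]
    d_fit = [sum(1.0 - score(g, d) for g in generators) for d in discriminators]

    def select(pop, fit):
        # Truncation selection plus Gaussian mutation (a toy variation
        # operator standing in for neuroevolution of architectures).
        ranked = [p for _, p in sorted(zip(fit, pop), key=lambda t: -t[0])]
        keep = ranked[: max(1, len(pop) // 2)]
        children = [[x + rng.gauss(0, 0.1) for x in rng.choice(keep)]
                    for _ in range(len(pop) - len(keep))]
        return keep + children

    return select(generators, g_fit), select(discriminators, d_fit)
```

Iterating this step pits the two populations in an arms race, the dynamic that the COEGAN proposal exploits for GAN training.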