Enhancing Cooperative Coevolution for Large Scale Optimization by Adaptively Constructing Surrogate Models
It has been shown that cooperative coevolution (CC) can effectively deal with
large scale optimization problems (LSOPs) through a divide-and-conquer
strategy. However, its performance is severely restricted by the current
context-vector-based sub-solution evaluation method, since this method must
access the original high-dimensional simulation model when evaluating each
sub-solution and thus consumes substantial computational resources. To alleviate this
issue, this study proposes an adaptive surrogate model assisted CC framework.
This framework adaptively constructs surrogate models for different
sub-problems by fully considering their characteristics. For the
one-dimensional sub-problems obtained through decomposition, sufficiently
accurate surrogate models can be built and used to directly find the optimal
solutions of the corresponding sub-problems. For the nonseparable sub-problems,
the surrogate models are employed to evaluate the corresponding sub-solutions,
and the original simulation model is only adopted to reevaluate some good
sub-solutions selected by the surrogate models. In this way, the computation
cost can be greatly reduced without significantly sacrificing evaluation
quality. Empirical studies on the IEEE CEC 2010 benchmark functions show that a
concrete algorithm based on this framework finds much better solutions than
conventional CC algorithms and a non-CC algorithm, even with far fewer
computational resources.
Comment: arXiv admin note: text overlap with arXiv:1802.0974
Approximating n-player behavioural strategy Nash equilibria using coevolution
Coevolutionary algorithms are plagued with a set of problems related to intransitivity that make it questionable what the end product of a coevolutionary run can achieve. With the introduction of solution concepts into coevolution, part of the issue was alleviated; however, efficiently representing and achieving game-theoretic solution concepts is still not a trivial task. In this paper we propose a coevolutionary algorithm that approximates behavioural strategy Nash equilibria in n-player zero-sum games by exploiting the minimax solution concept. To support our case we provide a set of experiments on games with both known and unknown equilibria. In the case of known equilibria, we confirm that our algorithm converges to the known solution, while in the case of unknown equilibria we see steady progress towards Nash equilibrium. Copyright 2011 ACM
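For intuition about the target of such an algorithm, a much simpler classical method, fictitious play (not the paper's coevolutionary approach), already approximates the Nash equilibrium of a small zero-sum matrix game; the rock-paper-scissors payoffs below are a textbook example, not data from the paper.

```python
# Fictitious play on rock-paper-scissors: each player repeatedly best-responds
# to the opponent's empirical strategy; the empirical frequencies converge
# toward the unique Nash equilibrium (1/3, 1/3, 1/3).
PAYOFF = [[0, -1, 1],   # row player's payoff: rows/cols = rock, paper, scissors
          [1, 0, -1],
          [-1, 1, 0]]

def best_response(opponent_counts):
    """Pure best response to the opponent's empirical mixed strategy.
    Works for both players here because the game is symmetric."""
    total = sum(opponent_counts)
    values = [sum(PAYOFF[a][b] * opponent_counts[b] / total for b in range(3))
              for a in range(3)]
    return max(range(3), key=lambda a: values[a])

counts_p1 = [1, 0, 0]   # start both players on "rock"
counts_p2 = [1, 0, 0]
for _ in range(3000):
    a = best_response(counts_p2)
    b = best_response(counts_p1)
    counts_p1[a] += 1
    counts_p2[b] += 1

mix = [c / sum(counts_p1) for c in counts_p1]
print(mix)  # close to [1/3, 1/3, 1/3]
```

The coevolutionary setting of the paper tackles the much harder case where strategies are behavioural (per-information-set) and the game cannot be written as a small matrix.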
Virtual player design using self-learning via competitive coevolutionary algorithms
The Google Artificial Intelligence (AI) Challenge is an international contest
whose objective is to program the AI in a two-player real-time strategy (RTS)
game. This AI is an autonomous computer program that governs the actions one of
the two players executes during the game, according to the state of play. The
entries are evaluated via a competition mechanism consisting of two-player
rounds in which each entry is tested against the others.
This paper describes the use of competitive coevolutionary (CC) algorithms for
the automatic generation of winning game strategies in Planet Wars, the RTS
game associated with the 2010 contest. Three different versions of a primary
algorithm have been tested. Their common nexus is not only the use of a Hall of
Fame (HoF) to keep track of the winners of past coevolutions, but also the
employment of an archive of experienced players, termed the Hall of Celebrities
(HoC), that puts pressure on the optimization process and guides the search to
increase the strength of the solutions; their differences lie in the periodic
updating of the HoF on the basis of quality and diversity metrics.
The goal is to optimize the AI by means of a self-learning process guided by
coevolutionary search and competitive evaluation. An empirical study of the
performance of a number of variants of the proposed algorithms is described and
a statistical analysis of the results is conducted. In addition to attaining
competitive bots, we also conclude that incorporating the HoC into the primary
algorithm helps to reduce the effects of cycling caused by the use of the HoF
in CC algorithms.
This work is partially supported by Spanish MICINN under Project ANYSELF
(TIN2011-28627-C04-01), by Junta de Andalucía under Project P10-TIC-6083
(DNEMESIS) and by Universidad de Málaga, Campus de Excelencia Internacional
Andalucía Tech.
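The HoF mechanism described above can be sketched as a minimal loop: candidates are scored against the current population plus archived past champions, which dampens cycling. The scalar "strategies", the `beats` rule, and all parameters are toy assumptions, not the paper's bot representation or its HoC update rules.

```python
import random

def beats(a, b):
    # Hypothetical match outcome: higher "skill" wins (a stand-in for a
    # full Planet Wars game between two bot strategies).
    return a > b

def coevolve(generations=30, pop_size=8, hof_size=5, seed=1):
    """Minimal competitive coevolution with a Hall of Fame: fitness is the
    number of wins against current rivals *and* archived champions."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    hof = []
    for _ in range(generations):
        rivals = pop + hof
        pop.sort(key=lambda x: sum(beats(x, r) for r in rivals), reverse=True)
        hof.append(pop[0])                       # archive this generation's champion
        hof = sorted(hof, reverse=True)[:hof_size]
        survivors = pop[:pop_size // 2]          # truncation selection
        pop = survivors + [min(1.0, max(0.0, s + rng.gauss(0, 0.1)))
                           for s in survivors]   # mutate to refill
    return max(pop), hof

best, hof = coevolve()
print(round(best, 3), len(hof))
```

The paper's HoC adds a second, quality-and-diversity-managed archive on top of this basic HoF pressure; that extra machinery is deliberately left out of the sketch.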
Evolution of Swarm Robotics Systems with Novelty Search
Novelty search is a recent artificial evolution technique that challenges
traditional evolutionary approaches. In novelty search, solutions are rewarded
based on their novelty, rather than their quality with respect to a predefined
objective. The lack of a predefined objective precludes premature convergence
caused by a deceptive fitness function. In this paper, we apply novelty search
combined with NEAT to the evolution of neural controllers for homogeneous
swarms of robots. Our empirical study is conducted in simulation with two
tasks: aggregation, a common swarm robotics task, and a more challenging task,
the sharing of an energy recharging station. Our results show that novelty search is
unaffected by deception, is notably effective in bootstrapping the evolution,
can find solutions with lower complexity than fitness-based evolution, and can
find a broad diversity of solutions for the same task. Even in non-deceptive
setups, novelty search achieves solution qualities similar to those obtained in
traditional fitness-based evolution. Our study also encompasses variants of
novelty search that work in concert with fitness-based evolution to combine the
exploratory character of novelty search with the exploitative character of
objective-based evolution. We show that these variants can further improve the
performance of novelty search. Overall, our study shows that novelty search is
a promising alternative for the evolution of controllers for robotic swarms.
Comment: To appear in Swarm Intelligence (2013), ANTS Special Issue. The final
publication will be available at link.springer.co
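The reward signal that replaces the objective above is easy to state concretely: a solution's novelty is its mean distance to the k nearest neighbours in behaviour space. The 2-D behaviour descriptor below is an illustrative assumption (e.g. a swarm's final centre of mass in the aggregation task), not the paper's exact characterisation.

```python
# Minimal novelty computation over an archive of past behaviours.
def novelty(behaviour, others, k=3):
    """Mean Euclidean distance to the k nearest behaviours in `others`."""
    dists = sorted(
        sum((a - b) ** 2 for a, b in zip(behaviour, o)) ** 0.5
        for o in others
    )
    nearest = dists[:k]
    return sum(nearest) / len(nearest)

archive = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (1.0, 1.0)]
crowded = novelty((0.05, 0.05), archive)   # low: near archived behaviours
frontier = novelty((2.0, 2.0), archive)    # high: unexplored region
print(crowded, frontier)
```

Because the score depends only on distance to previously seen behaviours, a deceptive fitness landscape cannot trap the search the way an objective-based reward can.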
A Parallel Divide-and-Conquer based Evolutionary Algorithm for Large-scale Optimization
Large-scale optimization problems that involve thousands of decision
variables have arisen extensively from various industrial areas. Although
evolutionary algorithms (EAs) are a powerful optimization tool for many
real-world applications, they fail to solve these emerging large-scale problems
both effectively and efficiently. In this paper, we propose a novel
divide-and-conquer (DC) based EA that not only produces high-quality solutions
by solving sub-problems separately, but also fully exploits the power of
parallel computing by solving the sub-problems simultaneously. Existing
DC-based EAs, which were deemed to enjoy the same advantages as the proposed
algorithm, are shown to be practically incompatible with the parallel computing
scheme unless trade-offs are made that compromise solution quality.
Comment: 12 pages, 0 figure
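The "solve sub-problems simultaneously" idea can be sketched with a separable toy objective: each low-dimensional sub-problem is optimised independently in its own worker and the partial solutions are concatenated. The sphere objective, hill-climber, and decomposition into four 3-D blocks are all assumptions for illustration; a thread pool keeps the sketch portable, whereas CPU-bound sub-problems would use processes or MPI.

```python
from concurrent.futures import ThreadPoolExecutor
import random

def solve_subproblem(args):
    """Hill-climb one low-dimensional sub-problem independently
    (sphere sub-function as a stand-in for a real objective)."""
    seed, dims = args
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dims)]
    f = sum(v * v for v in x)
    for _ in range(500):
        cand = [v + rng.gauss(0, 0.3) for v in x]   # Gaussian perturbation
        fc = sum(v * v for v in cand)
        if fc < f:                                  # keep only improvements
            x, f = cand, fc
    return x

# A 12-D separable problem decomposed into four 3-D sub-problems,
# each solved in its own worker at the same time.
sub_specs = [(seed, 3) for seed in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = list(pool.map(solve_subproblem, sub_specs))
solution = [v for part in parts for v in part]       # concatenate sub-solutions
print(len(solution), sum(v * v for v in solution))
```

Nonseparable problems break this clean picture, which is exactly the incompatibility with parallel evaluation that the abstract attributes to existing DC-based EAs.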
Open-ended Learning in Symmetric Zero-sum Games
Zero-sum games such as chess and poker are, abstractly, functions that
evaluate pairs of agents, for example labeling them `winner' and `loser'. If
the game is approximately transitive, then self-play generates sequences of
agents of increasing strength. However, nontransitive games, such as
rock-paper-scissors, can exhibit strategic cycles, and there is no longer a
clear objective -- we want agents to increase in strength, but against whom is
unclear. In this paper, we introduce a geometric framework for formulating
agent objectives in zero-sum games, in order to construct adaptive sequences of
objectives that yield open-ended learning. The framework allows us to reason
about population performance in nontransitive games, and enables the
development of a new algorithm (rectified Nash response, PSRO_rN) that uses
game-theoretic niching to construct diverse populations of effective agents,
producing a stronger set of agents than existing algorithms. We apply PSRO_rN
to two highly nontransitive resource allocation games and find that PSRO_rN
consistently outperforms the existing alternatives.
Comment: ICML 2019, final version
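The distinctive step of rectified Nash response is the weighting of training opponents: an agent in the Nash support trains only against the opponents it beats, with payoffs rectified at zero. The 3-agent payoff matrix and uniform Nash mix below are toy inputs chosen for illustration, not results from the paper.

```python
# Sketch of the rectification step at the heart of PSRO_rN.
def rectified_weights(payoff_row, nash_mix):
    """Opponent weights for one agent's best-response objective:
    Nash probability times max(payoff, 0), normalised."""
    raw = [p * max(m, 0.0) for p, m in zip(nash_mix, payoff_row)]
    total = sum(raw)
    return [r / total for r in raw] if total > 0 else nash_mix

# Antisymmetric empirical payoffs among 3 agents (row beats column if > 0):
# a rock-paper-scissors-like strategic cycle, whose Nash mix is uniform.
payoffs = [
    [0.0,  0.5, -0.2],
    [-0.5, 0.0,  0.4],
    [0.2, -0.4,  0.0],
]
nash = [1 / 3, 1 / 3, 1 / 3]
for i, row in enumerate(payoffs):
    print(i, [round(w, 2) for w in rectified_weights(row, nash)])
```

In this cycle each agent ends up training exclusively against the one opponent it beats, which is the game-theoretic niching the abstract refers to: the population's members specialise rather than all chasing the same exploiter.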