Evaluating MAP-Elites on Constrained Optimization Problems
Constrained optimization problems are often characterized by multiple
constraints that, in practice, must be satisfied with different tolerance
levels. While some constraints are hard and as such must be satisfied with
zero-tolerance, others may be soft, such that non-zero violations are
acceptable. Here, we evaluate the applicability of MAP-Elites to "illuminate"
constrained search spaces by mapping them into feature spaces where each
feature corresponds to a different constraint. On the one hand, MAP-Elites
implicitly preserves diversity, thus allowing a good exploration of the search
space. On the other hand, it provides an effective visualization that
facilitates a better understanding of how constraint violations correlate with
the objective function. We demonstrate the feasibility of this approach on a
large set of benchmark problems, in various dimensionalities, and with
different algorithmic configurations. As expected, numerical results show that
a basic version of MAP-Elites cannot compete on all problems (especially those
with equality constraints) with state-of-the-art algorithms that use gradient
information or advanced constraint handling techniques. Nevertheless, it shows
greater potential for finding trade-offs between constraint violations and
objectives, and for providing new problem information. As such, it could be
used in the future as an effective building block for designing new constrained
optimization algorithms.
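A minimal sketch of the idea described above: MAP-Elites over a feature space where each archive dimension bins the violation of one constraint, so the cell at the origin holds the best fully feasible solution. The toy objective, constraints, and all names here are illustrative assumptions, not the paper's benchmark setup.

```python
import random

def objective(x):
    return -sum(xi ** 2 for xi in x)          # maximize (peak at the origin)

def constraints(x):
    # Two soft constraints; violation = max(0, g(x)).
    return (max(0.0, x[0] + x[1] - 1.0),      # g1: x0 + x1 <= 1
            max(0.0, -x[0]))                  # g2: x0 >= 0

def bin_index(violations, n_bins=10, max_v=2.0):
    # Map each constraint's violation level into one archive dimension.
    return tuple(min(int(v / max_v * n_bins), n_bins - 1) for v in violations)

def map_elites(iterations=5000, dim=2):
    archive = {}                              # cell -> (fitness, solution)
    for _ in range(iterations):
        if archive and random.random() < 0.9:
            parent = random.choice(list(archive.values()))[1]
            x = [xi + random.gauss(0, 0.1) for xi in parent]
        else:
            x = [random.uniform(-2, 2) for _ in range(dim)]
        cell = bin_index(constraints(x))
        fit = objective(x)
        if cell not in archive or fit > archive[cell][0]:
            archive[cell] = (fit, x)          # keep one elite per cell
    return archive

archive = map_elites()
# Cell (0, 0) holds the best solution that violates neither constraint.
```

Inspecting the filled archive cell by cell gives exactly the kind of visualization the abstract mentions: how fitness degrades (or improves) as each constraint's violation grows.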
Learning the Designer's Preferences to Drive Evolution
This paper presents the Designer Preference Model, a data-driven solution
that seeks to learn from user-generated data in a Quality-Diversity
Mixed-Initiative Co-Creativity (QD MI-CC) tool, with the aim of modelling the
user's design style to better assess the tool's procedurally generated content
with respect to that user's preferences. Through this approach, we aim to
increase the user's agency over the generated content in a way that neither
stalls the user-tool reciprocal stimuli loop nor fatigues the user with
periodic suggestion handpicking. We describe the details of this novel
solution, as well as its implementation in the MI-CC tool the Evolutionary
Dungeon Designer. We present and discuss our findings out of the initial tests
carried out, spotting the open challenges for this combined line of research
that integrates MI-CC with Procedural Content Generation through Machine
Learning.

Comment: 16 pages; accepted and to appear in proceedings of the 23rd European
Conference on the Applications of Evolutionary and bio-inspired Computation,
EvoApplications 202
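The core loop the abstract describes can be sketched as a preference model trained on the designer's past accept/reject choices and then used to rank newly generated content. The features, labels, and the plain logistic model below are illustrative assumptions; the paper's actual model is learned inside the Evolutionary Dungeon Designer.

```python
import math

def predict(w, feats):
    # Logistic score: probability the designer would keep this content.
    z = sum(wi * fi for wi, fi in zip(w, feats))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=200, lr=0.5):
    # Plain gradient ascent on the logistic log-likelihood.
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for feats, label in data:
            p = predict(w, feats)
            w = [wi + lr * (label - p) * fi for wi, fi in zip(w, feats)]
    return w

# Toy features per room: (enemy density, treasure density); 1 = designer kept it.
history = [((0.8, 0.1), 0), ((0.2, 0.9), 1), ((0.7, 0.2), 0), ((0.1, 0.8), 1)]
w = train(history)

# Rank fresh procedurally generated candidates by predicted preference.
candidates = [(0.9, 0.1), (0.15, 0.85)]
best = max(candidates, key=lambda f: predict(w, f))
```

Scoring candidates automatically is what lets the tool surface content the user is likely to want without the "periodic suggestion handpicking" the abstract warns about.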
Covariance Matrix Adaptation for the Rapid Illumination of Behavior Space
We focus on the challenge of finding a diverse collection of quality
solutions on complex continuous domains. While quality diversity (QD)
algorithms like Novelty Search with Local Competition (NSLC) and MAP-Elites are
designed to generate a diverse range of solutions, these algorithms require a
large number of evaluations for exploration of continuous spaces. Meanwhile,
variants of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) are
among the best-performing derivative-free optimizers in single-objective
continuous domains. This paper proposes a new QD algorithm called Covariance
Matrix Adaptation MAP-Elites (CMA-ME). Our new algorithm combines the
self-adaptation techniques of CMA-ES with archiving and mapping techniques for
maintaining diversity in QD. Results from experiments based on standard
continuous optimization benchmarks show that CMA-ME finds better-quality
solutions than MAP-Elites; similarly, results on the strategic game Hearthstone
show that CMA-ME finds both a higher overall quality and broader diversity of
strategies than both CMA-ES and MAP-Elites. Overall, CMA-ME more than doubles
the performance of MAP-Elites using standard QD performance metrics. These
results suggest that QD algorithms augmented by operators from state-of-the-art
optimization algorithms can yield high-performing methods for simultaneously
exploring and optimizing continuous search spaces, with significant
applications to design, testing, and reinforcement learning among other
domains.

Comment: Accepted to GECCO 202
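The combination the abstract describes can be sketched as an "emitter" that samples around a mean and adapts toward offspring that improved a MAP-Elites archive, restarting when it stagnates. This is a deliberately simplified stand-in: CMA-ME adapts a full covariance matrix via CMA-ES, whereas the sketch below adapts only a scalar step size, and all names and the 1-D behavior measure are illustrative.

```python
import random

def sphere(x):
    return -sum(xi ** 2 for xi in x)          # objective to maximize

def behavior(x):
    # 1-D behavior descriptor: first coordinate, binned into 20 cells.
    return min(max(int((x[0] + 2) / 4 * 20), 0), 19)

def cma_me_sketch(iterations=300, dim=5, batch=10):
    archive = {}                              # cell -> (fitness, solution)
    mean = [random.uniform(-2, 2) for _ in range(dim)]
    sigma = 0.5
    for _ in range(iterations):
        improved = []
        for _ in range(batch):
            x = [m + random.gauss(0, sigma) for m in mean]
            cell, fit = behavior(x), sphere(x)
            if cell not in archive or fit > archive[cell][0]:
                archive[cell] = (fit, x)
                improved.append(x)
        if improved:
            # Move toward the offspring that improved the archive.
            mean = [sum(col) / len(improved) for col in zip(*improved)]
            sigma *= 1.05
        else:
            # Emitter stagnated: restart it elsewhere in the space.
            mean = [random.uniform(-2, 2) for _ in range(dim)]
            sigma = 0.5
    return archive

archive = cma_me_sketch()
```

The key departure from plain MAP-Elites is that the sampling distribution itself adapts toward archive improvements, which is what buys the sample efficiency on continuous spaces that the abstract reports.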
Generating Levels That Teach Mechanics
The automatic generation of game tutorials is a challenging AI problem. While
it is possible to generate annotations and instructions that explain to the
player how the game is played, this paper focuses on generating a gameplay
experience that introduces the player to a game mechanic. It evolves small
levels for the Mario AI Framework that can only be beaten by an agent that
knows how to perform specific actions in the game. It uses variations of a
perfect A* agent that are limited in various ways, such as not being able to
jump high or see enemies, to test how failing to do certain actions can stop
the player from beating the level.

Comment: 8 pages, 7 figures. PCG Workshop at FDG 2018, 9th International
Workshop on Procedural Content Generation (PCG2018)