9 research outputs found

    Covariance Matrix Adaptation for the Rapid Illumination of Behavior Space

    We focus on the challenge of finding a diverse collection of quality solutions on complex continuous domains. While quality diversity (QD) algorithms like Novelty Search with Local Competition (NSLC) and MAP-Elites are designed to generate a diverse range of solutions, these algorithms require a large number of evaluations for exploration of continuous spaces. Meanwhile, variants of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) are among the best-performing derivative-free optimizers in single-objective continuous domains. This paper proposes a new QD algorithm called Covariance Matrix Adaptation MAP-Elites (CMA-ME). Our new algorithm combines the self-adaptation techniques of CMA-ES with archiving and mapping techniques for maintaining diversity in QD. Results from experiments based on standard continuous optimization benchmarks show that CMA-ME finds better-quality solutions than MAP-Elites; similarly, results on the strategic game Hearthstone show that CMA-ME finds both a higher overall quality and broader diversity of strategies than both CMA-ES and MAP-Elites. Overall, CMA-ME more than doubles the performance of MAP-Elites using standard QD performance metrics. These results suggest that QD algorithms augmented by operators from state-of-the-art optimization algorithms can yield high-performing methods for simultaneously exploring and optimizing continuous search spaces, with significant applications to design, testing, and reinforcement learning among other domains. Comment: Accepted to GECCO 202
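    To make the combination concrete, the following is a minimal, hypothetical sketch of how a CMA-ES-style Gaussian emitter can feed a MAP-Elites archive. It uses an isotropic Gaussian instead of full covariance adaptation, and the evaluate/measure functions are stand-ins rather than the paper's benchmarks, so it illustrates the interplay rather than reproducing CMA-ME itself.

```python
import numpy as np

# Hypothetical evaluation (stand-in, not the paper's benchmarks):
# returns (fitness, 2-D behaviour measures).
def evaluate(x):
    fitness = -np.sum(x ** 2)              # sphere-like objective
    measures = np.array([x[0], x[1]])      # first two coordinates as behaviours
    return fitness, measures

GRID = 20                                   # cells per behaviour dimension
LOW, HIGH = -2.0, 2.0                       # assumed behaviour-space bounds
archive = {}                                # cell index -> (fitness, solution)

def cell_index(measures):
    idx = np.floor((measures - LOW) / (HIGH - LOW) * GRID).astype(int)
    return tuple(np.clip(idx, 0, GRID - 1))

rng = np.random.default_rng(0)
mean, sigma = np.zeros(10), 0.5             # emitter state (no covariance here)

for generation in range(200):
    candidates = mean + sigma * rng.standard_normal((32, mean.size))
    improvements = []
    for x in candidates:
        f, m = evaluate(x)
        cell = cell_index(m)
        best = archive.get(cell)
        if best is None or f > best[0]:     # new cell or better elite
            archive[cell] = (f, x)
            improvements.append(x)
    if improvements:                        # move the emitter toward archive improvements
        mean = np.mean(improvements, axis=0)   # (full CMA-ME also adapts the covariance)

print(f"cells filled: {len(archive)} / {GRID * GRID}")
```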

    Illuminating Mario Scenes in the Latent Space of a Generative Adversarial Network

    Generative adversarial networks (GANs) are quickly becoming a ubiquitous approach to procedurally generating video game levels. While GAN-generated levels are stylistically similar to human-authored examples, human designers often want to explore the generative design space of GANs to extract interesting levels. However, human designers find latent vectors opaque and would rather explore along dimensions the designer specifies, such as the number of enemies or obstacles. We propose using state-of-the-art quality diversity algorithms designed to optimize continuous spaces, i.e., MAP-Elites with a directional variation operator and Covariance Matrix Adaptation MAP-Elites, to efficiently explore the latent space of a GAN to extract levels that vary across a set of specified gameplay measures. In the benchmark domain of Super Mario Bros, we demonstrate how designers may specify gameplay measures to our system and extract high-quality (playable) levels with a diverse range of level mechanics, while still maintaining stylistic similarity to human-authored examples. An online user study shows how the different mechanics of the automatically generated levels affect subjective ratings of their perceived difficulty and appearance. Comment: Accepted to AAAI 202
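    As a rough illustration of the latent space illumination pipeline described above, the sketch below samples GAN latent vectors, scores each decoded level for playability, computes two designer-specified gameplay measures, and keeps the best level per measure combination. The generator and simulator are placeholder stubs (a real system would decode levels with a pretrained GAN and evaluate them with an agent), and the search here is plain random sampling rather than CMA-ME or directional variation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the real components (hypothetical, not the paper's code):
# a pretrained GAN generator mapping a latent vector to a level, and a
# simulator returning playability plus designer-chosen gameplay measures.
def generate_level(z):
    return z  # placeholder: a real system would decode z with the GAN

def simulate(level):
    playability = -np.linalg.norm(level)       # stand-in quality score
    num_enemies = int(5 + 3 * level[0])        # stand-in gameplay measure 1
    num_jumps = int(10 + 4 * level[1])         # stand-in gameplay measure 2
    return playability, (num_enemies, num_jumps)

archive = {}                                    # (enemies, jumps) -> (quality, z)

for _ in range(5000):
    z = rng.standard_normal(32)                 # GAN latent vector
    level = generate_level(z)
    quality, measures = simulate(level)
    best = archive.get(measures)
    if best is None or quality > best[0]:       # keep the best level per measure cell
        archive[measures] = (quality, z)

print(f"distinct (enemies, jumps) combinations found: {len(archive)}")
```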

    Scaling MAP-Elites to Deep Neuroevolution

    Quality-Diversity (QD) algorithms, and MAP-Elites (ME) in particular, have proven very useful for a broad range of applications, including enabling real robots to recover quickly from joint damage, solving strongly deceptive maze tasks, or evolving robot morphologies to discover new gaits. However, present implementations of MAP-Elites and other QD algorithms seem to be limited to low-dimensional controllers with far fewer parameters than modern deep neural network models. In this paper, we propose to leverage the efficiency of Evolution Strategies (ES) to scale MAP-Elites to high-dimensional controllers parameterized by large neural networks. We design and evaluate a new hybrid algorithm called MAP-Elites with Evolution Strategies (ME-ES) for post-damage recovery in a difficult high-dimensional control task where traditional ME fails. Additionally, we show that ME-ES performs efficient exploration, on par with state-of-the-art exploration algorithms, in high-dimensional control tasks with strongly deceptive rewards. Comment: Accepted to GECCO 202
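    The ES component can be pictured as an OpenAI-ES-style finite-difference update applied to a copy of an elite's parameter vector. The sketch below shows only that update step on a toy objective; it is a hypothetical illustration, not the ME-ES algorithm with its exploit/explore alternation and archive insertion.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in evaluation of a large policy parameter vector (hypothetical task):
# returns (episode return, behaviour descriptor such as a final position).
def rollout(theta):
    ret = -np.sum((theta - 1.0) ** 2) / theta.size
    behaviour = np.tanh(theta[:2])
    return ret, behaviour

def es_step(theta, pop=50, sigma=0.05, lr=0.1):
    """One ES-style update: estimate the gradient of the return from
    Gaussian perturbations and take an ascent step."""
    eps = rng.standard_normal((pop, theta.size))
    returns = np.array([rollout(theta + sigma * e)[0] for e in eps])
    ranks = (returns - returns.mean()) / (returns.std() + 1e-8)
    grad = (ranks[:, None] * eps).mean(axis=0) / sigma
    return theta + lr * grad

theta = np.zeros(1000)                 # "deep" controller stand-in
for _ in range(100):                   # ME-ES would run such ES steps per elite,
    theta = es_step(theta)             # alternating exploit/explore objectives
print("final return:", rollout(theta)[0])
```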

    Evolving the behavior of machines: from micro to macroevolution

    Evolution gave rise to creatures that are arguably more sophisticated than the greatest human-designed systems. This feat has inspired computer scientists since the advent of computing and led to optimization tools that can evolve complex neural networks for machines, an approach known as "neuroevolution". After a few successes in designing evolvable representations for high-dimensional artifacts, the field has recently been revitalized by going beyond optimization: to many, the wonder of evolution lies less in the perfect optimization of each species than in the creativity of such a simple iterative process, that is, in the diversity of species. This modern view of artificial evolution is moving the field away from microevolution, which follows a fitness gradient within a niche, toward macroevolution, which fills many niches with highly different species. It has already opened promising applications, like evolving gait repertoires, video game levels for different tastes, and diverse designs for aerodynamic bikes.

    Analysis of gameplay strategies in hearthstone: a data science approach

    In recent years, games have been a popular test bed for AI research, and the presence of Collectible Card Games (CCGs) in that space is still increasing. One such CCG used for both competitive/casual play and AI research is Hearthstone, a two-player adversarial game where each player seeks to implement one of several gameplay strategies to defeat their opponent by reducing the opponent's Health points to zero. Although some open source simulators exist, some of their methodologies for simulated agents create opponents with a relatively low skill level. Using evolutionary algorithms, this thesis seeks to evolve agents with a higher skill level than those implemented in one such simulator, SabberStone. New benchmarks are proposed using supervised learning techniques to predict gameplay strategies from game data, and unsupervised learning techniques to discover and visualize patterns that may be used in player modeling to differentiate gameplay strategies.
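    A minimal sketch of the kind of analysis described, assuming hypothetical per-game feature vectors and strategy annotations in place of real SabberStone logs: unsupervised clustering to surface candidate strategy groupings, and a supervised classifier to predict annotated strategies from the same features.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Hypothetical per-game feature vectors (e.g., average mana spent per turn,
# minions played, damage dealt to the opponent's hero); real features would
# come from simulator game logs, which are not reproduced here.
X = rng.normal(size=(500, 8))
strategy_labels = rng.integers(0, 3, size=500)   # stand-in strategy annotations

# Unsupervised: discover candidate strategy clusters in the game data.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Supervised: predict an annotated gameplay strategy from the same features.
clf = RandomForestClassifier(random_state=0).fit(X[:400], strategy_labels[:400])
print("held-out accuracy:", clf.score(X[400:], strategy_labels[400:]))
```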

    Evolutionary Diversity Optimisation for Combinatorial Problems

    Diversity optimisation explores a variety of solutions for the intended problem and, as a result, is rapidly growing in popularity within the evolutionary computation community. Several studies introduce and examine evolutionary approaches to compute a diverse set of solutions for optimisation problems in the continuous domain. To the best of our knowledge, discrete problems are yet to be studied in the context of diversity optimisation. Thus, this thesis focuses on combinatorial optimisation problems with discrete solution spaces. Here, we compute and explore such solution sets for several notable combinatorial problems. We aim to introduce and design evolutionary algorithms capable of computing a diverse set of solutions for the given combinatorial optimisation problem. First, we begin with a comprehensive literature review of recent developments and then dig deep into two prominent diversity paradigms in evolutionary computation: evolutionary diversity optimisation and quality diversity. These concepts have gained a considerable amount of attention in recent years. Quality diversity aims to achieve diversity in behavioural spaces, while evolutionary diversity optimisation sees diversity in the structural properties of solutions. We study evolutionary algorithms for the travelling salesperson problem, the travelling thief problem, the knapsack problem, and finally, the Boolean satisfiability problem. The results demonstrate the capability of the introduced algorithms to achieve diverse and high-quality solutions. Thesis (Ph.D.) -- University of Adelaide, School of Computer and Mathematical Sciences, 202
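    A toy example of the evolutionary diversity optimisation setting, assuming a hypothetical knapsack instance rather than any algorithm from the thesis: the population only admits offspring whose value stays within a threshold of the seed solution's value, and survivor selection keeps whichever subset maximises pairwise Hamming (structural) diversity.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical knapsack instance (stand-in data).
values = rng.integers(1, 20, size=30)
weights = rng.integers(1, 20, size=30)
capacity = 150
mu = 10                                           # population size

def feasible_value(x):
    return values @ x if weights @ x <= capacity else -1

def diversity(pop):
    # sum of pairwise Hamming distances: per bit, (#ones * #zeros)
    ones = np.sum(pop, axis=0)
    return int(np.sum(ones * (len(pop) - ones)))

seed = (values / weights > 1.0).astype(int)       # crude starting solution
while weights @ seed > capacity:                  # repair until feasible
    seed[rng.integers(seed.size)] = 0
pop = [seed.copy() for _ in range(mu)]
threshold = 0.9 * feasible_value(seed)            # quality constraint

for _ in range(2000):
    child = pop[rng.integers(mu)].copy()
    child[rng.integers(child.size)] ^= 1          # single bit-flip mutation
    if feasible_value(child) < threshold:
        continue                                  # reject low-quality offspring
    # accept the child, then remove the individual whose removal
    # leaves the most diverse remaining population
    candidates = pop + [child]
    drop = max(range(mu + 1),
               key=lambda i: diversity(candidates[:i] + candidates[i + 1:]))
    pop = candidates[:drop] + candidates[drop + 1:]

print("population diversity:", diversity(pop))
```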