
    Open-ended Search through Minimal Criterion Coevolution

    Search processes guided by objectives are ubiquitous in machine learning. They iteratively reward artifacts based on their proximity to an optimization target, and terminate upon solution space convergence. Some recent studies take a different approach, capitalizing on the disconnect between mainstream methods in artificial intelligence and the field's biological inspirations. Natural evolution has an unparalleled propensity for generating well-adapted artifacts, but these artifacts are decidedly non-convergent. This new class of non-objective algorithms induces a divergent search by rewarding solutions according to their novelty with respect to prior discoveries. While the diversity of resulting innovations exhibits marked parallels to natural evolution, the methods by which search is driven remain unnatural. In particular, nature has no need to characterize and enforce novelty; rather, it is guided by a single, simple constraint: survive long enough to reproduce. The key insight is that such a constraint, called the minimal criterion, can be harnessed in a coevolutionary context where two populations interact, finding novel ways to satisfy their reproductive constraint with respect to each other. This approach, called minimal criterion coevolution (MCC), is the primary contribution of this dissertation (1). MCC is initially demonstrated in a maze domain (2) where it evolves increasingly complex mazes and solutions. An enhancement to the initial domain (3) is then introduced, allowing mazes to expand unboundedly and validating MCC's propensity for open-ended discovery. A more natural method of diversity preservation through resource limitation (4) is introduced and shown to maintain population diversity without comparing genetic distance. Finally, MCC is demonstrated in an evolutionary robotics domain (5) where it coevolves increasingly complex bodies with brain controllers to achieve principled locomotion. The overall benefit of these contributions is a novel, general, algorithmic framework for the continual production of open-ended dynamics without the need for a characterization of behavioral novelty
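
    The reproductive loop described above is simple enough to sketch. The fragment below is a hypothetical minimal sketch of one MCC generation in the maze domain, not the dissertation's implementation; the genome representations and the solves, mutate_agent, and mutate_maze callables are assumed placeholders.

```python
import random

def mcc_generation(agents, mazes, mutate_agent, mutate_maze, solves,
                   batch=10, cap=250):
    """One generation of a minimal-criterion coevolution (MCC) loop.

    No fitness or novelty score is computed: an offspring survives only if it
    satisfies the minimal criterion with respect to the other population.
    """
    # Agent offspring must solve at least one maze in the current population.
    for parent in random.sample(agents, min(batch, len(agents))):
        child = mutate_agent(parent)
        if any(solves(child, maze) for maze in mazes):
            agents.append(child)

    # Maze offspring must be solvable by at least one current agent.
    for parent in random.sample(mazes, min(batch, len(mazes))):
        child = mutate_maze(parent)
        if any(solves(agent, child) for agent in agents):
            mazes.append(child)

    # Treat both populations as bounded queues (oldest removed first), so the
    # search keeps drifting instead of converging on a single champion.
    del agents[:-cap]
    del mazes[:-cap]
```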

    Evolutionary Reinforcement Learning: A Survey

    Reinforcement learning (RL) is a machine learning approach that trains agents to maximize cumulative rewards through interactions with environments. The integration of RL with deep learning has recently resulted in impressive achievements in a wide range of challenging tasks, including board games, arcade games, and robot control. Despite these successes, there remain several crucial challenges, including brittle convergence properties caused by sensitive hyperparameters, difficulties in temporal credit assignment with long time horizons and sparse rewards, a lack of diverse exploration, especially in continuous search space scenarios, difficulties in credit assignment in multi-agent reinforcement learning, and conflicting objectives for rewards. Evolutionary computation (EC), which maintains a population of learning agents, has demonstrated promising performance in addressing these limitations. This article presents a comprehensive survey of state-of-the-art methods for integrating EC into RL, referred to as evolutionary reinforcement learning (EvoRL). We categorize EvoRL methods according to key research fields in RL, including hyperparameter optimization, policy search, exploration, reward shaping, meta-RL, and multi-objective RL. We then discuss future research directions in terms of efficient methods, benchmarks, and scalable platforms. This survey serves as a resource for researchers and practitioners interested in the field of EvoRL, highlighting the important challenges and opportunities for future research. With the help of this survey, researchers and practitioners can develop more efficient methods and tailored benchmarks for EvoRL, further advancing this promising cross-disciplinary research field
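
    To make the policy-search branch of EvoRL concrete, the sketch below shows a generic evolution-strategies policy search, in which a population of perturbed policy parameter vectors is scored by episodic return; the env_rollout callable and all hyperparameters are illustrative assumptions, not a method taken from the survey itself.

```python
import numpy as np

def es_policy_search(env_rollout, dim, pop_size=50, sigma=0.1, lr=0.02, iters=200):
    """Generic evolution-strategies policy search: perturb the mean policy
    parameters, score each perturbation by its episodic return, and move the
    mean toward the better-performing perturbations."""
    theta = np.zeros(dim)                        # mean policy parameters
    for _ in range(iters):
        eps = np.random.randn(pop_size, dim)     # population of perturbations
        returns = np.array([env_rollout(theta + sigma * e) for e in eps])
        # Rank-normalise returns so the update is robust to reward scaling.
        ranks = returns.argsort().argsort() / (pop_size - 1) - 0.5
        theta += lr / (pop_size * sigma) * eps.T @ ranks
    return theta
```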

    Learning Curricula in Open-Ended Worlds

    Deep reinforcement learning (RL) provides powerful methods for training optimal sequential decision-making agents. As collecting real-world interactions can entail additional costs and safety risks, the common paradigm of sim2real conducts training in a simulator, followed by real-world deployment. Unfortunately, RL agents easily overfit to the choice of simulated training environments, and worse still, learning ends when the agent masters the specific set of simulated environments. In contrast, the real world is highly open-ended, featuring endlessly evolving environments and challenges, which makes such RL approaches unsuitable. Simply randomizing across a large space of simulated environments is insufficient, as it requires making arbitrary distributional assumptions, and as the design space grows, it can become combinatorially less likely to sample specific environment instances that are useful for learning. An ideal learning process should automatically adapt the training environment to maximize the learning potential of the agent over an open-ended task space that matches or surpasses the complexity of the real world. This thesis develops a class of methods called Unsupervised Environment Design (UED), which seeks to enable such an open-ended process via a principled approach for gradually improving the robustness and generality of the learning agent. Given a potentially open-ended environment design space, UED automatically generates an infinite sequence or curriculum of training environments at the frontier of the learning agent's capabilities. Through both extensive empirical studies and theoretical arguments founded on minimax-regret decision theory and game theory, the findings in this thesis show that UED autocurricula can produce RL agents exhibiting significantly improved robustness and generalization to previously unseen environment instances. Such autocurricula are promising paths toward open-ended learning systems that approach general intelligence, a long sought-after ambition of artificial intelligence research, by continually generating and mastering additional challenges of their own design
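
    The fragment below is a schematic sketch of a regret-driven autocurriculum of the kind UED describes, using a simple best-minus-mean return gap as a stand-in for regret; the sample_env, eval_return, and train_step callables are assumed placeholders, and this is not the thesis's exact algorithm.

```python
import random

def ued_autocurriculum(sample_env, eval_return, train_step,
                       buffer_size=100, frontier=10, iters=1000):
    """Regret-driven curriculum sketch: score candidate environments with a
    regret proxy and keep training on the highest-regret (most informative)
    ones, so the curriculum tracks the frontier of the agent's capabilities."""
    buffer = []                                   # (regret_estimate, env_params)
    for _ in range(iters):
        env = sample_env()                        # propose a new environment
        returns = [eval_return(env) for _ in range(4)]
        # Proxy for regret: gap between the best and the average return the
        # current agent achieves, i.e. the headroom left on this environment.
        regret = max(returns) - sum(returns) / len(returns)
        buffer.append((regret, env))
        buffer = sorted(buffer, key=lambda r: r[0], reverse=True)[:buffer_size]

        # Train on an environment drawn from the high-regret frontier.
        _, chosen = random.choice(buffer[:frontier])
        train_step(chosen)
```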

    Evolving developmental, recurrent and convolutional neural networks for deliberate motion planning in sparse reward tasks

    Motion planning algorithms have seen a diverse set of approaches in a variety of disciplines. In the domain of artificial evolutionary systems, motion planning has been included in models to achieve sophisticated deliberate behaviours. These algorithms rely on fixed rules or allow little evolutionary influence, which compels behaviours to conform to those specific policies rather than allowing the model to establish its own specialised behaviour. To advance these models, the constraints imposed by planning algorithms must be removed to grant greater evolutionary control over behaviours. That is the focus of this thesis. An examination of prevailing neuroevolution methods led to the use of two distinct approaches, NEAT and HyperNEAT. Both were used to gain an understanding of the components necessary to create neuroevolution planning. The findings culminated in a novel convolutional neural network architecture with a recurrent convolution process. The architecture's goal was to iteratively disperse local activations to greater regions of the feature space. Experimentation showed significantly improved robustness over contemporary neuroevolution techniques as well as an efficiency increase over a static rule set. Greater evolutionary responsibility is given to the model with multiple network combinations, all of which continually demonstrated the necessary behaviours. In comparison, these behaviours were shown to be difficult to achieve in a state-of-the-art deep convolutional network. Finally, the unique use of recurrent convolution is relocated to a larger convolutional architecture on an established benchmarking platform. Performance improvements are seen on a number of domains, which illustrates that this recurrent mechanism can be exploited in alternative areas outside of planning. By presenting a viable neuroevolution method for motion planning, this work opens the potential for further systems to adopt and examine its capability in prospective domains, as well as further avenues of experimentation in convolutional architectures
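
    The recurrent convolution idea, iteratively dispersing local activations across the feature space, can be sketched as a single convolution whose weights are reused over several steps. The PyTorch module below is a hypothetical illustration of that mechanism, not the thesis's actual architecture; the channel and step counts are arbitrary.

```python
import torch
import torch.nn as nn

class RecurrentConv(nn.Module):
    """One convolution applied repeatedly with shared weights, so activations
    seeded at a local position spread progressively across the feature map."""

    def __init__(self, channels=16, steps=8):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.steps = steps

    def forward(self, x):
        for _ in range(self.steps):
            # Each iteration propagates information one kernel radius further;
            # the residual term keeps earlier activations from washing out.
            x = torch.relu(self.conv(x)) + x
        return x

# Example: activations spread outward from a single seeded cell on a 15x15 grid.
grid = torch.zeros(1, 16, 15, 15)
grid[0, 0, 7, 7] = 1.0
out = RecurrentConv()(grid)
```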

    Self-adaptive fitness in evolutionary processes

    Most optimization algorithms or methods in artificial intelligence can be regarded as evolutionary processes. They start from (basically) random guesses and produce increasingly better results with respect to a given target function, which is defined by the process's designer. The value of the achieved results is communicated to the evolutionary process via a fitness function that is usually somewhat correlated with the target function but does not need to be exactly the same. When the values of the fitness function change purely for reasons intrinsic to the evolutionary process, i.e., even though the externally motivated goals (as represented by the target function) remain constant, we call that phenomenon self-adaptive fitness. We trace the phenomenon of self-adaptive fitness back to emergent goals in artificial chemistry systems, for which we develop a new variant based on neural networks. We perform an in-depth analysis of diversity-aware evolutionary algorithms as a prime example of how to effectively integrate self-adaptive fitness into evolutionary processes. We sketch the concept of productive fitness as a new tool to reason about the intrinsic goals of evolution. We introduce the pattern of scenario co-evolution, which we apply to a reinforcement learning agent competing against an evolutionary algorithm to improve performance and generate hard test cases, and which we also consider as a more general pattern for software engineering based on a solid formal framework. Multiple connections to related topics in natural computing, quantum computing and artificial intelligence are discovered and may shape future research in the combined fields
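
    As an illustration of the scenario co-evolution pattern mentioned above, the sketch below alternates between evolving scenarios toward the agent's current weaknesses and training the agent on exactly those scenarios; the train_agent, agent_score, and mutate_scenario callables are assumed placeholders rather than the dissertation's implementation.

```python
import random

def scenario_coevolution(train_agent, agent_score, mutate_scenario,
                         scenarios, generations=50, keep=20):
    """Scenario co-evolution sketch: an evolutionary algorithm searches for
    scenarios the current agent handles poorly (hard test cases), while the
    agent is repeatedly trained on the hardest scenarios found so far."""
    for _ in range(generations):
        # Evolution step: scenario fitness is the agent's failure on it, so
        # the scenario population drifts toward the agent's weak spots.
        offspring = [mutate_scenario(random.choice(scenarios)) for _ in range(keep)]
        pool = scenarios + offspring
        pool.sort(key=agent_score)               # lowest agent score = hardest
        scenarios = pool[:keep]

        # Learning step: the agent trains against the current hard cases.
        train_agent(scenarios)
    return scenarios
```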

    Evolvability and organismal architecture: The blind watchmaker and the reminiscent architect

    Organisms are constantly faced with the challenge of adapting to new circumstances. In this thesis, I argue that the ability to adapt to new circumstances, “evolvability”, is deeply ingrained in the genetic, developmental, morphological, and physiological architecture of organisms. Using a blend of conceptual research, theoretical modelling, and multidisciplinary studies, I demonstrate how organismal architecture can evolve so that organisms can cope better and better with future environmental challenges. As a first step, I systematically classify the many factors contributing to evolvability. Then I use a simulation approach to show how evolvability-enhancing structures can readily evolve in gene-regulatory networks. This happens via the evolution of "mutational transformers": structural elements that convert random mutations at the genetic level into adaptation-enhancing mutations at the phenotypic level. In another thesis chapter, I demonstrate that even if selection acts only sporadically, complex adaptations can evolve and persist over long time periods. In other words, complex adaptations do not require constant selection pressure. In an interdisciplinary contribution, I apply biological insights regarding the properties of an evolvability-enhancing mutation structure to the design of algorithms used in Artificial Intelligence. The result is the “Facilitated Mutation” method, which enhances the performance of the algorithms in various respects, highlighting the potential for leveraging biological principles in computational sciences. Finally, I embed my research findings in a philosophical context. I emphasise the importance of organismal architecture in retaining evolutionary memories and suggest future research directions to further enhance our understanding of evolvability
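
    The abstract does not detail the “Facilitated Mutation” method itself, so the fragment below is only a generic, assumed illustration of an evolvability-enhancing mutation structure: each gene carries its own mutation rate, and the rates themselves evolve, so the distribution of variation can adapt over generations.

```python
import numpy as np

def self_adaptive_mutation(genome, rates, tau=0.1):
    """Generic evolvability-enhancing mutation sketch: per-gene mutation rates
    are themselves mutated (log-normal self-adaptation), then each gene is
    perturbed in proportion to its own evolved rate."""
    new_rates = rates * np.exp(tau * np.random.randn(len(rates)))
    new_genome = genome + new_rates * np.random.randn(len(genome))
    return new_genome, new_rates

# Example: a 10-gene genome whose mutation structure can change over time.
genome, rates = np.zeros(10), np.full(10, 0.05)
child, child_rates = self_adaptive_mutation(genome, rates)
```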