
    Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks

    Biological plastic neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed plastic neural networks with a large variety of dynamics, architectures, and plasticity rules: these artificial systems are composed of inputs, outputs, and plastic components that change in response to experiences in an environment. These systems may autonomously discover novel adaptive algorithms and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main methods and results are reviewed. Finally, new opportunities and developments are presented.
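
    The plastic components in such networks are typically synapses updated by a local learning rule whose parameters are themselves evolved. As a rough illustration only, the sketch below implements a generic Hebbian-style rule with evolvable coefficients; the coefficient names, the network shape, and the update form are assumptions made for this example, not a specification taken from the paper.

```python
import numpy as np

# Sketch of a single plastic layer with a generic Hebbian-style update rule.
# The coefficients (A, B, C, D, eta) are illustrative: in an EPANN they would
# be encoded in the genome and shaped by simulated evolution.
class PlasticLayer:
    def __init__(self, n_in, n_out, coeffs, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, size=(n_out, n_in))
        self.A, self.B, self.C, self.D, self.eta = coeffs

    def step(self, x):
        y = np.tanh(self.W @ x)
        # Local plasticity: each weight changes as a function of its
        # pre-synaptic input x and post-synaptic output y.
        dW = self.eta * (self.A * np.outer(y, x)
                         + self.B * x[None, :]
                         + self.C * y[:, None]
                         + self.D)
        self.W += dW
        return y

# A genome here is just the five plasticity coefficients.
genome = (1.0, 0.0, 0.0, 0.0, 0.01)
layer = PlasticLayer(n_in=4, n_out=2, coeffs=genome)
output = layer.step(np.array([0.5, -0.2, 0.1, 0.9]))
```

    In a full EPANN setup the genome would also encode the architecture and which connections are plastic, and fitness would come from behavior in an environment.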

    Comparison of genetic algorithms used to evolve specialisation in groups of robots

    This paper investigates the role of genetic algorithms in determining which kind of specialisation emerges in decentralised simulated teams of robots controlled by evolved neural networks. As shown in previous work, different tasks may be better solved by robots specialised in a particular manner. However, it was not clear how strongly the genetic algorithm itself drives the evolution of one kind of specialisation or another; clarifying this is the goal of this paper. The study is conducted by evolving teams of robots that have to solve two different tasks, each better accomplished with a different type of specialisation (innate versus situated). Results suggest that the type of genetic algorithm employed plays a major role in determining how robots specialise: most of the algorithms tested tend to yield the same kind of specialisation regardless of the task, and only one of them led to the emergence of the most suitable kind of specialisation for each of the two tasks.

    Robust Multi-Cellular Developmental Design

    This paper introduces a continuous model for Multi-cellular Developmental Design. The cells are fixed on a 2D grid and exchange "chemicals" with their neighbors during the growth process. The quantity of chemicals that a cell produces, as well as the differentiation value of the cell in the phenotype, are controlled by a Neural Network (the genotype) that takes as inputs the chemicals produced by the neighboring cells at the previous time step. In the proposed model, the number of iterations of the growth process is not pre-determined but emerges during evolution: only organisms for which the growth process stabilizes yield a phenotype (the stable state); the others are declared nonviable. The controller is optimized with the NEAT algorithm, which evolves both the topology and the weights of the neural networks. Although each cell only receives local information from its neighbors, the experimental results of the proposed approach on the 'flags' problems (the phenotype must match a given 2D pattern) are almost as good as those of a direct regression approach using the same model with global information. Moreover, the resulting multi-cellular organisms exhibit almost perfect self-healing characteristics.
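
    A minimal sketch of the growth loop described above is given below, assuming a single chemical per cell, a shared controller, and a von Neumann neighborhood; the real model uses NEAT-evolved networks and also outputs a differentiation value, so the controller here is only a placeholder.

```python
import numpy as np

# Sketch of the growth process: cells on a 2D grid update a single "chemical"
# from their neighbors' values at the previous step, until the grid stabilizes
# (phenotype) or the iteration budget is exhausted (nonviable organism).
# The shared controller is a placeholder for the NEAT-evolved network.
def grow(controller, grid=(8, 8), max_iters=200, tol=1e-4):
    rng = np.random.default_rng(0)
    chem = 0.1 * rng.random(grid)
    for _ in range(max_iters):
        new_chem = np.empty_like(chem)
        for i in range(grid[0]):
            for j in range(grid[1]):
                # Inputs: the four neighbors' chemicals (zero outside the grid).
                nbrs = np.array([
                    chem[i - 1, j] if i > 0 else 0.0,
                    chem[i + 1, j] if i < grid[0] - 1 else 0.0,
                    chem[i, j - 1] if j > 0 else 0.0,
                    chem[i, j + 1] if j < grid[1] - 1 else 0.0,
                ])
                new_chem[i, j] = controller(nbrs)
        if np.max(np.abs(new_chem - chem)) < tol:
            return new_chem      # stable state -> phenotype
        chem = new_chem
    return None                  # growth never stabilized -> nonviable

# Placeholder controller standing in for an evolved network.
w = np.array([0.2, 0.2, 0.2, 0.2])
phenotype = grow(lambda inputs: float(np.tanh(w @ inputs)))
```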

    Language Model Crossover: Variation through Few-Shot Prompting

    This paper pursues the insight that language models naturally enable an intelligent variation operator similar in spirit to evolutionary crossover. In particular, language models of sufficient scale demonstrate in-context learning, i.e., they can learn from associations between a small number of input patterns to generate outputs incorporating such associations (also called few-shot prompting). This ability can be leveraged to form a simple but powerful variation operator: prompt a language model with a few text-based genotypes (such as code, plain-text sentences, or equations) and parse its output as those genotypes' offspring. The promise of such language model crossover (which is simple to implement and can leverage many different open-source language models) is that it enables a simple mechanism to evolve semantically rich text representations (with few domain-specific tweaks) and naturally benefits from current progress in language models. Experiments in this paper highlight the versatility of language-model crossover through evolving binary bit-strings, sentences, equations, text-to-image prompts, and Python code. The conclusion is that language model crossover is a promising method for evolving genomes representable as text.
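
    The operator itself is easy to sketch. The snippet below shows one plausible minimal form: concatenate a few parent genotypes into a prompt, ask a language model to continue the list, and parse the continuation as offspring. The `generate` function stands in for any text-completion model; its interface, and the toy stand-in used in the example, are assumptions for illustration.

```python
import random

# Sketch of language-model crossover: concatenate a few parent genotypes into
# a prompt, ask a language model to continue the list, and parse the
# continuation as offspring. `generate` can be any text-completion function
# (e.g. a local open-source model); its interface here is assumed.
def lm_crossover(parents, generate, n_children=1):
    prompt = "\n".join(parents) + "\n"
    offspring = []
    for _ in range(n_children):
        completion = generate(prompt)
        # Take the first non-empty line of the completion as one child.
        lines = [ln.strip() for ln in completion.splitlines() if ln.strip()]
        if lines:
            offspring.append(lines[0])
    return offspring

# Toy "model" that just recombines fragments of the prompt, for illustration.
def toy_generate(prompt):
    tokens = prompt.split()
    return " ".join(random.sample(tokens, min(4, len(tokens))))

children = lm_crossover(["1 0 1 1", "0 0 1 0", "1 1 0 0"], toy_generate, n_children=2)
```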

    Quality Diversity: Harnessing Evolution to Generate a Diversity of High-Performing Solutions

    Evolution in nature has designed countless solutions to innumerable interconnected problems, giving birth to the impressive array of complex modern life observed today. Inspired by this success, the practice of evolutionary computation (EC) abstracts evolution artificially as a search operator to find solutions to problems of interest primarily through the adaptive mechanism of survival of the fittest, where stronger candidates are pursued at the expense of weaker ones until a solution of satisfying quality emerges. At the same time, research in open-ended evolution (OEE) draws different lessons from nature, seeking to identify and recreate processes that lead to the type of perpetual innovation and indefinitely increasing complexity observed in natural evolution. New algorithms in EC such as MAP-Elites and Novelty Search with Local Competition harness the toolkit of evolution for a related purpose: finding as many types of good solutions as possible (rather than merely the single best solution). With the field in its infancy, no empirical studies previously existed comparing these so-called quality diversity (QD) algorithms. This dissertation (1) contains the first extensive and methodical effort to compare different approaches to QD (including both existing published approaches as well as some new methods presented for the first time here) and to understand how they operate to help inform better approaches in the future. It also (2) introduces a new technique for encoding neural networks for evolution with indirect encoding that contain multiple sensory or output modalities. Further, it (3) explores the idea that QD can act as an engine of open-ended discovery by introducing an expressive platform called Voxelbuild where QD algorithms continually evolve robots that stack blocks in new ways. A culminating experiment (4) is presented that investigates evolution in Voxelbuild over a very long timescale. This research thus stands to advance the OEE community's desire to create and understand open-ended systems while also laying the groundwork for QD to realize its potential within EC as a means to automatically generate an endless progression of new content in real-world applications.
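
    Of the QD algorithms named above, MAP-Elites is the simplest to outline: keep an archive with one elite per cell of a discretized behavior space, and repeatedly mutate randomly chosen elites, inserting offspring wherever they beat the current occupant of their cell. The sketch below is a generic version with placeholder fitness, behavior, and mutation functions, not the dissertation's specific experimental setup.

```python
import random

# Sketch of MAP-Elites: keep the best solution found in each cell of a
# discretized behavior space; fitness, behavior descriptor, and mutation are
# problem-specific placeholders. Behavior values are assumed to lie in [0, 1].
def map_elites(fitness, behavior, mutate, random_solution,
               n_init=100, n_iters=5000, bins=10):
    archive = {}                                   # cell -> (solution, fitness)

    def try_add(sol):
        cell = tuple(min(int(b * bins), bins - 1) for b in behavior(sol))
        f = fitness(sol)
        if cell not in archive or f > archive[cell][1]:
            archive[cell] = (sol, f)

    for _ in range(n_init):
        try_add(random_solution())                 # seed the archive
    for _ in range(n_iters):
        parent, _ = random.choice(list(archive.values()))
        try_add(mutate(parent))                    # offspring compete per cell
    return archive

# Toy run on 2D genomes in [0, 1]^2, using the genome itself as the descriptor.
archive = map_elites(
    fitness=lambda x: -sum((v - 0.5) ** 2 for v in x),
    behavior=lambda x: x,
    mutate=lambda x: [min(1.0, max(0.0, v + random.gauss(0, 0.05))) for v in x],
    random_solution=lambda: [random.random(), random.random()],
)
```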

    Neural dynamics of social behavior: An evolutionary and mechanistic perspective on communication, cooperation, and competition among situated agents

    Social behavior can be found on almost every level of life, ranging from microorganisms to human societies. However, explaining the evolutionary emergence of cooperation, communication, or competition still challenges modern biology. The most common approaches to this problem are based on game-theoretic models. The problem is that these models often assume fixed and limited rules and actions that individual agents can choose from, which excludes the dynamical nature of the mechanisms that underlie the behavior of living systems. So far, there has been a lack of convincing modeling approaches to investigate the emergence of social behavior from a mechanistic and evolutionary perspective. Instead of studying animals, the methodology employed in this thesis combines several aspects from alternative approaches to study behavior in a rather novel way. Robotic models are considered as individual agents which are controlled by recurrent neural networks representing non-linear dynamical systems. The topology and parameters of these networks are evolved following an open-ended evolution approach, that is, individuals are not evaluated on high-level goals or optimized for specific functions. Instead, agents compete for limited resources to enhance their chance of survival. Further, there is no restriction with respect to how individuals interact with their environment or with each other. As its main objective, this thesis aims to provide a complementary approach for studying not only the evolution, but also the mechanisms of basic forms of communication. For this purpose it can be shown that a robot does not necessarily have to be as complex as a human, not even as complex as a bacterium. The strength of this approach is that it deals with rather simple, yet complete and situated systems, facing similar real-world problems as animals do, such as sensory noise or dynamically changing environments. The experimental part of this thesis is organized as a five-part examination. First, self-organized aggregation patterns are discussed. Second, the advantages of evolving decentralized control with respect to behavioral robustness and flexibility are demonstrated. Third, it is shown that only minimalistic local acoustic communication is required to coordinate the behavior of large groups. This is followed by investigations of the evolutionary emergence of communication. Finally, it is shown how already evolved communicative behavior changes during further evolution when a population is confronted with competition for limited environmental resources. All presented experiments entail thorough analysis of the dynamical mechanisms that underlie evolved communication systems, which has not been done so far in the context of cooperative behavior. This framework leads to a better understanding of the relation between intrinsic neurodynamics and observable agent-environment interactions. The results discussed here provide a new perspective on the evolution of cooperation because they deal with aspects largely neglected in traditional approaches, aspects such as embodiment, situatedness, and the dynamical nature of the mechanisms that underlie behavior.
For the first time, it can be demonstrated how noise influences specific signaling strategies and that versatile dynamics of very small-scale neural networks embedded in sensory-motor feedback loops give rise to sophisticated forms of communication such as signal coordination, cooperative intraspecific communication, and, most intriguingly, aggressive interspecific signaling. Further, the results demonstrate the development of counteractive niche construction based on a modification of communication strategies, which generates an evolutionary feedback resulting in an active reduction of selection pressure, which has not been shown before. Thus, the novel findings presented here strongly support the complementary nature of robotic experiments to study the evolution and mechanisms of communication and cooperation.

    Incremental embodied chaotic exploration of self-organized motor behaviors with proprioceptor adaptation

    This paper presents a general and fully dynamic embodied artificial neural system, which incrementally explores and learns motor behaviors through an integrated combination of chaotic search and reflex learning. The former uses adaptive bifurcation to exploit the intrinsic chaotic dynamics arising from neuro-body-environment interactions, while the latter is based around proprioceptor adaptation. The overall iterative search process formed from this combination is shown to have a close relationship to evolutionary methods. The architecture developed here allows real-time goal-directed exploration and learning of the possible motor patterns (e.g., for locomotion) of embodied systems of arbitrary morphology. Examples of its successful application to a simple biomechanical model, a simulated swimming robot, and a simulated quadruped robot are given. The tractability of the biomechanical systems allows detailed analysis of the overall dynamics of the search process. This analysis sheds light on the strong parallels with evolutionary search.
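
    The paper's mechanism couples chaotic neuro-body-environment dynamics with performance-driven control of a bifurcation parameter. The toy sketch below only illustrates that control idea on a logistic map: poor performance keeps the unit chaotic and exploring, while good performance pushes it toward a less chaotic regime; the evaluation function and all constants are assumptions for the example, not values from the paper.

```python
# Toy illustration of performance-modulated chaotic search on a logistic map:
# the bifurcation parameter r is lowered when the current solution performs
# well (damping the dynamics) and raised when it performs poorly (keeping the
# search chaotic). The evaluation function and all constants are assumptions.
def chaotic_search(evaluate, n_steps=500, target=0.95):
    x = 0.3                       # state of the chaotic unit
    r = 3.9                       # bifurcation parameter, chaotic regime
    best, best_score = x, evaluate(x)
    for _ in range(n_steps):
        x = r * x * (1.0 - x)     # logistic-map update
        score = evaluate(x)
        if score > best_score:
            best, best_score = x, score
        # Good performance -> smaller r (less chaotic); poor -> larger r.
        r = 3.2 + 0.8 * (1.0 - min(score / target, 1.0))
        if best_score >= target:
            break
    return best, best_score

# Example: search for a state close to 0.7.
best, score = chaotic_search(lambda x: 1.0 - abs(x - 0.7))
```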

    The evolution of representation in simple cognitive networks

    Representations are internal models of the environment that can provide guidance to a behaving agent, even in the absence of sensory information. It is not clear how representations are developed and whether or not they are necessary or even essential for intelligent behavior. We argue here that the ability to represent relevant features of the environment is the expected consequence of an adaptive process, give a formal definition of representation based on information theory, and quantify it with a measure R. To measure how R changes over time, we evolve two types of networks---an artificial neural network and a network of hidden Markov gates---to solve a categorization task using a genetic algorithm. We find that the capacity to represent increases during evolutionary adaptation, and that agents form representations of their environment during their lifetime. This ability allows the agents to act on sensorial inputs in the context of their acquired representations and enables complex and context-dependent behavior. We examine which concepts (features of the environment) our networks are representing, how the representations are logically encoded in the networks, and how they form as an agent behaves to solve a task. We conclude that R should be able to quantify the representations within any cognitive system, and should be predictive of an agent's long-term adaptive success.
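
    One plausible reading of such an information-theoretic measure is the mutual information between environment states and the agent's internal states, conditioned on its sensor states, estimated from sampled trajectories of discrete state labels. The sketch below computes that quantity; whether it matches the paper's exact definition of R is an assumption made here for illustration.

```python
from collections import Counter
from math import log2

# Estimate I(E; M | S): the information shared between environment states E
# and internal states M that is not available in the sensor states S, from
# aligned sequences of discrete labels.
def conditional_mutual_information(E, M, S):
    n = len(E)
    p_ems = Counter(zip(E, M, S))
    p_es = Counter(zip(E, S))
    p_ms = Counter(zip(M, S))
    p_s = Counter(S)
    total = 0.0
    for (e, m, s), c in p_ems.items():
        p_xyz = c / n
        total += p_xyz * log2((p_xyz * (p_s[s] / n))
                              / ((p_es[(e, s)] / n) * (p_ms[(m, s)] / n)))
    return total

# Toy example: internal state mirrors the environment while sensors are blind,
# so roughly one bit is "represented" internally.
E = [0, 1, 0, 1, 1, 0, 1, 0]
M = [0, 1, 0, 1, 1, 0, 1, 0]
S = [0, 0, 0, 0, 0, 0, 0, 0]
print(conditional_mutual_information(E, M, S))   # ~1.0
```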

    Adaptive networks for robotics and the emergence of reward anticipatory circuits

    Currently the central challenge facing evolutionary robotics is to determine how best to extend the range and complexity of behaviour supported by evolved neural systems. Implicit in the work described in this thesis is the idea that this might best be achieved through devising neural circuits (tractable to evolutionary exploration) that exhibit complementary functional characteristics. We concentrate on two problem domains: locomotion and sequence learning. For locomotion we compare the use of GasNets and other adaptive networks. For sequence learning we introduce a novel connectionist model inspired by the role of dopamine in the basal ganglia (commonly interpreted as a form of reinforcement learning). This connectionist approach relies upon a new neuron model inspired by notions of energy-efficient signalling. Two reward-adaptive circuit variants were investigated and applied to two learning problems: first, where action sequences are required to take place in a strict order, and second, where action sequences are robust to arbitrary intermediate states. We conclude the thesis by proposing a formal model of functional integration, encompassing locomotion and sequence learning, extending ideas proposed by W. Ross Ashby. A general model of the adaptive replicator is presented, incorporating subsystems that are tuned to continuous variation and to discrete or conditional events. Comparisons are made with W. Ross Ashby's model of ultrastability and his ideas on adaptive behaviour. This model is intended to support our assertion that GasNets (and similar networks) and reward-adaptive circuits of the type presented here are intrinsically complementary. In conclusion we present some ideas on how the co-evolution of GasNets and reward-adaptive circuits might lead to significant improvements in the synthesis of agents capable of exhibiting complex adaptive behaviour.

    Novelty-Assisted Interactive Evolution of Control Behaviors

    The field of evolutionary computation is inspired by the achievements of natural evolution, in which there is no final objective. Yet the pursuit of objectives is ubiquitous in simulated evolution because evolutionary algorithms that can consistently achieve established benchmarks are lauded as successful, thus reinforcing this paradigm. A significant problem is that such objective approaches assume that intermediate stepping stones will increasingly resemble the final objective when in fact they often do not. The consequence is that while solutions may exist, searching for such objectives may not discover them. This problem with objectives is demonstrated through an experiment in this dissertation showing that images discovered serendipitously during interactive evolution in an online system called Picbreeder cannot be rediscovered when they become the final objective of the very same algorithm that originally evolved them. This negative result demonstrates that pursuing an objective limits evolution by selecting offspring only based on the final objective. Furthermore, even when high fitness is achieved, the experimental results suggest that the resulting solutions are typically brittle, piecewise representations that only perform well by exploiting idiosyncratic features in the target. In response to this problem, the dissertation next highlights the importance of leveraging human insight during search as an alternative to articulating explicit objectives. In particular, a new approach called novelty-assisted interactive evolutionary computation (NA-IEC) combines human intuition with a method called novelty search for the first time to facilitate the serendipitous discovery of agent behaviors. In this approach, the human user directs evolution by selecting what is interesting from the on-screen population of behaviors. However, unlike in typical IEC, the user can then request that the next generation be filled with novel descendants, as opposed to only the direct descendants of typical IEC. The result of such an approach, unconstrained by a priori objectives, is that it traverses key stepping stones that ultimately accumulate meaningful domain knowledge. To establish this new evolutionary approach based on the serendipitous discovery of key stepping stones during evolution, this dissertation makes four key contributions: (1) it establishes the deleterious effects of a priori objectives on evolution; (2) it introduces the NA-IEC approach as an alternative to traditional objective-based approaches; (3) it provides a proof-of-concept that demonstrates how combining human insight with novelty search finds solutions significantly faster and at lower genomic complexities than fully-automated processes, including pure novelty search, suggesting an important role for human users in the search for solutions; and (4) it applies the NA-IEC approach in a challenge domain wherein leveraging human intuition and domain knowledge accelerates the evolution of solutions for the nontrivial octopus-arm control task. The culmination of these contributions demonstrates the importance of incorporating human insights into simulated evolution as a means to discovering better solutions more rapidly than traditional approaches.
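
    The novelty search component that NA-IEC borrows can be summarized by its scoring rule: a candidate's novelty is its mean distance to the k nearest behaviors among the current population and an archive of past behaviors. The sketch below implements only that score; how the scores are then used to fill the generation offered to the user is not shown, and the parameter values are illustrative assumptions.

```python
import numpy as np

# Novelty score as used by novelty search: the mean distance from a candidate's
# behavior descriptor to its k nearest neighbors in the current population and
# an archive of past behaviors. Parameter values here are illustrative.
def novelty_score(behavior, population_behaviors, archive_behaviors, k=15):
    pool = np.asarray(list(population_behaviors) + list(archive_behaviors))
    dists = np.linalg.norm(pool - np.asarray(behavior), axis=1)
    dists.sort()
    return float(dists[:k].mean())

# Toy usage with 2D behavior descriptors.
rng = np.random.default_rng(0)
population = [rng.random(2) for _ in range(50)]
archive = [rng.random(2) for _ in range(20)]
score = novelty_score(np.array([0.9, 0.1]), population, archive, k=10)
```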