
    What does it take to evolve behaviorally complex organisms?

    What genotypic features explain the evolvability of organisms that have to accomplish many different tasks? The genotype of behaviorally complex organisms may be more likely to encode modular neural architectures because neural modules dedicated to distinct tasks avoid neural interference, i.e., the arrival of conflicting messages for changing the value of connection weights during learning. However, if the connection weights for the various modules are genetically inherited, this raises the problem of genetic linkage: favorable mutations may fall on one portion of the genotype encoding one neural module and unfavorable mutations on another portion encoding another module. We show that this can prevent the genotype from reaching an adaptive optimum. This effect is different from other linkage effects described in the literature, and we argue that it represents a new class of genetic constraints. Using simulations, we show that sexual reproduction can alleviate the problem of genetic linkage by recombining separate modules, all of which incorporate either favorable or unfavorable mutations. We speculate that this effect may contribute to the taxonomic prevalence of sexual reproduction among higher organisms. In addition to sexual recombination, the problem of genetic linkage for behaviorally complex organisms may be mitigated by entrusting evolution with the task of finding appropriate modular architectures and learning with the task of finding the appropriate connection weights for these architectures.
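    As a rough, hedged illustration of the linkage effect and of how inter-module recombination can relieve it, the sketch below evolves genotypes made of two weight modules, with and without sexual recombination. The per-module fitness, population size, and mutation scheme are illustrative assumptions, not the simulation used in the paper.

```python
# Minimal sketch (not the authors' code): genetic linkage across two neural
# modules, with and without recombination. All names and parameters are
# illustrative assumptions.
import random

N_MODULES = 2      # one module per task
MODULE_LEN = 10    # weights per module
POP_SIZE = 50
MUT_STD = 0.1

def module_fitness(module):
    # Toy per-task fitness: closeness of each weight to a target value of 1.0.
    return -sum((w - 1.0) ** 2 for w in module)

def fitness(genotype):
    # Total fitness is the sum over task-specific modules.
    return sum(module_fitness(m) for m in genotype)

def mutate(genotype):
    # Mutations fall independently on both modules: a favorable change in one
    # module can be linked to an unfavorable change in the other.
    return [[w + random.gauss(0.0, MUT_STD) for w in m] for m in genotype]

def recombine(a, b):
    # Sexual reproduction (free recombination between modules): each module is
    # inherited as a unit from either parent, which can bring together the
    # good modules of two different parents.
    return [random.choice([ma, mb]) for ma, mb in zip(a, b)]

def evolve(generations=200, sexual=True):
    pop = [[[0.0] * MODULE_LEN for _ in range(N_MODULES)] for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]
        children = []
        while len(children) < POP_SIZE:
            if sexual:
                child = recombine(random.choice(parents), random.choice(parents))
            else:
                child = random.choice(parents)
            children.append(mutate(child))
        pop = children
    return max(fitness(g) for g in pop)

if __name__ == "__main__":
    print("asexual:", evolve(sexual=False))
    print("sexual: ", evolve(sexual=True))
```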

    Evolutionary Algorithms for Reinforcement Learning

    There are two distinct approaches to solving reinforcement learning problems, namely, searching in value function space and searching in policy space. Temporal difference methods and evolutionary algorithms are well-known examples of these approaches. Kaelbling, Littman and Moore recently provided an informative survey of temporal difference methods. This article focuses on the application of evolutionary algorithms to the reinforcement learning problem, emphasizing alternative policy representations, credit assignment methods, and problem-specific genetic operators. Strengths and weaknesses of the evolutionary approach to reinforcement learning are presented, along with a survey of representative applications.
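    A minimal sketch of the policy-space approach discussed above: an evolutionary algorithm that scores whole policies by episode return and reproduces the best ones with mutation. The toy environment, linear policy, and all parameters are assumptions for illustration only.

```python
# Minimal sketch of policy-space search with an evolutionary algorithm
# (illustrative only; the toy environment and all parameters are assumptions,
# not taken from the surveyed work).
import random

def rollout(policy, steps=50):
    """Episode return of a linear threshold policy on a toy 1-D task:
    the agent must keep its position near zero under a constant drift."""
    pos, total_reward = 0.0, 0.0
    for _ in range(steps):
        action = 1.0 if policy[0] * pos + policy[1] > 0 else -1.0
        pos += 0.1 + 0.2 * action     # drift plus chosen push
        total_reward += -abs(pos)     # reward: stay close to the origin
    return total_reward

def evolve_policy(pop_size=30, generations=100, mut_std=0.3):
    # Credit assignment is at the level of whole policies: each candidate is
    # scored by its episode return, and the best half reproduces with mutation.
    population = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=rollout, reverse=True)
        survivors = population[: pop_size // 2]
        population = [
            [w + random.gauss(0, mut_std) for w in random.choice(survivors)]
            for _ in range(pop_size)
        ]
    return max(population, key=rollout)

if __name__ == "__main__":
    best = evolve_policy()
    print("best policy:", best, "return:", rollout(best))
```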

    Optimizing Neural Architecture Search using Limited GPU Time in a Dynamic Search Space: A Gene Expression Programming Approach

    Efficient identification of people and objects, segmentation of regions of interest, and extraction of relevant data from images, text, audio, and video have advanced considerably in recent years, with deep learning methods, combined with improvements in computational resources, contributing greatly to this progress. Despite their outstanding potential, developing efficient architectures and modules requires expert knowledge and substantial compute time. In this paper, we propose an evolutionary neural architecture search approach for the efficient discovery of convolutional models in a dynamic search space, within only 24 GPU hours. With its efficient search environment and phenotype representation, Gene Expression Programming is adapted for the generation of network cells. Despite the limited GPU time and broad search space, our proposal achieved results comparable to the state of the art set by manually designed convolutional networks and NAS-generated ones, even surpassing similarly constrained evolutionary NAS works. The best cells from different runs achieved stable results, with a mean error of 2.82% on the CIFAR-10 dataset (where the best model achieved an error of 2.67%) and 18.83% on CIFAR-100 (best model: 18.16%). For ImageNet in the mobile setting, our best model achieved top-1 and top-5 errors of 29.51% and 10.37%, respectively. Although evolutionary NAS works have been reported to require a considerable amount of GPU time for architecture search, our approach obtained promising results in little time, encouraging further experiments in evolutionary NAS for improvements in search and network representation.
    Comment: Accepted for presentation at the IEEE Congress on Evolutionary Computation (IEEE CEC) 202
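    The sketch below illustrates, in a heavily simplified and hypothetical form, the idea of decoding a fixed-length linear genotype into a convolutional cell and mutating it; it does not reproduce the paper's Gene Expression Programming encoding, operation set, or search procedure.

```python
# Highly simplified sketch of decoding a linear genotype into a convolutional
# cell, loosely in the spirit of cell-based evolutionary NAS. The encoding,
# operation set, and decoding rule below are illustrative assumptions.
import random

OPS = ["conv3x3", "conv5x5", "sep_conv3x3", "max_pool3x3", "identity"]

def random_genotype(n_nodes=4):
    # Each node gene: an operation id and the index of an earlier node
    # (0 and 1 are the two cell inputs) that it takes as input.
    return [(random.randrange(len(OPS)), random.randrange(i + 2))
            for i in range(n_nodes)]

def decode(genotype):
    # Decode the linear genotype into an explicit DAG description of the cell.
    cell = []
    for node_id, (op_id, src) in enumerate(genotype, start=2):
        cell.append({"node": node_id, "op": OPS[op_id], "input": src})
    return cell

def mutate(genotype, rate=0.2):
    # Point mutation: occasionally resample a node's operation and input.
    new = []
    for i, (op_id, src) in enumerate(genotype):
        if random.random() < rate:
            op_id = random.randrange(len(OPS))
            src = random.randrange(i + 2)
        new.append((op_id, src))
    return new

if __name__ == "__main__":
    g = random_genotype()
    print("genotype:", g)
    for node in decode(g):
        print(node)
    print("mutant:  ", mutate(g))
```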

    A circular model for song motor control in Serinus canaria

    Song production in songbirds is controlled by a network of nuclei distributed across several brain regions, which drives respiratory and vocal motor systems to generate sound. We built a model for birdsong production, whose variables are the average activities of different neural populations within these nuclei of the song system. We focus on the predictions of respiratory patterns of song, because these can be easily measured and therefore provide a validation for the model. We test the hypothesis that it is possible to construct a model in which (1) the activity of an expiratory related (ER) neural population fits the observed pressure patterns used by canaries during singing, and (2) a higher forebrain neural population, HVC, is sparsely active, simultaneously with significant motor instances of the pressure patterns. We show that in order to achieve these two requirements, the ER neural population needs to receive two inputs: a direct one, and its copy after being processed by other areas of the song system. The model is capable of reproducing the measured respiratory patterns and makes specific predictions on the timing of HVC activity during their production. These results suggest that vocal production is controlled by a circular network rather than by a simple top-down architecture.
    Authors: Rodrigo Alonso, Marcos Alberto Trevisan, Ana Amador, and Bernardo Gabriel Mindlin (Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Física, Laboratorio de Sistemas Dinámicos, and CONICET, Argentina); Franz Goller (University of Utah, Department of Biology, United States)
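    As a hedged illustration of the two-input hypothesis, the sketch below integrates a toy rate model in which an expiratory-related population receives a direct drive plus a delayed, processed copy of that drive. The specific equations, gains, and delay are assumptions, not the published model.

```python
# Minimal rate-model sketch of the "two inputs" idea: an expiratory-related
# (ER) population driven by a direct forebrain input plus a delayed copy of
# that same input. Equations, gains, and delay are illustrative assumptions.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate(T=2.0, dt=0.001, delay=0.050, tau=0.020):
    n_steps = int(T / dt)
    n_delay = int(delay / dt)
    # Toy sparse, HVC-like drive: a 4 Hz on/off pattern.
    drive = [1.0 if math.sin(2 * math.pi * 4 * k * dt) > 0 else 0.0
             for k in range(n_steps)]
    er = [0.0] * n_steps          # ER population activity over time
    for k in range(1, n_steps):
        direct = drive[k]
        delayed_copy = drive[k - n_delay] if k >= n_delay else 0.0
        # Leaky rate dynamics: ER relaxes toward a sigmoid of its two inputs.
        inp = 2.0 * direct + 1.5 * delayed_copy - 1.0
        er[k] = er[k - 1] + dt / tau * (-er[k - 1] + sigmoid(4.0 * inp))
    return er

if __name__ == "__main__":
    activity = simulate()
    print("peak ER activity:", max(activity))
```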

    A Unifying Theory of Biological Function

    A new theory that naturalizes biological function is explained and compared with earlier etiological and causal role theories. Etiological theories explain functions from how they are caused over their evolutionary history. Causal role theories analyze how functional mechanisms serve the current capacities of their containing system. The new proposal unifies the key notions of both kinds of theories, but goes beyond them by explaining how functions in an organism can exist as factors with autonomous causal efficacy. The goal-directedness and normativity of functions exist in this strict sense as well. The theory depends on an internal physiological or neural process that mimics an organism’s fitness, and modulates the organism’s variability accordingly. The structure of the internal process can be subdivided into subprocesses that monitor specific functions in an organism. The theory matches well with each intuition on a previously published list of intuited ideas about biological functions, including intuitions that have posed difficulties for other theories.

    Dynamical transitions in the evolution of learning algorithms by selection

    We study the evolution of artificial learning systems by means of selection. Genetic programming is used to generate a sequence of populations of algorithms which can be used by neural networks for supervised learning of a rule that generates examples. Rather than concentrating on final results, which would be the natural aim when designing good learning algorithms, we study the evolution process and pay particular attention to the temporal order of appearance of functional structures responsible for improvements in the learning process, as measured by the generalization capabilities of the resulting algorithms. The effect of such appearances can be described as dynamical phase transitions. The concepts of phenotypic and genotypic entropies, which describe the distribution of fitness in the population and the distribution of symbols, respectively, are used to monitor the dynamics. In different runs the phase transitions may or may not be present, with the system either finding good solutions or remaining in poor regions of algorithm space. Whenever phase transitions occur, the sequence of appearances is the same. We identify combinations of variables and operators which are useful in measuring experience or performance in rule extraction and can thus implement useful annealing of the learning schedule.
    Comment: 11 pages, 11 figures, 2 tables
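    A brief sketch of the two monitoring quantities mentioned above, using the standard Shannon entropy: genotypic entropy over the symbol distribution of the population's programs, and phenotypic entropy over a binned fitness distribution. The population representation and binning are assumptions for illustration.

```python
# Sketch of the two monitoring quantities described above, under the usual
# Shannon definition; the program representation and binning are assumptions.
import math
from collections import Counter

def shannon_entropy(counts):
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def genotypic_entropy(population):
    """Entropy of the distribution of symbols over all programs in the
    population (each program is a sequence of symbols/operators)."""
    symbol_counts = Counter(sym for program in population for sym in program)
    return shannon_entropy(symbol_counts.values())

def phenotypic_entropy(fitnesses, n_bins=10):
    """Entropy of the fitness distribution, estimated by histogram binning."""
    lo, hi = min(fitnesses), max(fitnesses)
    width = (hi - lo) / n_bins or 1.0
    bins = Counter(min(int((f - lo) / width), n_bins - 1) for f in fitnesses)
    return shannon_entropy(bins.values())

if __name__ == "__main__":
    pop = [["x", "+", "y"], ["x", "*", "x"], ["y", "-", "1"]]
    print("genotypic entropy:", genotypic_entropy(pop))
    print("phenotypic entropy:", phenotypic_entropy([0.1, 0.4, 0.4, 0.9]))
```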