    Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks

    Biological plastic neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed plastic neural networks with a large variety of dynamics, architectures, and plasticity rules: these artificial systems are composed of inputs, outputs, and plastic components that change in response to experiences in an environment. These systems may autonomously discover novel adaptive algorithms and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main methods and results are reviewed. Finally, new opportunities and developments are presented.
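
    As a rough illustration of the EPANN setup described above, the sketch below is not code from the paper: the rule form, the task signal, and all parameter values are assumptions. It shows a small network whose plastic weights change during its lifetime under a parameterized Hebbian rule, with the rule's coefficients playing the role of the genome that simulated evolution would tune.

    import numpy as np

    rng = np.random.default_rng(0)

    def hebbian_update(w, pre, post, coeffs, lr=0.01):
        # Generalized Hebbian rule: dw = lr * (A*post*pre + B*pre + C*post + D).
        # The coefficients (A, B, C, D) are the evolvable "genome" of the rule.
        A, B, C, D = coeffs
        return w + lr * (A * np.outer(post, pre) + B * pre + C * post[:, None] + D)

    def lifetime_fitness(coeffs, steps=200, n_in=4, n_out=2):
        # One "lifetime": the network experiences random inputs and its plastic
        # weights change under the rule; the returned score is a stand-in for
        # task performance that an outer evolutionary loop would maximize.
        w = rng.normal(scale=0.1, size=(n_out, n_in))
        score = 0.0
        for _ in range(steps):
            x = rng.normal(size=n_in)          # experience from the environment
            y = np.tanh(w @ x)                 # network output
            w = hebbian_update(w, x, y, coeffs)
            score += float(y[0] * x[0])        # toy task signal (illustrative only)
        return score / steps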

    Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions

    A fundamental aspect of learning in biological neural networks is the plasticity property, which allows them to modify their configurations during their lifetime. Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons. However, the emergence of a coherent global learning behavior from local Hebbian plasticity rules is not very well understood. The goal of this work is to discover interpretable local Hebbian learning rules that can provide autonomous global learning. To achieve this, we use a discrete representation to encode the learning rules in a finite search space. These rules are then used to perform synaptic changes based on the local interactions of the neurons. We employ genetic algorithms to optimize these rules to allow learning on two separate tasks (a foraging and a prey-predator scenario) in online lifetime learning settings. The resulting evolved rules converged into a set of well-defined interpretable types, which are thoroughly discussed. Notably, the performance of these rules, while adapting the ANNs during the learning tasks, is comparable to that of offline learning methods such as hill climbing. (Comment: Evolutionary Computation Journal)
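
    A minimal sketch of the approach described above, under assumptions of my own: the rule alphabet in RULES, the helper names (apply_rules, random_genome, mutate, evolve), and the GA settings are illustrative and not taken from the paper. A genome assigns one discrete local rule to each plastic synapse, and a plain genetic algorithm searches this finite space against a task-specific fitness function (e.g. performance on the foraging task).

    import random

    # A finite, discrete set of candidate local update rules; each maps
    # (pre-synaptic activity, post-synaptic activity, weight) to a weight change.
    RULES = {
        "hebb":      lambda pre, post, w: pre * post,
        "anti_hebb": lambda pre, post, w: -pre * post,
        "oja":       lambda pre, post, w: post * (pre - post * w),
        "decay":     lambda pre, post, w: -w,
        "none":      lambda pre, post, w: 0.0,
    }

    def apply_rules(genome, weights, pre, post, lr=0.05):
        # Local synaptic changes driven by each synapse's evolved rule.
        return [w + lr * RULES[g](p, post, w) for g, w, p in zip(genome, weights, pre)]

    def random_genome(n_synapses):
        # A genome assigns one rule name from the finite alphabet to each synapse.
        return [random.choice(list(RULES)) for _ in range(n_synapses)]

    def mutate(genome, p=0.1):
        return [random.choice(list(RULES)) if random.random() < p else g for g in genome]

    def evolve(fitness, n_synapses, pop_size=50, generations=100):
        # Plain GA: evaluate, keep the best half, refill with mutated copies.
        # `fitness(genome)` should run an online lifetime-learning episode and
        # return its score; it is task-specific and not defined here.
        pop = [random_genome(n_synapses) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]
            pop = parents + [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
        return max(pop, key=fitness)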

    Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity

    With the emergence of new high-performance computing technology in the last decade, the simulation of large-scale neural networks able to reproduce the behavior and structure of the brain has finally become an achievable target for neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in this work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential use in the self-generation of connectivity in large-scale networks. We show and discuss the results of simulations on simple two-population networks and on more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework.
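
    The following is a conceptual sketch of the homeostatic structural-plasticity model summarized above, not the NEST implementation or its API: the class, the growth rule, and the constants are illustrative assumptions. Each neuron grows or retracts free pre- and post-synaptic elements according to how far its activity is from a target level, and free elements are then paired to create new synapses.

    import random

    TARGET_RATE = 5.0   # desired mean activity (illustrative target, e.g. Hz)
    GROWTH_GAIN = 0.1   # elements grown per update per unit of rate deviation

    class Neuron:
        def __init__(self):
            self.rate = 0.0        # measured activity, supplied by the simulation
            self.axonal = 0.0      # free pre-synaptic elements
            self.dendritic = 0.0   # free post-synaptic elements

        def grow(self):
            # Homeostatic rule: below the target rate, grow free elements so new
            # synapses can form; above it, retract them (a fuller model would
            # also delete bound elements and hence existing synapses).
            delta = GROWTH_GAIN * (TARGET_RATE - self.rate)
            self.axonal = max(self.axonal + delta, 0.0)
            self.dendritic = max(self.dendritic + delta, 0.0)

    def rewire(neurons, connections):
        # Pair free pre- and post-synaptic elements at random to create new
        # synapses, mirroring the two-part synapse model described above.
        pre_pool = [n for n in neurons for _ in range(int(n.axonal))]
        post_pool = [n for n in neurons for _ in range(int(n.dendritic))]
        random.shuffle(pre_pool)
        random.shuffle(post_pool)
        for pre, post in zip(pre_pool, post_pool):
            connections.append((pre, post))
            pre.axonal -= 1
            post.dendritic -= 1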

    EMERGING THE EMERGENCE SOCIOLOGY: The Philosophical Framework of Agent-Based Social Studies

    Structuration theory, originally proposed by Anthony Giddens, and its later refinements have tried to resolve a dilemma in the epistemology of the social sciences and humanities: social scientists apparently have to choose between being too sociological or too psychological. Nonetheless, this point was made long ago in the work of the classical sociologist Emile Durkheim. The use of models to construct bottom-up theories has followed the expansion of computational technology; such models are well known as agent-based modeling. This paper offers a philosophical perspective on the agent-based social sciences, as a sociology able to cope with the emergent factors that arise in sociological analysis. The framework is built using an artificial neural network model to show how emergent phenomena arise from a complex system. Since society has self-organizing (autopoietic) properties, Kohonen's self-organizing map is used in the paper. The simulation examples show clearly that emergent phenomena in social systems are seen by the sociologist apart from the qualitative framework of atomistic sociology. In the end, it is clear that an emergence sociology is needed to sharpen sociological analysis.
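
    Since the argument leans on Kohonen's self-organizing map as the model of a self-organizing society, a minimal SOM sketch may help make the mechanism concrete; the grid size, learning-rate schedule, and the toy "agent attribute" data below are assumptions, not taken from the paper.

    import numpy as np

    def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        # Kohonen SOM: each node's weight vector is pulled toward the inputs,
        # and its grid neighbours are pulled along, so a global ordering
        # emerges from purely local updates.
        rng = np.random.default_rng(seed)
        h, w = grid
        nodes = rng.random((h, w, data.shape[1]))
        ys, xs = np.mgrid[0:h, 0:w]
        for epoch in range(epochs):
            lr = lr0 * (1 - epoch / epochs)
            sigma = sigma0 * (1 - epoch / epochs) + 0.5
            for x in rng.permutation(data):
                d = np.linalg.norm(nodes - x, axis=2)             # distances to all nodes
                by, bx = np.unravel_index(d.argmin(), d.shape)    # best-matching unit
                g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
                nodes += lr * g[..., None] * (x - nodes)          # neighbourhood update
        return nodes

    # e.g. agents described by two attributes; clusters ("emergent groups")
    # appear on the map without any global coordination.
    agents = np.random.default_rng(1).random((200, 2))
    som = train_som(agents)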

    Creativity as Cognitive design: The case of mesoscopic variables in Meta-Structures

    Get PDF
    Creativity is an open problem that has long been approached differently by several disciplines. In this contribution we consider as creative the constructivist design that an observer performs on the description levels of complex phenomena, such as self-organized and emergent ones (e.g., Bénard rolls, Belousov-Zhabotinsky reactions, flocks, swarms, and more radical cognitive and social emergences). We regard this design as related to the Gestaltian creation of a language fit for representing natural processes and the observer in an integrated way. Organised systems, both artificial and most natural ones, are designed or modelled according to a logically closed model that governs all the inter-relations between their constitutive elements and that can be described by an algorithm or a single formal model. We show that logical openness and DYSAM (Dynamical Usage of Models) are the proper tools for those phenomena that cannot be described by algorithms or by a single formal model. The strong correlation between emergence and creativity suggests that an open model is the best way to provide a formal definition of creativity. A specific application relates to the possibility of shaping the emergence of collective behaviours. Different modelling approaches have been introduced, based on symbolic as well as sub-symbolic rules of interaction, to simulate collective phenomena by means of computational emergence. Another approach models collective phenomena as sequences of Multiple Systems established by percentages of conceptually interchangeable agents taking on the same roles at different times and different roles at the same time. In the Meta-Structures project we propose to use mesoscopic variables as creative design, invention, good continuity and imitation of the description level. We propose to define the coherence of sequences of Multiple Systems through the values taken on by the dynamic mesoscopic clusters of their constitutive elements, such as the instantaneous number of elements in a flock having the same speed, distance from their nearest neighbours, direction and altitude. In Meta-Structures the coherence of collective behaviour corresponds, for instance, to the scalar values taken by speed, distance, direction and altitude over time, analysed through statistical strategies of interpolation, quasi-periodicity, levels of ergodicity and their reciprocal relationships. In this case the constructivist role of the observer is considered creative because it involves neither non-linear replication nor transposition of the levels of description and models used for artificial systems, as in reductionism. Creativity rather lies in inventing new mesoscopic variables able to identify coherent patterns in complex systems. As is known, mesoscopic variables represent partial macroscopic properties of a system by using some of the microscopic degrees of freedom possessed by its composing elements. Such partial use of microscopic as well as macroscopic properties allows a kind of Gestaltian continuity and imitation between levels of description for mesoscopic modelling.
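
    As one concrete reading of the mesoscopic variables mentioned above, the sketch below (its tolerances, data, and function name are illustrative assumptions, not the project's definitions) counts, at each time step, the largest cluster of flock members sharing roughly the same speed, direction, and altitude; the resulting time series is the kind of quantity whose regularity (interpolation, quasi-periodicity, ergodicity) would then be analysed as coherence.

    import numpy as np

    def largest_mesoscopic_cluster(speed, direction, altitude, tol=(0.1, 5.0, 1.0)):
        # For one time step, count for each agent how many others share (within
        # a tolerance) the same speed, heading and altitude, and return the size
        # of the largest such cluster: one mesoscopic variable of the kind
        # described above.
        s_tol, d_tol, a_tol = tol
        same = (
            (np.abs(speed[:, None] - speed[None, :]) < s_tol)
            & (np.abs(direction[:, None] - direction[None, :]) < d_tol)
            & (np.abs(altitude[:, None] - altitude[None, :]) < a_tol)
        )
        return int(same.sum(axis=1).max())

    # Track the variable over time on synthetic flock data; its regularity is
    # what would be read as coherence of the collective behaviour.
    rng = np.random.default_rng(0)
    series = [
        largest_mesoscopic_cluster(rng.normal(10, 1, 50),
                                   rng.normal(90, 10, 50),
                                   rng.normal(100, 2, 50))
        for _ in range(100)
    ]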