421 research outputs found

    Evolution and Analysis of Embodied Spiking Neural Networks Reveals Task-Specific Clusters of Effective Networks

    Full text link
    Elucidating principles that underlie computation in neural networks is currently a major research topic of interest in neuroscience. Transfer Entropy (TE) is increasingly used as a tool to bridge the gap between network structure, function, and behavior in fMRI studies. Computational models allow us to bridge the gap even further by directly associating individual neuron activity with behavior. However, most computational models that have analyzed embodied behaviors have employed non-spiking neurons. On the other hand, computational models that employ spiking neural networks tend to be restricted to disembodied tasks. We show, for the first time, the artificial evolution and TE analysis of embodied spiking neural networks performing a cognitively interesting behavior. Specifically, we evolved an agent controlled by an Izhikevich neural network to perform a visual categorization task. The smallest networks capable of performing the task were found by repeating evolutionary runs with different network sizes. Informational analysis of the best solution revealed task-specific TE-network clusters, suggesting that within-task homogeneity and across-task heterogeneity were key to behavioral success. Moreover, analysis of the ensemble of solutions revealed that task-specificity of TE-network clusters correlated with fitness. This provides an empirically testable hypothesis that links network structure to behavior. Comment: Camera-ready version of the paper accepted for GECCO'1
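
The agents above are driven by Izhikevich neurons, a two-variable model that reproduces many cortical firing patterns at low cost. A minimal sketch in Python, using the textbook regular-spiking parameters rather than any evolved values from the paper:

```python
def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T=200.0, dt=0.5):
    """Simulate one Izhikevich neuron for T ms; return spike times in ms."""
    v, u = c, b * c          # membrane potential and recovery variable
    spikes = []
    for step in range(int(T / dt)):
        if v >= 30.0:        # spike detected: record it and reset the state
            spikes.append(step * dt)
            v, u = c, u + d
        # Euler integration of the two Izhikevich equations
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return spikes

spike_times = izhikevich()
print(len(spike_times))
```

With a constant input current the regular-spiking parameter set produces tonic firing; the evolved controllers in the paper would additionally feed sensory input into `I` and read behavior off the spike trains.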

    The evolutionary emergence of neural organisation in computational models of primitive organisms

    Get PDF
    Over the decades, the question of why neural organisation emerged in the way that it did has proved to be massively elusive. Whilst much of the literature paints a picture of common ancestry (the idea that a species at the root of the tree of nervous system evolution spawned numerous descendants), the actual evolutionary forces responsible for such changes, major transitions or otherwise, have been less clear. The view presented in this thesis is that, via interactions with the environment, neural organisation has emerged in concert with the constraints enforced by body plan morphology and a need to process information efficiently and robustly. Whilst these factors are two smaller parts of a much greater whole, their impact during the evolutionary process cannot be ignored, for they are fundamentally significant. Thus computer simulations have been developed to provide insight into how the neural organisation of an artificial agent should emerge given the constraints of its body morphology, its symmetry, feedback from the environment, and a loss of energy. The first major finding is that much of the computational process of the nervous system can be offloaded to the body morphology, which has a commensurate bearing on neural architecture, neural dynamics and motor symmetry. The second major finding is that sensory feedback strengthens the dynamic coupling between the neural system and the body plan morphology, resulting in minimal neural circuitry yet more efficient agent behaviour. The third major finding is that under the constraint of energy loss, neural circuitry again emerges to be minimalistic. Throughout, an emphasis is placed on the coupling between the nervous system and body plan morphology, which are known in the literature to be tightly integrated; accordingly, both are considered on an equal footing.

    Fate of Duplicated Neural Structures

    Full text link
    Statistical mechanics determines the abundance of different arrangements of matter depending on cost-benefit balances. Its formalism and phenomenology percolate throughout biological processes and set limits to effective computation. Under specific conditions, self-replicating and computationally complex patterns become favored, yielding life, cognition, and Darwinian evolution. Neurons and neural circuits sit at a crossroads between statistical mechanics, computation, and (through their role in cognition) natural selection. Can we establish a {\em statistical physics} of neural circuits? Such a theory would tell what kinds of brains to expect under set energetic, evolutionary, and computational conditions. With this big picture in mind, we focus on the fate of duplicated neural circuits. We look at examples from central nervous systems, with a stress on computational thresholds that might prompt this redundancy. We also study a naive cost-benefit balance for duplicated circuits implementing complex phenotypes. From this we derive {\em phase diagrams} and (phase-like) transitions between single and duplicated circuits, which constrain evolutionary paths to complex cognition. Back to the big picture, similar phase diagrams and transitions might constrain I/O and internal connectivity patterns of neural circuits at large. The formalism of statistical mechanics seems a natural framework for this worthy line of research. Comment: Review with novel results. Position paper. 16 pages, 3 figures
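
A naive cost-benefit balance of the kind described can be caricatured in a few lines. This is an illustrative toy, not the paper's formalism: `b` is the benefit of the phenotype, `c` the metabolic cost of one circuit, and `r` a redundancy bonus contributed by a second copy (all hypothetical quantities):

```python
# Toy cost-benefit model for circuit duplication (illustrative only).
def preferred(b, c, r):
    single = b - c                  # one circuit: full benefit, one cost
    duplicated = b + r - 2.0 * c    # diminishing returns: copy adds only r
    return "duplicated" if duplicated > single else "single"

print(preferred(b=1.0, c=0.2, r=0.5))  # → duplicated
print(preferred(b=1.0, c=0.6, r=0.5))  # → single
```

In this toy the phase boundary sits at r = c: duplication is favored exactly when the redundancy bonus outweighs the cost of running a second copy, giving the simplest possible analogue of the paper's phase diagrams.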

    Genetic algorithmic parameter optimisation of a recurrent spiking neural network model

    Get PDF
    Neural networks are complex algorithms that loosely model the behaviour of the human brain. They play a significant role in computational neuroscience and artificial intelligence. The next generation of neural network models is based on the spike timing activity of neurons: spiking neural networks (SNNs). However, model parameters in SNNs are difficult to search and optimise. Previous studies using genetic algorithm (GA) optimisation of SNNs were focused mainly on simple, feedforward, or oscillatory networks, but not much work has been done on optimising cortex-like recurrent SNNs. In this work, we investigated the use of GAs to search for optimal parameters in recurrent SNNs to reach targeted neuronal population firing rates, e.g. as in experimental observations. We considered a cortical-column-based SNN comprising 1000 Izhikevich spiking neurons, balancing computational efficiency and biological realism. The model parameters explored were the neuronal bias input currents. First, for this particular SNN, we found the optimal parameter values for targeted population-averaged firing activities, with the algorithm converging by ~100 generations. We then showed that the optimal GA population size was within ~16-20, while the crossover rate that returned the best fitness value was ~0.95. Overall, we have successfully demonstrated the feasibility of implementing a GA to optimise model parameters in a recurrent cortical SNN. Comment: 6 pages, 6 figures
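
The GA loop described above can be sketched with a cheap stand-in for the network. The population size and crossover rate below echo the values the abstract reports; the threshold-linear f-I surrogate, the four-parameter genome, and every other detail are assumptions for illustration (a real run would simulate the 1000-neuron SNN at each fitness evaluation):

```python
import random

random.seed(0)
TARGET_RATE = 10.0   # Hz, the targeted population firing rate
POP_SIZE    = 18     # within the ~16-20 range the abstract reports as optimal
P_CROSS     = 0.95   # crossover rate the abstract reports as best
N_PARAMS    = 4      # bias currents for 4 populations (illustrative genome)

def firing_rate(currents):
    # Stand-in threshold-linear f-I curve; a real run would simulate the SNN.
    return sum(max(0.0, 2.0 * (i - 1.0)) for i in currents) / len(currents)

def fitness(ind):
    return -abs(firing_rate(ind) - TARGET_RATE)

def evolve(generations=100):
    pop = [[random.uniform(0.0, 10.0) for _ in range(N_PARAMS)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: POP_SIZE // 2]            # keep the better half
        children = []
        while len(children) < POP_SIZE - len(elite):
            a, b = random.sample(elite, 2)
            if random.random() < P_CROSS:       # one-point crossover
                cut = random.randrange(1, N_PARAMS)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            j = random.randrange(N_PARAMS)      # Gaussian mutation of one gene
            child[j] += random.gauss(0.0, 0.5)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(round(firing_rate(best), 1))
```

Because the elite survives unchanged, the best fitness is monotone over generations, which is what makes the ~100-generation convergence observable.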

    Towards Sensorimotor Coupling of a Spiking Neural Network and Deep Reinforcement Learning for Robotics Application

    Get PDF
    Deep reinforcement learning augments the reinforcement learning framework and utilizes the powerful representation of deep neural networks. Recent works have demonstrated the great achievements of deep reinforcement learning in various domains including finance, medicine, healthcare, video games, robotics and computer vision. Deep neural networks started with the multi-layer perceptron (1st generation), developed into deep neural networks (2nd generation), and are now moving toward spiking neural networks, known as the 3rd generation of neural networks. Spiking neural networks aim to bridge the gap between neuroscience and machine learning, using biologically realistic models of neurons to carry out computation. In this thesis, we first provide a comprehensive review of both spiking neural networks and deep reinforcement learning with emphasis on robotic applications. Then we demonstrate how to develop a robotics application for context-aware scene understanding to perform sensorimotor coupling. Our system contains two modules corresponding to scene understanding and robotic navigation. The first module is implemented as a spiking neural network to carry out semantic segmentation to understand the scene in front of the robot. The second module provides a high-level navigation command to the robot, which is considered as an agent and implemented by online reinforcement learning. The module was implemented with a biologically plausible local learning rule that allows the agent to adapt quickly to the environment. To benchmark our system, we tested the first module on the Oxford-IIIT Pet dataset and the second module on a custom-made Gym environment. Our experimental results show that our system presents competitive results with deep neural networks on the segmentation task and adapts quickly to the environment.
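
The online reinforcement-learning module can be illustrated with a minimal tabular agent on a hypothetical five-state corridor. This toy environment is a stand-in for the thesis's custom Gym environment, and textbook Q-learning stands in for its biologically plausible local rule; note the update still uses only locally available quantities (current state, action, reward, next state):

```python
import random

random.seed(1)

# Hypothetical corridor: states 0..4, actions left (0) / right (1),
# reward 1 for reaching the rightmost state.
def step(state, action):
    nxt = max(0, min(4, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == 4 else 0.0), nxt == 4

Q = [[0.0, 0.0] for _ in range(5)]
alpha, gamma, eps = 0.5, 0.9, 0.2

def act(s):
    # epsilon-greedy with random tie-breaking
    if random.random() < eps or Q[s][0] == Q[s][1]:
        return random.randrange(2)
    return 0 if Q[s][0] > Q[s][1] else 1

for episode in range(300):
    s = 0
    for _ in range(30):
        a = act(s)
        s2, r, done = step(s, a)
        # Online update from locally available quantities only
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

greedy = [0 if Q[s][0] > Q[s][1] else 1 for s in range(4)]
print(greedy)
```

After training, the greedy policy moves right in every non-terminal state, i.e. toward the goal.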

    From Biological Synapses to "Intelligent" Robots

    Get PDF
    This selective review explores biologically inspired learning as a model for intelligent robot control and sensing technology on the basis of specific examples. Hebbian synaptic learning is discussed as a functionally relevant model for machine learning and intelligence, as explained on the basis of examples from the highly plastic biological neural networks of invertebrates and vertebrates. Its potential for adaptive learning and control without supervision, the generation of functional complexity, and control architectures based on self-organization is brought forward. Learning without prior knowledge based on excitatory and inhibitory neural mechanisms accounts for the process through which survival-relevant or task-relevant representations are either reinforced or suppressed. The basic mechanisms of unsupervised biological learning drive synaptic plasticity and adaptation for behavioral success in living brains with different levels of complexity. The insights collected here point toward the Hebbian model as a choice solution for “intelligent” robotics and sensor systems. Keywords: Hebbian learning; synaptic plasticity; neural networks; self-organization; brain; reinforcement; sensory processing; robot control
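
Hebbian plasticity itself fits in one line of code. The sketch below uses Oja's normalised variant of the Hebbian rule, since the plain rule (dw = eta * x * y) grows weights without bound; the input statistics and learning rate here are illustrative, not taken from the review:

```python
import numpy as np

rng = np.random.default_rng(0)

eta = 0.01
w = rng.normal(size=2)               # random initial synaptic weights
for _ in range(5000):
    x = rng.normal(size=2)
    x[1] = 0.9 * x[0] + 0.1 * x[1]   # inputs correlated along one direction
    y = w @ x                        # linear neuron output
    w += eta * y * (x - y * w)       # Oja's rule: Hebbian term minus decay

w_unit = w / np.linalg.norm(w)
print(np.round(np.linalg.norm(w), 2))
```

The weight norm settles near 1 and the weight direction aligns with the principal component of the inputs: an unsupervised, self-organizing outcome of a purely local learning rule, which is exactly the property the review argues makes Hebbian learning attractive for robotics.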

    Perspectives on adaptive dynamical systems

    Get PDF
    Adaptivity is a dynamical feature that is omnipresent in nature, socio-economics, and technology. For example, adaptive couplings appear in various real-world systems like the power grid and social and neural networks, and they form the backbone of closed-loop control strategies and machine learning algorithms. In this article, we provide an interdisciplinary perspective on adaptive systems. We reflect on the notion and terminology of adaptivity in different disciplines and discuss the role adaptivity plays in various fields. We highlight common open challenges and give perspectives on future research directions, looking to inspire interdisciplinary approaches. Comment: 46 pages, 9 figures
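
A minimal example of an adaptive coupling in the spirit discussed above: two Kuramoto oscillators whose coupling strength is itself a dynamical variable, strengthening while the phases agree and slowly decaying otherwise (a Hebbian-like adaptation rule; all parameters are illustrative, not from the article):

```python
import math

dt, steps = 0.01, 4000
th0, th1 = 0.0, 1.0      # initial phases
w0, w1 = 1.0, 1.2        # natural frequencies (mismatch of 0.2)
k, eps = 0.0, 0.5        # coupling starts at zero and adapts slowly

for _ in range(steps):
    diff = th1 - th0
    th0 += dt * (w0 + k * math.sin(diff))
    th1 += dt * (w1 - k * math.sin(diff))
    # Adaptation: coupling grows in phase (cos > 0), with a weak decay term
    k += dt * eps * (math.cos(diff) - 0.1 * k)

print(k > 1.0, abs(math.sin(th1 - th0)) < 0.2)
```

Without adaptation, a fixed k = 0 leaves the oscillators drifting apart; here the coupling grows until it exceeds the frequency mismatch and the pair phase-locks, a closed loop between network state and network structure of the kind the article surveys.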

    Contraction and partial contraction : a study of synchronization in nonlinear networks

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2005. Includes bibliographical references (p. 121-128). This thesis focuses on the study of collective dynamic behaviors, especially the spontaneous synchronization behavior, of nonlinear networked systems. We derive a body of new results based on contraction and partial contraction analysis. Contraction is a property regarding the convergence between two arbitrary system trajectories. A nonlinear dynamic system is called contracting if initial conditions or temporary disturbances are forgotten exponentially fast. Partial contraction, introduced in this thesis, is a straightforward but more general application of contraction. It extends contraction analysis to include convergence to behaviors or to specific properties (such as equality of state components, or convergence to a manifold). Contraction and partial contraction provide powerful analysis tools to investigate the stability of large-scale complex systems. For diffusively coupled nonlinear systems, for instance, a general synchronization condition can be derived which connects synchronization rate to network structure explicitly. The results are applied to construct flocking or schooling models by extending the analysis to coupled networks with switching topology. We further study networked systems with different kinds of group leaders: one specifying global orientation (power leader), another holding target dynamics (knowledge leader). In a knowledge-based leader-followers network, the followers obtain dynamics information from the leader through adaptive learning. We also study distributed networks with non-negligible time-delays by using simplified wave variables and other contraction-oriented analysis. Conditions for contraction to be preserved regardless of the explicit values of the time-delays are derived. Synchronization behavior is shown to be robust if the protocol is linear.
Finally, we study the construction of spike-based neural network models, and the development of simple mechanisms for fast inhibition and de-synchronization. by Wei Wang. Ph.D.
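
The central definition is easy to see numerically. For a contracting system, any two trajectories forget their initial conditions exponentially fast; the scalar system below (illustrative, not one analysed in the thesis) has Jacobian -1 everywhere, so the gap between trajectories shrinks like e^{-t}:

```python
import math

# xdot = -x + sin(t): Jacobian is -1 < 0 uniformly, hence contracting.
def simulate(x0, dt=0.001, T=10.0):
    x, t = x0, 0.0
    while t < T:
        x += dt * (-x + math.sin(t))  # explicit Euler step
        t += dt
    return x

a, b = simulate(5.0), simulate(-3.0)
gap0, gapT = 8.0, abs(a - b)
print(gapT < gap0 * math.exp(-9.0))   # the gap has decayed like e^{-t}
```

Both trajectories converge to the same forced steady-state oscillation regardless of where they start, which is precisely the "initial conditions are forgotten exponentially fast" property the thesis builds on.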