
    Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control

    It is widely accepted that the complex dynamics characteristic of recurrent neural circuits contributes in a fundamental manner to brain function. Progress has been slow in understanding and exploiting the computational power of recurrent dynamics for two main reasons: nonlinear recurrent networks often exhibit chaotic behavior, and most known learning rules do not work robustly in recurrent networks. Here we address both of these problems by demonstrating how random recurrent networks (RRNs) that initially exhibit chaotic dynamics can be tuned through a supervised learning rule to generate locally stable neural patterns of activity that are both complex and robust to noise. The outcome is a novel neural network regime that exhibits both transiently stable and chaotic trajectories. We further show that the recurrent learning rule dramatically increases the ability of RRNs to generate complex spatiotemporal motor patterns, and accounts for recent experimental data showing a decrease in neural variability in response to stimulus onset.
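    Below is a minimal sketch of the kind of supervised recurrent rule this abstract describes, assuming a rate network whose recurrent weights are trained by recursive least squares (RLS) to stabilize an "innate" trajectory in the presence of noise; the shared inverse-correlation matrix and all parameter values are simplifying assumptions, not the paper's exact method.

```python
# Hedged sketch: a random recurrent rate network in the chaotic regime
# (g > 1) whose recurrent weights are trained by RLS to reproduce an
# "innate" target trajectory despite injected noise.
import numpy as np

rng = np.random.default_rng(0)
N, g, dt, tau = 200, 1.5, 1e-3, 10e-3
W = g * rng.standard_normal((N, N)) / np.sqrt(N)  # chaotic for g > 1

def run(W, x0, steps):
    """Integrate the rate network; return the firing-rate trajectory."""
    x, rates = x0.copy(), []
    for _ in range(steps):
        r = np.tanh(x)
        rates.append(r)
        x = x + dt / tau * (-x + W @ r)
    return np.array(rates)

x0 = rng.standard_normal(N)
target = run(W, x0, 500)          # "innate" trajectory to be stabilized

# Train with noise injected into the dynamics; RLS nudges each unit's
# incoming weights to cancel deviations from the innate trajectory.
# (One inverse-correlation matrix P is shared across units here for
# brevity; the full method keeps one per unit.)
P = np.eye(N)
for epoch in range(10):
    x = x0.copy()
    for t in range(500):
        r = np.tanh(x)
        err = r - target[t]            # deviation from innate trajectory
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)        # RLS gain vector
        P -= np.outer(k, Pr)
        W -= np.outer(err, k)          # adjust recurrent weights
        x = x + dt / tau * (-x + W @ r) + 0.05 * rng.standard_normal(N)
```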

    Towards learning inverse kinematics with a neural network based tracking controller

    Learning an inverse kinematic model of a robot is a well-studied subject. However, achieving this without information about the geometric characteristics of the robot is less investigated. In this work, a novel control approach based on a recurrent neural network is presented. Without any prior knowledge about the robot, this control strategy learns to control the iCub robot's arm online by solving the inverse kinematic problem in its control region. Because of its exploration strategy, the robot starts to learn by generating and observing random motor behavior. The modulation and generalization capabilities of this approach are investigated as well.
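    A hedged toy illustration of the exploration idea, assuming a planar 2-link arm in place of the iCub and an online least-mean-squares readout on random nonlinear features in place of the paper's recurrent network; the link lengths, feature expansion, and learning rate are all hypothetical choices.

```python
# Toy sketch, NOT the paper's iCub controller: a 2-link planar arm
# babbles with random joint angles, and an online LMS readout learns
# the inverse kinematics (hand position -> joint angles) in a region
# where the inverse is unique (both joints restricted to (0, pi)).
import numpy as np

rng = np.random.default_rng(1)
L1, L2 = 0.3, 0.25                      # assumed link lengths (meters)

def forward(theta):
    """Forward kinematics of the toy 2-link arm."""
    x = L1 * np.cos(theta[0]) + L2 * np.cos(theta[0] + theta[1])
    y = L1 * np.sin(theta[0]) + L2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

# Random nonlinear features standing in for the recurrent network state.
F = rng.standard_normal((100, 2))
b = rng.uniform(-np.pi, np.pi, 100)
feat = lambda p: np.tanh(F @ p + b)

W_out, lr = np.zeros((2, 100)), 0.01
for _ in range(20000):                  # motor babbling: random actions
    theta = rng.uniform([0.1, 0.1], [np.pi - 0.1, np.pi - 0.1])
    pos = forward(theta)                # observe the resulting hand position
    phi = feat(pos)
    err = W_out @ phi - theta           # prediction error on this sample
    W_out -= lr * np.outer(err, phi)    # online LMS update

# Query the learned inverse model for a reachable target.
target = forward(np.array([0.8, 0.9]))
theta_hat = W_out @ feat(target)
print(np.linalg.norm(forward(theta_hat) - target))  # reaching error
```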

    Making brain–machine interfaces robust to future neural variability

    A major hurdle to clinical translation of brain-machine interfaces (BMIs) is that current decoders, which are trained from a small quantity of recent data, become ineffective when neural recording conditions subsequently change. We tested whether a decoder could be made more robust to future neural variability by training it to handle a variety of recording conditions sampled from months of previously collected data, as well as synthetic training data perturbations. We developed a new multiplicative recurrent neural network BMI decoder that successfully learned a large variety of neural-to-kinematic mappings and became more robust with larger training data sets. Here we demonstrate that, when tested with a non-human primate preclinical BMI model, this decoder is robust under conditions that disabled a state-of-the-art Kalman filter-based decoder. These results validate a new BMI strategy in which accumulated data history is effectively harnessed, and may facilitate reliable BMI use by reducing decoder retraining downtime.
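    A minimal sketch of a multiplicative recurrent cell in the spirit of the decoder described here (factored, input-dependent recurrence, as in Sutskever et al.'s multiplicative RNN); the dimensions, the Poisson stand-in for binned spike counts, and the velocity readout are illustrative assumptions, and training on large data sets is omitted.

```python
# Hedged sketch of a multiplicative RNN decode step: binned spike
# counts in, 2-D cursor velocity out. Weights are random placeholders;
# the real decoder learns them from months of data.
import numpy as np

rng = np.random.default_rng(2)
n_neu, n_fac, n_hid = 96, 64, 100   # spike channels, factors, hidden units

scale = 0.1
Wfx = scale * rng.standard_normal((n_fac, n_neu))  # input -> factors
Wfh = scale * rng.standard_normal((n_fac, n_hid))  # hidden state -> factors
Whf = scale * rng.standard_normal((n_hid, n_fac))  # factors -> hidden state
Whx = scale * rng.standard_normal((n_hid, n_neu))  # direct input drive
Wv  = scale * rng.standard_normal((2, n_hid))      # hidden -> 2-D velocity

def step(h, spikes):
    """One decode step of the multiplicative recurrence."""
    f = (Wfx @ spikes) * (Wfh @ h)  # multiplicative interaction: the input
                                    # gates the effective recurrent weights
    h = np.tanh(Whf @ f + Whx @ spikes)
    return h, Wv @ h

h = np.zeros(n_hid)
for _ in range(10):                 # ten bins of fake spike counts
    spikes = rng.poisson(2.0, n_neu).astype(float)
    h, vel = step(h, spikes)
print(vel)
```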

    A new theoretical framework jointly explains behavioral and neural variability across subjects performing flexible decision-making

    The ability to flexibly select and accumulate relevant information to form decisions, while ignoring irrelevant information, is a fundamental component of higher cognition. Yet its neural mechanisms remain unclear. Here we demonstrate that, under assumptions supported by both monkey and rat data, the space of possible network mechanisms to implement this ability is spanned by the combination of three different components, each with specific behavioral and anatomical implications. We further show that existing electrophysiological and modeling data are compatible with the full variety of possible combinations of these components, suggesting that different individuals could use different component combinations. To study variations across subjects, we developed a rat task requiring context-dependent evidence accumulation, and trained many subjects on it. Our task delivers sensory evidence through pulses that have random but precisely known timing, providing high statistical power to characterize each individual's neural and behavioral responses. Consistent with theoretical predictions, neural and behavioral analysis revealed remarkable heterogeneity across rats, despite uniformly good task performance. The theory further predicts a specific link between behavioral and neural signatures, which was robustly supported in the data. Our results provide a new experimentally supported theoretical framework to analyze biological and artificial systems performing flexible decision-making tasks, and open the door to the study of individual variability in neural computations underlying higher cognition.
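    As a toy illustration of the task logic (not one of the paper's three network mechanisms), the sketch below gates a leaky accumulator by context so that statistically identical pulsed stimuli yield opposite choices; the pulse probabilities, leak, and noise are assumptions.

```python
# Hedged sketch: context-dependent accumulation of pulsed evidence.
# Two features ("A", "B") each emit +1/-1 pulses every bin, but only
# the contextually cued feature is integrated toward the choice.
import numpy as np

rng = np.random.default_rng(3)

def run_trial(context, n_bins=40, leak=0.05, noise=0.2):
    """Accumulate only the cued feature; return the resulting choice."""
    bias = {"A": 0.6, "B": 0.4}     # per-feature probability of a +1 pulse
    a = 0.0
    for _ in range(n_bins):
        # Both features pulse, but only the cued one enters the accumulator.
        pulses = {f: 1.0 if rng.random() < p else -1.0
                  for f, p in bias.items()}
        a += -leak * a + pulses[context] + noise * rng.standard_normal()
    return +1 if a > 0 else -1

# Same stimulus statistics, opposite choices, depending only on context.
print(np.mean([run_trial("A") for _ in range(1000)]))  # mostly +1
print(np.mean([run_trial("B") for _ in range(1000)]))  # mostly -1
```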

    Fluctuation-Driven Neural Dynamics Reproduce Drosophila Locomotor Patterns

    The neural mechanisms determining the timing of even simple actions, such as when to walk or rest, are largely mysterious. One intriguing, but untested, hypothesis posits a role for ongoing activity fluctuations in neurons of central action selection circuits that drive animal behavior from moment to moment. To examine how fluctuating activity can contribute to action timing, we paired high-resolution measurements of freely walking Drosophila melanogaster with data-driven neural network modeling and dynamical systems analysis. We generated fluctuation-driven network models whose outputs (locomotor bouts) matched those measured from sensory-deprived Drosophila. From these models, we identified those that could also reproduce a second, unrelated dataset: the complex time-course of odor-evoked walking for genetically diverse Drosophila strains. Dynamical models that best reproduced both basal and odor-evoked Drosophila locomotor patterns exhibited specific characteristics. First, ongoing fluctuations were required. In a stochastic resonance-like manner, these fluctuations allowed neural activity to escape stable equilibria and to exceed a threshold for locomotion. Second, odor-induced shifts of equilibria in these models caused a depression in locomotor frequency following olfactory stimulation. Our models predict that activity fluctuations in action selection circuits cause behavioral output to more closely match sensory drive and may therefore enhance navigation in complex sensory environments. Together, these data reveal how simple neural dynamics, when coupled with activity fluctuations, can give rise to complex patterns of animal behavior.
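    The fluctuation-driven mechanism can be caricatured in one dimension, assuming an Ornstein-Uhlenbeck-like activity variable and a fixed locomotion threshold; all parameters are illustrative, not the paper's fitted values.

```python
# Hedged sketch: a scalar activity variable relaxes toward a stable
# equilibrium below a locomotion threshold. Ongoing noise occasionally
# drives it over threshold (walking bouts); shifting the equilibrium
# away from threshold, as an odor-induced change might, lowers the
# fraction of time spent walking.
import numpy as np

rng = np.random.default_rng(4)
dt, tau, sigma, theta = 0.01, 0.5, 0.6, 1.0   # assumed parameters

def walking_fraction(equilibrium, steps=100_000):
    """Fraction of time the noisy activity exceeds the threshold."""
    x, above = equilibrium, 0
    for _ in range(steps):
        drift = (equilibrium - x) / tau
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        above += x > theta
    return above / steps

print("baseline equilibrium:", walking_fraction(0.4))
print("shifted equilibrium :", walking_fraction(0.1))  # fewer bouts
```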

    Transferring Learning from External to Internal Weights in Echo-State Networks with Sparse Connectivity

    Modifying weights within a recurrent network to improve performance on a task has proven to be difficult. Echo-state networks in which modification is restricted to the weights of connections onto network outputs provide an easier alternative, but at the expense of modifying the typically sparse architecture of the network by including feedback from the output back into the network. We derive methods for using the values of the output weights from a trained echo-state network to set recurrent weights within the network. The result of this “transfer of learning” is a recurrent network that performs the task without requiring the output feedback present in the original network. We also discuss a hybrid version in which online learning is applied to both output and recurrent weights. Both approaches provide efficient ways of training recurrent networks to perform complex tasks. Through an analysis of the conditions required to make transfer of learning work, we define the concept of a “self-sensing” network state, and we compare and contrast this with compressed sensing.
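    A minimal sketch of the transfer idea, assuming a sinusoidal target and ridge-regressed output weights; this shows only the dense rank-one shortcut, whereas the paper's analysis concerns performing the transfer while preserving sparse connectivity.

```python
# Hedged sketch: train the output weights of an echo-state network
# driven through its feedback loop, then fold that loop into the
# recurrent weights so the network runs without explicit output feedback.
import numpy as np

rng = np.random.default_rng(5)
N, g, T = 300, 1.2, 600
W = g * rng.standard_normal((N, N)) / np.sqrt(N)  # recurrent weights
w_fb = rng.uniform(-1.0, 1.0, N)                  # output feedback weights

target = np.sin(np.linspace(0, 12 * np.pi, T))    # signal to generate

# Teacher forcing: feed the target through the feedback connections.
x, states = np.zeros(N), np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_fb * target[t])
    states[t] = x

# Ridge regression: the output at step t should predict target[t + 1].
X, Y = states[:-1], target[1:]
w_out = np.linalg.solve(X.T @ X + 1e-4 * np.eye(N), X.T @ Y)

# Transfer of learning: absorb the feedback loop into the recurrence,
# since w_fb * (w_out . x) == np.outer(w_fb, w_out) @ x.
W_auto = W + np.outer(w_fb, w_out)

# The transferred network now runs autonomously, with no feedback loop.
x, out = states[100], []
for _ in range(200):
    x = np.tanh(W_auto @ x)
    out.append(w_out @ x)
print(np.round(out[:8], 2))   # should resemble the sine target
```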

    ReservoirPy: an Efficient and User-Friendly Library to Design Echo State Networks

    We present a simple, user-friendly library called ReservoirPy, based on Python scientific modules. It provides a flexible interface to implement efficient Reservoir Computing (RC) architectures, with a particular focus on Echo State Networks (ESNs). Advanced features of ReservoirPy can improve computation time by up to 87.9% on a simple laptop compared to a basic Python implementation. Overall, we provide tutorials for hyperparameter tuning, offline and online training, fast spectral initialization, and parallel and sparse matrix computation on various tasks (Mackey-Glass and audio recognition tasks). In particular, we provide graphical tools to easily explore hyperparameters using random search with the help of the hyperopt library.
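    A short usage sketch of the workflow the abstract describes, following the ReservoirPy 0.3 node-composition API as documented; the hyperparameter values here are arbitrary.

```python
# Hedged example: one-step-ahead Mackey-Glass prediction with an ESN
# built from ReservoirPy nodes (offline ridge training, then a run).
from reservoirpy.nodes import Reservoir, Ridge
from reservoirpy.datasets import mackey_glass

X = mackey_glass(n_timesteps=2000)         # built-in Mackey-Glass series

reservoir = Reservoir(100, lr=0.3, sr=0.9) # 100 units, leak rate, spectral radius
readout = Ridge(ridge=1e-6)                # regularized linear readout
esn = reservoir >> readout                 # chain nodes into an ESN

esn = esn.fit(X[:1500], X[1:1501], warmup=100)  # predict the next step
predictions = esn.run(X[1500:-1])
```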