
    Reinforcement learning in populations of spiking neurons

    Population coding is widely regarded as a key mechanism for achieving reliable behavioral responses in the face of neuronal variability. But in standard reinforcement learning a flip side becomes apparent: learning slows down with increasing population size, since the global reinforcement becomes less and less related to the performance of any single neuron. We show that, in contrast, learning speeds up with increasing population size if feedback about the population response modulates synaptic plasticity in addition to global reinforcement. The two feedback signals (reinforcement and population-response signal) can be encoded by ambient neurotransmitter concentrations which vary slowly, yielding a fully online plasticity rule in which the learning of one stimulus is interleaved with the processing of the next. The assumption of a single additional feedback mechanism therefore reconciles biological plausibility with efficient learning.
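    The slow baseline the abstract contrasts against can be made concrete with a minimal sketch: a population of stochastic binary neurons, a majority-vote decision, and REINFORCE-style plasticity driven only by a global scalar reward. The task, population size, and learning rate below are illustrative assumptions; this is not the paper's exact plasticity rule, whose key addition is a second broadcast signal carrying the population response.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, D, eta = 100, 20, 0.05            # population size, input dim., learning rate (assumed)
    w = rng.normal(0.0, 0.1, (N, D))

    def trial(x, target):
        p = 1.0 / (1.0 + np.exp(-w @ x))           # per-neuron firing probability
        s = (rng.random(N) < p).astype(float)      # stochastic binary "spikes"
        decision = float(s.mean() > 0.5)           # population majority vote
        R = 1.0 if decision == target else -1.0    # global scalar reinforcement
        # Every synapse sees only the global reward R; the paper's proposal adds
        # a second broadcast signal (the population response, s.mean()) so that
        # learning speeds up rather than slows down as N grows.
        return eta * R * np.outer(s - p, x), R

    rewards = []
    for _ in range(5000):
        x = rng.normal(size=D)
        dw, R = trial(x, float(x[0] > 0))          # toy task: sign of first input
        w += dw
        rewards.append(R)
    print("mean reward over last 500 trials:", np.mean(rewards[-500:]))
    ```

    With only the global reward, each neuron's update is an unbiased but noisy gradient estimate, which is exactly why learning degrades as the population grows.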

    Unveiling the role of plasticity rules in reservoir computing

    Reservoir Computing (RC) is an appealing approach in Machine Learning that combines the high computational capabilities of Recurrent Neural Networks with a fast and easy training method. Likewise, successful implementation of neuro-inspired plasticity rules in RC artificial networks has boosted the performance of the original models. In this manuscript, we analyze the role that plasticity rules play in the changes that lead to better RC performance. To this end, we implement synaptic and non-synaptic plasticity rules in a paradigmatic example of an RC model: the Echo State Network. Testing on nonlinear time-series prediction tasks, we show evidence that the improved performance in all plastic models is linked to a decrease of the pairwise correlations in the reservoir, as well as a significant increase in individual neurons' ability to separate similar inputs in their activity space. Here we provide new insights into this observed improvement through the study of different stages of plastic learning. From the perspective of the reservoir dynamics, optimal performance is found to occur close to the so-called edge of instability. Our results also show that it is possible to combine different forms of plasticity (namely synaptic and non-synaptic rules) to further improve performance on prediction tasks, obtaining better results than those achieved with single-plasticity models.
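    The Echo State Network named here as the testbed can be sketched in a few lines: a fixed random reservoir scaled so the echo state property holds, and a ridge-regression readout as the only trained part. The time series, reservoir size, and hyperparameters below are illustrative assumptions; the paper's plasticity rules would additionally adapt the reservoir before the readout is fitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy one-step-ahead prediction task (assumed; the paper uses nonlinear benchmarks)
    T = 3000
    t = np.arange(T)
    u = np.sin(0.2 * t) * np.cos(0.031 * t)        # scalar input series
    target = np.roll(u, -1)                         # predict the next value

    # Echo State Network: fixed random reservoir, trained linear readout
    N = 200
    W_in = rng.uniform(-0.5, 0.5, (N, 1))
    W = rng.normal(0.0, 1.0, (N, N))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # spectral radius 0.9 (< 1)

    x = np.zeros(N)
    states = np.empty((T, N))
    for k in range(T):
        x = np.tanh(W @ x + W_in[:, 0] * u[k])      # reservoir state update
        states[k] = x

    # Discard washout, fit a ridge-regression readout on the collected states
    washout, lam = 200, 1e-6
    S, y = states[washout:-1], target[washout:-1]
    W_out = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ y)

    pred = S @ W_out
    nrmse = np.sqrt(np.mean((pred - y) ** 2)) / np.std(y)
    print(f"NRMSE: {nrmse:.4f}")
    ```

    Synaptic plasticity in this setting would modify W before training, while non-synaptic (intrinsic) plasticity would adapt per-neuron transfer parameters; the paper's finding is that both push the reservoir toward decorrelated, more separable states near the edge of instability.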

    Phenomenological models of synaptic plasticity based on spike timing

    Synaptic plasticity is considered to be the biological substrate of learning and memory. In this document we review phenomenological models of short-term and long-term synaptic plasticity, in particular spike-timing-dependent plasticity (STDP). The aim of the document is to provide a framework for classifying and evaluating different models of plasticity. We focus on phenomenological synaptic models that are compatible with integrate-and-fire-type neuron models, where each neuron is described by a small number of variables. This implies that synaptic update rules for short-term or long-term plasticity can depend only on spike timing and, potentially, on the membrane potential and the value of the synaptic weight, or on low-pass filtered (temporally averaged) versions of these variables. We examine the ability of the models to account for experimental data and to fulfill expectations derived from theoretical considerations. We further discuss their relations to teacher-based rules (supervised learning) and reward-based rules (reinforcement learning). All models discussed in this paper are suitable for large-scale network simulations.
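    As a concrete instance of the model class reviewed, here is a minimal pair-based STDP rule implemented with low-pass-filtered spike traces, the kind of update that depends only on spike timing and the current weight. Amplitudes and time constants are illustrative assumptions, not values from any specific model in the review.

    ```python
    import numpy as np

    A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes (assumed)
    tau_plus, tau_minus = 20.0, 20.0   # trace time constants in ms (assumed)
    dt = 1.0                           # simulation step (ms)

    def run_stdp(pre_spikes, post_spikes, w0=0.5):
        """pre_spikes/post_spikes: boolean arrays, one entry per time step."""
        w, x_pre, y_post = w0, 0.0, 0.0
        for k in range(len(pre_spikes)):
            # exponentially decaying traces of recent pre- and postsynaptic spikes
            x_pre -= dt / tau_plus * x_pre
            y_post -= dt / tau_minus * y_post
            if pre_spikes[k]:
                x_pre += 1.0
                w -= A_minus * y_post   # pre after post: depression
            if post_spikes[k]:
                y_post += 1.0
                w += A_plus * x_pre     # post after pre: potentiation
            w = min(max(w, 0.0), 1.0)   # hard bounds on the weight
        return w

    # pre leads post by 5 ms on every pairing, so net potentiation is expected
    steps = 1000
    pre, post = np.zeros(steps, bool), np.zeros(steps, bool)
    pre[100::100] = True
    post[105::100] = True
    print("final weight:", run_stdp(pre, post))
    ```

    Richer phenomenological models in this family add dependencies on the membrane potential or on triplets of spikes, but they keep the same trace-based, locally computable structure.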

    Glycinergic neurons and inhibitory transmission in the cerebellar nuclei

    The cerebellum is composed of a three-layered cortex and of nuclei and is responsible for the learned fine control of posture and movements. I combined a genetic approach (based on the use of transgenic mouse lines) with anatomical tracing, immunohistochemical staining, electrophysiological recordings and optogenetic stimulation to establish the distinctive characteristics of the inhibitory neurons of the cerebellar nuclei and to detail their connectivity and their role in the cerebellar circuitry. We showed that the glycinergic inhibitory neurons of the cerebellar nuclei constitute a distinct neuronal population, characterized by their mixed inhibitory GABAergic/glycinergic phenotype. These inhibitory neurons are also distinguished by their axonal plexus, which includes a local arborization within the cerebellar nuclei, where they contact principal output neurons, and a projection to the granular layer of the cerebellar cortex, where they terminate on the dendrites of Golgi cells. Finally, the inhibitory neurons of the cerebellar nuclei receive inhibitory afferents from Purkinje cells and may be contacted by mossy fibers or climbing fibers. We provided the first evidence of functional mixed transmission in the cerebellar nuclei and the first demonstration of a mixed inhibitory nucleo-cortical projection. Overall, our data establish the inhibitory neurons as the third cellular component of the cerebellar nuclei. Their importance in the modular organization of the cerebellum and their impact on sensory-motor integration remain to be confirmed by optogenetic experiments in vivo.

    Simulation and Theory of Large-Scale Cortical Networks

    Cerebral cortex is composed of intricate networks of neurons. These neuronal networks are strongly interconnected: every neuron receives, on average, input from thousands or more presynaptic neurons. In fact, to support such a number of connections, a majority of the volume in the cortical gray matter is filled by axons and dendrites. Besides the networks, neurons themselves are also highly complex. They possess an elaborate spatial structure and support various types of active processes and nonlinearities. In the face of such complexity, it seems necessary to abstract away some of the details and to investigate simplified models. In this thesis, such simplified models of neuronal networks are examined at varying levels of abstraction. Neurons are modeled as point neurons, both rate-based and spike-based, and networks are modeled as block-structured random networks. Crucially, at this level of abstraction, the models are still amenable to analytical treatment using the framework of dynamical mean-field theory. The main focus of this thesis is to leverage the analytical tractability of random networks of point neurons in order to relate the network structure and the neuron parameters to the dynamics of the neurons; in physics parlance, to bridge across the scales from neurons to networks. More concretely, four different models are investigated: 1) fully connected feedforward networks and vanilla recurrent networks of rate neurons; 2) block-structured networks of rate neurons in continuous time; 3) block-structured networks of spiking neurons; and 4) a multi-scale, data-based network of spiking neurons. We consider the first class of models in the light of Bayesian supervised learning and compute their kernel in the infinite-size limit. In the second class of models, we connect dynamical mean-field theory with large-deviation theory, calculate beyond-mean-field fluctuations, and perform parameter inference. For the third class of models, we develop a theory for the autocorrelation time of the neurons. Lastly, we consolidate data across multiple modalities into a layer- and population-resolved model of human cortex and compare its activity with cortical recordings. In two detours from the investigation of these four network models, we examine the distribution of neuron densities in cerebral cortex and present a software toolbox for mean-field analyses of spiking networks.
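    The second model class (block-structured random networks of rate neurons in continuous time) admits a compact simulation sketch. Block sizes, mean couplings, and the disorder strength below are illustrative assumptions, not parameters from the thesis; the point is only the structure: block-wise means plus Gaussian disorder, integrated with Euler steps.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    N = (400, 100)                    # neurons per block, e.g. excitatory/inhibitory (assumed)
    mu = np.array([[0.5, -1.0],       # mean coupling from block b (column) to block a (row)
                   [0.5, -1.0]])
    g = 1.5                           # disorder strength (assumed; > 1 gives chaotic dynamics)

    Ntot = sum(N)
    block = np.repeat([0, 1], N)      # block index of each neuron

    # Block-structured random coupling: mean mu[a, b] / N_b plus Gaussian disorder
    J = np.empty((Ntot, Ntot))
    for a in range(2):
        for b in range(2):
            rows, cols = block == a, block == b
            J[np.ix_(rows, cols)] = (mu[a, b] / N[b]
                + g * rng.normal(0.0, 1.0, (N[a], N[b])) / np.sqrt(Ntot))

    # Rate dynamics in continuous time: dx/dt = -x + J phi(x), Euler integration
    phi = np.tanh
    dt, T = 0.05, 2000
    x = rng.normal(0.0, 0.5, Ntot)
    mean_act = np.empty((T, 2))
    for k in range(T):
        x += dt * (-x + J @ phi(x))
        mean_act[k] = [phi(x[block == a]).mean() for a in range(2)]

    print("late-time block-averaged activity:", mean_act[-1])
    ```

    Dynamical mean-field theory replaces such a finite simulation with self-consistent equations for the block-wise means and autocorrelations, which is what makes these networks analytically tractable despite the disorder.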