    Particle-filtering approaches for nonlinear Bayesian decoding of neuronal spike trains

    The number of neurons that can be simultaneously recorded doubles every seven years. This ever-increasing number of recorded neurons opens up the possibility of addressing new questions and extracting higher-dimensional stimuli from the recordings. Modeling neural spike trains as point processes, the task of extracting dynamical signals from spike trains is commonly set in the context of nonlinear filtering theory. Particle-filter methods relying on importance weights are generic algorithms that solve the filtering task numerically, but they exhibit a serious drawback when the problem dimensionality is high: they are known to suffer from the 'curse of dimensionality' (COD), i.e. the number of particles required for a given performance scales exponentially with the number of observable dimensions. Here, we first briefly review the theory of filtering with point-process observations in continuous time. Based on this theory, we investigate both analytically and numerically the reason for the COD of weighted particle-filtering approaches: as with particle filtering on continuous-time observations, the COD with point-process observations is due to the decay of the effective number of particles, an effect that grows stronger as the number of observable dimensions increases. Given the success of unweighted particle-filtering approaches in overcoming the COD for continuous-time observations, we introduce an unweighted particle filter for point-process observations, the spike-based Neural Particle Filter (sNPF), and show that it exhibits similarly favorable scaling as the number of dimensions grows. Further, we derive rules for the parameters of the sNPF from a maximum-likelihood learning approach. We finally employ a simple decoding task to illustrate the capabilities of the sNPF and to highlight one possible future application of our inference and learning algorithm.
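
    To make the effective-sample-size argument concrete, the following minimal sketch implements a weighted (bootstrap) particle filter with Poisson spike-count observations. It is not the paper's sNPF; the model, function names, and parameter values are all illustrative assumptions. The quantity it tracks, ESS = 1/Σ w², is the effective number of particles whose decay underlies the COD.

        import numpy as np

        def bootstrap_pf_spikes(spikes, n_particles=1000, dt=1e-3,
                                sigma_x=0.5, gain=2.0, base_rate=10.0):
            """spikes: (T, D) array of spike counts per time bin; toy model only."""
            T, D = spikes.shape
            x = np.zeros(n_particles)                    # latent state per particle
            w = np.full(n_particles, 1.0 / n_particles)  # importance weights
            x_est, ess_trace = [], []
            for t in range(T):
                # diffusion prior on the latent state
                x = x + sigma_x * np.sqrt(dt) * np.random.randn(n_particles)
                # conditional Poisson intensity, shared here by all D observed cells
                lam = base_rate * np.exp(gain * x)
                # point-process log-likelihood of this bin, summed over dimensions
                loglik = spikes[t].sum() * np.log(lam * dt) - D * lam * dt
                w = w * np.exp(loglik - loglik.max())
                w /= w.sum()
                ess = 1.0 / np.sum(w ** 2)               # effective sample size
                ess_trace.append(ess)
                x_est.append(np.dot(w, x))
                if ess < n_particles / 2:                # multinomial resampling
                    idx = np.random.choice(n_particles, n_particles, p=w)
                    x = x[idx]
                    w = np.full(n_particles, 1.0 / n_particles)
            return np.array(x_est), np.array(ess_trace)

        # Example: watch the ESS trace for D = 50 cells of unstructured spikes;
        # real decoding would of course use spikes driven by a latent signal.
        spikes = np.random.binomial(1, 0.01, size=(200, 50))
        x_hat, ess = bootstrap_pf_spikes(spikes)

    Increasing D makes each bin's likelihood sharper, so the ESS collapses faster between resampling steps; an unweighted filter such as the sNPF sidesteps this by dispensing with importance weights altogether.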

    Synaptic mechanisms of interference in working memory

    Information from preceding trials of cognitive tasks can bias performance in the current trial, a phenomenon referred to as interference. Subjects performing visual working-memory tasks exhibit interference in their trial-to-trial response correlations: the recalled target location in the current trial is biased in the direction of the target presented in the previous trial. We present modeling work that (a) develops a probabilistic inference model of this history-dependent bias, and (b) links our probabilistic model to the computations of a recurrent network in which short-term facilitation accounts for the dynamics of the observed bias. Network connectivity is reshaped dynamically during each trial, providing a mechanism for generating predictions from prior-trial observations. Applying timescale-separation methods, we obtain a low-dimensional description of the trial-to-trial bias based on the history of target locations. The model's response statistics have a mean centered at the true target location across many trials, as is typical of such visual working-memory tasks. Furthermore, we demonstrate task protocols for which the plastic model performs better than a model with static connectivity: repetitively presented targets are better retained in working memory than targets drawn from uncorrelated sequences. (Comment: 28 pages, 7 figures.)
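
    A hypothetical reduced form in the spirit of the low-dimensional description mentioned above (the functional form, names, and parameters below are placeholders, not the paper's fitted model): the recalled location is pulled toward the previous trial's target, with an attraction that decays over the intertrial delay, consistent with short-term facilitation.

        import numpy as np

        def recalled_location(theta_target, theta_prev, intertrial_delay,
                              bias_gain=0.15, tau_facil=5.0, sigma=0.05):
            """Circular target locations in radians; all parameters illustrative."""
            # attraction toward the previous target, decaying with the delay
            attraction = bias_gain * np.exp(-intertrial_delay / tau_facil)
            drift = attraction * np.sin(theta_prev - theta_target)
            return theta_target + drift + sigma * np.random.randn()

    Averaged over many trials with targets drawn symmetrically, the drift term cancels, reproducing the unbiased mean response the abstract describes.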

    Natural-gradient learning for spiking neurons

    In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights. This problem relates, for example, to neuronal morphology: synapses which are functionally equivalent in terms of their impact on somatic firing can differ substantially in spine size due to their different positions along the dendritic tree. Classical theories based on Euclidean gradient descent can easily lead to inconsistencies due to such parametrization dependence. These issues are resolved in the framework of Riemannian geometry, in which we propose that plasticity instead follows natural gradient descent. Under this hypothesis, we derive a synaptic learning rule for spiking neurons that couples functional efficiency with the explanation of several well-documented biological phenomena such as dendritic democracy, multiplicative scaling and heterosynaptic plasticity. We therefore suggest that in its search for functional synaptic plasticity, evolution might have come up with its own version of natural gradient descent. (Comment: joint senior authorship of Walter M. Senn and Mihai A. Petrovici.)
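
    In standard notation (these are the generic definitions, not the paper's spiking-specific derivation), the contrast between the two update rules is:

        \Delta w_{\mathrm{Eucl}} \;=\; -\eta\,\nabla_{w} C(w)
        \qquad\text{vs.}\qquad
        \Delta w_{\mathrm{nat}} \;=\; -\eta\, G(w)^{-1}\,\nabla_{w} C(w),

        G(w) \;=\; \mathbb{E}_{y\sim p(y\mid w)}\!\left[\nabla_{w}\log p(y\mid w)\,\nabla_{w}\log p(y\mid w)^{\top}\right]

    Here C is the cost and G(w) the Fisher information of the neuron's output distribution p(y|w). Because G transforms as a metric under a change of weight coordinates, the induced change in the neuron's input-output function is the same in any parametrization, which is exactly the invariance the abstract appeals to.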

    Development of a cerebellar mean field model: the theoretical framework, the implementation and the first application

    Brain modeling constantly evolves to improve the accuracy of simulated brain dynamics, with the ambitious aim of building a digital twin of the brain. Models tuned to the specific features of individual brain regions empower brain simulations by introducing bottom-up physiological properties into data-driven simulators. Although the cerebellum contains 80% of the brain's neurons and is deeply involved in a wide range of functions, from sensorimotor to cognitive ones, a specific cerebellar model has been missing. Furthermore, its quasi-crystalline multi-layer circuitry differs deeply from that of the cerebral cortex, so it is hard to imagine a single general model suitable for realistic simulation of both the cerebellar and the cerebral cortex. The present thesis tackles the challenge of developing a specific model for the cerebellum. Specifically, a multi-neuron, multi-layer mean-field (MF) model of the cerebellar network, including Granule Cells, Golgi Cells, Molecular Layer Interneurons, and Purkinje Cells, was implemented and validated against experimental data and against the corresponding spiking neural network microcircuit model. The cerebellar MF model was built as a system of interdependent equations, in which the single neuronal populations and topological parameters were captured by neuron-specific interdependent Transfer Functions. The model's time resolution was optimized using Local Field Potentials recorded experimentally with a high-density multielectrode array from acute mouse cerebellar slices. The MF model satisfactorily captured the average discharge of the different microcircuit neuronal populations in response to various input patterns and was able to predict the changes in Purkinje Cell firing patterns occurring in specific behavioral conditions: cortical plasticity mapping, which drives learning in associative tasks, and Molecular Layer Interneuron feed-forward inhibition, which controls Purkinje Cell activity patterns. The cerebellar multi-layer MF model thus provides a computationally efficient tool for investigating the causal relationship between microscopic neuronal properties and ensemble brain activity in health and disease. Furthermore, preliminary attempts to simulate a pathological cerebellum were made with a view to introducing our multi-layer cerebellar MF model into whole-brain simulators to realize patient-specific treatments, moving towards personalized medicine. Two preliminary studies assessed the substantial impact of the cerebellum on whole-brain dynamics and its role in modulating complex responses in causally connected cerebral regions, confirming that a specific model is required to further investigate the cerebellum-on-cerebrum influence. The framework presented in this thesis allows the development of multi-layer MF models capturing the features of a specific brain region (e.g., cerebellum, basal ganglia), defining a general strategy for building a pool of biologically grounded MF models for computationally feasible simulations. Interconnected bottom-up MF models integrated into large-scale simulators would capture the specific features of different brain regions, while the applications of a virtual brain would have substantial real-world impact, ranging from the characterization of neurobiological processes to subject-specific preoperative planning and the development of neuroprosthetic devices.
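
    The generic structure of such a model can be sketched as coupled rate equations, one per population, each relaxing toward a population-specific transfer function of its summed input. The sketch below is a toy instance under stated assumptions: the couplings, time constants, and single sigmoidal transfer function are placeholders, not the thesis's data-derived, neuron-specific Transfer Functions.

        import numpy as np

        POPS = ["GrC", "GoC", "MLI", "PC"]           # granule, Golgi, interneuron, Purkinje
        K = np.array([[ 0.0, -1.5,  0.0,  0.0],     # K[i, j]: effective influence of
                      [ 1.0,  0.0,  0.0,  0.0],     # population j on population i
                      [ 1.2,  0.0, -0.5,  0.0],     # (signs: excitation vs. inhibition;
                      [ 1.5,  0.0, -1.0,  0.0]])    # values illustrative)
        TAU = np.array([5.0, 10.0, 8.0, 6.0])        # relaxation times (ms), illustrative

        def F(h):
            """Placeholder sigmoidal transfer function (output rate in Hz)."""
            return 100.0 / (1.0 + np.exp(-(h - 2.0)))

        def simulate(mossy_rate, T=200.0, dt=0.1):
            nu = np.zeros(4)                          # mean firing rate per population
            ext = np.array([mossy_rate, 0.0, 0.0, 0.0])  # mossy-fiber drive onto GrC
            trace = []
            for _ in range(int(T / dt)):
                h = K @ nu + ext                      # summed input to each population
                nu = nu + dt * (-nu + F(h)) / TAU     # relax toward F of the input
                trace.append(nu.copy())
            return np.array(trace)                    # (steps, 4) population rates

        rates = simulate(mossy_rate=4.0)

    In the thesis, each population instead gets its own Transfer Function fitted to the spiking microcircuit model, which is what lets the MF model reproduce the measured population discharges.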

    Training deep neural density estimators to identify mechanistic models of neural dynamics

    Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine-learning tool that uses deep neural density estimators, trained on model simulations, to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features, and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin-Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses about underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
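
    This family of methods is available in the open-source sbi toolbox that grew out of this line of work. Below is a minimal usage sketch assuming that package (API names as of its early releases; details may differ by version), with a trivial toy simulator standing in for a real mechanistic model such as Hodgkin-Huxley.

        import torch
        from sbi.inference import SNPE
        from sbi.utils import BoxUniform

        # uniform prior over three hypothetical model parameters
        prior = BoxUniform(low=torch.zeros(3), high=torch.ones(3))

        def simulator(theta):
            # stand-in for a mechanistic simulator: maps parameters to
            # summary features of the simulated response
            return theta + 0.1 * torch.randn_like(theta)

        theta = prior.sample((1000,))
        x = simulator(theta)

        # train a neural density estimator on (parameter, data) pairs ...
        inference = SNPE(prior=prior)
        density_estimator = inference.append_simulations(theta, x).train()

        # ... then amortize: evaluate the posterior for any new observation
        posterior = inference.build_posterior(density_estimator)
        x_o = torch.tensor([0.5, 0.5, 0.5])          # observed data features
        samples = posterior.sample((5000,), x=x_o)   # parameters compatible with x_o

    The amortization step is what makes the method "rapidly analyze new data after initial training": once the estimator is trained, posteriors for new observations require no further simulations.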

    Dynamics of Large Spiking Neural Networks
