102 research outputs found

    Particle-filtering approaches for nonlinear Bayesian decoding of neuronal spike trains

    The number of neurons that can be simultaneously recorded doubles every seven years. This ever-increasing number of recorded neurons opens up the possibility to address new questions and extract higher-dimensional stimuli from the recordings. Modeling neural spike trains as point processes, the task of extracting dynamical signals from spike trains is commonly set in the context of nonlinear filtering theory. Particle filter methods relying on importance weights are generic algorithms that solve the filtering task numerically, but exhibit a serious drawback when the problem dimensionality is high: they are known to suffer from the 'curse of dimensionality' (COD), i.e. the number of particles required for a certain performance scales exponentially with the number of observable dimensions. Here, we first briefly review the theory of filtering with point-process observations in continuous time. Based on this theory, we investigate both analytically and numerically the reason for the COD of weighted particle filtering approaches: similarly to particle filtering with continuous-time observations, the COD with point-process observations is due to the decay of the effective number of particles, an effect that becomes stronger as the number of observable dimensions increases. Given the success of unweighted particle filtering approaches in overcoming the COD for continuous-time observations, we introduce an unweighted particle filter for point-process observations, the spike-based Neural Particle Filter (sNPF), and show that it exhibits a similarly favorable scaling as the number of dimensions grows. Further, we derive rules for the parameters of the sNPF from a maximum likelihood learning approach. We finally employ a simple decoding task to illustrate the capabilities of the sNPF and to highlight one possible future application of our inference and learning algorithm.
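
    As a concrete illustration of the weight-decay problem, the following minimal sketch runs a standard weighted (bootstrap) particle filter on simulated point-process observations and monitors the effective sample size. It is a discrete-time toy version, not the paper's continuous-time formulation; the exponential tuning curves, the Ornstein-Uhlenbeck state dynamics, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal discrete-time sketch of a weighted (bootstrap) particle filter with
# point-process observations, monitoring the effective sample size (ESS).
# Hidden state: Ornstein-Uhlenbeck. Each of D neurons fires Poisson spikes
# with rate exp(c_d * x + b). All values are illustrative, not the paper's.
rng = np.random.default_rng(0)
D, N, T, dt = 10, 1000, 200, 0.01      # neurons, particles, steps, bin size
c = rng.normal(size=D)                  # per-neuron tuning (assumed)
b = np.log(10.0)                        # baseline log-rate (assumed)

# Simulate the latent state and binned spike counts.
x_true = np.zeros(T)
spikes = np.zeros((T, D))
for t in range(1, T):
    x_true[t] = x_true[t-1]*(1 - dt) + np.sqrt(dt)*rng.normal()
    spikes[t] = rng.poisson(np.exp(c*x_true[t] + b) * dt)

# Bootstrap filter: propagate with the prior, reweight with the Poisson
# likelihood (up to a particle-independent constant), resample on ESS decay.
particles = rng.normal(size=N)
weights = np.full(N, 1.0 / N)
min_ess = N
for t in range(1, T):
    particles = particles*(1 - dt) + np.sqrt(dt)*rng.normal(size=N)
    lam = np.exp(np.outer(c, particles) + b) * dt        # (D, N) bin means
    loglik = (spikes[t][:, None]*np.log(lam) - lam).sum(axis=0)
    weights *= np.exp(loglik - loglik.max())
    weights /= weights.sum()
    ess = 1.0 / np.sum(weights**2)      # effective number of particles
    min_ess = min(min_ess, ess)
    if ess < N / 2:                     # resample when weights degenerate
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0/N)

print(f"posterior mean: {np.sum(weights*particles):.3f}, worst ESS: {min_ess:.0f}")
```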

    A generalized priority-based model for smartphone screen touches

    The distribution of intervals between human actions such as email posts or keyboard strokes exhibits distinct properties at short vs. long time scales. For instance, at long time scales, which are presumably controlled by complex processes such as planning and decision making, it has been shown that inter-event intervals follow a scale-invariant (or power-law) distribution. In contrast, at shorter time scales, which are governed by different processes such as sensorimotor skill, they do not follow the same distribution, and little is known about how they relate to the scale-invariant pattern. Here, we analyzed 9 million intervals between smartphone screen touches of 84 individuals, spanning several orders of magnitude (from milliseconds to hours). To capture these intervals, we extend a priority-based generative model to smartphone touching events. At short time scales, the model is governed by refractory effects, while at longer time scales, the inter-touch intervals are governed by the priority difference between smartphone tasks and other tasks. The flexibility of the model allows it to capture inter-individual variations at short and long time scales, while its tractability enables efficient model fitting. According to our model, each individual has a specific power-law exponent which is tightly related to the effective refractory time constant, suggesting that the motor processes which influence fast actions are related to the higher cognitive processes governing the longer inter-event intervals.
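
    The two ingredients can be illustrated in a toy form: a Barabási-style two-task priority queue (the phone task competes with "everything else"; the executed task's priority is re-drawn) plus a refractory delay after each touch. The uniform priority distributions, the exponential refractory delay, and all constants in the sketch below are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

# Toy two-task priority queue in the spirit of Barabasi-type models: task A
# ("touch the phone") competes with task B ("everything else"); each step
# executes whichever task has higher priority and re-draws the winner's
# priority, and every touch is followed by a refractory motor delay. The
# uniform priorities and exponential delay are illustrative assumptions.
rng = np.random.default_rng(1)
steps, tau_r = 500_000, 3.0            # simulation steps, refractory scale
xa, xb = rng.random(), rng.random()    # current priorities of tasks A and B
intervals, last_touch, t = [], 0.0, 0.0
for _ in range(steps):
    t += 1.0
    if xa > xb:                        # touch task wins: a touch occurs
        t += rng.exponential(tau_r)    # refractory delay after the touch
        intervals.append(t - last_touch)
        last_touch = t
        xa = rng.random()              # re-draw the executed task's priority
    else:
        xb = rng.random()              # "other" task executed instead

intervals = np.asarray(intervals)
print(f"{len(intervals)} touches; inter-touch intervals span "
      f"{intervals.min():.2f} to {intervals.max():.0f} time steps")
```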

    Interactions between short-term and long-term plasticity: shooting for a moving target

    Far from being static transmission units, synapses are highly dynamic elements that change over multiple time scales depending on the history of the neural activity of both the pre- and postsynaptic neuron. Moreover, synaptic changes on different time scales interact: long-term plasticity (LTP) can modify the properties of short-term plasticity (STP) at the same synapse. Most existing theories of synaptic plasticity focus on only one of these time scales (either STP, LTP, or late-LTP), and the theoretical principles underlying their interactions are thus largely unknown. Here we develop a normative model of synaptic plasticity that combines both STP and LTP and predicts specific patterns for their interactions. Recently, it has been proposed that STP arranges for the local postsynaptic membrane potential at a synapse to behave as an optimal estimator of the presynaptic membrane potential based on the incoming spikes. Here we generalize this approach by considering an optimal estimator of a nonlinear function of the membrane potential and the long-term synaptic efficacy -- which itself may be subject to change on a slower time scale. We find that an increase in the long-term synaptic efficacy necessitates changes in the dynamics of STP. More precisely, for a realistic nonlinear function to be estimated, our model predicts that after the induction of LTP, which causes the long-term synaptic efficacy to increase, a depressing synapse should become even more depressing. That is, in a protocol using trains of presynaptic stimuli, as the initial EPSP becomes stronger due to LTP, subsequent EPSPs should become weaker, and this weakening should be more pronounced after LTP. This form of redistribution of synaptic efficacies agrees well with electrophysiological data on synapses connecting layer 5 pyramidal neurons.
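
    The predicted direction of the interaction can be illustrated with a standard phenomenological model of a depressing synapse (Tsodyks-Markram style). This is a stand-in for the paper's normative model, used only to show the effect: coupling an LTP-induced increase in efficacy to a larger release probability (an assumption made here for illustration) strengthens the first EPSP while making the normalized train depress faster.

```python
import numpy as np

# Tsodyks-Markram-style depressing synapse (depression only). The paper's
# normative model is an optimal-estimation argument; this sketch merely
# illustrates the predicted *direction* of the STP/LTP interaction under the
# assumption that LTP raises both the efficacy A and release probability U.
def epsp_train(A, U, n_spikes=5, isi=0.05, tau_rec=0.5):
    """EPSP amplitudes for a regular presynaptic train."""
    x, amps = 1.0, []                  # x: fraction of available resources
    for _ in range(n_spikes):
        amps.append(A * U * x)         # EPSP ~ efficacy * released fraction
        x -= U * x                     # resources consumed by the spike
        x += (1.0 - x) * (1.0 - np.exp(-isi/tau_rec))  # recovery until next spike
    return np.array(amps)

before = epsp_train(A=1.0, U=0.4)      # pre-LTP parameters (assumed values)
after = epsp_train(A=1.6, U=0.6)       # post-LTP: larger A and U (assumed)

print("normalized EPSPs before LTP:", np.round(before/before[0], 2))
print("normalized EPSPs after LTP :", np.round(after/after[0], 2))
# After "LTP" the first EPSP grows while the normalized train depresses
# faster: a redistribution of synaptic efficacy.
```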

    A statistical model for in vivo neuronal dynamics

    Single-neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. These types of models have been extensively fitted to in vitro data, where the input current is controlled. They are, however, of little use when it comes to characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single-neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process where the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as on the spiking history. We first show that the model has a rich dynamical repertoire, since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptation, as well as arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. Finally, we show that this model can be used to characterize and therefore precisely compare intracellular in vivo recordings from different animals and experimental conditions.
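
    A minimal generative sketch of this model class follows: an Ornstein-Uhlenbeck process stands in for the Gaussian-process subthreshold potential, and the conditional spike intensity depends exponentially on the voltage and is suppressed by a single exponential spike-history trace. The specific nonlinearity, the history kernel, and all parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

# Generative sketch: OU subthreshold potential (a Gaussian process) plus a
# conditional spike intensity that grows exponentially with voltage and is
# suppressed by an exponentially decaying spike-history trace.
rng = np.random.default_rng(2)
T, dt = 50_000, 0.001                  # 50 s at 1 ms resolution
tau_m, sigma = 0.02, 20.0              # membrane time constant (s), noise scale
theta, beta, lam0 = 2.0, 1.0, 5.0      # soft threshold, sharpness, base rate (Hz)
tau_h = 0.01                           # spike-history time constant (s)

V = np.zeros(T)                        # subthreshold potential (zero mean)
spikes = np.zeros(T, dtype=bool)
h = 0.0                                # spike-history (self-inhibition) trace
for t in range(1, T):
    V[t] = V[t-1] - (V[t-1]/tau_m)*dt + sigma*np.sqrt(dt)*rng.normal()
    h *= np.exp(-dt/tau_h)             # history trace decays between spikes
    lam = lam0*np.exp(beta*(V[t] - theta) - h)   # conditional intensity (Hz)
    if rng.random() < 1.0 - np.exp(-lam*dt):     # spike in this bin?
        spikes[t] = True
        h += 5.0                       # history jump after a spike (assumed)

print(f"simulated firing rate: {spikes.sum()/(T*dt):.1f} Hz")
```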

    The Hitchhiker's Guide to Nonlinear Filtering

    Nonlinear filtering is the problem of online estimation of a dynamic hidden variable from incoming data, with vast applications in fields ranging from engineering and machine learning to economics and the natural sciences. We start our review of the theory of nonlinear filtering from the simplest 'filtering' task we can think of, namely static Bayesian inference. From there we continue our journey through discrete-time models, which are usually encountered in machine learning, and generalize to and further emphasize continuous-time filtering theory. The idea of changing the probability measure connects and elucidates several aspects of the theory, such as the parallels between the discrete- and continuous-time problems and between different observation models. Furthermore, it gives insight into the construction of particle filtering algorithms. This tutorial is targeted at scientists and engineers and should serve as an introduction to the main ideas of nonlinear filtering, and as a segue to more advanced and specialized literature.
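
    The recursion at the heart of the discrete-time part of this story fits in a few lines. The sketch below implements the exact prediction/update filter for a toy two-state hidden Markov model with Gaussian observations; the model and its parameters are invented for illustration.

```python
import numpy as np

# Exact discrete-time filter for a finite-state HMM: alternate a prediction
# step through the transition kernel with a Bayesian update by the
# observation likelihood. The toy two-state model is illustrative.
P = np.array([[0.95, 0.05],            # P[i, j] = Prob(x_t = j | x_{t-1} = i)
              [0.10, 0.90]])

def obs_lik(y, state_means=(0.0, 2.0), sd=1.0):
    """Likelihood p(y | x) of a Gaussian observation for each hidden state."""
    m = np.asarray(state_means)
    return np.exp(-0.5*((y - m)/sd)**2)

def filter_step(pi, y):
    """One prediction + update step of the exact nonlinear filter."""
    pred = pi @ P                      # prediction through the dynamics
    post = pred * obs_lik(y)           # multiply by observation likelihood
    return post / post.sum()           # normalize (Bayes' rule)

pi = np.array([0.5, 0.5])              # prior over the hidden state
for y in [0.1, 1.9, 2.2, -0.3]:        # a short stream of observations
    pi = filter_step(pi, y)
    print(np.round(pi, 3))
```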

    The Neural Particle Filter

    The robust estimation of dynamically changing features, such as the position of prey, is one of the hallmarks of perception. On an abstract, algorithmic level, nonlinear Bayesian filtering, i.e. the estimation of temporally changing signals based on the history of observations, provides a mathematical framework for dynamic perception in real time. Since the general nonlinear filtering problem is analytically intractable, particle filters are considered among the most powerful approaches to approximating the solution numerically. Yet these algorithms predominantly rely on importance weights, and it thus remains an unresolved question how the brain could implement such an inference strategy with a neuronal population. Here, we propose the Neural Particle Filter (NPF), a weightless particle filter that can be interpreted as the neuronal dynamics of a recurrently connected neural network that receives feed-forward input from sensory neurons and represents the posterior probability distribution in terms of samples. Specifically, this algorithm bridges the gap between the computational task of online state estimation and an implementation that allows networks of neurons in the brain to perform nonlinear Bayesian filtering. The model not only captures the properties of temporal and multisensory integration according to Bayesian statistics, but also allows online learning with a maximum likelihood approach. With an example from multisensory integration, we demonstrate that the numerical performance of the model is adequate to account for both filtering and identification problems. Due to the weightless approach, our algorithm alleviates the 'curse of dimensionality' and thus outperforms conventional, weighted particle filters in higher dimensions for a limited number of particles.
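
    A minimal sketch of a weightless particle filter in this spirit is shown below: each particle follows the prior dynamics plus a correction proportional to its own prediction error, and the posterior is represented by the equally weighted ensemble. The constant gain used here is hand-set for illustration; the NPF itself obtains its gain and parameters from the filtering equations and maximum likelihood learning.

```python
import numpy as np

# Weightless particle filter sketch: each particle follows the prior (OU)
# dynamics plus a correction proportional to its own prediction error; the
# posterior is the equally weighted ensemble. The gain K is hand-set here.
rng = np.random.default_rng(3)
T, dt, N = 2000, 0.01, 100             # steps, step size, particles
sig_x, sig_y, K = 1.0, 0.5, 1.0        # process noise, obs noise, gain (assumed)

x, particles, sq_err = 0.0, rng.normal(size=N), 0.0
for t in range(T):
    x += -x*dt + sig_x*np.sqrt(dt)*rng.normal()          # hidden OU state
    dY = x*dt + sig_y*np.sqrt(dt)*rng.normal()           # observation increment
    particles += (-particles*dt                          # prior drift
                  + K*(dY - particles*dt)                # prediction-error term
                  + sig_x*np.sqrt(dt)*rng.normal(size=N))  # per-particle noise
    sq_err += (particles.mean() - x)**2

print(f"RMSE of the ensemble mean: {np.sqrt(sq_err/T):.3f}")
```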

    Online Maximum Likelihood Estimation of the Parameters of Partially Observed Diffusion Processes

    We revisit the problem of estimating the parameters of a partially observed diffusion process, consisting of a hidden state process and an observed process, with a continuous time parameter. The estimation is to be done online, i.e. the parameter estimate should be updated recursively based on the observation filtration. Here, we use an old but under-exploited representation of the incomplete-data log-likelihood function in terms of the filter of the hidden state given the observations. By performing stochastic gradient ascent, we obtain a fully recursive algorithm for the time evolution of the parameter estimate. We prove the convergence of the algorithm under suitable conditions regarding the ergodicity of the process consisting of state, filter, and tangent filter. Additionally, our parameter estimation is shown numerically to have the potential to improve suboptimal filters, and it can be applied even when the system is not identifiable due to parameter redundancies. Online parameter estimation is a challenging problem that is ubiquitous in fields such as robotics, neuroscience, and finance, where adaptive filters and optimal controllers must be designed for unknown or changing systems.
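
    The structure of the recursion can be illustrated in discrete time, where the filter is an exact Kalman filter. The sketch below propagates the filter together with a tangent filter (derivatives of the filter statistics with respect to the parameter) and performs stochastic gradient ascent on the incremental log-likelihood. The scalar AR(1) model, the step-size schedule, and all constants are illustrative assumptions; the paper treats general partially observed diffusions in continuous time.

```python
import numpy as np

# Recursive MLE sketch: Kalman filter + tangent filter (d/da of the filter
# statistics) + online gradient ascent on the per-step log-likelihood
# l_t = -0.5*(log S_t + nu_t^2/S_t) of the innovations.
rng = np.random.default_rng(4)
a_true, Q, R, T = 0.8, 0.1, 0.2, 50_000
x, a = 0.0, 0.2                        # hidden state; initial (wrong) estimate
m, P = 0.0, 1.0                        # filter mean and variance
dm, dP = 0.0, 0.0                      # tangent filter: d(m)/da, d(P)/da
for t in range(1, T + 1):
    x = a_true*x + np.sqrt(Q)*rng.normal()       # hidden AR(1) state
    y = x + np.sqrt(R)*rng.normal()              # observation
    m_p, dm_p = a*m, m + a*dm                    # prediction + its derivative
    P_p, dP_p = a*a*P + Q, 2*a*P + a*a*dP
    nu, S = y - m_p, P_p + R                     # innovation and its variance
    K, dK = P_p/S, dP_p*R/(S*S)                  # gain (note dS/da = dP_p)
    grad = -0.5*dP_p/S + nu*dm_p/S + 0.5*nu*nu*dP_p/(S*S)
    a = np.clip(a + grad/t**0.7, -0.99, 0.99)    # decaying-step ascent, kept stable
    m, dm = m_p + K*nu, dm_p + dK*nu - K*dm_p    # Kalman + tangent update
    P, dP = (1-K)*P_p, -dK*P_p + (1-K)*dP_p

print(f"estimated a = {a:.3f} (true value {a_true})")
```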

    Theory of non-linear spike-time-dependent plasticity

    A fascinating property of the brain is its ability to continuously evolve and adapt to a constantly changing environment. This ability to change over time, called plasticity, is mainly implemented at the level of the connections between neurons (i.e. the synapses). So if we want to understand the ability of the brain to evolve and to store new memories, it is necessary to study the rules that govern synaptic plasticity. Among the large variety of factors which influence synaptic plasticity, we focus our study on the dependence upon the precise timing of the pre- and postsynaptic spikes. This form of plasticity, called Spike-Timing-Dependent Plasticity (STDP), works as follows: if a presynaptic spike is elicited before a postsynaptic one, the synapse is up-regulated (or potentiated), whereas if the opposite occurs, the synapse is down-regulated (or depressed). In this thesis, we propose several models of STDP which address the two following questions: (1) what is the functional role of a synapse which elicits STDP, and (2) what is the most compact and accurate description of STDP? In the first two papers contained in this thesis, we show that in a supervised scenario, the best learning rule for enhancing the precision of the postsynaptic spikes is consistent with STDP. In the three following papers, we show that the information transmission between the input and output spike trains is maximized if synaptic plasticity is governed by a rule similar to STDP. Moreover, we show that this infomax principle combined with a homeostatic constraint leads to the well-known Bienenstock-Cooper-Munro (BCM) learning rule. Finally, in the last two papers, we propose a phenomenological model of STDP which considers not only pairs of pre- and postsynaptic spikes, but also triplets of spikes (e.g. 1 pre and 2 post, or 1 post and 2 pre). This model can reproduce a lot of experimental results and can be mapped to the BCM learning rule.
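
    A minimal implementation of a pair-plus-triplet STDP rule of the kind described is sketched below: pairwise terms depend on one pre- and one postsynaptic trace, and triplet terms additionally on a second, slower trace. The amplitudes and time constants are illustrative, not the fitted values from the papers.

```python
import numpy as np

# Pair + triplet STDP: fast traces (r1, o1) drive the pairwise terms, slow
# traces (r2, o2) the triplet terms. Amplitudes/time constants are assumed.
tau_plus, tau_minus = 0.017, 0.034     # fast pre/post trace time constants (s)
tau_x, tau_y = 0.101, 0.125            # slow triplet trace time constants (s)
A2p, A2m, A3p, A3m = 5e-3, 7e-3, 6e-3, 2e-4

def triplet_stdp(pre_times, post_times, w=0.5):
    """Evolve the weight w through a merged, time-sorted spike list."""
    events = sorted([(t, 'pre') for t in pre_times] +
                    [(t, 'post') for t in post_times])
    r1 = r2 = o1 = o2 = 0.0            # pre traces, post traces
    t_last = 0.0
    for t, kind in events:
        dt = t - t_last                # decay all traces to the current spike
        r1 *= np.exp(-dt/tau_plus); r2 *= np.exp(-dt/tau_x)
        o1 *= np.exp(-dt/tau_minus); o2 *= np.exp(-dt/tau_y)
        if kind == 'pre':
            w -= o1*(A2m + A3m*r2)     # depression: pair + triplet term
            r1 += 1.0; r2 += 1.0       # increment own traces *after* the update
        else:
            w += r1*(A2p + A3p*o2)     # potentiation: pair + triplet term
            o1 += 1.0; o2 += 1.0
        t_last = t
    return w

# Post-before-pre pairing at 10 ms offset, repeated 60 times at 1 Hz:
pre = [i*1.0 + 0.010 for i in range(60)]
post = [i*1.0 for i in range(60)]
print(f"weight after post->pre pairing: {triplet_stdp(pre, post):.4f}")
```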