9 research outputs found

    How Gibbs distributions may naturally arise from synaptic adaptation mechanisms. A model-based argumentation

    This paper addresses two questions in the context of neuronal network dynamics, using methods from dynamical systems theory and statistical physics: (i) how to characterize the statistical properties of the sequences of action potentials ("spike trains") produced by neuronal networks, and (ii) what are the effects of synaptic plasticity on these statistics? We introduce a framework in which spike trains are associated with a coding of membrane potential trajectories and, in important explicit examples (the so-called gIF models), actually constitute a symbolic coding. On this basis, we use the thermodynamic formalism from ergodic theory to show that Gibbs distributions are natural probability measures for describing the statistics of spike trains, given the empirical averages of prescribed quantities. As a second result, we show that Gibbs distributions also arise naturally when considering "slow" synaptic plasticity rules, where the characteristic time for synapse adaptation is much longer than the characteristic time of the neuron dynamics.
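
    A minimal sketch of the maximum-entropy idea behind Gibbs distributions (illustrative only; the paper uses the thermodynamic formalism for gIF dynamics, not this toy fit): the fields h_i, the synthetic raster, and the learning rate below are all assumptions made for the example, which fits P(s) proportional to exp(sum_i h_i s_i) so that the model firing rates match prescribed empirical averages.
```python
# Fit a Gibbs distribution P(s) ∝ exp(h . s) over binary spike words s in {0,1}^N
# so that model firing rates match empirical averages (maximum-entropy view).
import itertools
import numpy as np

rng = np.random.default_rng(0)
spikes = (rng.random((1000, 3)) < [0.2, 0.5, 0.8]).astype(float)  # synthetic raster (T x N)
target = spikes.mean(axis=0)                                      # empirical firing rates

N = spikes.shape[1]
patterns = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
h = np.zeros(N)                                                   # Gibbs "fields"

for _ in range(2000):                 # gradient ascent on the log-likelihood
    logw = patterns @ h
    p = np.exp(logw - logw.max())
    p /= p.sum()
    model = p @ patterns              # model averages under the current Gibbs measure
    h += 0.5 * (target - model)       # move the fields to match prescribed averages

print("empirical rates:", target, " model rates:", p @ patterns)
```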

    Learning as filtering: Implications for spike-based plasticity.

    Most normative models in computational neuroscience describe the task of learning as the optimisation of a cost function with respect to a set of parameters. However, learning as optimisation fails to account for a time-varying environment during the learning process, and the resulting point estimate in parameter space does not account for uncertainty. Here, we frame learning as filtering, i.e., a principled method for including time and parameter uncertainty. We derive the filtering-based learning rule for a spiking neuronal network, the Synaptic Filter, and show its computational and biological relevance. For the computational relevance, we show that filtering improves the weight estimation performance compared to a gradient learning rule with an optimal learning rate. The dynamics of the mean of the Synaptic Filter is consistent with spike-timing-dependent plasticity (STDP), while the dynamics of the variance makes novel predictions regarding spike-timing-dependent changes of EPSP variability. Moreover, the Synaptic Filter explains experimentally observed negative correlations between homo- and heterosynaptic plasticity.
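
    A toy illustration of the learning-as-filtering argument (not the paper's Synaptic Filter equations): a scalar Kalman filter, which carries both a mean and a variance, tracks a drifting weight more accurately than a gradient rule with a fixed learning rate. All variable names and noise parameters below are assumptions introduced for the sketch.
```python
# Track a slowly drifting "synaptic weight" from noisy observations:
# Kalman filter (mean + variance) vs. gradient rule with a fixed learning rate.
import numpy as np

rng = np.random.default_rng(1)
T, q, r = 500, 1e-3, 0.5          # steps, drift (process) variance, observation noise variance

w_true = np.cumsum(np.sqrt(q) * rng.standard_normal(T))   # drifting ground-truth weight
obs = w_true + np.sqrt(r) * rng.standard_normal(T)        # noisy observations

mu, var = 0.0, 1.0                # filter state: mean and uncertainty
w_grad, eta = 0.0, 0.05           # gradient learner with fixed rate
err_filt = err_grad = 0.0

for t in range(T):
    var += q                              # predict: uncertainty grows with drift
    k = var / (var + r)                   # Kalman gain
    mu += k * (obs[t] - mu)               # update the mean toward the observation
    var *= (1 - k)                        # shrink the uncertainty
    w_grad += eta * (obs[t] - w_grad)     # gradient rule: fixed step toward the observation
    err_filt += (mu - w_true[t]) ** 2
    err_grad += (w_grad - w_true[t]) ** 2

print("MSE filtering:", err_filt / T, " MSE gradient:", err_grad / T)
```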

    Back-engineering of spiking neural networks parameters

    We consider the deterministic evolution of a time-discretized spiking network of neurons with delayed connection weights, modeled as a discretized neural network of the generalized integrate-and-fire (gIF) type. The purpose is to study a class of algorithmic methods for calculating the proper parameters to reproduce exactly a given spike train generated by a hidden (unknown) neural network. This standard problem is known to be NP-hard when delays have to be calculated. We propose here a reformulation, now expressed as a Linear Programming (LP) problem, which allows an efficient resolution. This allows us to "back-engineer" a neural network, i.e. to find out, given a set of initial conditions, which parameters (here, the connection weights) reproduce the network spike dynamics. More precisely, we make explicit the fact that, for a gIF model, back-engineering a spike train is a Linear (L) problem if the membrane potentials are observed and an LP problem if only the spike times are observed. Numerical robustness is discussed. We also explain how the use of a generalized IF neuron model, instead of a leaky IF model, allows us to derive this algorithm. Furthermore, we point out that the L or LP adjustment mechanism is local to each unit and has the same structure as a "Hebbian" rule. A step further, this paradigm generalizes easily to the design of input-output spike train transformations. This means that we have a practical method to "program" a spiking network, i.e. find a set of parameters that exactly reproduces the network output for a given input. Numerical verifications and illustrations are provided.
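
    A hedged sketch of the "L" case (membrane potentials observed), using a simplified linear update rather than the paper's gIF model: with a toy dynamics V(t+1) = gamma*V(t) + W s(t) and gamma known, recovering the weights W from observed potentials and spikes reduces to ordinary least squares. The network size, decay factor, and spike statistics below are illustrative assumptions.
```python
# Recover hidden connection weights from observed potentials and spikes
# under a toy linear membrane update (ordinary least-squares regression).
import numpy as np

rng = np.random.default_rng(2)
N, T, gamma = 5, 200, 0.8

W_true = rng.normal(0, 0.3, (N, N))            # hidden connection weights
s = (rng.random((T, N)) < 0.2).astype(float)   # observed spike raster (T x N)
V = np.zeros((T + 1, N))
for t in range(T):                             # simulate the hidden network
    V[t + 1] = gamma * V[t] + s[t] @ W_true.T

targets = V[1:] - gamma * V[:-1]               # isolate the synaptic contribution at each step
W_est, *_ = np.linalg.lstsq(s, targets, rcond=None)   # solve s @ W_est ≈ targets
print("max weight error:", np.abs(W_est.T - W_true).max())
```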

    Reverse-engineering in spiking neural networks parameters: exact deterministic parameters estimation

    We consider the deterministic evolution of a time-discretized network of spiking neurons, where synaptic transmission has delays, modeled as a neural network of the generalized integrate-and-fire (gIF) type. The purpose is to study a class of algorithmic methods for calculating the proper parameters to reproduce exactly a given spike train generated by a hidden (unknown) neural network. This standard problem is known to be NP-hard when delays have to be calculated. We propose here a reformulation, now expressed as a Linear Programming (LP) problem, which allows us to provide an efficient resolution. This allows us to “reverse engineer” a neural network, i.e. to find out, given a set of initial conditions, which parameters (here, the synaptic weights) reproduce the network spike dynamics. More precisely, we make explicit the fact that reverse engineering a spike train is a Linear (L) problem if the membrane potentials are observed and an LP problem if only the spike times are observed. Numerical robustness is discussed. We also explain how the use of a generalized IF neuron model, instead of a leaky IF model, allows us to derive this algorithm. Furthermore, we point out that the L or LP adjustment mechanism is local to each unit and has the same structure as a “Hebbian” rule. A step further, this paradigm generalizes easily to the design of input-output spike train transformations. This means that we have a practical method to “program” a spiking network, i.e. find a set of parameters that exactly reproduces the network output for a given input. Numerical verifications and illustrations are provided.
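
    For the "LP" case (only spike times observed), the same idea can be sketched with a single toy threshold unit: each spike or silence yields a linear inequality on the weights, and any feasible point of the resulting Linear Program reproduces the observed spike train. The unit, threshold, and margin below are assumptions made for illustration, not the paper's gIF formulation.
```python
# Feasibility LP: find weights w such that w . x(t) crosses the threshold
# exactly at the observed spike times of a toy threshold unit.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, T, theta, margin = 4, 60, 0.6, 1e-6

w_true = rng.uniform(0.1, 0.5, N)                 # hidden weights
x = rng.random((T, N))                            # known presynaptic drive
z = (x @ w_true >= theta)                         # observed output spikes (booleans)

# Spike:    w . x(t) >= theta + margin   ->  -x(t) . w <= -(theta + margin)
# No spike: w . x(t) <= theta - margin   ->   x(t) . w <=  (theta - margin)
A_ub = np.vstack([-x[z], x[~z]])
b_ub = np.concatenate([np.full(z.sum(), -(theta + margin)),
                       np.full((~z).sum(), theta - margin)])

res = linprog(c=np.zeros(N), A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * N)
w_est = res.x
print("spike train reproduced:", np.array_equal(x @ w_est >= theta, z))
```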

    Theory of non-linear spike-time-dependent plasticity

    A fascinating property of the brain is its ability to continuously evolve and adapt to a constantly changing environment. This ability to change over time, called plasticity, is mainly implemented at the level of the connections between neurons (i.e. the synapses). So if we want to understand the ability of the brain to evolve and to store new memories, it is necessary to study the rules that govern synaptic plasticity. Among the large variety of factors that influence synaptic plasticity, we focus our study on the dependence upon the precise timing of the pre- and postsynaptic spikes. This form of plasticity, called Spike-Timing-Dependent Plasticity (STDP), works as follows: if a presynaptic spike is elicited before a postsynaptic one, the synapse is up-regulated (or potentiated), whereas if the opposite occurs, the synapse is down-regulated (or depressed). In this thesis, we propose several models of STDP which address the two following questions: (1) what is the functional role of a synapse which elicits STDP, and (2) what is the most compact and accurate description of STDP? In the first two papers contained in this thesis, we show that, in a supervised scenario, the best learning rule for enhancing the precision of the postsynaptic spikes is consistent with STDP. In the three following papers, we show that the information transmission between the input and output spike trains is maximized if synaptic plasticity is governed by a rule similar to STDP. Moreover, we show that this infomax principle, combined with a homeostatic constraint, leads to the well-known Bienenstock-Cooper-Munro (BCM) learning rule. Finally, in the last two papers, we propose a phenomenological model of STDP which considers not only pairs of pre- and postsynaptic spikes, but also triplets of spikes (e.g. 1 pre and 2 post, or 1 post and 2 pre). This model can reproduce a large number of experimental results and can be mapped to the BCM learning rule.
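
    A small illustration of the pair-based STDP window described above (the amplitudes and time constants are placeholder values, not parameters fitted in the thesis): the weight change depends on the relative timing dt = t_post - t_pre, with potentiation for pre-before-post and depression otherwise; the triplet model adds contributions driven by spike triples on top of this pair term.
```python
# Pair-based STDP window: weight change as a function of dt = t_post - t_pre.
import numpy as np

A_plus, A_minus = 0.01, 0.012     # potentiation / depression amplitudes (placeholders)
tau_plus, tau_minus = 17.0, 34.0  # time constants in ms (placeholders)

def stdp_pair(dt_ms):
    """Weight change for a single pre/post spike pair separated by dt = t_post - t_pre (ms)."""
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau_plus),    # pre before post: potentiation
                    -A_minus * np.exp(dt / tau_minus))  # post before pre: depression

print(stdp_pair([-40, -10, 10, 40]))  # depression for negative dt, potentiation for positive
```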