
    Spike trains statistics in Integrate and Fire Models: exact results

    We briefly review and highlight the consequences of rigorous and exact results obtained in \cite{cessac:10}, characterizing the statistics of spike trains in a network of leaky Integrate-and-Fire neurons, where time is discrete and where neurons are subject to noise, without restriction on the synaptic weight connectivity. The main result is that spike train statistics are characterized by a Gibbs distribution whose potential is explicitly computable. This establishes, on the one hand, a rigorous ground for current investigations attempting to characterize real spike train data with Gibbs distributions, such as the Ising-like distribution, using the maximal entropy principle. However, it transpires from the present analysis that the Ising model might be a rather weak approximation. Indeed, the Gibbs potential (the formal "Hamiltonian") is the log of the so-called "conditional intensity" (the probability that a neuron fires given the past of the whole network). But, in the present example, this probability has infinite memory, and the corresponding process is non-Markovian (equivalently, the Gibbs potential has infinite range). Moreover, causality implies that the conditional intensity does not depend on the state of the neurons at the \textit{same time}, ruling out the Ising model as a candidate for an exact characterization of spike train statistics. However, Markovian approximations can be proposed whose degree of approximation can be rigorously controlled. In this setting, the Ising model appears as the "next step" after the Bernoulli model (independent neurons), since it introduces spatial pairwise correlations but no time correlations. The range of validity of this approximation is discussed, together with possible approaches for introducing time correlations and with algorithmic extensions.
    Comment: 6 pages, submitted to conference NeuroComp2010 http://2010.neurocomp.fr/; Bruno Cessac http://www-sop.inria.fr/neuromathcomp
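    As a toy illustration of the central object above (not the paper's code), the sketch below estimates a finite-memory conditional intensity from a binary raster: for a chosen memory depth R it counts how often a given neuron fires after each length-R network pattern and takes the log of that probability, i.e. a range-R Markovian approximation of the Gibbs potential. The surrogate raster, the depth R, and all constants are assumptions chosen for demonstration.

```python
# Minimal sketch: empirical conditional intensity of neuron 0 given the past
# R time steps of the whole network, and the corresponding range-R potential.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
N, T, R = 3, 20_000, 2                               # neurons, time steps, memory depth
raster = (rng.random((T, N)) < 0.2).astype(int)      # surrogate spike raster

counts = defaultdict(lambda: [0, 0])   # past pattern -> [occurrences, spikes of neuron 0]
for t in range(R, T):
    past = tuple(raster[t - R:t].ravel())            # state of the whole network over R steps
    counts[past][0] += 1
    counts[past][1] += raster[t, 0]

# Empirical conditional intensity and range-R Gibbs potential phi = log P(spike | past)
eps = 1e-12
for past, (n, k) in sorted(counts.items())[:5]:
    p = (k + eps) / (n + 2 * eps)
    print(f"past={past}  P(neuron0 spikes | past)={p:.3f}  phi={np.log(p):.3f}")
```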

    Improved Spike-Timed Mappings using a Tri-Phasic Spike Timing-Dependent Plasticity Rule

    Reservoir computing and liquid state machine models have received much attention in the literature in recent years. In this paper we investigate the use of a reservoir composed of a network of spiking neurons, with synaptic delays, whose synapses are allowed to evolve using a tri-phasic spike timing-dependent plasticity (STDP) rule. The networks are trained to produce specific spike trains in response to spatio-temporal input patterns. The effects of the tri-phasic STDP rule on the network properties are compared to those found using the more common exponential form of the rule. It is found that each rule causes the synaptic weights to evolve in significantly different fashions, giving rise to different network dynamics. It is also found that networks evolved with the tri-phasic rule are more capable of mapping input spatio-temporal patterns to the output spike trains.
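    For orientation, the sketch below contrasts the shapes of the two learning windows referred to above. The exponential window is the standard pair-based STDP form; the tri-phasic window is written here as a generic Mexican-hat-like function of the spike-time difference, so its functional form and parameters are illustrative assumptions rather than the paper's fitted rule.

```python
# Illustrative comparison of an exponential STDP window and a tri-phasic one.
# dt = t_post - t_pre in milliseconds; parameters are placeholders.
import numpy as np

def exponential_stdp(dt, a_plus=1.0, a_minus=0.5, tau_plus=20.0, tau_minus=20.0):
    """Classic exponential pair-based STDP window."""
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

def triphasic_stdp(dt, a=1.0, mu=15.0, sigma=10.0):
    """Hypothetical tri-phasic window: potentiation near dt ~ mu,
    depression for much smaller or larger dt (a 'Mexican hat' in time)."""
    x = (dt - mu) / sigma
    return a * (1.0 - x**2) * np.exp(-x**2 / 2.0)

dt = np.linspace(-80, 80, 9)
print("dt(ms)   exponential   tri-phasic")
for d, e, t in zip(dt, exponential_stdp(dt), triphasic_stdp(dt)):
    print(f"{d:7.1f}  {e:11.3f}  {t:10.3f}")
```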

    Spike Timing Dependent Plasticity: A Consequence of More Fundamental Learning Rules

    Spike timing dependent plasticity (STDP) is a phenomenon in which the precise timing of spikes affects the sign and magnitude of changes in synaptic strength. STDP is often interpreted as the comprehensive learning rule for a synapse – the “first law” of synaptic plasticity. This interpretation is made explicit in theoretical models in which the total plasticity produced by complex spike patterns results from a superposition of the effects of all spike pairs. Although such models are appealing for their simplicity, they can fail dramatically. For example, the measured single-spike learning rule between hippocampal CA3 and CA1 pyramidal neurons does not predict the existence of long-term potentiation, one of the best-known forms of synaptic plasticity. Layers of complexity have been added to the basic STDP model to repair predictive failures, but they have been outstripped by experimental data. We propose an alternate first law: neural activity triggers changes in key biochemical intermediates, which act as a more direct trigger of plasticity mechanisms. One particularly successful model uses intracellular calcium as the intermediate and can account for many observed properties of bidirectional plasticity. In this formulation, STDP is not itself the basis for explaining other forms of plasticity, but is instead a consequence of changes in the biochemical intermediate, calcium. Eventually, a mechanism-based framework for learning rules should include other messengers, discrete change at individual synapses, spread of plasticity among neighboring synapses, and priming of hidden processes that change a synapse's susceptibility to future change. Mechanism-based models provide a rich framework for the computational representation of synaptic plasticity.
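    The calcium-intermediate idea can be made concrete with a small sketch in the spirit of calcium-threshold models (e.g. Shouval-type rules); the thresholds, calcium increments, and time constants below are illustrative assumptions, not fitted values. Each pre- or postsynaptic spike adds calcium, and the running calcium level, rather than the spike pairing itself, determines whether the weight moves up or down.

```python
# Minimal sketch of calcium-controlled bidirectional plasticity.
import numpy as np

def omega(ca, theta_d=0.35, theta_p=0.55):
    """Direction of plasticity as a function of the calcium level."""
    if ca < theta_d:
        return 0.0        # below the depression threshold: no change
    if ca < theta_p:
        return -0.3       # intermediate calcium: depression
    return 1.5            # high calcium: potentiation

def run(pre_steps, post_steps, steps=2000, dt=0.1, tau_ca=20.0,
        c_pre=0.3, c_post=0.4, eta=0.01, w0=0.5):
    """Integrate calcium and weight for given pre/post spike step indices."""
    pre, post = set(pre_steps), set(post_steps)
    ca, w = 0.0, w0
    for k in range(steps):
        if k in pre:
            ca += c_pre               # presynaptic spike raises calcium
        if k in post:
            ca += c_post              # postsynaptic spike raises calcium
        ca -= dt * ca / tau_ca        # calcium decays (tau in ms, dt = 0.1 ms)
        w += dt * eta * omega(ca)     # the calcium level drives the weight
    return w

# Four pairings, 50 ms apart; pre leads post by 5 ms (50 steps) or lags by 5 ms.
pre = range(100, 2000, 500)
print("pre 5 ms before post:", round(run(pre, [p + 50 for p in pre]), 3))
print("pre 5 ms after  post:", round(run(pre, [p - 50 for p in pre]), 3))
```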

    Short-term plasticity as cause-effect hypothesis testing in distal reward learning

    Asynchrony, overlaps and delays in sensory-motor signals introduce ambiguity as to which stimuli, actions, and rewards are causally related. Only the repetition of reward episodes helps distinguish true cause-effect relationships from coincidental occurrences. In the model proposed here, a novel plasticity rule employs short- and long-term changes to evaluate hypotheses about cause-effect relationships. Transient weights represent hypotheses that are consolidated in long-term memory only when they consistently predict or cause future rewards. The main objective of the model is to preserve existing network topologies when learning with ambiguous information flows. Learning is also improved by biasing the exploration of the stimulus-response space towards actions that in the past occurred before rewards. The model indicates under which conditions beliefs can be consolidated in long-term memory, suggests a solution to the plasticity-stability dilemma, and proposes an interpretation of the role of short-term plasticity.
    Comment: Biological Cybernetics, September 201
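    As a loose sketch of the hypothesis-testing idea (not the paper's exact rule), the snippet below keeps a transient weight per synapse that decays over the reward delay and is consolidated into a long-term weight only after it has repeatedly been "on" when reward arrives. The number of synapses, decay factor, and consolidation threshold are assumptions chosen for demonstration.

```python
# Minimal sketch: transient weights as cause-effect hypotheses, consolidated
# into long-term weights only when they consistently precede reward.
import random
random.seed(1)

N_SYN = 5
transient = [0.0] * N_SYN    # short-term "hypothesis" weights
longterm  = [0.0] * N_SYN    # consolidated long-term weights
confirm   = [0]   * N_SYN    # how often a hypothesis preceded reward

DECAY, BUMP, NEEDED = 0.5, 1.0, 3

for trial in range(200):
    # Synapse 0 is truly causal: its activity is always followed by reward.
    # The others are active at random, creating coincidental correlations.
    active = {0} | {i for i in range(1, N_SYN) if random.random() < 0.3}
    for i in active:
        transient[i] += BUMP                       # tag the synapse with a transient change
    transient = [w * DECAY for w in transient]     # traces decay over the reward delay
    reward = 0 in active                           # the distal reward follows synapse 0 only
    if reward:
        for i in range(N_SYN):
            if transient[i] > 0.4:                 # hypothesis was "on" when reward arrived
                confirm[i] += 1
                if confirm[i] >= NEEDED:           # consistently predictive -> consolidate
                    longterm[i] += 0.1
            else:
                confirm[i] = max(0, confirm[i] - 1)   # disconfirmed hypothesis

print("long-term weights:", [round(w, 2) for w in longterm])
```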

    Intrinsic gain modulation and adaptive neural coding

    In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs. current (f-I) curve changes with the variance of the background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive expressions relating the change in gain with respect to both mean and variance to the receptive fields obtained from reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity.
    Comment: 24 pages, 4 figures, 1 supporting information
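    To make the linear/nonlinear pipeline above concrete, the sketch below builds an LN model neuron with a known filter and gain curve, then recovers the filter as the spike-triggered average of a white-noise stimulus and the gain curve as the binned firing probability versus the filtered stimulus. The filter shape, sigmoid gain curve, and all constants are illustrative assumptions, not the paper's models.

```python
# Minimal sketch: estimate an LN model by reverse correlation on white noise.
import numpy as np

rng = np.random.default_rng(0)
T, L = 100_000, 30
stim = rng.standard_normal(T)                        # white-noise stimulus

true_filter = np.exp(-np.arange(L) / 5.0)            # assumed "receptive field"
true_filter /= np.linalg.norm(true_filter)
drive = np.convolve(stim, true_filter)[:T]           # linear stage
p_spike = 1.0 / (1.0 + np.exp(-(drive - 1.0) * 3))   # gain curve (nonlinearity)
spikes = rng.random(T) < 0.2 * p_spike                # Bernoulli spiking

# Reverse correlation: the spike-triggered average recovers the filter (up to scale)
sta = np.zeros(L)
spike_times = np.where(spikes)[0]
spike_times = spike_times[spike_times >= L]
for t in spike_times:
    sta += stim[t - L + 1:t + 1][::-1]
sta /= len(spike_times)

# Empirical gain curve: firing probability binned by the filtered stimulus
proj = np.convolve(stim, sta / np.linalg.norm(sta))[:T]
bins = np.linspace(proj.min(), proj.max(), 8)
idx = np.digitize(proj, bins)
for b in range(1, len(bins)):
    sel = idx == b
    if sel.sum() > 100:
        print(f"drive bin {b}: P(spike) = {spikes[sel].mean():.3f}")

print("filter correlation:", np.corrcoef(sta, true_filter)[0, 1].round(3))
```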

    On Dynamics of Integrate-and-Fire Neural Networks with Conductance Based Synapses

    We present a mathematical analysis of networks of Integrate-and-Fire neurons with adaptive conductances. Taking into account the realistic fact that the spike time is only known within some \textit{finite} precision, we propose a model where spikes are effective at times that are multiples of a characteristic time scale \delta, where \delta can be \textit{arbitrarily} small (in particular, well beyond the numerical precision). We give a complete mathematical characterization of the model dynamics and obtain the following results. The asymptotic dynamics is composed of finitely many stable periodic orbits, whose number and period can be arbitrarily large and can diverge in a region of the synaptic weight space, traditionally called the "edge of chaos", a notion mathematically well defined in the present paper. Furthermore, except at the edge of chaos, there is a one-to-one correspondence between the membrane potential trajectories and the raster plot. This shows that the neural code is entirely "in the spikes" in this case. As a key tool, we introduce an order parameter, easy to compute numerically and closely related to a natural notion of entropy, which provides a relevant characterization of the computational capabilities of the network. This allows us to compare the computational capabilities of leaky Integrate-and-Fire models and conductance-based models. The present study considers networks with constant input and without time-dependent plasticity, but the framework has been designed for both extensions.
    Comment: 36 pages, 9 figures
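    The paper defines its order parameter in its own terms; as a generic, hedged illustration of an entropy-style characterization of a raster plot, the sketch below estimates the empirical entropy rate of length-R spatio-temporal spike blocks, which is low for a periodic raster and higher for an irregular one. The block length and the surrogate rasters are assumptions for demonstration only.

```python
# Minimal sketch: empirical block-entropy rate of a binary spike raster.
import numpy as np
from collections import Counter

def block_entropy_rate(raster, R=3):
    """Empirical entropy (bits per time step) of length-R spatio-temporal blocks."""
    T = raster.shape[0]
    blocks = Counter(tuple(raster[t:t + R].ravel()) for t in range(T - R + 1))
    total = sum(blocks.values())
    probs = np.array([c / total for c in blocks.values()])
    return float(-(probs * np.log2(probs)).sum() / R)

rng = np.random.default_rng(0)
periodic = np.tile(np.eye(4, dtype=int), (250, 1))        # periodic raster: low entropy
irregular = (rng.random((1000, 4)) < 0.25).astype(int)    # irregular raster: higher entropy
print("periodic raster :", round(block_entropy_rate(periodic), 3))
print("irregular raster:", round(block_entropy_rate(irregular), 3))
```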

    Balancing Feed-Forward Excitation and Inhibition via Hebbian Inhibitory Synaptic Plasticity

    It has been suggested that excitatory and inhibitory inputs to cortical cells are balanced, and that this balance is important for the highly irregular firing observed in the cortex. There are two hypotheses as to the origin of this balance. One assumes that it results from a stable solution of the recurrent neuronal dynamics. This model can account for a balance of steady-state excitation and inhibition without fine tuning of parameters, but not for transient inputs. The second hypothesis suggests that the feed-forward excitatory and inhibitory inputs to a postsynaptic cell are already balanced. This latter hypothesis thus does account for the balance of transient inputs. However, it remains unclear what mechanism underlies the fine tuning required for balancing feed-forward excitatory and inhibitory inputs. Here we investigated whether inhibitory synaptic plasticity is responsible for the balance of transient feed-forward excitation and inhibition. We address this issue in the framework of a model characterizing the stochastic dynamics of temporally anti-symmetric Hebbian spike timing dependent plasticity of feed-forward excitatory and inhibitory synaptic inputs to a single post-synaptic cell. Our analysis shows that inhibitory Hebbian plasticity generates ‘negative feedback’ that balances excitation and inhibition, in contrast with the ‘positive feedback’ of excitatory Hebbian synaptic plasticity. As a result, this balance may increase the sensitivity of the learning dynamics to the correlation structure of the excitatory inputs.
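    The feedback argument above can be caricatured with a toy rate model (not the paper's stochastic STDP analysis): with a rectified response to a common presynaptic drive, a Hebbian excitatory update grows with the response it creates (positive feedback, runaway), while a Hebbian inhibitory update grows until the response it depends on is cancelled (negative feedback, balance). All constants below are illustrative assumptions.

```python
# Toy sketch: Hebbian inhibitory plasticity as negative feedback.
def simulate(eta_e, eta_i, w_e=1.0, w_i=0.1, pre=1.0, steps=1000):
    """Toy rate model: Hebbian updates dw ~ pre * response for both synapse types."""
    for _ in range(steps):
        r = max(0.0, w_e * pre - w_i * pre)   # rectified postsynaptic response
        w_e += eta_e * pre * r                # Hebbian excitatory plasticity
        w_i += eta_i * pre * r                # Hebbian inhibitory plasticity
    return w_e, w_i, max(0.0, w_e * pre - w_i * pre)

w_e, w_i, r = simulate(eta_e=0.0, eta_i=0.01)
print(f"inhibitory plasticity only: w_E={w_e:.2f}  w_I={w_i:.2f}  response={r:.3f}")

w_e, w_i, r = simulate(eta_e=0.01, eta_i=0.0)
print(f"excitatory plasticity only: w_E={w_e:.2f}  w_I={w_i:.2f}  response={r:.3f}")
```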

    Noninvasive brain stimulation techniques can modulate cognitive processing

    Recent methods that allow noninvasive modulation of brain activity are able to modulate human cognitive behavior. Among these methods are transcranial electric stimulation and transcranial magnetic stimulation, both of which come in multiple variants. A property of both types of brain stimulation is that they modulate brain activity and in turn modulate cognitive behavior. Here, we describe the methods and their assumed neural mechanisms for readers from the economic and social sciences with little prior knowledge of these techniques. Our emphasis is on the available protocols and experimental parameters to choose from when designing a study. We also review a selection of recent studies that have successfully applied them in the respective fields. We provide short pointers to limitations that need to be considered and refer to the relevant papers where appropriate.

    Functional Implications of Synaptic Spike Timing Dependent Plasticity and Anti-Hebbian Membrane Potential Dependent Plasticity

    A central hypothesis of neuroscience is that changes in the strength of synaptic connections between neurons are the basis for learning in the animal brain. However, the rules underlying these activity-dependent changes, as well as their functional consequences, are not well understood. This thesis develops and investigates several quantitative models of synaptic plasticity. In the first part, the Contribution Dynamics model of Spike Timing Dependent Plasticity (STDP) is presented. It is shown to provide a better fit to experimental data than previous models. Additionally, investigation of the response properties of the model synapse to oscillatory neuronal activity shows that synapses are sensitive to theta oscillations (4-10 Hz), which are known to boost learning in behavioral experiments. In the second part, a novel Membrane Potential Dependent Plasticity (MPDP) rule is developed, which can be used to train neurons to fire precisely timed output activity. Previously, this could only be achieved with artificial supervised learning rules, whereas MPDP is a local, activity-dependent mechanism that is supported by experimental results.
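    As a generic, hedged sketch of an anti-Hebbian, membrane-potential-dependent update of the kind described above (the thresholds, constants, and toy closed loop are illustrative assumptions, not the thesis' exact MPDP rule): a synapse active while the postsynaptic potential is high is depressed, and one active while the potential is low is potentiated, so the subthreshold voltage is pushed toward an intermediate corridor.

```python
# Minimal sketch of an anti-Hebbian, voltage-gated plasticity rule.
def mpdp_update(w, trace, v, theta_hi=-55.0, theta_lo=-65.0, eta=0.001):
    """An active synapse is depressed while the membrane potential is above
    theta_hi and potentiated while it is below theta_lo; no change in between."""
    if v > theta_hi:
        return w - eta * trace * (v - theta_hi)
    if v < theta_lo:
        return w + eta * trace * (theta_lo - v)
    return w

# Toy closed loop: a constant presynaptic trace drives a leaky membrane;
# the rule adjusts the weight until the potential settles inside the corridor.
w, trace, v, tau = 0.1, 1.0, -70.0, 20.0
for step in range(5000):
    v += (-(v + 70.0) + 20.0 * w * trace) / tau   # leaky integration, dt = 1 ms
    w = mpdp_update(w, trace, v)
print(f"final weight {w:.3f}, final membrane potential {v:.1f} mV")
```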