    Phenomenological models of synaptic plasticity based on spike timing

    Synaptic plasticity is considered to be the biological substrate of learning and memory. In this document we review phenomenological models of short-term and long-term synaptic plasticity, in particular spike-timing dependent plasticity (STDP). The aim of the document is to provide a framework for classifying and evaluating different models of plasticity. We focus on phenomenological synaptic models that are compatible with integrate-and-fire type neuron models where each neuron is described by a small number of variables. This implies that synaptic update rules for short-term or long-term plasticity can only depend on spike timing and, potentially, on membrane potential, as well as on the value of the synaptic weight, or on low-pass filtered (temporally averaged) versions of the above variables. We examine the ability of the models to account for experimental data and to fulfill expectations derived from theoretical considerations. We further discuss their relations to teacher-based rules (supervised learning) and reward-based rules (reinforcement learning). All models discussed in this paper are suitable for large-scale network simulations.
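    The class of rules described above can be made concrete with a small sketch: a minimal pair-based STDP rule driven by low-pass filtered spike traces, of the kind the review classifies. The time constants, amplitudes, hard bounds and the Poisson-like example inputs are illustrative assumptions, not values from any particular study.

```python
import numpy as np

def run_pair_stdp(pre_spikes, post_spikes, w0=0.5, dt=1e-3,
                  A_plus=0.01, A_minus=0.012,
                  tau_plus=20e-3, tau_minus=20e-3):
    """Trace-based pair STDP: the update depends only on spike timing,
    the current weight (via hard bounds) and low-pass filtered traces."""
    w = w0
    x_pre = 0.0    # low-pass filtered presynaptic spike train
    y_post = 0.0   # low-pass filtered postsynaptic spike train
    for pre, post in zip(pre_spikes, post_spikes):
        x_pre += dt * (-x_pre / tau_plus)      # decay the traces
        y_post += dt * (-y_post / tau_minus)
        if pre:
            x_pre += 1.0
            w -= A_minus * y_post              # pre after post -> depression
        if post:
            y_post += 1.0
            w += A_plus * x_pre                # post after pre -> potentiation
        w = min(max(w, 0.0), 1.0)              # hard bounds (additive variant)
    return w

# Example usage: ~10 Hz Poisson-like pre/post spike trains, 1 s at 1 ms steps.
rng = np.random.default_rng(1)
pre = rng.random(1000) < 10 * 1e-3
post = rng.random(1000) < 10 * 1e-3
print(run_pair_stdp(pre, post))
```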

    Synaptic modification and entrained phase are phase dependent in STDP

    Synapse strength can be modified in an activity-dependent manner, in which the temporal relationship between pre- and post-synaptic spikes plays a major role. This spike-timing dependent plasticity (STDP) has profound implications for neural coding, computation and function, and this line of research has grown rapidly in recent years. Many functional roles of STDP have been put forward. Because the STDP learning curve is strongly nonlinear, the initial state may have a great impact on the eventual state of the system. However, this feature has not been explored before. This paper proposes two possible functional roles of STDP by considering the influence of the initial state in modeling studies. First, STDP could lead to the phase-dependent synaptic modification that has been reported in experiments. Second, rather than leading to a fixed phase relation between pre- and post-synaptic neurons, STDP that includes suppression between the effects of spike pairs leads to a distributed entrained phase that also depends on the initial relative phase. This simple mechanism is proposed here as a way to organize temporal firing patterns into dynamic cell assemblies in a probabilistic manner and to cause cell assemblies to update in a deterministic manner. It has been demonstrated that the olfactory system of the locust, and perhaps other sensory systems, adopts the strategy of combining probabilistic cell assemblies with deterministic updates to encode information. These results suggest that the STDP rule is a potentially powerful mechanism by which higher network functions emerge.
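    A minimal sketch of a pair-based rule with suppression between spike pairs is given below, assuming that each spike's efficacy is reduced by the immediately preceding spike of the same neuron and recovers exponentially. The efficacy form, the parameters and the periodic example trains are illustrative assumptions, not the specific model of the paper; running it with different initial relative phases shows how the net weight change depends on the initial phase.

```python
import numpy as np

def efficacy(t_spikes, tau_s=30e-3):
    """Efficacy of each spike, suppressed by the preceding spike of the
    same neuron and recovering exponentially with time constant tau_s."""
    eff = np.ones_like(t_spikes)
    eff[1:] = 1.0 - np.exp(-np.diff(t_spikes) / tau_s)
    return eff

def dw_suppressed_stdp(t_pre, t_post, A_plus=0.01, A_minus=0.012,
                       tau_plus=20e-3, tau_minus=20e-3):
    """Total weight change over all pre/post spike pairs, each pair scaled
    by the product of the two spikes' efficacies (suppression)."""
    e_pre, e_post = efficacy(t_pre), efficacy(t_post)
    dw = 0.0
    for i, tp in enumerate(t_pre):
        for j, tq in enumerate(t_post):
            lag = tq - tp
            scale = e_pre[i] * e_post[j]
            if lag > 0:
                dw += scale * A_plus * np.exp(-lag / tau_plus)
            elif lag < 0:
                dw -= scale * A_minus * np.exp(lag / tau_minus)
    return dw

# Example: 10 Hz periodic trains, post lagging pre by two different phases.
t_pre = np.arange(0.0, 1.0, 0.1)
for phase in (0.01, 0.05):
    print(phase, dw_suppressed_stdp(t_pre, t_pre + phase))
```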

    STDP in Recurrent Neuronal Networks

    Recent results about spike-timing-dependent plasticity (STDP) in recurrently connected neurons are reviewed, with a focus on the relationship between the weight dynamics and the emergence of network structure. In particular, the evolution of synaptic weights is compared and contrasted for the two cases of incoming connections to a single neuron and recurrent connections. The analysis uses a theoretical framework based on Poisson neurons with temporally inhomogeneous firing rates and on the asymptotic distribution of weights generated by the learning dynamics. Different network configurations examined in recent studies are discussed, and an overview of the current understanding of STDP in recurrently connected neuronal networks is presented.
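    As a toy illustration of this kind of framework, the sketch below estimates the mean weight drift produced by all-to-all pair-based STDP between two independent, homogeneous Poisson neurons and compares it with the analytical value r_pre * r_post * (A_plus*tau_plus - A_minus*tau_minus). The homogeneous, uncorrelated setting and all parameter values are simplifying assumptions relative to the inhomogeneous case treated in the review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the reviewed studies)
r_pre, r_post = 10.0, 10.0            # Poisson firing rates (Hz)
A_plus, A_minus = 0.005, 0.006        # STDP amplitudes
tau_plus, tau_minus = 20e-3, 20e-3    # STDP time constants (s)
T = 200.0                             # simulated duration (s)

def poisson_train(rate, T):
    """Sorted spike times of a homogeneous Poisson process on [0, T]."""
    return np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))

t_pre, t_post = poisson_train(r_pre, T), poisson_train(r_post, T)

# All-to-all pair-based STDP: sum the exponential window over all pairs.
lag = t_post[None, :] - t_pre[:, None]
dw = (A_plus * np.exp(-lag[lag > 0] / tau_plus)).sum() \
     - (A_minus * np.exp(lag[lag < 0] / tau_minus)).sum()

# For independent Poisson trains the expected drift per unit time is the
# product of the rates and the integral of the STDP window.
drift_theory = r_pre * r_post * (A_plus * tau_plus - A_minus * tau_minus)
print(dw / T, drift_theory)
```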

    Balancing Feed-Forward Excitation and Inhibition via Hebbian Inhibitory Synaptic Plasticity

    It has been suggested that excitatory and inhibitory inputs to cortical cells are balanced, and that this balance is important for the highly irregular firing observed in the cortex. There are two hypotheses as to the origin of this balance. One assumes that it results from a stable solution of the recurrent neuronal dynamics. This model can account for a balance of steady-state excitation and inhibition without fine-tuning of parameters, but not for transient inputs. The second hypothesis suggests that the feed-forward excitatory and inhibitory inputs to a postsynaptic cell are already balanced. This latter hypothesis thus does account for the balance of transient inputs. However, it remains unclear what mechanism underlies the fine-tuning required for balancing feed-forward excitatory and inhibitory inputs. Here we investigate whether inhibitory synaptic plasticity is responsible for the balance of transient feed-forward excitation and inhibition. We address this issue in the framework of a model characterizing the stochastic dynamics of temporally anti-symmetric Hebbian spike-timing-dependent plasticity of feed-forward excitatory and inhibitory synaptic inputs to a single postsynaptic cell. Our analysis shows that inhibitory Hebbian plasticity generates ‘negative feedback’ that balances excitation and inhibition, which contrasts with the ‘positive feedback’ of excitatory Hebbian synaptic plasticity. As a result, this balance may increase the sensitivity of the learning dynamics to the correlation structure of the excitatory inputs.
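    The feedback argument can be illustrated with a deliberately crude rate-based caricature, sketched below; it uses a Hebbian-style inhibitory update with an assumed target rate rather than the stochastic anti-symmetric STDP model analysed in the paper, and the linear rate model, the 5 Hz target and the learning rate are all illustrative assumptions. With the excitatory weight held fixed, strengthening inhibition whenever pre- and postsynaptic activity exceed the target pulls the output rate back down (negative feedback), whereas the analogous excitatory update would amplify any increase (positive feedback).

```python
# Toy rate-based caricature of inhibitory plasticity as negative feedback.
eta = 1e-3                          # learning rate (assumed)
r_exc = r_inh = 10.0                # presynaptic rates (Hz)
w_exc, w_inh = 1.0, 0.2             # excitatory weight fixed, inhibitory plastic
r_target = 5.0                      # assumed target output rate (Hz)

for step in range(2000):
    r_post = max(w_exc * r_exc - w_inh * r_inh, 0.0)
    # Co-activity above target strengthens inhibition, reducing r_post;
    # a Hebbian excitatory update (w_exc += eta * r_exc * r_post) would
    # instead grow without bound.
    w_inh = max(w_inh + eta * r_inh * (r_post - r_target), 0.0)

print(f"output rate settles near {w_exc * r_exc - w_inh * r_inh:.2f} Hz")
```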

    Rate and Pulse Based Plasticity Governed by Local Synaptic State Variables

    Classically, action-potential-based learning paradigms such as the Bienenstock–Cooper–Munro (BCM) rule for pulse rates or spike-timing-dependent plasticity for pulse pairings have been experimentally demonstrated to evoke long-lasting synaptic weight changes (i.e., plasticity). However, several recent experiments have shown that plasticity also depends on the local dynamics at the synapse, such as the membrane voltage, the calcium time course and level, or dendritic spikes. In this paper, we introduce a formulation of the BCM rule which is based on the instantaneous postsynaptic membrane potential as well as the transmission profile of the presynaptic spike. While this rule incorporates only simple local voltage and current dynamics and is thus neither directly rate based nor timing based, it can replicate a range of experiments, such as various rate and spike-pairing protocols, combinations of the two, as well as voltage-dependent plasticity. A detailed comparison of current plasticity models with respect to this range of experiments also demonstrates the efficacy of the new plasticity rule. All experiments can be replicated with a limited set of parameters, avoiding the overfitting problem of more involved plasticity rules.
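    One illustrative reading of such a rule, not the paper's exact formulation, is sketched below: the BCM-shaped product is kept but driven by local synaptic state variables, with the instantaneous postsynaptic potential u(t) in place of the postsynaptic rate, an exponentially decaying presynaptic transmission trace g(t) in place of the presynaptic rate, and a sliding threshold tracking a running average of u^2. All constants and the synthetic traces are assumptions made for the sake of the example.

```python
import numpy as np

def voltage_bcm_step(w, u, g, theta, dt, eta=1e-4, tau_theta=1.0, u0=1.0):
    """One Euler step of the weight and sliding-threshold dynamics."""
    dw = eta * g * u * (u - theta) * dt              # BCM-shaped product of local variables
    dtheta = (u ** 2 / u0 - theta) / tau_theta * dt  # sliding threshold tracks <u^2>
    return w + dw, theta + dtheta

# Synthetic local variables for a short example run (purely illustrative).
dt, T = 1e-3, 2.0
t = np.arange(0.0, T, dt)
u = 0.5 + 0.5 * np.sin(2 * np.pi * 5 * t)            # toy membrane potential (a.u.)
g = np.zeros_like(t)
for ts in np.arange(0.05, T, 0.1):                   # one presynaptic spike every 100 ms
    idx = t >= ts
    g[idx] += np.exp(-(t[idx] - ts) / 10e-3)         # 10 ms transmission profile

w, theta = 0.5, 0.25
for k in range(len(t)):
    w, theta = voltage_bcm_step(w, u[k], g[k], theta, dt)
print(w, theta)
```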

    Stability versus Neuronal Specialization for STDP: Long-Tail Weight Distributions Solve the Dilemma

    Spike-timing-dependent plasticity (STDP) modifies the weight (or strength) of synaptic connections between neurons and is considered to be crucial for generating network structure. It has been observed in physiology that, in addition to spike timing, the weight update also depends on the current value of the weight. The functional implications of this feature are still largely unclear. Additive STDP gives rise to strong competition among synapses, but due to the absence of weight dependence, it requires hard boundaries to secure the stability of weight dynamics. Multiplicative STDP with linear weight dependence for depression ensures stability, but it lacks the strong competition required to obtain a clear synaptic specialization. A solution to this stability-versus-function dilemma can be found with an intermediate parametrization between additive and multiplicative STDP. Here we propose a novel solution to the dilemma, named log-STDP, whose key feature is a sublinear weight dependence for depression. Due to its specific weight dependence, this new model can produce markedly broad weight distributions with no hard upper bound, similar to those recently observed in experiments. Log-STDP induces graded competition between synapses, such that synapses receiving stronger input correlations are pushed further into the tail of (very) large weights. Strong weights are functionally important to enhance the neuronal response to synchronous spike volleys. Depending on the input configuration, multiple groups of correlated synaptic inputs exhibit either winner-share-all or winner-take-all behavior. When the configuration of input correlations changes, individual synapses quickly and robustly readapt to represent the new configuration. We also demonstrate the advantages of log-STDP for generating a stable structure of strong weights in a recurrently connected network. These properties of log-STDP are compared with those of previous models. Through long-tail weight distributions, log-STDP achieves both stable weight dynamics and robust competition among synapses, which are crucial for spike-based information processing.
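    The weight-dependence regimes compared above can be sketched side by side; the functional forms and constants below are illustrative stand-ins, not the published log-STDP parametrization. The point of the sketch is that a logarithmic (sublinear) dependence only weakly pulls back large weights, which is what allows a long tail to form without a hard upper bound.

```python
import numpy as np

w0 = 1.0  # reference weight (assumed)

def depression_scale(w, model, alpha=5.0):
    """Weight dependence of the depression amplitude under three regimes:
    additive (none), multiplicative (linear) and log-STDP-like (sublinear
    above the reference weight w0)."""
    w = np.asarray(w, dtype=float)
    if model == "additive":
        return np.ones_like(w)
    if model == "multiplicative":
        return w / w0
    if model == "log":
        excess = np.maximum(w / w0 - 1.0, 0.0)
        return np.where(w <= w0, w / w0,
                        1.0 + np.log1p(alpha * excess) / alpha)
    raise ValueError(model)

w = np.linspace(0.0, 10.0, 6)
for model in ("additive", "multiplicative", "log"):
    print(model, np.round(depression_scale(w, model), 2))
```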

    Contributions to models of single neuron computation in striatum and cortex

    A deeper understanding is required of how a single neuron utilizes its nonlinear subcellular devices to generate complex neuronal dynamics. Two compartmental models, of cortical and striatal neurons, are accurately formulated and firmly grounded in the experimental reality of electrophysiology to address two questions: how striatal projection neurons implement location-dependent dendritic integration to carry out association-based computation, and how cortical pyramidal neurons strategically exploit the type and location of synaptic contacts to enrich their computational capacities. Neuronal cells transform continuous signals into discrete time series of action potentials and thereby encode percepts and internal states. Compartmental models of neurons in cortex and striatum, grounded in electrophysiology, are formulated to address specific questions: i) to what extent do striatal projection neurons implement location-dependent dendritic integration in order to realize association-based computations? ii) to what extent do cortical cells exploit the type and location of synaptic contacts to optimize the computations they carry out?