34 research outputs found

    Spine head calcium as a measure of summed postsynaptic activity for driving synaptic plasticity

    We use a computational model of a hippocampal CA1 pyramidal cell to demonstrate that spine head calcium provides an instantaneous readout at each synapse of the postsynaptic weighted sum of all presynaptic activity impinging on the cell. The form of the readout is equivalent to the functions of weighted, summed inputs used in neural network learning rules. Within a dendritic layer, peak spine head calcium levels are either a linear or sigmoidal function of the number of coactive synapses, with nonlinearity depending on the ability of voltage spread in the dendrites to reach calcium spike threshold. This is strongly controlled by the potassium A-type current, with calcium spikes and the consequent sigmoidal increase in peak spine head calcium present only when the A-channel density is low. Other membrane characteristics influence the gain of the relationship between peak calcium and the number of active synapses. In particular, increasing spine neck resistance increases the gain due to increased voltage responses to synaptic input in spine heads. Colocation of stimulated synapses on a single dendritic branch also increases the gain of the response. Input pathways cooperate: CA3 inputs to the proximal apical dendrites can strongly amplify the peak calcium levels produced by weak EC input to the distal dendrites, but not so strongly vice versa. CA3 inputs to the basal dendrites can boost calcium levels in the proximal apical dendrites, but the relative electrical compactness of the basal dendrites results in the reverse effect being less significant. These results give pointers as to how to better describe the contributions of pre- and postsynaptic activity in the learning "rules" that apply in these cells. The calcium signal is closer in form to the activity measures used in traditional neural network learning rules than to the spike times used in spike-timing-dependent plasticity. Output Type: Letter
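
    The linear-versus-sigmoidal readout described in this abstract can be sketched as a toy function of the number of coactive synapses. This is an illustrative caricature, not the paper's compartmental model; the function name, threshold, slope, and amplitude values are all assumptions.

```python
import math

def peak_spine_calcium(n_active, w=1.0, theta=20.0, slope=4.0,
                       a_channel_density_high=True):
    """Toy readout of peak spine-head calcium vs. number of coactive synapses.

    High A-type K+ channel density suppresses dendritic calcium spikes,
    giving a linear readout of the weighted input sum; low density lets
    the summed depolarization cross calcium-spike threshold (theta),
    giving a sigmoidal readout. All parameter values are illustrative.
    """
    s = w * n_active  # weighted sum of coactive synaptic inputs
    if a_channel_density_high:
        return 0.5 * s                                    # linear regime
    return 50.0 / (1.0 + math.exp(-(s - theta) / slope))  # sigmoidal regime
```

    Sweeping `n_active` with `a_channel_density_high=False` reproduces the qualitative sigmoidal jump once the summed input crosses threshold; with the flag set, the same sweep stays linear.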

    How feedback inhibition shapes spike-timing-dependent plasticity and its implications for recent Schizophrenia models

    It has been shown that plasticity is not a fixed property but, in fact, changes depending on the location of the synapse on the neuron and/or changes of biophysical parameters. Here we investigate how plasticity is shaped by feedback inhibition in a cortical microcircuit. We use a differential Hebbian learning rule to model spike-timing-dependent plasticity and show analytically that the feedback inhibition shortens the time window for LTD during spike-timing-dependent plasticity but not for LTP. We then use a realistic GENESIS model to test two hypotheses about interneuron hypofunction and conclude that a reduction in GAD67 is the most likely candidate for the cause of the hypofrontality observed in schizophrenia.
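
    The asymmetric STDP window produced by a differential Hebbian rule can be sketched with simple exponential traces. This is a minimal illustration of the rule class named in the abstract, not the paper's analytical model; trace shapes, time constants, and the function name are assumptions.

```python
import numpy as np

def stdp_window(dt_ms, tau_pre=10.0, tau_post=10.0, mu=1.0):
    """Weight change of a differential Hebbian rule for one pre/post pairing.

    The pre spike leaves a low-pass trace u(t); plasticity integrates
    mu * u(t) * dv/dt, where v(t) is the postsynaptic trace. Positive
    dt_ms (pre before post) yields LTP, negative dt_ms yields LTD.
    """
    t = np.arange(0.0, 200.0, 0.01)          # time grid in ms
    t_pre, t_post = 50.0, 50.0 + dt_ms
    u = np.where(t >= t_pre, np.exp(-(t - t_pre) / tau_pre), 0.0)
    v = np.where(t >= t_post, np.exp(-(t - t_post) / tau_post), 0.0)
    dv = np.gradient(v, t)                   # derivative of postsynaptic trace
    return float(mu * np.sum(u * dv) * (t[1] - t[0]))
```

    Shortening `tau_post` in this sketch narrows the LTD side of the window while leaving the LTP side largely intact, which is the qualitative effect the abstract attributes to feedback inhibition.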

    Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison

    A confusingly wide variety of temporally asymmetric learning rules exists related to reinforcement learning and/or to spike-timing-dependent plasticity, many of which look exceedingly similar while displaying strongly different behavior. These rules often find their use in control tasks, for example in robotics, and for this, rigorous convergence and numerical stability are required. The goal of this article is to review these rules and compare them, to provide a better overview of their different properties. Two main classes will be discussed: temporal-difference (TD) rules and correlation-based (differential Hebbian) rules, plus some transition cases. In general we will focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine learning (non-neuronal) context, a solid mathematical theory for TD-learning has existed for several years. This can partly be transferred to a neuronal framework, too. On the other hand, only now has a more complete theory also emerged for differential Hebbian rules. In general, rules differ in their convergence conditions and their numerical stability, which can lead to very undesirable behavior when applying them. For TD, convergence can be enforced with a certain output condition assuring that the δ-error drops on average to zero (output control). Correlation-based rules, on the other hand, converge when one input drops to zero (input control). Temporally asymmetric learning rules treat situations where incoming stimuli follow each other in time. Thus, it is necessary to remember the first stimulus to be able to relate it to the later occurring second one. To this end, different types of so-called eligibility traces are used by these two types of rules. This aspect again leads to different properties of TD and differential Hebbian learning, as discussed here.
Thus, this paper, while also presenting several novel mathematical results, is mainly meant to provide a road map through the different neuronally emulated temporally asymmetric learning rules and their behavior, offering some guidance for possible applications.
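
    The "output control" convergence condition for TD — the δ-error dropping to zero on average — can be seen in a few lines of tabular TD(0). This is a machine-learning (non-neuronal) sketch, not one of the time-continuous neuronal implementations the article focuses on; the chain task and all parameter values are assumptions.

```python
def td_value_estimates(rewards, gamma=0.9, alpha=0.1, sweeps=500):
    """Tabular TD(0) on a fixed linear chain of states.

    Update: V[s] += alpha * delta, with delta = r + gamma*V[s+1] - V[s].
    Convergence shows up as the delta-error dropping to zero on average,
    the "output control" condition discussed in the text.
    """
    n = len(rewards)
    V = [0.0] * (n + 1)          # V[n]: terminal state, clamped to 0
    for _ in range(sweeps):
        for s in range(n):       # one sweep along the chain
            delta = rewards[s] + gamma * V[s + 1] - V[s]
            V[s] += alpha * delta
    return V
```

    On a three-state chain with a single terminal reward, the estimates converge to the discounted values 1, γ, γ², at which point δ vanishes everywhere.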

    The Effects of NMDA Subunit Composition on Calcium Influx and Spike Timing-Dependent Plasticity in Striatal Medium Spiny Neurons

    Calcium through NMDA receptors (NMDARs) is necessary for the long-term potentiation (LTP) of synaptic strength; however, NMDARs differ in several properties that can influence the amount of calcium influx into the spine. These properties, such as sensitivity to magnesium block and conductance decay kinetics, change the receptor's response to spike-timing-dependent plasticity (STDP) protocols, and thereby shape synaptic integration and information processing. This study investigates the role of GluN2 subunit differences on spine calcium concentration during several STDP protocols in a model of a striatal medium spiny projection neuron (MSPN). The multi-compartment, multi-channel model exhibits firing frequency, spike width, and latency to first spike similar to current clamp data from mouse dorsal striatum MSPNs. We find that NMDAR-mediated calcium is dependent on GluN2 subunit type, action potential timing, duration of somatic depolarization, and number of action potentials. Furthermore, the model demonstrates that in MSPNs, GluN2A and GluN2B control which STDP intervals allow for substantial calcium elevation in spines. The model predicts that blocking GluN2B subunits would modulate the range of intervals that cause LTP. We confirmed this prediction experimentally, demonstrating that blocking GluN2B in the striatum narrows the range of STDP intervals that cause LTP. This ability of the GluN2 subunit to modulate the shape of the STDP curve could underlie the role that GluN2 subunits play in learning and development.

    Phenomenological models of synaptic plasticity based on spike timing

    Synaptic plasticity is considered to be the biological substrate of learning and memory. In this document we review phenomenological models of short-term and long-term synaptic plasticity, in particular spike-timing-dependent plasticity (STDP). The aim of the document is to provide a framework for classifying and evaluating different models of plasticity. We focus on phenomenological synaptic models that are compatible with integrate-and-fire type neuron models where each neuron is described by a small number of variables. This implies that synaptic update rules for short-term or long-term plasticity can only depend on spike timing and, potentially, on membrane potential, as well as on the value of the synaptic weight, or on low-pass filtered (temporally averaged) versions of the above variables. We examine the ability of the models to account for experimental data and to fulfill expectations derived from theoretical considerations. We further discuss their relations to teacher-based rules (supervised learning) and reward-based rules (reinforcement learning). All models discussed in this paper are suitable for large-scale network simulations.
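
    The simplest member of the model family described here — a pair-based STDP rule driven by two low-pass filtered spike traces — can be sketched as follows. Parameter values and the function name are illustrative assumptions, not fitted to data.

```python
import math

def pair_based_stdp(pre_spikes, post_spikes, w0=0.5, a_plus=0.01,
                    a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """All-to-all pair-based STDP using two exponentially decaying traces.

    A post spike reads the presynaptic trace x (potentiation); a pre
    spike reads the postsynaptic trace y (depression). Spike times in ms.
    """
    events = sorted([(t, 0) for t in pre_spikes] +
                    [(t, 1) for t in post_spikes])
    x = y = 0.0        # presynaptic / postsynaptic traces
    t_last = 0.0
    w = w0
    for t, is_post in events:
        # decay both traces to the current event time
        x *= math.exp(-(t - t_last) / tau_plus)
        y *= math.exp(-(t - t_last) / tau_minus)
        t_last = t
        if is_post:
            w += a_plus * x    # post after pre: potentiation
            y += 1.0
        else:
            w -= a_minus * y   # pre after post: depression
            x += 1.0
    return w
```

    Because the rule depends only on spike times, traces, and the current weight, it satisfies the compatibility constraint with integrate-and-fire neurons stated in the abstract.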

    Modelling human choices: MADeM and decision‑making

    Research supported by FAPESP 2015/50122-0 and DFG-GRTK 1740/2. RP and AR are also part of the Research, Innovation and Dissemination Center for Neuromathematics FAPESP grant (2013/07699-0). RP is supported by a FAPESP scholarship (2013/25667-8). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0).

    Inhibitory control of site-specific synaptic plasticity in a model CA1 pyramidal neuron

    A computational model of a biochemical network underlying synaptic plasticity is combined with simulated on-going electrical activity in a model of a hippocampal pyramidal neuron to study the impact of synapse location and inhibition on synaptic plasticity. The simulated pyramidal neuron is activated by the realistic stimulation protocol of causal and anticausal spike pairings of presynaptic and postsynaptic action potentials in the presence and absence of spatially targeted inhibition provided by basket, bistratified and oriens-lacunosum moleculare (OLM) interneurons. The resulting spike-timing-dependent plasticity (STDP) curves depend strongly on the number of pairing repetitions, the synapse location, and the timing and strength of inhibition.

    Analytical Calculation of Weights in Temporal Sequence Learning

    Most artificial learning systems converge after a certain number of iterations, but the final weight distribution cannot be predicted or calculated from the initial conditions. In several cases general boundary conditions can be devised to guarantee convergence (e.g. Hopfield networks). In this study we use the Isotropic Sequence Order (ISO) learning rule to show that the final weights of an agent which learns in a complex environment can be calculated analytically. The ISO learning rule is a differential Hebbian learning rule for temporal sequence learning. Weight change is defined as the correlation between the band-pass filtered input uᵢ and the derivative v′ of the output v: dρᵢ/dt = μ uᵢ v′, where μ is the learning rate. Ultimately, the temporal difference T between two input signals drives the learning (Fig. 1a). To this end, our experiments use a simulated robot with vision and collision sensors. The robot has a built-in retraction reflex that triggers as soon as it touches an obstacle. Its goal is to avoid such collisions by using the signals from its vision sensors for steering; consequently, the vision sensors' weights are initially zero and develop through ISO learning. Temporal intervals T are determined as the differences between the earlier vision signal and the later collision signal. In an earlier study (Porr and Wörgötter, 2003) we have shown that it is possible to calculate eac
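
    The ISO rule dρᵢ/dt = μ uᵢ v′ can be sketched in discrete time with one fixed reflex input and one learned predictive input separated by an interval T. This is a minimal toy, not the paper's robot experiment or analytical derivation; the input shapes, onset times, and all parameter values are assumptions.

```python
import numpy as np

def iso_learned_weight(T=20, mu=0.001, tau=10.0, steps=400):
    """Minimal discrete-time ISO-learning sketch.

    An early 'vision' input u1 precedes a late 'collision' input u0 by
    T samples. The reflex weight rho0 is fixed; the predictive weight
    rho1 starts at zero and grows via drho1 = mu * u1 * dv.
    """
    t = np.arange(steps)
    u1 = np.where(t >= 100, np.exp(-(t - 100) / tau), 0.0)          # vision
    u0 = np.where(t >= 100 + T, np.exp(-(t - 100 - T) / tau), 0.0)  # collision
    rho0, rho1 = 1.0, 0.0
    v_prev = 0.0
    for k in range(steps):
        v = rho0 * u0[k] + rho1 * u1[k]        # summed output
        rho1 += mu * u1[k] * (v - v_prev)      # differential Hebbian update
        v_prev = v
    return float(rho1)
```

    Because the vision trace is still nonzero when the collision input arrives, the correlation with v′ is net positive and the predictive weight grows from zero, as the abstract describes.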