
    Structural Plasticity Can Produce Metaplasticity

    BACKGROUND: Synaptic plasticity underlies many aspects of learning, memory, and development. The properties of synaptic plasticity can change as a function of previous plasticity and previous activation of synapses, a phenomenon called metaplasticity. Synaptic plasticity not only changes the functional connectivity between neurons but in some cases produces a structural change in synaptic spines, a change thought to form a basis for the observed plasticity. Here we examine to what extent structural plasticity of spines can be a cause of metaplasticity. This study is motivated by the observation that structural changes in spines are likely to affect the calcium dynamics in spines. Since calcium dynamics determine the sign and magnitude of synaptic plasticity, it is likely that structural plasticity will alter the properties of synaptic plasticity. METHODOLOGY/PRINCIPAL FINDINGS: In this study we address the question of how spine geometry and alterations of N-methyl-D-aspartic acid (NMDA) receptor conductance may affect plasticity. Based on a simplified model of the spine, in combination with a calcium-dependent plasticity rule, we demonstrate that after the induction phase of plasticity a shift of the long-term potentiation (LTP) or long-term depression (LTD) threshold takes place. This induces a refractory period for further LTP induction and promotes depotentiation, as observed experimentally. This resembles the BCM metaplasticity rule, but it is specific to the individual synapse. In the second phase, alteration of the NMDA response may bring the synapse to a state in which further synaptic weight alterations are feasible. We show that if the enhancement of the NMDA response is proportional to the area of the postsynaptic density (PSD), the plasticity curves most likely return to the initial state. CONCLUSIONS/SIGNIFICANCE: Using simulations of calcium dynamics in synaptic spines, coupled with a biophysically motivated calcium-dependent plasticity rule, we find under what conditions structural plasticity can form the basis of synapse-specific metaplasticity.
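
    To make the idea of a calcium-dependent plasticity rule with shifting LTP/LTD thresholds concrete, the sketch below implements a minimal rule of this kind in Python. The thresholds, slopes, and the size of the threshold shift are illustrative placeholders, not the parameters of the model described above.

```python
import numpy as np

def weight_change(ca, theta_d=0.35, theta_p=0.55):
    # Illustrative calcium-dependent rule (placeholder thresholds, not the
    # paper's fitted parameters): no change below theta_d, LTD for calcium
    # between theta_d and theta_p, LTP above theta_p.
    return np.where(ca <= theta_d, 0.0,
                    np.where(ca <= theta_p,
                             -0.5 * (ca - theta_d),   # depression band
                             1.0 * (ca - theta_p)))   # potentiation above theta_p

ca = np.linspace(0.0, 1.0, 101)
before = weight_change(ca)
# A structural change that dilutes the calcium transient acts like an upward
# shift of the effective thresholds, producing a refractory period for LTP
# and favoring depotentiation:
after = weight_change(ca, theta_d=0.45, theta_p=0.70)
```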

    Effect of Correlated Lateral Geniculate Nucleus Firing Rates on Predictions for Monocular Eye Closure Versus Monocular Retinal Inactivation

    Monocular deprivation experiments can be used to distinguish between different ideas concerning properties of cortical synaptic plasticity. Monocular deprivation by lid suture causes a rapid disconnection of the deprived eye from cortical neurons, whereas total inactivation of the deprived eye produces much less of an ocular dominance shift. In order to understand these results, one needs to know how lid suture and retinal inactivation affect neurons in the lateral geniculate nucleus (LGN) that provide the cortical input. Recent experimental results by Linden et al. showed that monocular lid suture and monocular inactivation do not change the mean firing rates of LGN neurons, but that lid suture reduces correlations between adjacent neurons whereas monocular inactivation leads to correlated firing. These somewhat surprising results contradict assumptions that have been made to explain the outcomes of different monocular deprivation protocols. Based on these experimental results, we modify our assumptions about inputs to cortex during different deprivation protocols and show their implications when combined with different cortical plasticity rules. Using theoretical analysis, random matrix theory, and simulations, we show that high levels of correlations reduce the ocular dominance shift in learning rules that depend on homosynaptic depression (i.e., Bienenstock-Cooper-Munro type rules), consistent with experimental results, but have the opposite effect in rules that depend on heterosynaptic depression (i.e., Hebbian/principal component analysis type rules).
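
    The distinction between homosynaptic and heterosynaptic depression that drives these different predictions can be illustrated with the two update rules sketched below. These are generic textbook forms (a BCM-type rule with a sliding threshold and an Oja/PCA-type Hebbian rule); the learning rates and the threshold time constant are arbitrary and are not taken from the paper.

```python
import numpy as np

def bcm_update(w, x, theta, eta=1e-3, tau_theta=100.0):
    # BCM-type rule: depression is homosynaptic, i.e. it scales with the
    # presynaptic activity x of each synapse.
    y = w @ x
    w = w + eta * x * y * (y - theta)
    theta = theta + (y**2 - theta) / tau_theta   # sliding modification threshold
    return w, theta

def oja_update(w, x, eta=1e-3):
    # Oja/PCA-type Hebbian rule: the -y**2 * w term depresses all synapses,
    # active or not, i.e. depression is heterosynaptic.
    y = w @ x
    return w + eta * (y * x - y**2 * w)
```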

    Regulation of cytoplasmic polyadenylation can generate a bistable switch

    Background: Translation efficiency of certain mRNAs can be regulated through a cytoplasmic polyadenylation process at the pre-initiation phase. A translational regulator controls the polyadenylation process, and this regulation depends on its posttranslational modifications, e.g., phosphorylation. The cytoplasmic polyadenylation element binding protein (CPEB1) is one such translational regulator, which regulates the translation of some mRNAs by binding to the cytoplasmic polyadenylation element (CPE). The cytoplasmic polyadenylation process can be turned on or off by the phosphorylation or dephosphorylation state of CPEB1. A specific example is the regulation of calcium/calmodulin-dependent protein kinase II (αCaMKII) translation through the phosphorylation/dephosphorylation cycle of CPEB1. Result: Here, we show that CPEB1-mediated polyadenylation of αCaMKII mRNA can result in a bistable switching mechanism. The switch for regulating the polyadenylation is based on a two-state model of αCaMKII and its interaction with CPEB1. Based on elementary biochemical kinetics, a high-dimensional system of non-linear ordinary differential equations can describe the dynamic characteristics of the polyadenylation loop. Here, we simplified this high-dimensional system into an approximate lower-dimensional system that provides an understanding of the dynamics and fixed points of the original system. These simplified equations can be used to develop analytical bifurcation diagrams without the use of complex numerical tracking algorithms, and can further give us intuition about the parameter dependence of bistability in this system. Conclusion: This study provides a systematic method to simplify, approximate, and analyze a translation/activation based positive feedback loop. This work shows how to extract low-dimensional systems that can be used to obtain analytical solutions for the fixed points of the system and to describe the dynamics of the system. The methods used here have general applicability to the formulation and analysis of many molecular networks.
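
    As a rough illustration of how a translation/phosphorylation positive feedback loop can produce bistability, the toy ODE model below has a low and a high stable fixed point reached from different initial conditions. The variables, Hill exponent, and rate constants are placeholders chosen only to exhibit bistability; they are not the equations or parameters of the model described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k_phos=1.0, k_deph=0.3, k_syn=1.0, k_deg=0.5, K=1.0, n=4):
    # p: fraction of phosphorylated CPEB1, c: aCaMKII level (arbitrary units).
    p, c = y
    dp = k_phos * c**n / (K**n + c**n) * (1 - p) - k_deph * p  # kinase-driven phosphorylation
    dc = k_syn * p - k_deg * c                                  # phospho-CPEB1 boosts translation
    return [dp, dc]

# Two initial conditions relax to different stable fixed points -> bistability.
low_state = solve_ivp(rhs, (0, 200), [0.05, 0.1]).y[:, -1]
high_state = solve_ivp(rhs, (0, 200), [0.9, 2.0]).y[:, -1]
print(low_state, high_state)
```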

    Evaluating statistical methods used to estimate the number of postsynaptic receptors.

    Calcium levels in spines play a significant role in determining the sign and magnitude of synaptic plasticity. The magnitude of calcium influx into spines is highly dependent on influx through N-methyl D-aspartate (NMDA) receptors, and therefore depends on the number of postsynaptic NMDA receptors in each spine. We have calculated previously how the number of postsynaptic NMDA receptors determines the mean and variance of calcium transients in the postsynaptic density, and how this alters the shape of plasticity curves. However, the number of NMDA receptors in the postsynaptic density is not well known. Anatomical methods for estimating the number of NMDA receptors produce estimates that are very different from those produced by physiological techniques. The physiological techniques are based on the statistics of synaptic transmission, and it is difficult to estimate their precision experimentally. In this paper we use stochastic simulations to test the validity of a physiological estimation technique based on failure analysis. We find that the method is likely to underestimate the number of postsynaptic NMDA receptors, explain the source of the error, and re-derive a more precise estimation technique. We also show that the original failure analysis, as well as our improved formulas, are not robust to small estimation errors in key parameters.
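
    A minimal version of the kind of failure-analysis estimator being evaluated is sketched below: if each of N receptors opens independently with probability p_open, the failure probability is (1 - p_open)^N, which can be inverted to estimate N from the observed failure rate. The numbers are arbitrary, and this toy generative model matches the estimator's own assumptions; the point of the paper is that more realistic sources of variability break this agreement.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy failure analysis (illustrative, not the paper's derivation): each of
# N_true receptors opens independently with probability p_open per trial;
# a "failure" is a trial on which no receptor opens.
N_true, p_open, n_trials = 20, 0.08, 5000
openings = rng.binomial(N_true, p_open, size=n_trials)
failure_rate = np.mean(openings == 0)

# Invert P(failure) = (1 - p_open)**N to estimate the receptor number.
N_est = np.log(failure_rate) / np.log(1.0 - p_open)
print(f"true N = {N_true}, estimated N = {N_est:.1f}")
```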

    Selectivity and Metaplasticity in a Unified Calcium-Dependent Model

    A unified, biophysically motivated Calcium-Dependent Learning model has been shown to account for various rate-based and spike-time-dependent paradigms for inducing synaptic plasticity. Here, we investigate the properties of this model for a multi-synapse neuron that receives inputs with different spike-train statistics. In addition, we present a physiological form of metaplasticity, an activity-driven regulation mechanism that is essential for the robustness of the model. A neuron thus implemented develops stable and selective receptive fields, given various input statistics.
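
    One simple way to write down an activity-driven regulation mechanism of this kind is sketched below: a homeostatic scaling of the NMDA conductance, and hence of calcium influx, toward a target activity level. The functional form, target, and time constant are placeholders for illustration only, not the mechanism used in the model above.

```python
def regulate_nmda(g_nmda, avg_activity, target=0.1, tau=1000.0):
    # Illustrative metaplasticity step: scale the NMDA conductance up when
    # recent average activity is below target and down when it is above,
    # so that calcium-dependent plasticity stays in a useful operating range.
    return g_nmda * (1.0 + (target - avg_activity) / tau)
```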

    Spike Timing Dependent Plasticity: A Consequence of More Fundamental Learning Rules

    Spike timing dependent plasticity (STDP) is a phenomenon in which the precise timing of spikes affects the sign and magnitude of changes in synaptic strength. STDP is often interpreted as the comprehensive learning rule for a synapse, the "first law" of synaptic plasticity. This interpretation is made explicit in theoretical models in which the total plasticity produced by complex spike patterns results from a superposition of the effects of all spike pairs. Although such models are appealing for their simplicity, they can fail dramatically. For example, the measured single-spike learning rule between hippocampal CA3 and CA1 pyramidal neurons does not predict the existence of long-term potentiation, one of the best-known forms of synaptic plasticity. Layers of complexity have been added to the basic STDP model to repair predictive failures, but they have been outstripped by experimental data. We propose an alternate first law: neural activity triggers changes in key biochemical intermediates, which act as a more direct trigger of plasticity mechanisms. One particularly successful model uses intracellular calcium as the intermediate and can account for many observed properties of bidirectional plasticity. In this formulation, STDP is not itself the basis for explaining other forms of plasticity, but is instead a consequence of changes in the biochemical intermediate, calcium. Eventually, a mechanism-based framework for learning rules should include other messengers, discrete change at individual synapses, spread of plasticity among neighboring synapses, and priming of hidden processes that change a synapse's susceptibility to future change. Mechanism-based models provide a rich framework for the computational representation of synaptic plasticity.
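
    The superposition models criticized here compute the total weight change as an additive sum over all pre/post spike pairs, as in the sketch below. The amplitudes and time constants are the usual illustrative values for pair-based STDP, not taken from a specific study.

```python
import numpy as np

def pair_based_stdp(pre_times, post_times, a_plus=0.01, a_minus=0.012,
                    tau_plus=20.0, tau_minus=20.0):
    # Additive pair-based STDP: the total change is the superposition of the
    # contributions of every pre/post spike pair (the "first law" view).
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:
                dw += a_plus * np.exp(-dt / tau_plus)    # pre before post -> LTP
            elif dt < 0:
                dw -= a_minus * np.exp(dt / tau_minus)   # post before pre -> LTD
    return dw
```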

    What does scalar timing tell us about neural dynamics?

    The "Scalar Timing Law," which is a temporal-domain generalization of the well-known Weber Law, states that the errors in estimating temporal intervals scale linearly with the durations of the intervals. Linear scaling has been studied extensively in human and animal models and holds over several orders of magnitude, though to date there is no agreed-upon explanation for its physiological basis. Starting from the assumption that behavioral variability stems from neural variability, this work shows how to derive firing rate functions that are consistent with scalar timing. We show that firing rate functions with a log-power form, and a set of parameters that depend on spike count statistics, can account for scalar timing. Our derivation depends on a linear approximation, but we use simulations to validate the theory and show that log-power firing rate functions result in scalar timing over a large range of times and parameters. Simulation results match the predictions of our model, though our initial formulation results in a slight bias toward overestimation that can be corrected using a simple iterative approach to learn a decision threshold. (Funding: R01MH093665, K99MH09965.)
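
    The kind of simulation check described can be sketched as follows: decode elapsed time by inverting a log-power firing rate function applied to a noisy (Poisson) spike-count readout, and ask whether the relative error of the decoded time stays roughly constant across intervals. The rate function, its coefficients, and the readout window below are illustrative placeholders, not the parameters derived in the paper; with these placeholder values the ratio comes out roughly constant across the tested intervals.

```python
import numpy as np

rng = np.random.default_rng(2)

def decoded_time_stats(f, T, window=0.5, n_trials=5000):
    # Toy check of scalar timing (not the paper's derivation): a neuron fires
    # Poisson spikes at rate f(T) during a fixed readout window; elapsed time
    # is decoded by numerically inverting f on the observed rate.
    counts = rng.poisson(f(T) * window, size=n_trials)
    rates = counts / window
    grid = np.linspace(0.01, 4 * T, 4000)
    decoded = grid[np.searchsorted(f(grid), rates).clip(0, len(grid) - 1)]
    return decoded.mean(), decoded.std()

# A log-power rate function of the kind analyzed in the paper; the
# coefficients and exponent here are arbitrary placeholders.
f = lambda t: 20.0 * np.log(1.0 + t) ** 2

for T in (2.0, 5.0, 10.0, 20.0):
    m, s = decoded_time_stats(f, T)
    print(f"T = {T:5.1f}  decoded std/mean = {s / m:.3f}")
```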