
    Intrinsic Stability of Temporally Shifted Spike-Timing Dependent Plasticity

    Spike-timing dependent plasticity (STDP), a widespread synaptic modification mechanism, is sensitive to correlations between presynaptic spike trains and generates competition among synapses. However, STDP has an inherent instability because strong synapses are more likely to be strengthened than weak ones, causing them to grow in strength until some biophysical limit is reached. Through simulations and analytic calculations, we show that a small temporal shift in the STDP window that causes synchronous, or nearly synchronous, pre- and postsynaptic action potentials to induce long-term depression can stabilize synaptic strengths. Shifted STDP also stabilizes the postsynaptic firing rate and can implement both Hebbian and anti-Hebbian forms of competitive synaptic plasticity. Interestingly, the overall level of inhibition determines whether plasticity is Hebbian or anti-Hebbian. Even a random symmetric jitter of a few milliseconds in the STDP window can stabilize synaptic strengths while retaining these features. The same results hold for a shifted version of the more recent “triplet” model of STDP. Our results indicate that the detailed shape of the STDP window function near the transition from depression to potentiation is of the utmost importance in determining the consequences of STDP, suggesting that this region warrants further experimental study.
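
    A minimal sketch of the idea (illustrative window shape and parameter values, not the values fitted in the paper): shifting a standard pair-based STDP window by a few milliseconds places synchronous or nearly synchronous pre/post pairs on the depression side.

```python
# Minimal sketch of a temporally shifted pair-based STDP window.
# Parameter values (A_plus, A_minus, tau, shift) are illustrative assumptions,
# not the values used in the paper.
import numpy as np

A_plus, A_minus = 0.005, 0.00525   # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # window time constants (ms)
shift = 2.0                        # temporal shift of the window (ms)

def stdp_weight_change(dt):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).

    With shift > 0, synchronous or nearly synchronous pairs (dt ~ 0)
    fall on the depression side of the window.
    """
    dt_shifted = dt - shift
    if dt_shifted > 0:
        return A_plus * np.exp(-dt_shifted / tau_plus)
    return -A_minus * np.exp(dt_shifted / tau_minus)

print(stdp_weight_change(0.0))   # negative: synchronous pairing now depresses
print(stdp_weight_change(10.0))  # positive: clearly causal pairing still potentiates
```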

    Inhibitory synaptic plasticity: spike timing-dependence and putative network function

    While the plasticity of excitatory synaptic connections in the brain has been widely studied, the plasticity of inhibitory connections is much less understood. Here, we present recent experimental and theoretical findings concerning the rules of spike timing-dependent inhibitory plasticity and their putative network function. This is a summary of a workshop at the 2012 COSYNE conference.

    On How Network Architecture Determines the Dominant Patterns of Spontaneous Neural Activity

    In the absence of sensory stimulation, neocortical circuits display complex patterns of neural activity. These patterns are thought to reflect relevant properties of the network, including anatomical features like its modularity. It is also assumed that the synaptic connections of the network constrain the repertoire of emergent, spontaneous patterns. Although the link between network architecture and network activity has been extensively investigated in the last few years from different perspectives, our understanding of the relationship between the network connectivity and the structure of its spontaneous activity is still incomplete. Using a general mathematical model of neural dynamics we have studied the link between spontaneous activity and the underlying network architecture. In particular, here we show mathematically how the synaptic connections between neurons determine the repertoire of spatial patterns displayed in the spontaneous activity. To test our theoretical result, we have also used the model to simulate spontaneous activity of a neural network, whose architecture is inspired by the patchy organization of horizontal connections between cortical columns in the neocortex of primates and other mammals. The dominant spatial patterns of the spontaneous activity, calculated as its principal components, coincide remarkably well with those patterns predicted from the network connectivity using our theory. The equivalence between the concept of dominant pattern and the concept of attractor of the network dynamics is also demonstrated. This in turn suggests new ways of investigating encoding and storage capabilities of neural networks.
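
    As an illustration of this kind of comparison (a generic linear rate model with an assumed modular, "patchy" connectivity, not the paper's exact equations), the principal components of simulated spontaneous activity can be checked against the patterns predicted from the connectivity matrix, here its leading eigenvectors.

```python
# Hypothetical linear rate-model sketch: compare the principal components of
# simulated spontaneous activity with patterns predicted from connectivity.
# The model structure and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, M = 90, 3                                   # 90 units grouped into 3 "patches"
module = np.repeat(np.arange(M), N // M)
strengths = np.array([0.6, 0.45, 0.3])         # within-patch coupling per module
W = 0.05 * rng.standard_normal((N, N)) / np.sqrt(N)               # weak random background
W += (module[:, None] == module[None, :]) * strengths[module] / (N // M)
W = 0.5 * (W + W.T)                            # symmetrize so eigenvectors are real

# Simulate linear rate dynamics dr/dt = -r + W r + noise (Euler-Maruyama)
dt, steps = 0.1, 50000
r = np.zeros(N)
samples = np.empty((steps, N))
for t in range(steps):
    r += dt * (-r + W @ r) + np.sqrt(dt) * 0.1 * rng.standard_normal(N)
    samples[t] = r

# Dominant activity pattern (first principal component of the activity) ...
_, pcs = np.linalg.eigh(np.cov(samples.T))
pc1 = pcs[:, -1]
# ... versus the pattern predicted from connectivity (leading eigenvector of W)
_, evs = np.linalg.eigh(W)
predicted = evs[:, -1]

print("overlap |pc1 . predicted| =", abs(pc1 @ predicted))   # expected close to 1
```

    For this symmetric, stable toy network the stationary covariance shares its eigenvectors with W, so the overlap between the first principal component and the leading connectivity eigenvector should approach one as the simulation gets longer.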

    Intrinsically determined cell death of developing cortical interneurons

    Cortical inhibitory circuits are formed by GABAergic interneurons, a cell population that originates far from the cerebral cortex in the embryonic ventral forebrain. Given their distant developmental origins, it is intriguing how the number of cortical interneurons is ultimately determined. One possibility, suggested by the neurotrophic hypothesis [1-5], is that cortical interneurons are overproduced and, following their migration into the cortex, excess interneurons are eliminated through a competition for extrinsically derived trophic signals. Here we have characterized the developmental cell death of mouse cortical interneurons in vivo, in vitro, and following transplantation. We found that 40% of developing cortical interneurons were eliminated through Bax (Bcl-2-associated X)-dependent apoptosis during postnatal life. When cultured in vitro or transplanted into the cortex, interneuron precursors died at a cellular age similar to that at which endogenous interneurons died during normal development. Remarkably, over transplant sizes that varied 200-fold, a constant fraction of the transplanted population underwent cell death. The death of transplanted neurons was not affected by the cell-autonomous disruption of TrkB (tropomyosin receptor kinase B), the main neurotrophin receptor expressed by central nervous system (CNS) neurons [6-8]. Transplantation expanded the cortical interneuron population by up to 35%, but the frequency of inhibitory synaptic events did not scale with the number of transplanted interneurons. Together, our findings indicate that interneuron cell death is intrinsically determined, either cell-autonomously or through a population-autonomous competition for survival signals derived from other interneurons.

    Long-term modification of cortical synapses improves sensory perception

    Synapses and receptive fields of the cerebral cortex are plastic. However, changes to specific inputs must be coordinated within neural networks to ensure that excitability and feature selectivity are appropriately configured for perception of the sensory environment. Long-lasting enhancements and decrements to rat primary auditory cortical excitatory synaptic strength were induced by pairing acoustic stimuli with activation of the nucleus basalis neuromodulatory system. Here we report that these synaptic modifications were approximately balanced across individual receptive fields, conserving mean excitation while reducing overall response variability. Decreased response variability should increase detection and recognition of near-threshold or previously imperceptible stimuli, as we found in behaving animals. Thus, modification of cortical inputs leads to wide-scale synaptic changes, which are related to improved sensory perception and enhanced behavioral performance.

    Oxytocin Enhances Social Recognition by Modulating Cortical Control of Early Olfactory Processing

    Oxytocin promotes social interactions and recognition of conspecifics, which rely on olfaction in most species. The circuit mechanisms through which oxytocin modifies olfactory processing are incompletely understood. Here, we observed that optogenetically induced oxytocin release enhanced olfactory exploration and same-sex recognition in adult rats. Consistent with oxytocin’s function in the anterior olfactory cortex, particularly in social cue processing, region-selective receptor deletion impaired social recognition but left odor discrimination and recognition intact outside a social context. Oxytocin transiently increased the drive from the anterior olfactory cortex onto olfactory bulb interneurons. Cortical top-down recruitment of interneurons dynamically enhanced the inhibitory input to olfactory bulb projection neurons and increased the signal-to-noise ratio of their output. In summary, oxytocin generates states for optimized information extraction in an early cortical top-down network that is required for social interactions, with potential implications for sensory processing deficits in autism spectrum disorders.

    Adaptive and Phase Selective Spike Timing Dependent Plasticity in Synaptically Coupled Neuronal Oscillators

    We consider and analyze the influence of spike-timing dependent plasticity (STDP) on homeostatic states in synaptically coupled neuronal oscillators. In contrast to conventional models of STDP, in which spike timing affects the weights of synaptic connections, we consider a model of STDP in which the time lags between pre- and/or postsynaptic spikes change the internal state of the pre- and/or postsynaptic neurons, respectively. The analysis reveals that STDP processes of this type, modeled by a single ordinary differential equation, may ensure efficient, yet coarse, phase-locking of spikes in the system to a given reference phase. The precision of the phase locking, i.e. the amplitude of relative phase deviations from the reference, depends on the natural frequencies of the oscillators and, additionally, on the parameters of the STDP law. These deviations can be optimized by appropriate tuning of the gains (i.e. the sensitivity to spike-timing mismatches) of the STDP mechanism. However, as we demonstrate, such deviations cannot be made arbitrarily small either by mere tuning of STDP gains or by adjusting synaptic weights. Thus, if accurate phase-locking in the system is required, an additional tuning mechanism is generally needed. We found that adding very simple adaptation dynamics, in the form of slow fluctuations of the baseline in the STDP mechanism, enables accurate phase tuning in the system with arbitrarily high precision. Adaptation operating at a slow time scale may be associated with extracellular structures such as the extracellular matrix and glia. Thus the findings may suggest a possible role of the latter in regulating synaptic transmission in neuronal circuits.
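
    A minimal sketch of the idea (a hypothetical phase-oscillator reduction with assumed parameters, not the model analyzed in the paper): a fast term driven by the timing mismatch gives coarse phase locking, while a slowly adapting baseline removes the residual error.

```python
# Hypothetical phase-oscillator sketch of adaptive, phase-selective tuning.
# A fast gain term driven by the phase mismatch gives coarse locking; a slowly
# adapting baseline (integral-like term) removes the residual offset.
# All parameters are illustrative assumptions, not the paper's values.
import numpy as np

dt = 0.001
omega_pre, omega_post = 2 * np.pi * 5.0, 2 * np.pi * 5.3   # natural frequencies (rad/s)
phi_ref = 0.5            # desired phase lag of post relative to pre (rad)
gain = 5.0               # fast sensitivity to spike-timing (phase) mismatch
eta = 0.5                # slow adaptation rate of the baseline
phi_pre = phi_post = 0.0
baseline = 0.0

for step in range(200000):
    mismatch = np.sin((phi_pre - phi_post) - phi_ref)   # phase-timing error signal
    baseline += dt * eta * mismatch                      # slow baseline adaptation
    phi_pre += dt * omega_pre
    phi_post += dt * (omega_post + gain * mismatch + baseline)

residual = ((phi_pre - phi_post) - phi_ref + np.pi) % (2 * np.pi) - np.pi
print("residual phase error (rad):", residual)   # near zero only with the slow adaptation
```

    With the baseline adaptation switched off (eta = 0), the mismatch settles at a nonzero value that compensates the frequency difference, mirroring the paper's point that gain tuning alone cannot make the deviation arbitrarily small.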

    Formation of feedforward networks and frequency synchrony by spike-timing-dependent plasticity

    Spike-timing-dependent plasticity (STDP) with asymmetric learning windows is commonly found in the brain and is useful for a variety of spike-based computations such as input filtering and associative memory. A natural consequence of STDP is the establishment of causality, in the sense that a neuron learns to fire with a lag after specific presynaptic neurons have fired. The effect of STDP on synchrony is elusive because spike synchrony implies unitary spike events of different neurons rather than a causal delayed relationship between neurons. We explore how synchrony can be facilitated by STDP in oscillator networks with a pacemaker. We show that STDP with asymmetric learning windows leads to the self-organization of feedforward networks starting from the pacemaker. As a result, STDP drastically facilitates frequency synchrony. Even though differences in spike times are lessened as a result of synaptic plasticity, a finite time lag remains, so that perfect spike synchrony is not realized. In contrast to traditional mechanisms of large-scale synchrony based on the mutual interaction of coupled neurons, the route to synchrony discovered here is the enslavement of downstream neurons by upstream ones. Facilitation of such feedforward synchrony does not occur for STDP with symmetric learning windows.
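
    As a hedged illustration (generic pair-based STDP with assumed parameters, not the paper's network model): for a reciprocally connected pair in which one neuron consistently fires a few milliseconds before the other, an asymmetric window potentiates the forward connection and depresses the reverse one, pruning the loop into a feedforward link.

```python
# Sketch of an asymmetric pair-based STDP window and its effect on a
# reciprocally connected pair with a fixed firing lag.
# Parameters are illustrative assumptions, not the paper's values.
import numpy as np

A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0   # ms

def dw(dt):
    """Asymmetric STDP window; dt = t_post - t_pre in ms."""
    return A_plus * np.exp(-dt / tau_plus) if dt > 0 else -A_minus * np.exp(dt / tau_minus)

# Neuron B repeatedly fires 3 ms after neuron A (e.g. A is closer to the pacemaker).
lag = 3.0
w_ab, w_ba = 0.5, 0.5                            # reciprocal weights
for _ in range(200):                             # repeated pairings
    w_ab = np.clip(w_ab + dw(+lag), 0.0, 1.0)    # A -> B: pre leads post, potentiated
    w_ba = np.clip(w_ba + dw(-lag), 0.0, 1.0)    # B -> A: pre lags post, depressed

print(w_ab, w_ba)   # forward link saturates, reverse link is pruned -> feedforward
```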

    Phenomenological models of synaptic plasticity based on spike timing

    Synaptic plasticity is considered to be the biological substrate of learning and memory. In this document we review phenomenological models of short-term and long-term synaptic plasticity, in particular spike-timing dependent plasticity (STDP). The aim of the document is to provide a framework for classifying and evaluating different models of plasticity. We focus on phenomenological synaptic models that are compatible with integrate-and-fire type neuron models where each neuron is described by a small number of variables. This implies that synaptic update rules for short-term or long-term plasticity can only depend on spike timing and, potentially, on membrane potential, as well as on the value of the synaptic weight, or on low-pass filtered (temporally averaged) versions of the above variables. We examine the ability of the models to account for experimental data and to fulfill expectations derived from theoretical considerations. We further discuss their relations to teacher-based rules (supervised learning) and reward-based rules (reinforcement learning). All models discussed in this paper are suitable for large-scale network simulations.
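
    As an example of the kind of rule reviewed here (a generic, textbook-style trace-based pair STDP sketch with assumed parameters, not a specific model from the review): the weight update depends only on spike timing through low-pass filtered spike traces, which is what makes such rules compatible with integrate-and-fire network simulations.

```python
# Generic trace-based (online) pair STDP sketch: each synapse keeps low-pass
# filtered traces of pre- and postsynaptic spiking and updates its weight at
# spike times. Parameters are illustrative assumptions, not from the review.
import numpy as np

dt = 1.0                           # simulation time step (ms)
tau_plus, tau_minus = 20.0, 20.0   # trace time constants (ms)
A_plus, A_minus = 0.01, 0.012      # learning rates for potentiation / depression

def stdp_step(pre_spike, post_spike, w, x_pre, x_post):
    """One time step of the online pair-based rule using spike traces."""
    # exponentially decaying (low-pass filtered) pre- and postsynaptic spike trains
    x_pre += -dt / tau_plus * x_pre + (1.0 if pre_spike else 0.0)
    x_post += -dt / tau_minus * x_post + (1.0 if post_spike else 0.0)
    if post_spike:
        w += A_plus * x_pre    # pre-before-post pairing -> potentiation
    if pre_spike:
        w -= A_minus * x_post  # post-before-pre pairing -> depression
    return np.clip(w, 0.0, 1.0), x_pre, x_post

# A presynaptic spike at t = 0 ms followed by a postsynaptic spike at t = 10 ms
# yields net potentiation.
w, x_pre, x_post = 0.5, 0.0, 0.0
for t in range(20):
    w, x_pre, x_post = stdp_step(t == 0, t == 10, w, x_pre, x_post)
print("weight after one pre->post pairing:", w)
```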