
    Eligibility Traces and Plasticity on Behavioral Time Scales: Experimental Support of neoHebbian Three-Factor Learning Rules

    Most elementary behaviors, such as moving the arm to grasp an object or walking into the next room to explore a museum, evolve on the time scale of seconds; in contrast, neuronal action potentials occur on the time scale of a few milliseconds. Learning rules of the brain must therefore bridge the gap between these two different time scales. Modern theories of synaptic plasticity have postulated that the co-activation of pre- and postsynaptic neurons sets a flag at the synapse, called an eligibility trace, that leads to a weight change only if an additional factor is present while the flag is set. This third factor, signaling reward, punishment, surprise, or novelty, could be implemented by the phasic activity of neuromodulators or specific neuronal inputs signaling special events. While the theoretical framework has been developed over the last decades, experimental evidence in support of eligibility traces on the time scale of seconds has been collected only during the last few years. Here we review, in the context of three-factor rules of synaptic plasticity, four key experiments that support the role of synaptic eligibility traces in combination with a third factor as a biological implementation of neoHebbian three-factor learning rules.
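
    The gist of such a rule can be illustrated with a minimal sketch (not taken from the paper): a Hebbian pre/post coincidence sets an eligibility trace that decays over seconds, and the weight changes only when a third factor, here a single reward pulse, arrives while the trace is still elevated. The time constants, rates, and spike statistics below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a neoHebbian three-factor rule; all constants are
# illustrative assumptions. A pre/post coincidence sets an eligibility trace
# that decays over seconds; the weight changes only when the third factor
# (here a single reward pulse) arrives while the trace is still elevated.

dt = 0.001                 # simulation step: 1 ms
tau_e = 2.0                # eligibility-trace time constant: 2 s (assumed)
eta = 0.01                 # learning rate (assumed)

rng = np.random.default_rng(0)
T = int(5.0 / dt)                          # 5 s of simulated time
pre = rng.random(T) < 0.02                 # presynaptic spikes (~20 Hz)
post = rng.random(T) < 0.02                # postsynaptic spikes (~20 Hz)
reward = np.zeros(T)
reward[int(3.0 / dt)] = 1.0                # third factor delivered at t = 3 s

w, e = 0.5, 0.0
for t in range(T):
    e -= dt / tau_e * e                    # trace decays on a seconds scale
    if pre[t] and post[t]:                 # Hebbian coincidence sets the flag
        e += 1.0
    w += eta * reward[t] * e               # update gated by the third factor

print(f"final weight: {w:.4f}")
```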

    Slowness: An Objective for Spike-Timing-Dependent Plasticity?

    Slow Feature Analysis (SFA) is an efficient algorithm for learning input-output functions that extract the most slowly varying features from a quickly varying signal. It has been successfully applied to the unsupervised learning of translation-, rotation-, and other invariances in a model of the visual system, to the learning of complex cell receptive fields, and, combined with a sparseness objective, to the self-organized formation of place cells in a model of the hippocampus. In order to arrive at a biologically more plausible implementation of this learning rule, we consider analytically how SFA could be realized in simple linear continuous and spiking model neurons. It turns out that for the continuous model neuron, SFA can be implemented by means of a modified version of standard Hebbian learning. In this framework we provide a connection to the trace learning rule for invariance learning. We then show that for Poisson neurons spike-timing-dependent plasticity (STDP) with a specific learning window can learn the same weight distribution as SFA. Surprisingly, we find that the appropriate learning rule reproduces the typical STDP learning window. The shape as well as the timescale are in good agreement with what has been measured experimentally. This offers a completely novel interpretation for the functional role of spike-timing-dependent plasticity in physiological neurons.
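
    As a rough illustration of the slowness objective in its simplest linear form (not the paper's spiking implementation), the sketch below whitens a mixed signal and extracts the direction whose temporal derivative has the smallest variance. The mixture and the slow/fast sources are made up for the example.

```python
import numpy as np

# Minimal linear SFA sketch: whiten the input, then take the direction in
# which the temporal derivative has the smallest variance -- the slowest
# feature. Signals and mixing coefficients are illustrative assumptions.

rng = np.random.default_rng(1)
t = np.arange(0.0, 20.0, 0.01)
slow = np.sin(0.5 * t)                          # slowly varying latent source
fast = np.sin(11.0 * t) + 0.1 * rng.standard_normal(t.size)
X = np.column_stack([slow + 0.5 * fast, fast - 0.3 * slow])   # mixed signals

X = X - X.mean(axis=0)                          # center
cov = X.T @ X / len(X)
evals, evecs = np.linalg.eigh(cov)
W_white = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T     # whitening
Z = X @ W_white

dZ = np.diff(Z, axis=0)                         # temporal derivative
dcov = dZ.T @ dZ / len(dZ)
dvals, dvecs = np.linalg.eigh(dcov)
w_slow = dvecs[:, 0]                            # smallest derivative variance
y = Z @ w_slow                                  # extracted slow feature

print("correlation with slow source:", np.corrcoef(y, slow)[0, 1])
```

    The extracted feature matches the slow source up to sign and scale; the slowness objective only constrains the direction of smallest derivative variance in the whitened space.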

    Deep Neural Networks - A Brief History

    Introduction to deep neural networks and their history. Comment: 14 pages, 14 figures.

    Reinforcement learning in populations of spiking neurons

    Population coding is widely regarded as a key mechanism for achieving reliable behavioral responses in the face of neuronal variability. But in standard reinforcement learning a flip side becomes apparent: learning slows down with increasing population size, since the global reinforcement becomes less and less related to the performance of any single neuron. We show that, in contrast, learning speeds up with increasing population size if feedback about the population response modulates synaptic plasticity in addition to global reinforcement. The two feedback signals (reinforcement and population-response signal) can be encoded by ambient neurotransmitter concentrations which vary slowly, yielding a fully online plasticity rule where the learning of a stimulus is interleaved with the processing of the subsequent one. The assumption of a single additional feedback mechanism therefore reconciles biological plausibility with efficient learning.
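
    The general idea of combining the two signals can be sketched in a toy setting; the specific update below, reward times the mismatch between the population decision and each neuron's own response, is an assumed illustrative form, not the rule derived in the paper.

```python
import numpy as np

# Toy population reinforcement learning with two feedback signals: a global
# reward and a population-response signal (here, the population's decision).
# Network sizes, learning rate, and the task are illustrative assumptions.

rng = np.random.default_rng(2)
n_neurons, n_inputs, n_trials = 50, 20, 2000
W = 0.01 * rng.standard_normal((n_neurons, n_inputs))
eta = 0.05                                   # learning rate (assumed)
target_w = rng.standard_normal(n_inputs)     # defines which action is "correct"

for _ in range(n_trials):
    x = rng.standard_normal(n_inputs)
    p = 1.0 / (1.0 + np.exp(-(W @ x)))             # firing probabilities
    s = (rng.random(n_neurons) < p).astype(float)  # sampled binary responses
    d = 1.0 if s.mean() > 0.5 else 0.0             # population decision
    correct = 1.0 if target_w @ x > 0 else 0.0
    R = 1.0 if d == correct else -1.0              # global reinforcement
    # Population-response feedback: each neuron compares its own response to
    # the population decision; the reward sets the sign of the change.
    W += eta * R * np.outer(d - s, x)

X_test = rng.standard_normal((500, n_inputs))
votes = (1.0 / (1.0 + np.exp(-(X_test @ W.T))) > 0.5).mean(axis=1) > 0.5
truth = (X_test @ target_w) > 0
print("test accuracy:", (votes == truth).mean())
```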

    Spike Timing Dependent Plasticity: A Consequence of More Fundamental Learning Rules

    Spike timing dependent plasticity (STDP) is a phenomenon in which the precise timing of spikes affects the sign and magnitude of changes in synaptic strength. STDP is often interpreted as the comprehensive learning rule for a synapse – the “first law” of synaptic plasticity. This interpretation is made explicit in theoretical models in which the total plasticity produced by complex spike patterns results from a superposition of the effects of all spike pairs. Although such models are appealing for their simplicity, they can fail dramatically. For example, the measured single-spike learning rule between hippocampal CA3 and CA1 pyramidal neurons does not predict the existence of long-term potentiation, one of the best-known forms of synaptic plasticity. Layers of complexity have been added to the basic STDP model to repair predictive failures, but they have been outstripped by experimental data. We propose an alternate first law: neural activity triggers changes in key biochemical intermediates, which act as a more direct trigger of plasticity mechanisms. One particularly successful model uses intracellular calcium as the intermediate and can account for many observed properties of bidirectional plasticity. In this formulation, STDP is not itself the basis for explaining other forms of plasticity, but is instead a consequence of changes in the biochemical intermediate, calcium. Eventually a mechanism-based framework for learning rules should include other messengers, discrete change at individual synapses, spread of plasticity among neighboring synapses, and priming of hidden processes that change a synapse's susceptibility to future change. Mechanism-based models provide a rich framework for the computational representation of synaptic plasticity.
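
    A minimal calcium-threshold sketch in the spirit of such models (all constants and calcium amplitudes are assumptions, not fitted values): pre/post spikes drive a calcium variable, moderate calcium depresses the synapse, and high calcium, reached when a backpropagating spike coincides with a recent presynaptic signal, potentiates it. Timing-dependent weight changes then emerge from the calcium dynamics rather than from an explicit pair rule.

```python
import numpy as np

# Calcium-as-intermediate sketch: a presynaptic spike leaves a glutamate/NMDA
# trace; a postsynaptic (backpropagating) spike triggers calcium influx that
# is boosted when that trace is still present. Calcium between theta_d and
# theta_p depresses the synapse, calcium above theta_p potentiates it.

dt = 0.001                     # 1 ms steps
tau_g, tau_ca = 0.050, 0.030   # glutamate-trace and calcium decay constants (s)
theta_d, theta_p = 0.5, 1.0    # depression / potentiation thresholds (assumed)
eta_d, eta_p = 0.5, 2.0        # depression / potentiation rates (assumed)

def run_pairing(delta_t_ms, n_pairs=60, pair_interval=1.0):
    """Repeated pre/post pairing at a fixed timing offset; returns the net dw."""
    w, ca, g = 0.5, 0.0, 0.0
    pre = {round(i * pair_interval / dt) for i in range(n_pairs)}
    post = {round((i * pair_interval + delta_t_ms / 1000.0) / dt)
            for i in range(n_pairs)}
    for t in range(round(n_pairs * pair_interval / dt)):
        g -= dt / tau_g * g
        ca -= dt / tau_ca * ca
        if t in pre:
            g += 1.0             # NMDA receptors bound by glutamate
            ca += 0.3            # small calcium influx at resting potential
        if t in post:
            ca += 0.4 + 1.2 * g  # bAP-driven influx, boosted by the NMDA trace
        if ca > theta_p:
            w += dt * eta_p * (1.0 - w)      # potentiation, soft-bounded at 1
        elif ca > theta_d:
            w -= dt * eta_d * w              # depression, soft-bounded at 0
    return w - 0.5

for offset in (-20, -10, +10, +20):          # post-before-pre vs pre-before-post
    print(f"delta t = {offset:+3d} ms -> dw = {run_pairing(offset):+.3f}")
```

    With these assumed parameters, pre-before-post pairings yield potentiation and post-before-pre pairings yield depression, i.e. an STDP-like asymmetry appears as a consequence of the calcium dynamics.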

    Dendritic Synapse Location and Neocortical Spike-Timing-Dependent Plasticity

    While it has been appreciated for decades that synapse location in the dendritic tree has a powerful influence on signal processing in neurons, the role of dendritic synapse location in the induction of long-term synaptic plasticity has only recently been explored. Here, we review recent work revealing how learning rules for spike-timing-dependent plasticity (STDP) in cortical neurons vary with the spatial location of synaptic input. A common principle appears to be that proximal synapses show conventional STDP, whereas distal inputs undergo plasticity according to novel learning rules. One crucial factor determining location-dependent STDP is the backpropagating action potential, which tends to decrease in amplitude and increase in width as it propagates into the dendritic tree of cortical neurons. We discuss additional location-dependent mechanisms as well as the functional implications of heterogeneous learning rules at different dendritic locations for the organization of synaptic inputs.
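
    The qualitative principle can be caricatured in a few lines (assumed functional forms, not a model from the review): if the backpropagating action potential attenuates and broadens with distance from the soma, and the potentiation side of the STDP window scales with it, then distal synapses see a weaker, broader rule than proximal ones.

```python
import numpy as np

# Caricature of location-dependent STDP: bAP amplitude is assumed to decay
# exponentially with dendritic distance and its width to grow by the inverse
# factor; the LTP lobe of the pair-based window is scaled accordingly.

def stdp_window(delta_t_ms, distance_um,
                a_plus=1.0, a_minus=0.5, tau_plus=20.0, tau_minus=20.0,
                length_const_um=200.0):
    """Weight change for one pre/post pair at a given dendritic distance.
    delta_t_ms > 0 means pre before post."""
    attenuation = np.exp(-distance_um / length_const_um)  # bAP amplitude decay
    broadening = 1.0 / attenuation                        # bAP width increase
    if delta_t_ms > 0:   # pre before post: potentiation, scaled by bAP amplitude
        return a_plus * attenuation * np.exp(-delta_t_ms / (tau_plus * broadening))
    else:                # post before pre: depression
        return -a_minus * np.exp(delta_t_ms / tau_minus)

for d in (50, 200, 400):          # proximal to distal, micrometers (assumed)
    ltp = stdp_window(+10.0, d)
    ltd = stdp_window(-10.0, d)
    print(f"distance {d:3d} um: dw(+10 ms) = {ltp:+.3f}, dw(-10 ms) = {ltd:+.3f}")
```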

    Tuning a binary ferromagnet into a multi-state synapse with spin-orbit torque induced plasticity

    Inspired by ion-dominated synaptic plasticity in the human brain, artificial synapses for neuromorphic computing adopt charge-related quantities as their weights. Despite the existing charge-derived synaptic emulations, schemes of controlling electron spins in ferromagnetic devices have also attracted considerable interest due to their advantages of low energy consumption, unlimited endurance, and favorable CMOS compatibility. However, a generally applicable method of tuning a binary ferromagnet into a multi-state memory with pure spin-dominated synaptic plasticity in the absence of an external magnetic field is still missing. Here, we show how synaptic plasticity of a perpendicular ferromagnetic FM1 layer can be obtained when it is interlayer-exchange-coupled by another in-plane ferromagnetic FM2 layer, where a magnetic-field-free current-driven multi-state magnetization switching of FM1 in the Pt/FM1/Ta/FM2 structure is induced by spin-orbit torque. We use current pulses to set the perpendicular magnetization state, which acts as the synapse weight, and demonstrate spintronic implementation of the excitatory/inhibitory postsynaptic potentials and spike-timing-dependent plasticity. This functionality is made possible by the action of the in-plane interlayer exchange coupling field, which leads to broadened, multi-state magnetic reversal characteristics. Numerical simulations, combined with investigations of a reference sample with a single perpendicularly magnetized Pt/FM1/Ta structure, reveal that the broadening is due to the in-plane field component tuning the efficiency of the spin-orbit torque to drive domain walls across a landscape of varying pinning potentials. The conventionally binary FM1 inside our Pt/FM1/Ta/FM2 structure with its inherent in-plane coupling field is therefore tuned into a multi-state perpendicular ferromagnet and represents a synaptic emulator for neuromorphic computing. Comment: 37 pages with 11 figures, including 20 pages for the manuscript and 17 pages for supplementary information.
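
    At a purely behavioral level (device physics abstracted away, all numbers assumed), the reported functionality can be sketched as a bounded multi-state weight that current pulses of either polarity nudge up or down, with an STDP-style wrapper mapping spike-timing differences to pulse counts.

```python
import numpy as np

# Behavioral sketch of a multi-state spintronic synapse emulator: the
# perpendicular magnetization is modeled as a bounded set of levels that
# current pulses of either polarity move up or down (potentiation/depression).
# The number of levels, pulse step, and timing constant are assumptions.

N_STATES = 16                        # resolvable magnetization levels (assumed)

class SpintronicSynapse:
    def __init__(self):
        self.level = N_STATES // 2   # start from an intermediate state

    def apply_pulses(self, n_pulses, polarity):
        """Each current pulse moves the magnetization one level up or down."""
        self.level = int(np.clip(self.level + polarity * n_pulses,
                                 0, N_STATES - 1))

    @property
    def weight(self):                # normalized synaptic weight in [0, 1]
        return self.level / (N_STATES - 1)

def stdp_pulses(delta_t_ms, tau_ms=20.0, max_pulses=4):
    """Map a pre/post timing difference to a pulse count and polarity."""
    n = int(round(max_pulses * np.exp(-abs(delta_t_ms) / tau_ms)))
    return n, (+1 if delta_t_ms > 0 else -1)

syn = SpintronicSynapse()
for dt in (+5.0, +5.0, -10.0, +30.0):
    n, pol = stdp_pulses(dt)
    syn.apply_pulses(n, pol)
    print(f"delta_t = {dt:+5.1f} ms -> {n} pulse(s), weight = {syn.weight:.2f}")
```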