    Learning arbitrary functions with spike-timing dependent plasticity learning rule

    A neural network model based on the spike-timing-dependent plasticity (STDP) learning rule, in which afferent neurons excite both the target neuron and interneurons that in turn project to the target neuron, is applied to the tasks of learning the AND and XOR functions. Without inhibitory plasticity, the network can learn both the AND and XOR functions. Introducing inhibitory plasticity can improve performance on the XOR function. Maintaining a training pattern set is a way of obtaining feedback on network performance and always improves it. © 2005 IEEE
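    The pair-based STDP rule referred to above can be sketched compactly. The following is a minimal illustration of the standard exponential STDP window, not the paper's exact network; the amplitudes and time constants are illustrative assumptions.

        import numpy as np

        # Pair-based STDP window (illustrative constants, not taken from the paper)
        A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
        TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants (ms)

        def stdp_dw(t_pre, t_post):
            """Weight change for one pre/post spike pair, dt = t_post - t_pre."""
            dt = t_post - t_pre
            if dt > 0:   # pre before post -> potentiation
                return A_PLUS * np.exp(-dt / TAU_PLUS)
            return -A_MINUS * np.exp(dt / TAU_MINUS)  # post before (or with) pre -> depression

        # A pre spike 5 ms before a post spike strengthens the synapse; the reverse weakens it.
        print(stdp_dw(t_pre=0.0, t_post=5.0))   # > 0
        print(stdp_dw(t_pre=5.0, t_post=0.0))   # < 0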

    Timing is not Everything: Neuromodulation Opens the STDP Gate

    Spike timing dependent plasticity (STDP) is a temporally specific extension of Hebbian associative plasticity that ties synaptic change to the timing of presynaptic inputs relative to single postsynaptic spikes. However, it is difficult to translate this mechanism to in vivo conditions, where there is an abundance of presynaptic activity constantly impinging upon the dendritic tree as well as ongoing postsynaptic spiking activity that backpropagates along the dendrite. Theoretical studies have proposed that, in addition to this pre- and postsynaptic activity, a “third factor” would enable the association of specific inputs to specific outputs. Experimentally, the picture that is beginning to emerge is that, in addition to the precise timing of pre- and postsynaptic spikes, this third factor involves neuromodulators that have a distinctive influence on STDP rules. Specifically, neuromodulatory systems can influence STDP rules by acting via dopaminergic, noradrenergic, muscarinic, and nicotinic receptors. Neuromodulator actions can enable STDP induction or, by increasing or decreasing the threshold, can change the conditions for plasticity induction. Because some of these neuromodulators are also involved in reward, a link between STDP and reward-mediated learning is emerging. However, many outstanding questions concerning the relationship between neuromodulatory systems and STDP rules remain; once solved, they will help make the crucial link from timing-based synaptic plasticity rules to behaviorally based learning.
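    The “third factor” idea described here is often formalised as an eligibility trace that stores the timing-dependent change and is only converted into an actual weight change when a neuromodulatory signal arrives. The sketch below is a generic illustration of that scheme under assumed constants; it is not a model taken from the article.

        import numpy as np

        # Three-factor sketch: pre/post timing writes an eligibility trace; a
        # neuromodulator signal gates whether it becomes a weight change.
        # All names and constants are illustrative assumptions.
        TAU_ELIG = 500.0   # eligibility-trace decay (ms)
        DT = 1.0           # time step (ms)

        def step(weight, elig, timing_dw, neuromod, lr=0.1):
            """Advance one step: decay the trace, add the timing term, apply the gated update."""
            elig = elig * np.exp(-DT / TAU_ELIG) + timing_dw  # factors 1 and 2 (timing)
            weight += lr * neuromod * elig                    # factor 3 gates the change
            return weight, elig

        w, e = 0.5, 0.0
        for t in range(1000):
            dw = 0.01 if t == 10 else 0.0   # one causal pre->post pairing early on
            da = 1.0 if t == 300 else 0.0   # delayed neuromodulatory pulse
            w, e = step(w, e, dw, da)
        print(w)  # the weight changes only when the pulse overlaps a nonzero trace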

    Filamentary Switching: Synaptic Plasticity through Device Volatility

    Replicating the computational functionalities and performances of the brain remains one of the biggest challenges for the future of information and communication technologies. Such an ambitious goal requires research efforts from the architecture level down to the basic device level (i.e., investigating the opportunities offered by emerging nanotechnologies to build such systems). Nanodevices, or, more precisely, memory or memristive devices, have been proposed for the implementation of synaptic functions, offering the required features and integration in a single component. In this paper, we demonstrate that the basic physics involved in the filamentary switching of electrochemical metallization cells can reproduce important biological synaptic functions that are key mechanisms for information processing and storage. The transition from short- to long-term plasticity has been reported as a direct consequence of filament growth (i.e., increased conductance) in filamentary memory devices. We further show that a more complex filament shape, such as dendritic paths of variable density and width, can permit the short- and long-term processes to be controlled independently. Our solid-state device is strongly analogous to biological synapses, as indicated by the interpretation of the results within the framework of a phenomenological model developed for biological synapses. We describe a single memristive element containing a rich panel of features, which will be of benefit to future neuromorphic hardware systems.
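    The independent control of short- and long-term components can be illustrated with a two-variable phenomenological toy model: a volatile conductance that decays between pulses and a non-volatile one that consolidates when the volatile part is driven high enough. This is a hedged sketch under assumed constants and rules, not the device model of the paper.

        import numpy as np

        # Short- to long-term transition in a volatile filamentary synapse (toy model).
        TAU_SHORT = 50.0        # volatility time constant (a.u.)
        THETA = 0.6             # consolidation threshold
        ALPHA, BETA = 0.3, 0.05 # pulse increment / consolidation rate

        def apply_pulses(pulse_times, t_end, dt=1.0):
            g_short, g_long = 0.0, 0.0
            pulses = set(pulse_times)
            for step in range(int(t_end / dt)):
                g_short *= np.exp(-dt / TAU_SHORT)       # filament relaxes (volatility)
                if step in pulses:
                    g_short = min(1.0, g_short + ALPHA)  # each pulse grows the filament
                if g_short > THETA:
                    g_long += BETA * (g_short - THETA)   # dense filament -> lasting conductance
            return g_short, g_long

        # Sparse pulses mostly decay away; a rapid train drives lasting change.
        print(apply_pulses([0, 200, 400], 600))      # little long-term change
        print(apply_pulses(range(0, 100, 10), 600))  # clear long-term change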

    The Development of Bio-Inspired Cortical Feature Maps for Robot Sensorimotor Controllers

    Full version unavailable due to 3rd party copyright restrictions. This project applies principles from the field of Computational Neuroscience to Robotics research, in particular to develop systems inspired by how nature solves sensorimotor coordination tasks. The overall aim has been to build a self-organising sensorimotor system using biologically inspired techniques based upon human cortical development, which can in the future be implemented in neuromorphic hardware. This can then deliver the benefits of low power consumption and real-time operation together with flexible learning onboard autonomous robots. A core principle is the Self-Organising Feature Map, which is based upon the theory of how 2D maps develop in real cortex to represent complex information from the environment. A framework for developing feature maps for both motor and visual directional selectivity, representing eight different directions of motion, is described, as well as how they can be coupled together to make a basic visuomotor system. In contrast to many previous works, which use artificially generated visual inputs (for example, image sequences of oriented moving bars or mathematically generated Gaussian bars), a novel feature of the current work is that the visual input is generated by a DVS 128 silicon retina camera, a neuromorphic device that produces spike events in a frame-free way. One of the main contributions of this work has been to develop a method for autonomous regulation of the map development process which adapts the learning depending upon input activity. The main results show that distinct directionally selective maps for both the motor and visual modalities are produced under a range of experimental scenarios. The adaptive learning process successfully controls the rate of learning in both motor and visual map development and is used to indicate when sufficient patterns have been presented, thus avoiding the need to define in advance the quantity and range of training data. The coupling training experiments show that the visual input learns to modulate the original motor map response, creating a new visual-motor topological map. EPSRC, University of Plymouth Graduate School.
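    The Self-Organising Feature Map at the core of this work follows the standard Kohonen update: find the best-matching unit and pull it and its neighbours towards the input. The snippet below is a generic illustration with an assumed 2D direction encoding, map size, and learning schedule, not the thesis's actual setup.

        import numpy as np

        # Minimal SOM learning eight directions of motion (illustrative parameters).
        rng = np.random.default_rng(0)
        MAP_H, MAP_W, DIM = 10, 10, 2
        weights = rng.random((MAP_H, MAP_W, DIM))
        directions = [np.array([np.cos(a), np.sin(a)])
                      for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
        grid = np.stack(np.meshgrid(np.arange(MAP_H), np.arange(MAP_W), indexing="ij"), axis=-1)

        for epoch in range(200):
            lr = 0.5 * (1 - epoch / 200)            # decaying learning rate
            sigma = 3.0 * (1 - epoch / 200) + 0.5   # shrinking neighbourhood
            x = directions[rng.integers(8)]
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching unit
            h = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)           # neighbourhood update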

    Closed-Form Treatment of the Interactions between Neuronal Activity and Timing-Dependent Plasticity in Networks of Linear Neurons

    Network activity and network connectivity mutually influence each other. Especially for fast processes, like spike-timing-dependent plasticity (STDP), which depends on the interaction of few (two) signals, the question arises how these interactions continuously alter the behavior and structure of the network. To address this question a time-continuous treatment of plasticity is required. However, this is currently not possible, even in simple recurrent network structures. Thus, here we develop, for a linear differential Hebbian learning system, a method by which we can analytically investigate the dynamics and stability of the connections in recurrent networks. We use noisy periodic external input signals, which through the recurrent connections lead to complex ongoing actual inputs, and observe that large stable ranges emerge in these networks without boundaries or weight normalization. Somewhat counter-intuitively, we find that about 40% of these cases are obtained with a long-term potentiation-dominated STDP curve. Noise can reduce stability in some cases, but generally this does not occur; instead, stable domains are often enlarged. This study is a first step toward a better understanding of the ongoing interactions between activity and plasticity in recurrent networks using STDP. The results suggest that stability of (sub-)networks should generically be present also in larger structures.
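    Differential Hebbian learning, the rule analysed here, changes each weight in proportion to the product of its input and the temporal derivative of the neuron's output. The discrete-time sketch below shows the generic rule for a single linear neuron driven by noisy periodic inputs; constants and signals are illustrative assumptions, and the paper's closed-form treatment is not reproduced.

        import numpy as np

        # Differential Hebbian update for a linear neuron: dw ~ x * dv/dt, with v = w . x.
        DT, MU, T = 1e-3, 0.05, 5000
        rng = np.random.default_rng(1)
        w = np.array([0.2, 0.2])
        v_prev = 0.0
        for step in range(T):
            t = step * DT
            # noisy periodic external drive (illustrative)
            x = np.array([np.sin(2 * np.pi * t),
                          np.sin(2 * np.pi * t + 0.5)]) + 0.05 * rng.standard_normal(2)
            v = w @ x                   # linear neuron output
            dv = (v - v_prev) / DT      # temporal derivative of the output
            w += MU * DT * x * dv       # differential Hebbian update
            v_prev = v
        print(w)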

    A Computational Investigation of Neural Dynamics and Network Structure

    With the overall goal of illuminating the relationship between neural dynamics and neural network structure, this thesis presents a) a computer model of a network infrastructure capable of global broadcast and competition, and b) a study of various convergence properties of spike-timing dependent plasticity (STDP) in a recurrent neural network. The first part of the thesis explores the parameter space of a possible Global Neuronal Workspace (GNW) realised in a novel computational network model using stochastic connectivity. The structure of this model is analysed in light of the characteristic dynamics of a GNW: broadcast, reverberation, and competition. It is found that, even with careful consideration of the balance between excitation and inhibition, the structural choices do not allow agreement with the GNW dynamics, and the implications of this are addressed. An additional level of competition, access competition, is added, discussed, and found to be more conducive to winner-takes-all competition. The second part of the thesis investigates the formation of synaptic structure due to neural and synaptic dynamics. From previous theoretical and modelling work, it is predicted that homogeneous stimulation in a recurrent neural network with STDP will create a self-stabilising equilibrium amongst synaptic weights, while heterogeneous stimulation will induce structured synaptic changes. From the experimental evidence presented, a new factor in modulating the synaptic weight equilibrium is suggested: anti-correlation due to inhibitory neurons. It is observed that the synaptic equilibrium creates competition amongst synapses, and those specifically stimulated during heterogeneous stimulation win out. Further investigation is carried out to assess the effect that more complex STDP rules would have on synaptic dynamics, varying parameters of a trace STDP model. There is little qualitative effect on synaptic dynamics under low-frequency (< 25 Hz) conditions, justifying the use of simple STDP until further experimental or theoretical evidence suggests otherwise.
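    The trace STDP model mentioned at the end keeps an exponentially decaying trace per neuron and reads the partner's trace at each spike time to determine potentiation or depression. The following is a minimal sketch with assumed time constants and amplitudes, not the thesis's exact parameterisation.

        import numpy as np

        # Trace STDP: each spike bumps its own trace; weight changes read the partner's trace.
        TAU_PRE, TAU_POST = 20.0, 20.0   # trace time constants (ms)
        A_PLUS, A_MINUS = 0.005, 0.006   # potentiation / depression amplitudes
        DT = 1.0

        def run(pre_spikes, post_spikes, t_end, w=0.5):
            x_pre, x_post = 0.0, 0.0
            pre, post = set(pre_spikes), set(post_spikes)
            for t in range(int(t_end / DT)):
                x_pre *= np.exp(-DT / TAU_PRE)
                x_post *= np.exp(-DT / TAU_POST)
                if t in pre:
                    x_pre += 1.0
                    w -= A_MINUS * x_post   # depression proportional to the post trace
                if t in post:
                    x_post += 1.0
                    w += A_PLUS * x_pre     # potentiation proportional to the pre trace
            return w

        print(run(pre_spikes=[10, 60], post_spikes=[15, 55], t_end=100))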