3,994 research outputs found

    Neuromorphic Learning towards Nano Second Precision

    Temporal coding is one approach to representing information in spiking neural networks. An example of its application is the localization of sounds by barn owls, which requires especially precise temporal coding. Depending on the azimuthal angle, the arrival times of sound signals are shifted between the two ears. To determine these interaural time differences, the phase difference of the signals is measured. We implemented this biologically inspired network on a neuromorphic hardware system and demonstrate spike-timing dependent plasticity on an analog, highly accelerated hardware substrate. Our neuromorphic implementation enables the resolution of time differences of less than 50 ns. On-chip Hebbian learning mechanisms select inputs from a pool of neurons which code for the same sound frequency. Hence, noise caused by different synaptic delays across these inputs is reduced. Furthermore, learning compensates for variations in neuronal and synaptic parameters caused by device mismatch intrinsic to the neuromorphic substrate. Comment: 7 pages, 7 figures, presented at IJCNN 2013 in Dallas, TX, USA. Corrected version with updated STDP curves.
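    The phase-difference measurement at the heart of this approach can be sketched in ordinary Python/NumPy. The function and all parameters below are illustrative, not the paper's analog hardware implementation:

```python
import numpy as np

def estimate_itd(left, right, fs, f0):
    """Estimate an interaural time difference from the phase difference
    of two narrow-band ear signals at frequency f0 (sample rate fs)."""
    t = np.arange(len(left)) / fs
    ref = np.exp(-2j * np.pi * f0 * t)                 # complex reference tone
    phase_l = np.angle(np.sum(left * ref))             # phase of left-ear signal
    phase_r = np.angle(np.sum(right * ref))            # phase of right-ear signal
    dphi = np.angle(np.exp(1j * (phase_l - phase_r)))  # wrap into (-pi, pi]
    return dphi / (2 * np.pi * f0)                     # phase shift -> time shift

fs = 1e9                       # 1 GHz sampling resolves nanosecond shifts
f0 = 5e6                       # 5 MHz probe tone (illustrative)
t = np.arange(2000) / fs
itd_true = 50e-9               # 50 ns, the resolution reported above
left = np.sin(2 * np.pi * f0 * t)
right = np.sin(2 * np.pi * f0 * (t - itd_true))   # right ear lags by 50 ns
itd_est = estimate_itd(left, right, fs, f0)       # recovers the ~50 ns shift
```

    Note that the phase estimate is only unambiguous while the time difference stays within half a period of f0, which is one reason the barn owl circuit operates on narrow frequency channels.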

    Correlation-based model of artificially induced plasticity in motor cortex by a bidirectional brain-computer interface

    Experiments show that spike-triggered stimulation performed with bidirectional brain-computer interfaces (BBCI) can artificially strengthen connections between separate neural sites in motor cortex (MC). What are the neuronal mechanisms responsible for these changes, and how does targeted stimulation by a BBCI shape population-level synaptic connectivity? The relationship between spike-timing-dependent plasticity (STDP) mechanisms at the level of networks and their modification by neural implants remains poorly understood. The present work describes a recurrent neural network model with probabilistic spiking mechanisms and plastic synapses capable of capturing both the neural and synaptic activity statistics relevant to BBCI conditioning protocols. When spikes from a neuron recorded at one MC site trigger stimuli at a second target site after a fixed delay, the connections between the sites are strengthened for spike-stimulus delays consistent with experimentally derived STDP rules. Using our model, we successfully reproduce key experimental results and, combining analytical derivations with novel experimental data, derive optimal operational regimes for BBCIs and formulate predictions concerning the efficacy of spike-triggered stimulation in different regimes of cortical activity. Comment: 35 pages, 9 figures.
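    The delay dependence described here follows the classic exponential STDP window. A minimal sketch, with illustrative time constants and amplitudes rather than fitted values from the paper:

```python
import math

def stdp_update(dt, a_plus=0.01, a_minus=0.012,
                tau_plus=0.020, tau_minus=0.020):
    """Additive STDP window. dt = t_post - t_pre in seconds:
    positive dt (pre leads post) potentiates, negative dt depresses."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

# Spike-triggered stimulation: a recorded spike triggers a stimulus at the
# target site after a fixed delay, so the effective dt equals that delay.
dw_short = stdp_update(0.005)   # 5 ms spike-stimulus delay
dw_long = stdp_update(0.060)    # 60 ms spike-stimulus delay
```

    Short delays inside the potentiation window strengthen the connection far more than long ones, consistent with conditioning efficacy falling off as the spike-stimulus delay grows.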

    StdpC: a modern dynamic clamp

    With the advancement of computer technology many novel uses of dynamic clamp have become possible. We have added new features to our dynamic clamp software StdpC (“Spike timing-dependent plasticity Clamp”) allowing such new applications while conserving the ease of use and installation of the popular earlier Dynclamp 2/4 package. Here, we introduce the new features of a waveform generator, freely programmable Hodgkin–Huxley conductances, learning synapses, graphic data displays, and a powerful scripting mechanism, and discuss examples of experiments using these features. In the first example we built and ‘voltage clamped’ a conductance-based model cell from a passive resistor–capacitor (RC) circuit, using the dynamic clamp software to generate the voltage-dependent currents. In the second example we coupled our new spike generator through a burst detection/burst generation mechanism in a phase-dependent way to a neuron in a central pattern generator and dissected the subtle interaction between neurons, which seems to implement an information transfer through intraburst spike patterns. In the third example, making use of the new plasticity mechanism for simulated synapses, we analyzed the effect of spike timing-dependent plasticity (STDP) on synchronization, revealing considerable enhancement of the entrainment of a post-synaptic neuron by a periodic spike train. These examples illustrate that with modern dynamic clamp software like StdpC, the dynamic clamp has developed beyond the mere introduction of artificial synapses or ionic conductances into neurons, becoming a universal research tool that might well become a standard instrument of modern electrophysiology.
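    The core dynamic-clamp loop — read the membrane potential each update cycle, compute the current a simulated conductance would pass, and inject it back into the cell — can be sketched as follows. StdpC itself is a compiled real-time application; all names and parameters here are hypothetical:

```python
def clamp_current(v_m, g_sim, e_rev):
    """One dynamic-clamp cycle: the current a simulated conductance
    g_sim with reversal potential e_rev would inject at voltage v_m."""
    return g_sim * (e_rev - v_m)

# Artificial excitatory synapse: the conductance jumps on a presynaptic
# spike and decays exponentially afterwards.
dt, tau = 1e-4, 5e-3          # 0.1 ms update cycle, 5 ms decay (illustrative)
g, v_m = 0.0, -0.065          # fixed V_m for the sketch; a real clamp reads it
trace = []
for step in range(100):
    if step == 10:
        g += 5e-9             # presynaptic spike: +5 nS conductance step
    g -= (dt / tau) * g       # exponential decay of the conductance
    trace.append(clamp_current(v_m, g, e_rev=0.0))
```

    Because the injected current depends on the measured voltage, this closes a feedback loop between the real cell and the simulated conductance — the defining feature of dynamic clamp.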

    Logarithmic distributions prove that intrinsic learning is Hebbian

    In this paper, we present data for the lognormal distributions of spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as auditory or visual cortex, hippocampus, cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas examined. The difference between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), neurotransmitter (GABA (striatum) or glutamate (cortex)) or the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turns out to be irrelevant for this feature. Logarithmic scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights, but also intrinsic gains, need to have strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
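    The link between Hebbian (multiplicative) updates and lognormal distributions can be illustrated directly: if each update scales a weight by a random factor, the log-weight accumulates independent increments and is therefore approximately normal. A generic sketch, not the paper's model:

```python
import math
import random

random.seed(0)

def evolve_weight(w0=1.0, steps=2000, eta=0.05):
    """Multiplicative updates: each step scales the weight by a random
    factor, so log(w) performs a random walk (-> lognormal weight)."""
    w = w0
    for _ in range(steps):
        w *= math.exp(eta * random.gauss(0.0, 1.0))
    return w

weights = [evolve_weight() for _ in range(500)]
log_weights = [math.log(w) for w in weights]
# log-weights are roughly normal; the weights themselves are heavy-tailed,
# with a few very large weights dominating the right tail
```

    Additive updates, by contrast, would make the weights themselves (not their logarithms) approximately normal, which is one way such data can discriminate between candidate plasticity rules.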

    Short-Term Memory Through Persistent Activity: Evolution of Self-Stopping and Self-Sustaining Activity in Spiking Neural Networks

    Memories in the brain are divided into two categories: short-term and long-term memories. Long-term memories remain for a lifetime, while short-term ones exist from a few milliseconds to a few minutes. Within short-term memory studies, there is debate about what neural structure could implement it. Indeed, mechanisms responsible for long-term memories appear inadequate for the task. Instead, it has been proposed that short-term memories could be sustained by the persistent activity of a group of neurons. In this work, we explore what topology could sustain short-term memories, not by designing a model from specific hypotheses, but through Darwinian evolution, in order to obtain new insights into its implementation. We evolved 10 networks capable of retaining information for a fixed duration between 2 and 11 s. Our main finding is that the evolution naturally created two functional modules in the network: one, containing primarily excitatory neurons, sustains the information, while the other, responsible for forgetting, is composed mainly of inhibitory neurons. This demonstrates how the balance between inhibition and excitation plays an important role in cognition. Comment: 28 pages.
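    The two-module motif the evolution discovered can be caricatured with a two-population rate model: recurrent excitation sustains activity after a brief cue, while a slowly integrating inhibitory population eventually shuts it off. All parameters below are hypothetical:

```python
def run_memory(duration_steps=400, stop_at=250):
    """Toy rate model: excitatory module r_e sustains a brief cue via
    recurrence; inhibitory module r_i integrates while r_e is active
    and terminates the activity after roughly stop_at steps."""
    r_e, r_i = 0.0, 0.0
    trace = []
    for t in range(duration_steps):
        cue = 1.0 if t < 10 else 0.0
        drive = 1.2 * r_e + cue - 0.2 * r_i   # excitation minus inhibition
        r_e = min(max(drive, 0.0), 1.0)       # threshold-linear, saturating
        if r_e > 0.5:
            r_i += 1.0 / stop_at              # inhibition builds up slowly
        trace.append(r_e)
    return trace

trace = run_memory()   # activity persists for ~250 steps, then self-stops
```

    Tuning the inhibitory integration rate (here 1/stop_at) sets the retention duration, mirroring how the evolved networks were selected to forget after a fixed delay.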

    Regulation of circuit organization and function through inhibitory synaptic plasticity

    Diverse inhibitory neurons in the mammalian brain shape circuit connectivity and dynamics through mechanisms of synaptic plasticity. Inhibitory plasticity can establish excitation/inhibition (E/I) balance, control neuronal firing, and affect local calcium concentration, hence regulating neuronal activity at the network, single-neuron, and dendritic level. Computational models can synthesize multiple experimental results and provide insight into how inhibitory plasticity controls circuit dynamics and sculpts connectivity, by identifying phenomenological learning rules amenable to mathematical analysis. We highlight recent studies on the role of inhibitory plasticity in modulating excitatory plasticity, forming structured networks underlying memory formation and recall, and implementing adaptive phenomena and novelty detection. We conclude with experimental and modeling progress on the role of interneuron-specific plasticity in circuit computation and context-dependent learning.
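    A standard way computational models capture E/I balance through inhibitory plasticity is a homeostatic rule in the spirit of Vogels et al. (2011): the inhibitory weight grows whenever the postsynaptic rate exceeds a target rate, and shrinks otherwise. A rate-based sketch with illustrative parameters:

```python
def balance_inhibition(exc_drive=10.0, r_inh=5.0, r_target=2.0,
                       eta=0.01, steps=2000):
    """Homeostatic inhibitory plasticity, rate form:
    dw = eta * r_inh * (r_post - r_target).
    The inhibitory weight converges so that r_post settles at r_target."""
    w_inh = 0.0
    r_post = exc_drive
    for _ in range(steps):
        r_post = max(exc_drive - w_inh * r_inh, 0.0)   # postsynaptic rate
        w_inh += eta * r_inh * (r_post - r_target)     # inhibitory update
    return w_inh, r_post

w_inh, r_post = balance_inhibition()   # r_post converges to the target rate
```

    Because the rule is self-correcting around the target rate, it remains stable without weight bounds — one reason such phenomenological rules are popular as analytically tractable models of E/I balance.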