12,708 research outputs found

    The enhanced rise and delayed fall of memory in a model of synaptic integration: extension to discrete state synapses

    No full text
    Integrate-and-express models of synaptic plasticity propose that synapses may act as low-pass filters, integrating synaptic plasticity induction signals in order to discern trends before expressing synaptic plasticity. We have previously shown that synaptic filtering strongly controls destabilizing fluctuations in developmental models. When applied to palimpsest memory systems that learn new memories by forgetting old ones, we have also shown that with binary-strength synapses, integrative synapses lead to an initial memory signal rise before its fall back to equilibrium. Such an initial rise is in dramatic contrast to nonintegrative synapses, in which the memory signal falls monotonically. We now extend our earlier analysis of palimpsest memories with synaptic filters to consider the more general case of discrete-state, multilevel synapses. We derive exact results for the memory signal dynamics and then consider various simplifying approximations. We show that multilevel synapses enhance the initial rise in the memory signal and then delay its subsequent fall by inducing a plateau-like region in the memory signal. Such dynamics significantly increase memory lifetimes, defined by a signal-to-noise ratio (SNR). We derive expressions for optimal choices of synaptic parameters (filter size, number of strength states, number of synapses) that maximize SNR memory lifetimes. However, we find that with memory lifetimes defined via mean-first-passage times, such optimality conditions do not exist, suggesting that optimality may be an artifact of SNRs.
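
    The filtering mechanism is easy to reproduce in simulation. Below is a minimal Python sketch (not the authors' code) of a binary-strength palimpsest whose synapses integrate ±1 induction signals in a counter and express plasticity only at a threshold; the filter size THETA, the synapse count N, and the random induction stream are illustrative assumptions. The overlap with the first stored pattern rises before it falls, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
N, THETA, T = 20000, 3, 60    # synapses, filter threshold, memories stored

def store(pattern, counter, strength):
    """Integrate a +/-1 induction signal; express plasticity at threshold."""
    counter += pattern
    hit = np.abs(counter) >= THETA
    strength[hit] = np.sign(counter[hit])
    counter[hit] = 0

strength = rng.choice([-1, 1], size=N)   # binary synaptic strengths
counter = np.zeros(N, dtype=np.int64)    # per-synapse filter state
tracked = rng.choice([-1, 1], size=N)    # the memory whose trace we follow
store(tracked, counter, strength)

signal = []
for _ in range(T):
    signal.append(np.mean(strength * tracked))   # overlap with tracked memory
    store(rng.choice([-1, 1], size=N), counter, strength)   # palimpsest storage

for t in (0, 2, 5, 10, 20, 40):   # SNR ~ signal * sqrt(N) for iid synapses
    print(f"t={t:2d}  signal={signal[t]:+.4f}  SNR~{signal[t] * np.sqrt(N):+6.1f}")
```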

    A CMOS Spiking Neuron for Brain-Inspired Neural Networks with Resistive Synapses and In-Situ Learning

    Get PDF
    Nanoscale resistive memories are expected to fuel dense integration of electronic synapses for large-scale neuromorphic systems. To realize such a brain-inspired computing chip, a compact CMOS spiking neuron that performs in-situ learning and computing while driving a large number of resistive synapses is desired. This work presents a novel leaky integrate-and-fire neuron design which implements the dual-mode operation of current integration and synaptic drive with a single opamp, and enables in-situ learning with crossbar resistive synapses. The proposed design was implemented in a 0.18 μm CMOS technology. Measurements show the neuron's ability to drive a thousand resistive synapses, and demonstrate in-situ associative learning. The neuron circuit occupies a small area of 0.01 mm² and has an energy efficiency of 9.3 pJ/spike/synapse.
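
    The circuit itself is analogue, but the leaky integrate-and-fire behaviour it implements can be sketched behaviourally. The following Python sketch (normalized units and illustrative constants, not the chip's component values) shows a neuron summing current from a thousand resistive crossbar synapses and resetting on each spike.

```python
import numpy as np

rng = np.random.default_rng(1)

N_SYN = 1000                              # resistive synapses on one neuron
g = rng.uniform(0.5, 1.5, N_SYN)          # normalized crossbar conductances
V_TH, LEAK = 200.0, 0.02                  # firing threshold, leak per step

v, n_spikes = 0.0, 0
for step in range(5000):
    active = rng.random(N_SYN) < 0.005    # input spikes on ~0.5% of rows
    v = (1.0 - LEAK) * v + g[active].sum()  # integrate crossbar current
    if v >= V_TH:
        n_spikes += 1                     # fire and reset; in the chip the
        v = 0.0                           # same opamp would then drive the
                                          # rows for the in-situ update
print(f"{n_spikes} output spikes over 5000 steps")
```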

    Event Timing in Associative Learning

    Get PDF
    Associative learning relies on event timing. Fruit flies, for example, once trained with an odour that precedes an electric shock, subsequently avoid this odour (punishment learning); if, on the other hand, the odour follows the shock during training, it is approached later on (relief learning). During training, an odour-induced Ca++ signal and a shock-induced dopaminergic signal converge in the Kenyon cells, synergistically activating a Ca++-calmodulin-sensitive adenylate cyclase, which likely leads to the synaptic plasticity underlying the conditioned avoidance of the odour. In Aplysia, the effect of serotonin on the corresponding adenylate cyclase is bi-directionally modulated by Ca++, depending on the relative timing of the two inputs. Using a computational approach, we quantitatively explore this biochemical property of the adenylate cyclase and show that it can generate the effect of event timing on associative learning. We overcome the shortage of behavioural data in Aplysia and biochemical data in Drosophila by combining findings from both systems.
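
    A minimal version of the timing-dependent cyclase idea can be written down directly. The sketch below (illustrative exponential kinetics, not the paper's fitted model) treats the odour-driven Ca++ signal and the shock-driven modulator as decaying traces, and assigns the odour a negative valence when Ca++ precedes the modulator and a positive one when it follows.

```python
import numpy as np

TAU_CA, TAU_MOD = 2.0, 1.0   # assumed decay constants of the two traces (s)

def trace(t, onset, tau):
    """Exponential trace left by a brief event at time `onset`."""
    return np.where(t >= onset, np.exp(-(t - onset) / tau), 0.0)

def odour_valence(isi, t_max=20.0, n=2000):
    """Learned odour valence; isi > 0 means the odour precedes the shock."""
    t = np.linspace(0.0, t_max, n)
    ca = trace(t, max(0.0, -isi), TAU_CA)    # odour-driven Ca++ signal
    mod = trace(t, max(0.0, isi), TAU_MOD)   # shock-driven modulator signal
    overlap = float(np.sum(ca * mod)) * (t[1] - t[0])   # cyclase coincidence
    # Ca++ before the modulator boosts the cyclase (punishment memory);
    # Ca++ after the modulator suppresses it (relief memory).
    return -overlap if isi > 0 else overlap

for isi in (-8, -2, 2, 8):
    print(f"isi={isi:+d} s -> learned valence {odour_valence(isi):+.3f}")
```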

    Hardware-Amenable Structural Learning for Spike-based Pattern Classification using a Simple Model of Active Dendrites

    Full text link
    This paper presents a spike-based model which employs neurons with functionally distinct dendritic compartments for classifying high dimensional binary patterns. The synaptic inputs arriving on each dendritic subunit are nonlinearly processed before being linearly integrated at the soma, giving the neuron a capacity to perform a large number of input-output mappings. The model utilizes sparse synaptic connectivity, where each synapse takes a binary value. The optimal connection pattern of a neuron is learned by using a simple hardware-friendly, margin-enhancing learning algorithm inspired by the mechanism of structural plasticity in biological neurons. The learning algorithm groups correlated synaptic inputs on the same dendritic branch. Since the learning results in modified connection patterns, it can be incorporated into current event-based neuromorphic systems with little overhead. This work also presents a branch-specific spike-based version of this structural plasticity rule. The proposed model is evaluated on benchmark binary classification problems and its performance is compared against that achieved using Support Vector Machine (SVM) and Extreme Learning Machine (ELM) techniques. Our proposed method attains comparable performance while utilizing 10 to 50% fewer computational resources than the other reported techniques. Comment: Accepted for publication in Neural Computation.
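
    The model's forward pass and the flavour of the structural rule can be sketched compactly. In the Python below, the squaring branch nonlinearity, the firing threshold, and the swap criterion are assumptions for illustration; the paper's margin-enhancing rule is more refined.

```python
import numpy as np

rng = np.random.default_rng(2)
D, M, K, THRESH = 100, 10, 5, 120.0   # input lines, branches, synapses/branch

conn = rng.integers(0, D, size=(M, K))   # which input line each synapse taps

def neuron_fires(x):
    """Binary pattern x -> True/False via nonlinear dendritic branches."""
    branch_sums = x[conn].sum(axis=1)           # linear integration per branch
    return (branch_sums ** 2).sum() >= THRESH   # squared branch nonlinearity

def learn(x, steps=200):
    """Replace an inactive contact with a random new one until x fires."""
    for _ in range(steps):
        if neuron_fires(x):
            return
        br = rng.integers(0, M)
        weakest = int(np.argmin(x[conn[br]]))   # an inactive contact
        conn[br, weakest] = rng.integers(0, D)

x = rng.integers(0, 2, size=D)
learn(x)
print("fires on target pattern:", bool(neuron_fires(x)))
```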

    Dopaminergic Regulation of Neuronal Circuits in Prefrontal Cortex

    Get PDF
    Neuromodulators, like dopamine, have considerable influence on the processing capabilities of neural networks. This has for instance been shown in the working memory functions of prefrontal cortex, which may be regulated by altering the dopamine level. Experimental work provides evidence on the biochemical and electrophysiological actions of dopamine receptors, but there are few theories concerning their significance for computational properties (Servan-Schreiber et al. 1990; Hasselmo 1994). We point to experimental data on neuromodulatory regulation of temporal properties of excitatory neurons and depolarization of inhibitory neurons, and suggest computational models employing these effects. Changes in membrane potential may be modelled by the firing threshold, and temporal properties by a parameterization of neuronal responsiveness according to the preceding spike interval. We apply these concepts to two examples using spiking neural networks. In the first case, there is a change in the input synchronization of neuronal groups, which leads to changes in the formation of synchronized neuronal ensembles. In the second case, the threshold of interneurons influences lateral inhibition and the switch from a winner-take-all network to a parallel feedforward mode of processing. Both concepts are interesting for the modeling of cognitive functions and may have explanatory power for behavioral changes associated with dopamine regulation.
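
    The second example is straightforward to illustrate: model dopamine purely as a shift in the interneuron firing threshold and observe the switch between winner-take-all-like and parallel feedforward processing. All constants in the sketch below are illustrative, not taken from the paper.

```python
import numpy as np

def lateral_circuit(drive, inh_threshold, steps=80):
    """Excitatory units sharing one feedback interneuron."""
    x = np.zeros_like(drive)
    for _ in range(steps):
        inh = max(0.0, x.sum() - inh_threshold)        # interneuron activity
        x = np.maximum(0.0, x + 0.2 * (drive - x - 2.0 * inh))
    return np.round(x, 2)

drive = np.array([1.0, 0.8, 0.6])
print("low threshold  (WTA-like):    ", lateral_circuit(drive, 0.5))
print("high threshold (feedforward): ", lateral_circuit(drive, 5.0))
```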

    Branch-specific plasticity enables self-organization of nonlinear computation in single neurons

    Get PDF
    It has been conjectured that nonlinear processing in dendritic branches endows individual neurons with the capability to perform complex computational operations that are needed in order to solve, for example, the binding problem. However, it is not clear how single neurons could acquire such functionality in a self-organized manner, since most theoretical studies of synaptic plasticity and learning concentrate on neuron models without nonlinear dendritic properties. In the meantime, a complex picture of information processing with dendritic spikes and a variety of plasticity mechanisms in single neurons has emerged from experiments. In particular, new experimental data on dendritic branch strength potentiation in rat hippocampus have not yet been incorporated into such models. In this article, we investigate how experimentally observed plasticity mechanisms, such as depolarization-dependent STDP and branch-strength potentiation, could be integrated to self-organize nonlinear neural computations with dendritic spikes. We provide a mathematical proof that in a simplified setup these plasticity mechanisms induce a competition between dendritic branches, a novel concept in the analysis of single neuron adaptivity. We show via computer simulations that such dendritic competition enables a single neuron to become a member of several neuronal ensembles, and to acquire nonlinear computational capabilities, such as the capability to bind multiple input features. Hence our results suggest that nonlinear neural computation may self-organize in single neurons through the interaction of local synaptic and dendritic plasticity mechanisms.
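
    A toy illustration of the claimed dendritic competition (assumed dynamics, not the paper's proof) is sketched below: the most depolarized branch potentiates its active synapses and its branch strength, and the two input ensembles end up represented on different branches. The small initial bias stands in for the random symmetry breaking analysed in the paper.

```python
import numpy as np

M, D = 4, 20                         # dendritic branches, input lines
w = np.full((M, D), 0.05)            # synaptic weights per branch
w[0, :10] += 0.02                    # initial bias: stands in for random
w[1, 10:] += 0.02                    # symmetry breaking between branches
b = np.ones(M)                       # branch strengths (the BSP variable)
patterns = [np.r_[np.ones(10), np.zeros(10)],   # two input ensembles
            np.r_[np.zeros(10), np.ones(10)]]

for step in range(400):
    x = patterns[step % 2]
    depol = b * (w @ x)                       # local depolarization per branch
    winner = int(np.argmax(depol))            # branch competition
    # depolarization-dependent STDP: the most depolarized branch
    # potentiates its active synapses; others are mildly depressed
    w[winner] += 0.05 * x * (1.0 - w[winner])
    w -= 0.002 * x * w
    b += 0.01 * (depol > 1.0) * (2.0 - b)     # branch-strength potentiation

for p, x in enumerate(patterns):
    print(f"pattern {p}: most responsive branch = {int(np.argmax(b * (w @ x)))}")
```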

    Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding

    Get PDF
    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there is still a lack of rules that have a theoretical basis and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one of which relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), and the other one on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of FILT in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find FILT to be most efficient at performing input pattern memorisations, most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of FILT to be consistent with that of the highly efficient E-learning Chronotron, but with the distinct advantage that FILT is also implementable as an online method for increased biological realism. Comment: 26 pages, 10 figures; this version is published in PLoS ONE and incorporates reviewer comments.
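
    The two error signals can be contrasted in a few lines. In the sketch below, the forms of the INST and FILT updates are assumed from the abstract's description (raw versus low-pass-filtered target-minus-output error, gated by a presynaptic trace); neuron model and constants are illustrative, so this demonstrates the rules' shape rather than the paper's results.

```python
import numpy as np

rng = np.random.default_rng(4)
T, N = 500, 50                           # time steps (ms), input synapses
ETA, TAU_MEM, TAU_PSP, TAU_FILT = 0.01, 5.0, 5.0, 10.0

inputs = (rng.random((T, N)) < 0.02).astype(float)   # fixed input spike trains
target = np.zeros(T)
target[[100, 300]] = 1.0                             # desired output spikes

def train(rule, epochs=300):
    w = rng.normal(0.0, 0.1, N)
    for _ in range(epochs):
        psp, v, err_f = np.zeros(N), 0.0, 0.0
        for t in range(T):
            psp += inputs[t] - psp / TAU_PSP    # presynaptic traces
            v += w @ inputs[t] - v / TAU_MEM    # leaky membrane potential
            out = float(v > 1.0)
            if out:
                v = 0.0
            err = target[t] - out               # instantaneous error (INST)
            err_f += (err - err_f) / TAU_FILT   # low-pass filtered error (FILT)
            w += ETA * (err if rule == "INST" else err_f) * psp
    return w

def output_spikes(w):
    v, spikes = 0.0, []
    for t in range(T):
        v += w @ inputs[t] - v / TAU_MEM
        if v > 1.0:
            spikes.append(t)
            v = 0.0
    return spikes

for rule in ("INST", "FILT"):
    print(rule, "-> output spikes at t =", output_spikes(train(rule)))
```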

    The Resonant Dynamics of Speech Perception: Interword Integration and Duration-Dependent Backward Effects

    Full text link
    How do listeners integrate temporally distributed phonemic information into coherent representations of syllables and words? During fluent speech perception, variations in the durations of speech sounds and silent pauses can produce different perceived groupings. For example, increasing the silence interval between the words "gray chip" may result in the percept "great chip", whereas increasing the duration of fricative noise in "chip" may alter the percept to "great ship" (Repp et al., 1978). The ARTWORD neural model quantitatively simulates such context-sensitive speech data. In ARTWORD, sequential activation and storage of phonemic items in working memory provides bottom-up input to unitized representations, or list chunks, that group together sequences of items of variable length. The list chunks compete with each other as they dynamically integrate this bottom-up information. The winning groupings feed back to provide top-down support to their phonemic items. Feedback establishes a resonance which temporarily boosts the activation levels of selected items and chunks, thereby creating an emergent conscious percept. Because the resonance evolves more slowly than working memory activation, it can be influenced by information presented after relatively long intervening silence intervals. The same phonemic input can hereby yield different groupings depending on its arrival time. Processes of resonant transfer and competitive teaming help determine which groupings win the competition. Habituating levels of neurotransmitter along the pathways that sustain the resonant feedback lead to a resonant collapse that permits the formation of subsequent resonances. Air Force Office of Scientific Research (F49620-92-J-0225); Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657)
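
    The duration-dependent grouping effect can be caricatured with two competing leaky chunks. The sketch below (illustrative dynamics, not the ARTWORD equations) gives a one-item chunk support from the first item's working-memory trace alone, while a two-item chunk needs the later item as well; a long silence lets the first item's trace fade, handing the grouping to the one-item chunk.

```python
import math

def grouping(silence, tau_wm=30.0):
    """Winner among a one-item chunk ('gray') and a two-item chunk
    ('great'), read out near the end of the second item."""
    short, long_ = 0.0, 0.0
    for t in range(400):
        wm1 = 1.0 if t < 50 else math.exp(-(t - 50) / tau_wm)  # item-1 trace
        item2 = 1.0 if 50 + silence <= t < 80 + silence else 0.0
        d_short = wm1                  # one-item chunk: item 1 alone
        d_long = 4.0 * wm1 * item2     # two-item chunk needs item 1 AND item 2
        short += 0.05 * (-short + d_short - 2.0 * long_)   # shunting-style
        long_ += 0.05 * (-long_ + d_long - 2.0 * short)    # chunk competition
        short, long_ = max(short, 0.0), max(long_, 0.0)
        if item2 and t >= 50 + silence + 25:
            return "two-item chunk" if long_ > short else "one-item chunk"

for s in (10, 150):
    print(f"silence of {s} steps -> grouped as {grouping(s)}")
```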