
    A Digital Neuromorphic Architecture Efficiently Facilitating Complex Synaptic Response Functions Applied to Liquid State Machines

    Information in neural networks is represented as weighted connections, or synapses, between neurons. This poses a problem because the primary computational bottleneck for neural networks is the vector-matrix multiply, in which inputs are multiplied by the network weights. Conventional processing architectures are not well suited to simulating neural networks, often requiring large amounts of energy and time. Additionally, synapses in biological neural networks are not binary connections but exhibit a nonlinear response function as neurotransmitters are emitted and diffuse between neurons. Inspired by these neuroscience principles, we present a digital neuromorphic architecture, the Spiking Temporal Processing Unit (STPU), capable of modeling arbitrarily complex synaptic response functions without requiring additional hardware components. We consider the paradigm of spiking neurons with temporally coded information, as opposed to the non-spiking, rate-coded neurons used in most neural networks. In this paradigm we examine liquid state machines applied to speech recognition and show how a liquid state machine with temporal dynamics maps onto the STPU, demonstrating the flexibility and efficiency of the STPU for instantiating neural algorithms. Comment: 8 pages, 4 figures. Preprint of 2017 IJCNN
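
    To make the synaptic-response idea concrete, the following is a minimal NumPy sketch, not the STPU implementation: it models a nonlinear synaptic response by convolving each spike train with a temporal kernel rather than treating synapses as instantaneous weighted connections, then feeds the responses into the usual vector-matrix multiply. The alpha-function kernel, time constants, firing rates, and weights are all illustrative assumptions.

        # Minimal sketch (not the STPU): a synaptic response modeled as a
        # spike train convolved with a temporal kernel, feeding the usual
        # vector-matrix multiply. All parameter values are assumptions.
        import numpy as np

        dt = 1e-3                           # time step (s)
        t = np.arange(0, 0.1, dt)           # 100 ms window

        tau = 5e-3                          # synaptic time constant (s), assumed
        kernel = (t / tau) * np.exp(1 - t / tau)   # alpha-function response

        rng = np.random.default_rng(0)
        spikes = (rng.random((2, t.size)) < 0.02).astype(float)  # two spike trains

        # Each synapse's response is its spike train convolved with the kernel;
        # the neuron's total input is the weighted sum over synapses.
        weights = np.array([0.8, -0.3])     # one excitatory, one inhibitory (assumed)
        responses = np.array([np.convolve(s, kernel)[: t.size] for s in spikes])
        postsynaptic_input = weights @ responses   # the vector-matrix multiply step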

    Role of hilar mossy cells in the CA3-dentate gyrus network during sharp wave-ripple activity in vitro

    The dentate gyrus (DG) is considered the hippocampal input gate for information arriving from the entorhinal cortex. Embedded in the DG network are two excitatory cell types: granule cells (GCs), which receive inputs from the entorhinal cortex, and hilar mossy cells (MCs), which receive input from GCs and feedback projections from CA3 pyramidal cells (PCs). The postsynaptic targets of MC projections are the GCs and hilar interneurons in both the ipsilateral and contralateral hemispheres of the brain. The role of MCs during rhythmic population activity, and in particular during sharp-wave/ripple complexes (SWRs), has remained largely unexplored. SWRs are prominent field events in the hippocampus during slow-wave sleep and quiet wakefulness, and are involved in memory consolidation and future planning. In this study, we sought to understand whether MCs participate in CA3 SWRs using an in vitro model of SWRs. With simultaneous CA3 field potential and cell-attached recordings from MCs, we observed that a significant fraction of MCs (47%) are recruited into the active neuronal network during SWRs.
Moreover, MCs receive pronounced, compound, ripple-associated synaptic input whose excitatory and inhibitory components are both phase-coherent with, and delayed relative to, the CA3 ripple. Simultaneous patch recordings from CA3 pyramidal neurons and MCs revealed longer excitatory and inhibitory latencies in MCs, supporting feedback recruitment from CA3. Our data also show that the excitatory-to-inhibitory charge-transfer (E/I) ratio in MCs is higher than in CA3 PCs, making MCs more likely to spike during SWRs. Finally, we demonstrate that a significant fraction (66%) of tested GCs receive SWR-associated excitatory inputs that are delayed compared to those in MCs, indicating an indirect activation of GCs by CA3 PCs via MCs. Together, our data suggest the active involvement of mossy cells during SWRs and their importance as a relay in the CA3-dentate gyrus network in this important physiological network state.
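
    As a rough illustration of the charge-transfer ratio used above, the sketch below integrates synthetic excitatory and inhibitory current traces over an event window; the trace shapes, amplitudes, and sampling rate are invented for illustration and are not the study's data.

        # Toy sketch of an E/I charge-transfer ratio: charge is the time
        # integral of the synaptic current over the SWR-associated window.
        # The traces below are synthetic stand-ins, not recorded data.
        import numpy as np

        dt = 1e-4                                     # 10 kHz sampling, assumed
        t = np.arange(0, 0.05, dt)                    # 50 ms event window

        # Difference-of-exponentials current shapes (pA), signs per convention.
        epsc = -30 * (np.exp(-t / 5e-3) - np.exp(-t / 1e-3))   # inward, excitatory
        ipsc = 60 * (np.exp(-t / 10e-3) - np.exp(-t / 2e-3))   # outward, inhibitory

        q_exc = np.trapz(np.abs(epsc), dx=dt)         # charge in pC
        q_inh = np.trapz(np.abs(ipsc), dx=dt)
        ei_ratio = q_exc / q_inh                      # > 1 favors spiking during SWRs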

    Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware

    In recent years, the field of neuromorphic systems, which consume orders of magnitude less power than conventional architectures, has gained significant momentum. However, their wider use is still hindered by the lack of algorithms that can harness the strengths of such architectures. While neuromorphic adaptations of representation learning algorithms are now emerging, efficient processing of temporal sequences and variable-length inputs remains difficult. Recurrent neural networks (RNNs) are widely used in machine learning to solve a variety of sequence learning tasks. In this work we present a train-and-constrain methodology that enables the mapping of machine-learned (Elman) RNNs onto a substrate of spiking neurons while remaining compatible with the capabilities of current and near-future neuromorphic systems. This "train-and-constrain" method consists of first training RNNs using backpropagation through time, then discretizing the weights, and finally converting them to spiking RNNs by matching the responses of artificial neurons with those of the spiking neurons. We demonstrate our approach on a natural language processing task (question classification), mapping the entire recurrent layer of the network onto IBM's Neurosynaptic System "TrueNorth", a spike-based digital neuromorphic hardware architecture. TrueNorth imposes specific constraints on connectivity and on neural and synaptic parameters. To satisfy these constraints, it was necessary to discretize the synaptic weights and neural activities to 16 levels, and to limit fan-in to 64 inputs. We find that short synaptic delays are sufficient to implement the dynamical (temporal) aspect of the RNN in the question classification task. The hardware-constrained model achieved 74% accuracy in question classification while using less than 0.025% of the cores on one TrueNorth chip, resulting in an estimated power consumption of ~17 µW.
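
    The constrain step lends itself to a short sketch. Below is an illustrative NumPy version of discretizing trained recurrent weights to 16 levels and pruning each neuron's inputs to a fan-in of 64; the function names and the pruning heuristic are assumptions, and this is not IBM's TrueNorth toolchain.

        # Illustrative "constrain" step: quantize weights to 16 levels and
        # enforce a 64-input fan-in limit. Not the authors' actual pipeline.
        import numpy as np

        def discretize(weights, levels=16):
            """Snap each weight to the nearest of `levels` evenly spaced values."""
            w_max = np.abs(weights).max()
            allowed = np.linspace(-w_max, w_max, levels)
            idx = np.abs(weights[..., None] - allowed).argmin(axis=-1)
            return allowed[idx]

        rng = np.random.default_rng(1)
        w_rec = rng.normal(0.0, 0.5, size=(100, 100))   # toy trained weights

        w_q = discretize(w_rec, levels=16)
        assert len(np.unique(w_q)) <= 16

        # Fan-in constraint: keep at most 64 incoming weights per neuron,
        # pruning the smallest-magnitude entries beyond that budget.
        fan_in = 64
        for i, row in enumerate(w_q):
            nz = np.flatnonzero(row)
            if nz.size > fan_in:
                drop = nz[np.argsort(np.abs(row[nz]))[: nz.size - fan_in]]
                w_q[i, drop] = 0.0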

    How spiking neurons give rise to a temporal-feature map

    A temporal-feature map is a topographic neuronal representation of temporal attributes of phenomena or objects that occur in the outside world. We explain the evolution of such maps by means of a spike-based Hebbian learning rule in conjunction with a presynaptically unspecific contribution: if a synapse changes, then all other synapses connected to the same axon change by a small fraction as well. The learning equation is solved for the case of an array of Poisson neurons. We discuss the evolution of a temporal-feature map and the synchronization of the single cells’ synaptic structures as a function of the strength of presynaptic unspecific learning. We also give an upper bound for the magnitude of the presynaptic interaction by estimating its impact on the noise level of synaptic growth. Finally, we compare the results with those obtained from a learning equation for nonlinear neurons and show that synaptic structure formation may profit from the nonlinearity.
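
    The presynaptically unspecific contribution can be sketched in a few lines. The update below adds, on top of a generic Hebbian term, a fraction eps of the summed change at an axon's other synapses to every synapse that axon makes; the rate-based Hebbian term, the parameter names, and all values are assumptions standing in for the paper's spike-based equation.

        # Toy update rule: a Hebbian change per synapse plus an unspecific
        # presynaptic term spreading a fraction eps of each axon's changes
        # to all of its other synapses. Not the paper's exact equation.
        import numpy as np

        eta = 0.01     # Hebbian learning rate, assumed
        eps = 0.05     # unspecific presynaptic fraction, assumed

        rng = np.random.default_rng(2)
        w = rng.random((20, 10))       # w[i, j]: synapse from axon j onto neuron i
        pre = rng.random(10)           # presynaptic activities (e.g., Poisson rates)
        post = rng.random(20)          # postsynaptic activities

        dw_hebb = eta * np.outer(post, pre)   # specific, correlation-driven change
        # For each axon j, every synapse also receives eps times the summed
        # change occurring at that axon's other synapses.
        dw_unspec = eps * (dw_hebb.sum(axis=0, keepdims=True) - dw_hebb)
        w += dw_hebb + dw_unspec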

    Spontaneous spiking in an autaptic Hodgkin-Huxley set up

    The effect of intrinsic channel noise is investigated for the dynamic response of a neuronal cell with a delayed feedback loop. The loop is based on the so-called autapse phenomenon, in which a neuron establishes synaptic connections not only to neighboring cells but also back onto itself, its axon contacting its own dendrites. The biophysical modeling is achieved in terms of a stochastic Hodgkin-Huxley model containing such a built-in delayed feedback. The fluctuations stem from intrinsic channel noise, caused by the stochastic nature of the gating dynamics of ion channels. The influence of the delayed stimulus is systematically analyzed with respect to the coupling parameter and the delay time in terms of the interspike interval histograms and the average interspike interval. The delayed feedback manifests itself in the occurrence of bursting and a rich multimodal interspike interval distribution, exhibiting a delay-induced reduction of the spontaneous spiking activity at characteristic frequencies. Moreover, a specific frequency-locking mechanism is detected for the mean interspike interval. Comment: 8 pages, 10 figures
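
    The delayed self-feedback structure is easy to sketch. Below is a minimal deterministic-gating Hodgkin-Huxley integration with a feedback current of the assumed form I_fb(t) = kappa * (V(t - tau_d) - V(t)) and a crude current-noise stand-in for channel noise; the coupling form, the noise model, and all parameter values are illustrative assumptions, not the paper's model.

        # Hodgkin-Huxley neuron with delayed autaptic feedback via a ring
        # buffer. Coupling form, noise stand-in, and parameters are assumed.
        import numpy as np

        C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3      # uF/cm^2, mS/cm^2
        E_Na, E_K, E_L = 50.0, -77.0, -54.4            # mV

        def rates(V):
            an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
            bn = 0.125 * np.exp(-(V + 65) / 80)
            am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
            bm = 4.0 * np.exp(-(V + 65) / 18)
            ah = 0.07 * np.exp(-(V + 65) / 20)
            bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
            return an, bn, am, bm, ah, bh

        dt, T = 0.01, 500.0                 # ms
        kappa, tau_d = 0.1, 8.0             # feedback strength and delay, assumed
        delay_steps = int(tau_d / dt)

        V, n, m, h = -65.0, 0.32, 0.05, 0.60
        V_hist = np.full(delay_steps, V)    # stores V(t - tau_d)
        rng = np.random.default_rng(3)
        trace = np.empty(int(T / dt))

        for k in range(trace.size):
            an, bn, am, bm, ah, bh = rates(V)
            n += dt * (an * (1 - n) - bn * n)
            m += dt * (am * (1 - m) - bm * m)
            h += dt * (ah * (1 - h) - bh * h)
            I_ion = (g_Na * m**3 * h * (V - E_Na)
                     + g_K * n**4 * (V - E_K) + g_L * (V - E_L))
            slot = k % delay_steps
            I_fb = kappa * (V_hist[slot] - V)          # delayed self-feedback
            I_noise = rng.normal() / np.sqrt(dt)       # crude channel-noise proxy
            V_hist[slot] = V                           # save V(t) for t + tau_d
            V += dt * (-I_ion + I_fb + I_noise) / C
            trace[k] = V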