
    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. While Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this approach by building processor architectures in which memory is distributed together with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.
    Comment: Submitted to Proceedings of the IEEE; review of recently proposed neuromorphic computing platforms and systems.

    Spike-based VITE control with Dynamic Vision Sensor applied to an Arm Robot.

    Spike-based motor control is very important in the field of robotics and for the neuromorphic engineering community, since it bridges the gap between sensing/processing devices and motor control without abandoning the spike-based philosophy, which improves response speed and reduces power consumption. This paper presents an accurate neuro-inspired spike-based system composed of a DVS retina, a visual processing system that detects and tracks objects, and an SVITE motor controller, where every stage follows the spike-based philosophy. The control system is a spiking version of the neuro-inspired open-loop VITE control algorithm, implemented on two FPGA boards: the first runs the algorithm and the second drives the motors with spikes. The robotic platform is a low-cost arm with four degrees of freedom.
    Ministerio de Ciencia e Innovación TEC2009-10639-C04-02/01; Ministerio de Economía y Competitividad TEC2012-37868-C04-02/0
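    For context, the VITE (Vector Integration To Endpoint) model generates a movement by integrating a difference vector between the target and the present position, gated by a GO signal. The sketch below is a minimal discrete-time software rendering of those dynamics; the function name, parameter values, and rising GO profile are illustrative assumptions and do not reproduce the spike-based FPGA implementation described in the paper.

```python
import numpy as np

def vite_trajectory(target, p0, alpha=0.1, dt=1.0, steps=500):
    """Minimal open-loop VITE dynamics (Bullock & Grossberg style):
    a difference vector V integrates toward (T - P), and the present
    position P is driven by the rectified V, gated by a GO signal.
    Names and constants are illustrative, not taken from the paper."""
    T = np.asarray(target, dtype=float)
    P = np.asarray(p0, dtype=float)
    V = np.zeros_like(P)
    trajectory = [P.copy()]
    for k in range(steps):
        go = k / (k + 50.0)              # slowly rising GO signal (illustrative)
        V += dt * alpha * (-V + T - P)   # difference-vector integration
        P += dt * go * np.maximum(V, 0)  # rectified drive; the full model pairs
                                         # agonist/antagonist channels for signed motion
        trajectory.append(P.copy())
    return np.array(trajectory)

# Example: drive a 4-joint arm model from rest toward a joint-space target.
traj = vite_trajectory(target=[0.8, 0.2, 0.5, 0.1], p0=[0.0, 0.0, 0.0, 0.0])
print(traj[-1])  # final joint positions approach the target
```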

    A scalable multi-core architecture with heterogeneous memory structures for Dynamic Neuromorphic Asynchronous Processors (DYNAPs)

    Neuromorphic computing systems comprise networks of neurons that use asynchronous events for both computation and communication. This type of representation offers several advantages in terms of bandwidth and power consumption in neuromorphic electronic systems. However, managing the traffic of asynchronous events in large-scale systems is a daunting task, both in terms of circuit complexity and memory requirements. Here we present a novel routing methodology that employs both hierarchical and mesh routing strategies and combines heterogeneous memory structures to minimize both memory requirements and latency, while maximizing programming flexibility to support a wide range of event-based neural network architectures through parameter configuration. We validated the proposed scheme in a prototype multi-core neuromorphic processor chip that employs hybrid analog/digital circuits for emulating synapse and neuron dynamics, together with asynchronous digital circuits for managing the address-event traffic. We present a theoretical analysis of the proposed connectivity scheme, describe the methods and circuits used to implement it, and characterize the prototype chip. Finally, we demonstrate the use of the neuromorphic processor with a convolutional neural network for the real-time classification of visual symbols flashed to a dynamic vision sensor (DVS) at high speed.
    Comment: 17 pages, 14 figures
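    To make the connectivity idea concrete, the toy model below delivers address-events by tag matching: each core stores the source tags its neurons subscribe to (a software stand-in for per-neuron memory entries), and a two-level router combines local broadcast with selective forwarding to remote clusters. All class names, the two-level clustering, and the subscription mechanics are illustrative assumptions, not the chip's actual circuits or memory organization.

```python
from collections import defaultdict

class Core:
    """Toy tag-based event delivery inside one core: each neuron stores the
    set of source tags it listens to (an illustrative stand-in for per-neuron
    memory entries; not the chip's actual memory organization)."""
    def __init__(self, core_id):
        self.core_id = core_id
        self.subscriptions = defaultdict(set)   # neuron_id -> set of source tags

    def subscribe(self, neuron_id, tag):
        self.subscriptions[neuron_id].add(tag)

    def deliver(self, tag):
        # Local neurons whose stored tags match the incoming event's tag.
        return [n for n, tags in self.subscriptions.items() if tag in tags]

class HierarchicalRouter:
    """Toy two-level router: an event is offered to the cores of the source's
    own cluster and forwarded to a remote cluster only if some core there has
    a matching entry (a rough sketch of mixing local broadcast with selective
    longer-range routing)."""
    def __init__(self):
        self.clusters = defaultdict(list)        # cluster_id -> list of Cores

    def add_core(self, cluster_id, core):
        self.clusters[cluster_id].append(core)

    def route(self, src_cluster, tag):
        targets = []
        for cluster_id, cores in self.clusters.items():
            hits = [(cluster_id, c.core_id, n) for c in cores for n in c.deliver(tag)]
            if cluster_id == src_cluster or hits:
                targets.extend(hits)
        return targets

# Example: an event with tag 7 reaches subscribed neurons in two clusters.
router = HierarchicalRouter()
c0, c1 = Core(0), Core(1)
c0.subscribe(neuron_id=3, tag=7)
c1.subscribe(neuron_id=0, tag=7)
router.add_core(cluster_id=0, core=c0)
router.add_core(cluster_id=1, core=c1)
print(router.route(src_cluster=0, tag=7))   # [(0, 0, 3), (1, 1, 0)]
```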

    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at establishing this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for maturing the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
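    To illustrate the simulator-independent layer this workflow builds on, the sketch below describes a small network once in PyNN and runs it on a software reference backend; pyNN.nest is used here only as a stand-in for a concrete backend, and the network and parameter values are illustrative assumptions, not models from the paper's benchmark library.

```python
# A model written against the PyNN API is, in principle, portable across
# backends: swapping this import for a hardware backend module would reuse
# the same network description (pyNN.nest is the reference NEST simulator,
# chosen here purely for illustration).
import pyNN.nest as sim

sim.setup(timestep=0.1)  # ms

# Small excitatory/inhibitory network with Poisson background input.
exc = sim.Population(80, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0), label="exc")
inh = sim.Population(20, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0), label="inh")
noise = sim.Population(80, sim.SpikeSourcePoisson(rate=10.0), label="noise")

sim.Projection(noise, exc, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0))
sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
               synapse_type=sim.StaticSynapse(weight=0.005, delay=1.0))
sim.Projection(inh, exc, sim.FixedProbabilityConnector(0.1),
               synapse_type=sim.StaticSynapse(weight=0.005, delay=1.0),
               receptor_type="inhibitory")

exc.record("spikes")
sim.run(1000.0)  # ms
spiketrains = exc.get_data().segments[0].spiketrains
sim.end()
```

    Running the same description against a hardware backend or its executable specification and comparing the recorded spike trains with such a software reference is the kind of cross-check the evaluation scheme described above automates.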

    Neuromorphic silicon neuron circuits

    23 pages, 21 figures, 2 tables.
    Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems and bidirectional brain–machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic, conductance-based Hodgkin–Huxley models to bi-dimensional generalized adaptive integrate-and-fire models. We compare the different design methodologies used for each silicon neuron described, and demonstrate their features with experimental results measured from a wide range of fabricated VLSI chips.
    This work was supported by the EU ERC grant 257219 (neuroP), the EU ICT FP7 grants 231467 (eMorph), 216777 (NABAB), 231168 (SCANDLE) and 15879 (FACETS), by the Swiss National Science Foundation grant 119973 (SoundRec), by the UK EPSRC grant no. EP/C010841/1, by the Spanish grants (with support from the European Regional Development Fund) TEC2006-11730-C03-01 (SAMANTA2) and TEC2009-10639-C04-01 (VULCANO), by the Andalusian grant num. P06TIC01417 (Brain System), and by the Australian Research Council grants num. DP0343654 and DP0881219.
    Peer Reviewed
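    As a point of reference for the "bi-dimensional generalized adaptive integrate-and-fire" end of that model range, the sketch below simulates an adaptive exponential integrate-and-fire (AdEx) neuron in discrete time; the parameter values are illustrative textbook-style defaults, not measurements from any of the surveyed chips.

```python
import numpy as np

def adex(I, dt=0.1, C=200.0, g_L=10.0, E_L=-70.0, V_T=-50.0, dT=2.0,
         tau_w=30.0, a=2.0, b=60.0, V_reset=-58.0, V_peak=0.0):
    """Bi-dimensional adaptive exponential integrate-and-fire neuron
    (Brette & Gerstner), the kind of generalized model many silicon
    neurons emulate. Units: pF, nS, mV, ms, pA. Parameter values are
    illustrative defaults, not taken from the paper."""
    V, w = E_L, 0.0
    Vs, spikes = [], []
    for k, i_k in enumerate(I):
        dV = (-g_L*(V - E_L) + g_L*dT*np.exp((V - V_T)/dT) - w + i_k) / C
        dw = (a*(V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:                  # spike: reset membrane, increment adaptation
            spikes.append(k * dt)
            V, w = V_reset, w + b
        Vs.append(V)
    return np.array(Vs), spikes

# Example: a 500 ms step current of 500 pA produces an adapting spike train.
V_trace, spike_times = adex(I=np.full(5000, 500.0))
print(len(spike_times), "spikes")
```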

    High-Performance and Energy-Efficient Leaky Integrate-and-Fire Neuron and Spike Timing-Dependent Plasticity Circuits in 7nm FinFET Technology

    In designing neuromorphic circuits and systems, developing compact and energy-efficient neuron and synapse circuits is essential for high-performance on-chip neural architectures. Toward that end, this work uses the advanced low-power and compact 7 nm FinFET technology to design leaky integrate-and-fire (LIF) neuron and spike-timing-dependent plasticity (STDP) circuits. The proposed STDP circuit realizes STDP learning with only six FinFETs and three small capacitors (two of 10 fF and one of 20 fF), while the LIF neuron circuit employs 12 transistors and two capacitors (20 fF). The evaluation results show that, in addition to a 60% area saving, the proposed STDP circuit reduces total average power consumption by 68% and energy dissipation by 43% compared to previous works. The proposed LIF neuron circuit achieves 34% area, 46% power, and 40% energy savings compared to its counterparts. The neuron can also tune its firing frequency between 5 MHz and 330 MHz using an external control voltage. These results emphasize the potential of the proposed neuron and STDP learning circuits for compact and energy-efficient neuromorphic computing systems.
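    The circuits themselves are transistor-level, but the behavior they target can be stated compactly in software; the sketch below gives a behavioral LIF neuron and a pair-based STDP weight update with illustrative constants in normalized units, not the device parameters or waveforms reported for the 7 nm circuits.

```python
import numpy as np

def lif(I, dt=0.01, tau_m=1.0, R=1.0, v_th=0.8, v_reset=0.0):
    """Behavioral leaky integrate-and-fire dynamics (normalized units):
    the membrane voltage leaks toward rest, integrates the input current,
    and is reset after crossing threshold. A software stand-in for the
    12-transistor circuit, not a model of it."""
    v, spikes = 0.0, []
    for k, i_k in enumerate(I):
        v += dt * (-v + R * i_k) / tau_m   # leak plus input integration
        if v >= v_th:                      # threshold crossing -> spike and reset
            spikes.append(k * dt)
            v = v_reset
    return spikes

def stdp_dw(delta_t, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP rule: potentiate when the presynaptic spike precedes the
    postsynaptic one (delta_t = t_post - t_pre > 0), depress otherwise.
    Constants are illustrative, not extracted from the circuit."""
    if delta_t > 0:
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)

# Constant drive above threshold yields regular spiking; causal spike pairing
# potentiates the synapse, anti-causal pairing depresses it.
print(len(lif(np.full(1000, 1.2))), "spikes")
print(stdp_dw(+5.0), stdp_dw(-5.0))
```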