38 research outputs found

    Dynamic Power Management for Neuromorphic Many-Core Systems

    This work presents a dynamic power management architecture for neuromorphic many-core systems such as SpiNNaker. A fast dynamic voltage and frequency scaling (DVFS) technique is presented which allows the processing elements (PE) to change their supply voltage and clock frequency individually and autonomously within less than 100 ns. This capability is exploited by the neuromorphic simulation software flow, which sets the performance level (PL) of each PE based on its actual workload within each simulation cycle. A test chip in 28 nm SLP CMOS technology has been implemented. It includes 4 PEs which can be scaled from 0.7 V to 1.0 V with frequencies from 125 MHz to 500 MHz at three distinct PLs. Measurements of three neuromorphic benchmarks show that the total PE power consumption can be reduced by 75%, with an 80% reduction of baseline power and a 50% reduction of energy per neuron and synapse computation, all while maintaining temporary peak system performance to achieve biological real-time operation. A numerical model of this power management scheme is derived which allows DVFS architecture exploration for neuromorphic systems. The proposed technique is to be used in the second-generation SpiNNaker neuromorphic many-core system.
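    As a concrete illustration of the workload-driven performance-level selection described above, the following is a minimal Python sketch using the operating points quoted in the abstract (0.7 V to 1.0 V, 125 MHz to 500 MHz, three PLs). The cycle budget, workload numbers, thresholds, and the C·V²·f power model are illustrative assumptions, not figures from the paper.

```python
# Illustrative sketch of per-cycle DVFS performance-level (PL) selection for a
# processing element (PE), using the operating points quoted in the abstract.
# Thresholds and the effective-capacitance constant are made-up placeholders.

PERFORMANCE_LEVELS = [
    # (name, supply voltage [V], clock frequency [MHz])
    ("PL0", 0.7, 125),
    ("PL1", 0.85, 250),
    ("PL2", 1.0, 500),
]

C_EFF_NF = 1.0  # hypothetical effective switched capacitance, nF


def select_pl(workload_cycles: int, cycle_budget_us: float):
    """Pick the lowest PL whose frequency finishes the workload within the
    simulation-cycle budget (biological real time); fall back to the highest
    PL if none suffices."""
    for name, vdd, f_mhz in PERFORMANCE_LEVELS:
        if workload_cycles <= f_mhz * cycle_budget_us:  # available cycles = f * t
            return name, vdd, f_mhz
    return PERFORMANCE_LEVELS[-1]


def dynamic_power_mw(vdd: float, f_mhz: float) -> float:
    """Classic P = C * V^2 * f dynamic-power estimate (nF * V^2 * MHz = mW)."""
    return C_EFF_NF * vdd ** 2 * f_mhz


# Example: a light simulation cycle can drop to PL0, a heavy one needs PL2.
for cycles in (100_000, 450_000):
    name, vdd, f = select_pl(cycles, cycle_budget_us=1000.0)  # 1 ms cycle
    print(cycles, name, f"{dynamic_power_mw(vdd, f):.0f} mW")
```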

    MorphIC: A 65-nm 738k-Synapse/mm² Quad-Core Binary-Weight Digital Neuromorphic Processor with Stochastic Spike-Driven Online Learning

    Recent trends in the field of neural network accelerators investigate weight quantization as a means to increase the resource- and power-efficiency of hardware devices. As full on-chip weight storage is necessary to avoid the high energy cost of off-chip memory accesses, memory reduction requirements for weight storage pushed toward the use of binary weights, which were demonstrated to have a limited accuracy reduction on many applications when quantization-aware training techniques are used. In parallel, spiking neural network (SNN) architectures are explored to further reduce power when processing sparse event-based data streams, while on-chip spike-based online learning appears as a key feature for applications constrained in power and resources during the training phase. However, designing power- and area-efficient spiking neural networks still requires the development of specific techniques in order to leverage on-chip online learning on binary weights without compromising the synapse density. In this work, we demonstrate MorphIC, a quad-core binary-weight digital neuromorphic processor embedding a stochastic version of the spike-driven synaptic plasticity (S-SDSP) learning rule and a hierarchical routing fabric for large-scale chip interconnection. The MorphIC SNN processor embeds a total of 2k leaky integrate-and-fire (LIF) neurons and more than two million plastic synapses for an active silicon area of 2.86 mm² in 65-nm CMOS, achieving a high density of 738k synapses/mm². MorphIC demonstrates an order-of-magnitude improvement in the area-accuracy tradeoff on the MNIST classification task compared to previously proposed SNNs, while having no penalty in the energy-accuracy tradeoff. Comment: This document is the paper as accepted for publication in the IEEE Transactions on Biomedical Circuits and Systems journal (2019); the fully edited paper is available at https://ieeexplore.ieee.org/document/876400
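    To make the kind of learning rule named above more tangible, here is a minimal Python sketch of a stochastic, spike-driven plasticity update on binary weights, in the spirit of S-SDSP. The thresholds, the switching probability, and the use of the postsynaptic membrane potential as the eligibility signal are illustrative assumptions, not the exact rule implemented on MorphIC.

```python
import random

# Hypothetical parameters; not taken from the paper.
THETA_MEM = 0.5   # postsynaptic membrane-potential threshold
P_UPDATE = 0.06   # probability of actually flipping the binary weight


def sdsp_update(weight_bit: int, v_mem_post: float, rng=random) -> int:
    """On a presynaptic spike: potentiate toward 1 if the postsynaptic membrane
    potential is above threshold, otherwise depress toward 0. The binary weight
    only flips with probability P_UPDATE, so that, averaged over many events,
    the stochastic bit flip approximates a small analog weight change."""
    target = 1 if v_mem_post >= THETA_MEM else 0
    if weight_bit != target and rng.random() < P_UPDATE:
        return target
    return weight_bit


# Example: repeated presynaptic spikes paired with a depolarized postsynaptic
# neuron slowly drive a 0-weight toward 1.
w = 0
for _ in range(100):
    w = sdsp_update(w, v_mem_post=0.8)
print(w)
```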

    Energy-Efficient Ferroelectric Field-Effect Transistor-Based Oscillators for Neuromorphic System Design

    Neuromorphic or bio-inspired computing platforms, as an alternative to von Neumann architectures, have benefited from the excellent features of emerging technologies in order to emulate the behavior of the biological brain in an accurate and energy-efficient way. Integrability with CMOS technology and low power consumption make the ferroelectric field-effect transistor (FEFET) an attractive candidate for such paradigms, particularly for image processing. In this article, we use the FEFET device to build energy-efficient oscillatory neurons as the main building blocks of neural networks for image-processing applications, especially edge detection. Based on our simulation results, we estimate a significant energy-efficiency advantage over other technologies, amounting to roughly a 5-120× reduction depending on the design.
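    For readers unfamiliar with oscillator-based image processing, the following Python sketch shows one common oscillatory-neural-network edge-detection scheme: each pixel drives an oscillator whose frequency tracks the pixel intensity, and an edge is flagged wherever adjacent oscillators drift out of phase. The mapping constants and threshold are assumptions, and the paper's FEFET relaxation oscillators are modeled here only at this abstract, behavioral level.

```python
import numpy as np


def onn_edges(image: np.ndarray, t_obs: float = 0.05, f_span: float = 10.0,
              phase_thresh: float = np.pi / 4) -> np.ndarray:
    """Return a boolean edge map for a 2-D grayscale image with values in [0, 1]."""
    freq = 1.0 + f_span * image            # intensity -> oscillation frequency
    phase = 2.0 * np.pi * freq * t_obs     # phase accumulated over the window
    # Phase difference with the right and lower neighbor, wrapped to (-pi, pi].
    d_x = np.angle(np.exp(1j * (phase[:, 1:] - phase[:, :-1])))
    d_y = np.angle(np.exp(1j * (phase[1:, :] - phase[:-1, :])))
    edges = np.zeros(image.shape, dtype=bool)
    edges[:, :-1] |= np.abs(d_x) > phase_thresh
    edges[:-1, :] |= np.abs(d_y) > phase_thresh
    return edges


# Example: a vertical step edge between two flat regions is the only place
# where neighboring oscillators desynchronize.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(onn_edges(img).astype(int))
```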

    Neuromorphic object localization using resistive memories and ultrasonic transducers

    Real-world sensory-processing applications require compact, low-latency, and low-power computing systems. Enabled by their in-memory event-driven computing abilities, hybrid memristive-CMOS (Complementary Metal-Oxide-Semiconductor) neuromorphic architectures provide an ideal hardware substrate for such tasks. To demonstrate the full potential of such systems, we propose and experimentally demonstrate an end-to-end sensory processing solution for a real-world object localization application. Drawing inspiration from the barn owl's neuroanatomy, we developed a bio-inspired, event-driven object localization system that couples state-of-the-art piezoelectric micromachined ultrasound transducer sensors to a neuromorphic computational map based on resistive memories. We present measurement results from the fabricated system comprising resistive-memory-based coincidence detectors, delay-line circuits, and a full-custom ultrasound sensor. We use these experimental results to calibrate our system-level simulations, which are then used to estimate the angular resolution and energy efficiency of the object localization model. The results reveal the potential of our approach, estimated to be orders of magnitude more energy-efficient than a microcontroller performing the same task.
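    The barn-owl-inspired scheme the abstract refers to is, at an abstract level, Jeffress-style coincidence detection: echoes reach two receivers with an inter-receiver time difference (ITD) that depends on the object angle, and the coincidence detector whose internal delay best cancels that ITD responds most strongly. The Python sketch below illustrates this idea only; the sensor spacing, sampling rate, signal model, and delay grid are assumptions, not the paper's parameters or circuits.

```python
import numpy as np

# Hypothetical setup constants.
FS = 1_000_000        # sample rate, Hz
SPACING_M = 0.1       # assumed distance between the two receivers, m
C_SOUND = 343.0       # speed of sound in air, m/s


def estimate_angle(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate the echo azimuth (degrees) by finding the internal delay that
    maximizes left/right coincidence (a cross-correlation over candidate lags,
    one 'coincidence detector' per lag)."""
    max_lag = int(FS * SPACING_M / C_SOUND)          # largest physical ITD in samples
    lags = np.arange(-max_lag, max_lag + 1)
    scores = [np.sum(left * np.roll(right, lag)) for lag in lags]
    itd = lags[int(np.argmax(scores))] / FS          # best-matching ITD, seconds
    return float(np.degrees(np.arcsin(np.clip(itd * C_SOUND / SPACING_M, -1, 1))))


# Example: synthesize an ultrasound echo that reaches the left receiver 100 us
# before the right one, then recover the corresponding angle.
t = np.arange(2000) / FS
pulse = np.exp(-((t - 1e-3) ** 2) / (2 * (5e-5) ** 2)) * np.sin(2 * np.pi * 40_000 * t)
left = pulse
right = np.roll(pulse, 100)                          # 100-sample (100 us) lag
print(f"estimated angle: {estimate_angle(left, right):.1f} deg")
```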

    Modern Semiconductor Technologies for Neuromorphic Hardware

    Neuromorphic hardware is a promising tool for neuroscience and technological applications. This thesis addresses the extent to which such systems can benefit from advances in CMOS scaling, using the existing BrainScaleS hardware system as a reference. A 65 nm process technology was selected and its basic characteristics were evaluated using prototype chips. A system providing a large number of programmable voltage and current sources, based on capacitive storage cells, was developed, together with a novel scheme for refreshing the cells. This system has been characterized in silicon. Two components required for a synapse implementation consisting primarily of digital circuits were developed and tested in a prototype chip. One is an orthogonal dual-port SRAM with a specialized structure in which every 8-bit word stored in the memory can be accessed in a single operation from either port. The second is an 8-bit current DAC used for generating postsynaptic events. Finally, the analog neuron implementation from the existing system was transferred to the 65 nm process technology using thick-oxide transistors. Simulations suggest that comparable performance can be achieved. In conclusion, modern process technologies will contribute to the successful realization of large-scale neuromorphic hardware systems.