
    Demonstrating Advantages of Neuromorphic Computation: A Pilot Study

    Neuromorphic devices attempt to mimic aspects of the brain's architecture and dynamics with the aim of replicating its hallmark functional capabilities in terms of computational power, robust learning, and energy efficiency. We employ a single-chip prototype of the BrainScaleS 2 neuromorphic system to implement a proof-of-concept demonstration of reward-modulated spike-timing-dependent plasticity in a spiking network that learns to play the Pong video game by smooth pursuit. This system combines an electronic mixed-signal substrate for emulating neuron and synapse dynamics with an embedded digital processor for on-chip learning, which in this work also serves to simulate the virtual environment and learning agent. The analog emulation of neuronal membrane dynamics enables a 1000-fold acceleration with respect to biological real-time, with the entire chip operating on a power budget of 57 mW. Compared to an equivalent simulation using state-of-the-art software, the on-chip emulation is at least one order of magnitude faster and three orders of magnitude more energy-efficient. We demonstrate how on-chip learning can mitigate the effects of fixed-pattern noise, which is unavoidable in analog substrates, while making use of temporal variability for action exploration. Learning compensates for imperfections of the physical substrate, as manifested in neuronal parameter variability, by adapting synaptic weights to match the respective excitability of individual neurons.
    Comment: Added measurements with noise in NEST simulation; added a notice about journal publication. Frontiers in Neuromorphic Engineering (2019).
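
    To make the learning rule concrete, the following Python sketch implements pair-based reward-modulated STDP with eligibility traces: STDP-shaped spike-pair contributions accumulate in a trace, and a reward signal gates the trace into an actual weight change. All names, parameter values, and the pairing scheme are illustrative assumptions, not the BrainScaleS 2 on-chip implementation.

    # Minimal sketch of reward-modulated STDP (R-STDP); constants are
    # illustrative, not taken from the BrainScaleS 2 system.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pre, n_post = 32, 3
    w = rng.uniform(0.0, 1.0, size=(n_pre, n_post))     # synaptic weights
    elig = np.zeros_like(w)                             # eligibility traces
    tau_e, a_plus, a_minus, eta = 50.0, 1.0, 1.2, 0.01  # ms and unitless gains

    def stdp_window(dt_ms):
        """Pair-based STDP kernel: potentiate causal pre-before-post pairs,
        depress acausal ones."""
        if dt_ms >= 0:
            return a_plus * np.exp(-dt_ms / 20.0)
        return -a_minus * np.exp(dt_ms / 20.0)

    def on_spike_pair(i, j, dt_ms):
        """Accumulate the STDP contribution of a pre/post spike pair
        (dt_ms = t_post - t_pre) into the eligibility trace."""
        elig[i, j] += stdp_window(dt_ms)

    def on_reward(reward, dt_ms=1.0):
        """Reward gates the eligibility trace into a weight update;
        traces decay between updates."""
        global w
        w += eta * reward * elig
        np.clip(w, 0.0, 1.0, out=w)
        elig *= np.exp(-dt_ms / tau_e)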

    The effect of heterogeneity on decorrelation mechanisms in spiking neural networks: a neuromorphic-hardware study

    High-level brain functions such as memory, classification, and reasoning can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear sub-threshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with non-linear, conductance-based synapses. Emulations of these networks on the analog neuromorphic hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm ...
    Comment: 20 pages, 10 figures, supplement.
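
    The decorrelation mechanism itself fits in a few lines. In the toy Python model below (our own sketch, not the Spikey emulation), a common input component induces pairwise correlations across a population, and feedback that subtracts the population mean, a crude stand-in for recurrent inhibitory feedback, suppresses them to near zero.

    # Toy illustration of shared-input correlations and their suppression
    # by negative feedback; all numbers are arbitrary.
    import numpy as np

    rng = np.random.default_rng(1)
    T, N = 20000, 50
    shared = rng.normal(size=(T, 1))      # common presynaptic drive
    private = rng.normal(size=(T, N))     # independent private drive
    x = 0.5 * shared + private            # inputs with a shared component

    def mean_pairwise_corr(a):
        """Average off-diagonal correlation coefficient across the population."""
        c = np.corrcoef(a.T)
        return (c.sum() - N) / (N * (N - 1))

    print("feedforward only:", mean_pairwise_corr(x))   # ~0.2, shared input dominates

    # Inhibitory feedback approximated as subtraction of the population mean,
    # the self-generated signal that tracks (and cancels) the common input.
    y = x - x.mean(axis=1, keepdims=True)
    print("with feedback:  ", mean_pairwise_corr(y))    # close to zero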

    High-Performance and Energy-Efficient Leaky Integrate-and-Fire Neuron and Spike Timing-Dependent Plasticity Circuits in 7nm FinFET Technology

    In designing neuromorphic circuits and systems, compact and energy-efficient neuron and synapse circuits are essential for high-performance on-chip neural architectures. Toward that end, this work utilizes the advanced low-power, compact 7 nm FinFET technology to design leaky integrate-and-fire (LIF) neuron and spike-timing-dependent plasticity (STDP) circuits. The proposed STDP circuit realizes STDP learning with only six FinFETs and three small capacitors (two of 10 fF and one of 20 fF). The LIF neuron circuit employs 12 transistors and two 20 fF capacitors. The evaluation results demonstrate that, besides a 60% area saving, the proposed STDP circuit achieves a 68% improvement in total average power consumption and 43% lower energy dissipation compared to previous works. The proposed LIF neuron circuit demonstrates 34% area, 46% power, and 40% energy savings compared to its counterparts. The neuron can also tune its firing frequency between 5 MHz and 330 MHz using an external control voltage. These results emphasize the potential of the proposed neuron and STDP learning circuits for compact and energy-efficient neuromorphic computing systems.
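
    For reference, the membrane dynamics that such a circuit emulates can be summarized in a few lines of Python. Time constants, threshold, and drive below are illustrative placeholders, not the 7 nm design values.

    # Behavioural model of a leaky integrate-and-fire (LIF) neuron.
    def lif(i_in, dt=1e-9, tau=10e-9, v_th=1.0, v_reset=0.0):
        """Integrate the input, leak toward rest, spike and reset at threshold."""
        v, spike_times = 0.0, []
        for t, i in enumerate(i_in):
            v += dt / tau * (i - v)       # leaky integration
            if v >= v_th:
                spike_times.append(t * dt)
                v = v_reset               # reset after the spike
        return spike_times

    # A constant suprathreshold drive yields a regular spike train whose rate
    # grows with the drive, analogous to tuning the circuit's firing frequency
    # with the external control voltage mentioned above.
    print(len(lif([1.5] * 10000)))        # number of spikes in 10 us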

    Cryogenic Neuromorphic Hardware

    The revolution in artificial intelligence (AI) brings enormous storage and data-processing requirements. Large power consumption and hardware overhead have become the main challenges for building next-generation AI hardware. To mitigate this, neuromorphic computing has drawn immense attention due to its excellent capability for data processing at very low power. While relentless research has been underway for years to minimize the power consumption of neuromorphic hardware, we are still a long way from reaching the energy efficiency of the human brain. Furthermore, design complexity and process variation hinder the large-scale implementation of current neuromorphic platforms. Recently, the concept of implementing neuromorphic computing systems at cryogenic temperatures has garnered intense interest thanks to their excellent speed and power metrics. Several cryogenic devices can be engineered to work as neuromorphic primitives with ultra-low power demand. Here, we comprehensively review cryogenic neuromorphic hardware. We classify the existing cryogenic neuromorphic hardware into several hierarchical categories and sketch a comparative analysis based on key performance metrics. Our analysis concisely describes the operation of the associated circuit topologies and outlines the advantages and challenges encountered by the state-of-the-art technology platforms. Finally, we provide insights for circumventing these challenges in future research.

    Mixed-Signal Neural Network Implementation with Programmable Neuron

    This thesis introduces the implementation of mixed-signal building blocks of an artificial neural network, namely the neuron and the synaptic multiplier. It also investigates the nonlinear dynamic behavior of a single artificial neuron and presents a Distributed Arithmetic (DA)-based Finite Impulse Response (FIR) filter. All introduced structures are designed and custom laid out.
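
    The idea behind a DA-based FIR filter is to replace the per-tap multiplications with lookups into a precomputed table addressed by bit-slices of the input history, accumulating one weighted lookup per input bit. The Python model below sketches this principle; tap values and word length are arbitrary examples, not the design from the thesis.

    # Software model of a bit-serial Distributed Arithmetic (DA) FIR filter.
    import numpy as np
    from itertools import product

    coeffs = np.array([0.25, 0.5, 0.25])   # example 3-tap filter
    n_taps, n_bits = len(coeffs), 8        # inputs: 8-bit two's-complement ints

    # Precompute all 2^n_taps partial sums: table[bits] = sum_k coeffs[k] * bits[k].
    table = {bits: float(np.dot(coeffs, bits))
             for bits in product((0, 1), repeat=n_taps)}

    def da_fir(samples):
        """Multiplier-free FIR: for each output, accumulate table lookups over
        the bit-planes of the input history, weighting bit j by 2^j and
        subtracting the sign-bit plane (two's complement)."""
        hist = [0] * n_taps
        out = []
        for s in samples:                   # s must fit in n_bits signed
            hist = [s] + hist[:-1]
            acc = 0.0
            for j in range(n_bits):
                bits = tuple((x >> j) & 1 for x in hist)
                term = table[bits] * (1 << j)
                acc += -term if j == n_bits - 1 else term
            out.append(acc / (1 << (n_bits - 1)))  # rescale to fractional inputs
        return out

    print(da_fir([0, 64, 127, 64, 0]))      # matches a direct-form FIR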

    Dynamical Systems in Spiking Neuromorphic Hardware

    Get PDF
    Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe for formulating models of such systems as coupled sets of nonlinear differential equations and compiling them onto recurrently connected spiking neural networks, akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to leverage a far wider variety of dynamics in digital hardware and to exploit device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks, a state-of-the-art deep recurrent architecture, in accuracy and training time on a continuous-time memory task and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy efficiency of the human brain in the former case and the precision of conventional computation in the latter.
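
    The linear core of the DN can be stated compactly from the published state-space derivation: x' = Ax + Bu realizes a qth-order Padé/Legendre approximation of the pure delay u(t - theta), and summing the state reads out the input delayed by the full window. The Python sketch below simulates this system with Euler steps; the NEF compilation onto spiking neurons is omitted, and all simulation parameters are arbitrary.

    # Delay Network state space and a simple Euler-integration check.
    import numpy as np

    def delay_network(q, theta):
        """Legendre state-space (A, B) approximating a delay of length theta."""
        A = np.zeros((q, q))
        B = np.zeros(q)
        for i in range(q):
            B[i] = (2 * i + 1) * (-1) ** i
            for j in range(q):
                A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
        return A / theta, B / theta

    q, theta, dt = 6, 0.1, 1e-4
    A, B = delay_network(q, theta)
    t = np.arange(0, 1, dt)
    u = np.sin(2 * np.pi * 5 * t)           # test input
    x, y = np.zeros(q), []
    for ut in u:
        x = x + dt * (A @ x + B * ut)       # Euler step of x' = Ax + Bu
        y.append(x.sum())                   # unit decode weights ~ u(t - theta)

    # After the initial transient, the decoded signal should track the input
    # delayed by theta (up to the order-q approximation error).
    err = np.abs(np.array(y)[2000:] - np.sin(2 * np.pi * 5 * (t[2000:] - theta)))
    print("max decode error after transient:", err.max())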