492 research outputs found
Highly Scalable Neuromorphic Hardware with 1-bit Stochastic nano-Synapses
Thermodynamically driven filament formation in redox-based resistive memory and
the impact of thermal fluctuations on the switching probability of emerging
magnetic switches are probabilistic phenomena in nature. Binary switching in
these nonvolatile memories is therefore stochastic, varying from switching
cycle to switching cycle in the same device and from device to device; hence,
these devices provide a rich in-situ spatiotemporal stochastic
characteristic. This work presents highly scalable neuromorphic hardware
based on a crossbar array of 1-bit resistive crosspoints acting as distributed
stochastic synapses. The network shows a robust performance in emulating
selectivity of synaptic potentials in neurons of primary visual cortex to the
orientation of a visual image. The proposed model could be configured to accept
a wide range of nanodevices. Comment: 9 pages, 6 figures
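The idea of exploiting cycle-to-cycle switching stochasticity as a computational resource can be illustrated with a toy model (the function name, array sizes, and switching probability below are illustrative assumptions, not taken from the paper): each 1-bit crosspoint SETs only with some probability per programming pulse, so a population of binary devices behaves like an analog weight on average.

```python
import numpy as np

rng = np.random.default_rng(0)

def potentiate(weights, active_rows, p_switch=0.1):
    """Probabilistically SET (0 -> 1) the 1-bit synapses on the active input rows.

    Each weak programming pulse switches a device only with probability
    p_switch, emulating stochastic filament formation.
    """
    flips = rng.random(weights[active_rows].shape) < p_switch
    weights[active_rows] = np.where(flips, 1, weights[active_rows])
    return weights

w = np.zeros((8, 4), dtype=int)        # 8 input rows x 4 neurons, all devices OFF
for _ in range(20):                    # repeated weak programming pulses
    w = potentiate(w, active_rows=[0, 1, 2])

# Stimulated rows drift toward ON while unstimulated rows stay OFF, so the
# row-averaged state approximates an analog weight.
print(w[:3].mean(), w[3:].mean())
```

The ensemble average over many 1-bit devices is what lets a purely binary array emulate graded synaptic potentials such as the orientation selectivity described in the abstract.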
Analog Spiking Neuromorphic Circuits and Systems for Brain- and Nanotechnology-Inspired Cognitive Computing
Human society now faces grand challenges in satisfying the growing demand for computing power while keeping energy consumption sustainable. As CMOS technology scaling approaches its end, innovations are required to tackle these challenges in a radically different way. Inspired by the emerging understanding of computation in the brain and by nanotechnology-enabled, biologically plausible synaptic plasticity, neuromorphic computing architectures are being investigated. A neuromorphic chip that combines CMOS analog spiking neurons with nanoscale resistive random-access memory (RRAM) used as electronic synapses can provide massive neural-network parallelism, high density, and online learning capability, and hence paves the path toward a promising solution for future energy-efficient real-time computing systems. However, existing silicon neuron approaches are designed to faithfully reproduce biological neuron dynamics, and are therefore incompatible with RRAM synapses, or they require extensive peripheral circuitry to modulate a synapse and are thus deficient in learning capability. As a result, they eliminate most of the density advantage gained by the adoption of nanoscale devices and fail to realize a functional computing system.
This dissertation describes novel hardware architectures and neuron circuit designs that synergistically assemble the fundamental elements for brain-inspired computing. Versatile CMOS spiking neurons are presented that combine, in compact integrated-circuit modules, integrate-and-fire behavior, the drive capability for dense passive RRAM synapses, dynamic biasing for adaptive power consumption, in situ spike-timing-dependent plasticity (STDP), and competitive learning. Real-world pattern learning and recognition tasks using the proposed architecture were demonstrated with circuit-level simulations. A test chip was implemented and fabricated to verify the proposed CMOS neuron and hardware architecture, and the subsequent chip measurement results successfully proved the idea.
The work described in this dissertation realizes a key building block for large-scale integration of spiking neural network hardware and thus serves as a stepping stone toward next-generation energy-efficient brain-inspired cognitive computing systems.
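The in situ STDP mentioned above can be sketched as a standard pair-based update rule (the exponential form and parameter values below are textbook assumptions for illustration, not the dissertation's circuit-level implementation): a synapse is potentiated when the pre-synaptic spike precedes the post-synaptic spike and depressed otherwise, with magnitude decaying in the spike-time difference.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.05, a_minus=0.025, tau=20.0):
    """Pair-based STDP weight update for one pre/post spike pair.

    dt > 0 (pre before post) gives potentiation; dt < 0 gives depression.
    Times in ms; a_plus, a_minus, and tau are illustrative constants.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)     # causal pairing: potentiate
    return -a_minus * math.exp(dt / tau)        # anti-causal pairing: depress

print(stdp_dw(10.0, 15.0) > 0)   # causal pairing gives a positive update
print(stdp_dw(15.0, 10.0) < 0)   # anti-causal pairing gives a negative update
```

In the hardware described above this rule is realized implicitly by overlapping spike waveforms across a memristive synapse rather than by explicit arithmetic.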
Stochastic learning in oxide binary synaptic device for neuromorphic computing
Abstract: Hardware implementation of neuromorphic computing is attractive as a computing paradigm beyond conventional digital computing. In this work, we show that the SET (off-to-on) transition of metal-oxide resistive switching memory becomes probabilistic under a weak programming condition. The switching variability of the binary synaptic device implements a stochastic learning rule. This stochastic SET transition was statistically measured and modeled for a simulation of a winner-take-all network for competitive learning. The simulation illustrates that with such stochastic learning, the orientation-classification function of input patterns can be effectively realized. System performance metrics were compared between the conventional approach using analog synapses and the approach in this work, which employs binary synapses with stochastic learning. The feasibility of using binary synapses in neuromorphic computing may relax the constraint of engineering continuous multilevel intermediate states and widen the material choices for synaptic device design. View the article as published at http://journal.frontiersin.org/article/10.3389/fnins.2013.00186/ful
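A minimal sketch of winner-take-all competitive learning with stochastic binary synapses, in the spirit of the simulation described above (all probabilities, array sizes, and function names here are illustrative assumptions): the neuron whose weight row best matches the input wins, and only the winner's synapses are updated, each switching with a small probability.

```python
import numpy as np

rng = np.random.default_rng(1)

def wta_step(W, x, p_set=0.2, p_reset=0.2):
    """One winner-take-all learning step on a binary weight matrix W.

    The winner's synapses are probabilistically SET where the input is 1
    and probabilistically RESET where it is 0, mimicking stochastic
    off-to-on / on-to-off transitions under weak programming.
    """
    winner = int(np.argmax(W @ x))          # neuron with largest overlap wins
    r = rng.random(W.shape[1])
    W[winner] = np.where(x == 1,
                         np.where(r < p_set, 1, W[winner]),
                         np.where(r < p_reset, 0, W[winner]))
    return winner

W = rng.integers(0, 2, size=(3, 6))          # 3 neurons x 6 binary inputs
pattern = np.array([1, 1, 1, 0, 0, 0])
for _ in range(50):
    wta_step(W, pattern)
# After repeated presentations, the winner's row drifts toward the pattern
# even though every individual update is only probabilistic.
```

No intermediate conductance states are needed: convergence comes from accumulating many small-probability binary switching events, which is the point the abstract makes about relaxing multilevel-state requirements.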
Homogeneous Spiking Neuromorphic System for Real-World Pattern Recognition
A neuromorphic chip that combines CMOS analog spiking neurons and memristive
synapses offers a promising solution to brain-inspired computing, as it can
provide massive neural network parallelism and density. Previous hybrid analog
CMOS-memristor approaches required extensive CMOS circuitry for training, and
thus eliminated most of the density advantages gained by the adoption of
memristor synapses. Further, they used different waveforms for pre- and
post-synaptic spikes, which added undesirable circuit overhead. Here we describe
a hardware architecture that can feature a large number of memristor synapses
to learn real-world patterns. We present a versatile CMOS neuron that combines
integrate-and-fire behavior, drives passive memristors and implements
competitive learning in a compact circuit module, and enables in-situ
plasticity in the memristor synapses. We demonstrate handwritten-digit
recognition with the proposed architecture using transistor-level circuit
simulations. As the described neuromorphic architecture is homogeneous, it
realizes a fundamental building block for large-scale energy-efficient
brain-inspired silicon chips that could lead to next-generation cognitive
computing. Comment: This is a preprint of an article accepted for publication in IEEE
Journal on Emerging and Selected Topics in Circuits and Systems, vol. 5, no.
2, June 201
Neuromorphic computing using non-volatile memory
Dense crossbar arrays of non-volatile memory (NVM) devices represent one possible path for implementing massively parallel and highly energy-efficient neuromorphic computing systems. We first review recent advances in the application of NVM devices to three computing paradigms: spiking neural networks (SNNs), deep neural networks (DNNs), and 'Memcomputing'. In SNNs, NVM synaptic connections are updated by a local learning rule such as spike-timing-dependent plasticity, a computational approach directly inspired by biology. For DNNs, NVM arrays can represent matrices of synaptic weights, implementing the matrix–vector multiplication needed for algorithms such as backpropagation in an analog yet massively parallel fashion. This approach could provide significant improvements in power and speed compared to GPU-based DNN training for applications of commercial significance. We then survey recent research in which different types of NVM devices, including phase-change memory, conductive-bridging RAM, filamentary and non-filamentary RRAM, and other NVMs, have been proposed, either as a synapse or as a neuron, for use within a neuromorphic computing application. The relevant virtues and limitations of these devices are assessed in terms of properties such as conductance dynamic range, (non)linearity and (a)symmetry of conductance response, retention, endurance, required switching power, and device variability.
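The analog matrix–vector multiplication described above can be sketched numerically. The differential two-array mapping of signed weights used here is a common convention in the NVM literature, and the code is an idealized model that ignores wire resistance and device nonidealities: input voltages drive the rows, each device contributes a current I = G·V by Ohm's law, and currents sum along the columns by Kirchhoff's current law.

```python
import numpy as np

def crossbar_mvm(g_plus, g_minus, v_in):
    """Idealized NVM-crossbar matrix-vector multiply.

    Row voltages v_in drive two conductance arrays; column currents sum
    automatically, and the differential readout gives a signed result.
    """
    i_plus = g_plus.T @ v_in      # column currents from the positive array
    i_minus = g_minus.T @ v_in    # column currents from the negative array
    return i_plus - i_minus       # differential readout = W^T v

rng = np.random.default_rng(2)
W = rng.normal(size=(4, 3))                      # target signed weight matrix
g_p, g_m = np.maximum(W, 0), np.maximum(-W, 0)   # map W onto two conductance arrays
v = rng.normal(size=4)

print(np.allclose(crossbar_mvm(g_p, g_m, v), W.T @ v))   # True
```

Because every multiply-accumulate happens in parallel in the analog domain, the operation takes one read step regardless of matrix size, which is the source of the power and speed advantages the review highlights.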
Accelerate & Actualize: Can 2D Materials Bridge the Gap Between Neuromorphic Hardware and the Human Brain?
Two-dimensional (2D) materials present an exciting opportunity for devices
and systems beyond the von Neumann computing architecture paradigm due to their
diversity of electronic structure, physical properties, and atomically-thin,
van der Waals structures that enable ease of integration with conventional
electronic materials and silicon-based hardware. All major classes of
non-volatile memory (NVM) devices have been demonstrated using 2D materials,
including their operation as synaptic devices for applications in neuromorphic
computing hardware. Their atomically thin structure and superior physical
properties, e.g., mechanical strength, electrical and thermal conductivity, as
well as gate-tunable electronic properties, provide performance advantages and
novel functionality in NVM devices and systems. However, device performance and
variability as compared to incumbent materials and technology remain major
concerns for real applications. Ultimately, the progress of 2D materials as a
novel class of electronic materials and specifically their application in the
area of neuromorphic electronics will depend on their scalable synthesis in
thin-film form with the desired crystal quality, defect density, and phase purity. Comment: Neuromorphic Computing, 2D Materials, Heterostructures, Emerging
Memory Devices, Resistive, Phase-Change, Ferroelectric, Ferromagnetic,
Crossbar Array, Machine Learning, Deep Learning, Spiking Neural Network