5,480 research outputs found
The Roadmap to Realize Memristive Three-Dimensional Neuromorphic Computing System
Neuromorphic computing, an emerging non-von Neumann paradigm that mimics the physical structure and signal-processing techniques of mammalian brains, could potentially reach brain-like levels of computing capability and power efficiency. This chapter discusses state-of-the-art research trends in neuromorphic computing with memristors as electronic synapses. Furthermore, a novel three-dimensional (3D) neuromorphic computing architecture combining memristors with monolithic 3D integration technology is introduced; this architecture can reduce system power consumption, provide high connectivity, resolve routing-congestion issues, and offer massively parallel data processing. Moreover, the chapter discusses a design methodology that applies the capacitance formed by through-silicon vias (TSVs) to generate a membrane potential in a 3D neuromorphic computing system.
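The idea of using a capacitance (such as that of a TSV) to generate a membrane potential can be illustrated with a textbook leaky integrate-and-fire neuron, where input current is integrated on a membrane capacitor that leaks through a resistance. This is a generic sketch with illustrative parameter values, not the chapter's actual circuit:

```python
# Leaky integrate-and-fire neuron: the membrane potential V integrates input
# current onto a capacitance C (in the 3D architecture, a TSV capacitance
# could play this role) and leaks through a resistance R.
# All parameter values below are illustrative, not taken from the chapter.

def simulate_lif(i_in, c=1e-12, r=1e7, v_th=0.5, dt=1e-6):
    """Euler integration of C*dV/dt = -V/R + I; returns spike times (s)."""
    v, spikes = 0.0, []
    for step, i in enumerate(i_in):
        v += dt * (-v / r + i) / c
        if v >= v_th:              # threshold crossing -> spike, then reset
            spikes.append(step * dt)
            v = 0.0
    return spikes

spikes = simulate_lif([2e-7] * 100)   # constant 200 nA drive for 100 us
```

With this constant drive the neuron fires periodically; the firing rate rises with the input current and falls with larger capacitance, which is the basic trade-off a TSV-based membrane capacitor would set.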
Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective
On metrics of density and power efficiency, neuromorphic technologies have
the potential to surpass mainstream computing technologies in tasks where
real-time functionality, adaptability, and autonomy are essential. While
algorithmic advances in neuromorphic computing are proceeding successfully,
the potential of memristors to improve neuromorphic computing has not yet
borne fruit, primarily because they are often used as drop-in replacements
for conventional memory. However, interdisciplinary approaches anchored in
machine learning theory suggest that multifactor plasticity rules matching
neural and synaptic dynamics to the device capabilities can take better
advantage of memristor dynamics and their stochasticity. Furthermore, such
plasticity rules generally show much higher performance than classical
Spike-Timing-Dependent Plasticity (STDP) rules. This chapter reviews recent
developments in learning with spiking neural network models and their
possible implementation with memristor-based hardware.
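As a baseline for the classical rule the chapter compares against, a pair-based STDP weight update can be sketched as follows (a generic textbook formulation with illustrative parameters, not one of the chapter's multifactor rules):

```python
import math

# Pair-based STDP: a synapse is potentiated when the presynaptic spike
# precedes the postsynaptic one (dt >= 0) and depressed otherwise, with the
# magnitude decaying exponentially in the spike-time difference.
# Parameters are illustrative, not taken from the chapter.

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20e-3):
    """Weight change for one pre/post spike pair (times in seconds)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post -> long-term potentiation
        return a_plus * math.exp(-dt / tau)
    else:         # post before pre -> long-term depression
        return -a_minus * math.exp(dt / tau)

dw_causal = stdp_dw(0.010, 0.015)   # causal pair: positive weight change
dw_anti = stdp_dw(0.015, 0.010)     # anti-causal pair: negative change
```

Multifactor rules of the kind the chapter reviews add further terms (e.g. modulatory or error signals) to this two-factor pre/post dependence, which is what lets them exploit device dynamics and stochasticity.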
A Coupled Spintronics Neuromorphic Approach for High-Performance Reservoir Computing
The rapid development in the field of artificial intelligence has increased the demand for neuromorphic computing hardware and its information-processing capability. A spintronics device is a promising candidate for neuromorphic computing hardware and can be used in extreme environments owing to its high resistance to radiation. Improving the information-processing capability of neuromorphic computing is an important challenge for implementation. Herein, a novel neuromorphic computing framework using spintronics devices is proposed. This framework, called coupled spintronics reservoir (CSR) computing, exploits the high-dimensional dynamics of coupled spin-torque oscillators as a computational resource. The relationships between the various bifurcations of the CSR and its information-processing capabilities are analyzed through numerical experiments, and it is found that certain configurations of the CSR boost the information-processing capability of the spintronics reservoir toward or even beyond the standard level of machine learning networks. The effectiveness of the approach is demonstrated through conventional machine learning benchmarks and edge computing in real physical experiments using pneumatic artificial muscle-based wearables, which assist human operations in various environments. This study significantly advances the availability of neuromorphic computing for practical uses.
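The reservoir-computing scheme the abstract describes (a fixed high-dimensional dynamical system plus a trained linear readout) can be sketched in software with an echo-state network; here a random recurrent layer stands in for the coupled spin-torque oscillators. This is a generic ESN sketch with illustrative sizes, not the CSR model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: only the linear readout w_out is trained.
n_in, n_res = 1, 100
w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
w_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
w_res *= 0.9 / np.abs(np.linalg.eigvals(w_res)).max()  # spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(w_in @ np.atleast_1d(u_t) + w_res @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input delayed by one step (short-term memory,
# one of the standard capabilities measured for physical reservoirs).
u = rng.uniform(-1, 1, 300)
X, y = run_reservoir(u)[1:], u[:-1]
w_out = np.linalg.lstsq(X, y, rcond=None)[0]  # train the readout only
```

In a physical CSR the `run_reservoir` step is carried out by the oscillator dynamics themselves; only the cheap linear readout is computed and trained conventionally, which is what makes the approach attractive for edge hardware.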
Applications of Neuromorphic Computing
Neuromorphic computing refers to a form of processing that mirrors the structure and functionality of the human brain. Several potential future applications of neuromorphic computing are outlined in this work. It is a supplement to the Purdue Honors College Visiting Scholars interview with Dr. Gkoupidenis.
Graphene oxide based synaptic memristor device for neuromorphic computing
Brain-inspired neuromorphic computing, which consists of neurons and
synapses with the ability to perform complex information processing, has
unfolded a new computing paradigm to overcome the von Neumann bottleneck.
Electronic synaptic memristor devices that can compete with biological
synapses are therefore significant for neuromorphic computing. In this
work, we demonstrate our efforts to develop and realize a graphene oxide
(GO) based memristor device as a synaptic device that mimics a biological
synapse. Indeed, this device exhibits the essential synaptic learning
behavior, including analog memory characteristics, potentiation, and
depression. Furthermore, the spike-timing-dependent plasticity learning
rule is mimicked by engineering the pre- and post-synaptic spikes. In
addition, non-volatile properties of the device such as endurance,
retentivity, and multilevel switching are explored. These results suggest
that the Ag/GO/FTO memristor device would indeed be a potential candidate
for future neuromorphic computing applications.
Keywords: RRAM, graphene oxide, neuromorphic computing, synaptic device,
potentiation, depression
Comment: Nanotechnology (accepted), IOP Publishing
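The potentiation and depression behavior reported for such synaptic devices is commonly described by a nonlinear conductance update, where each pulse moves the conductance toward its upper or lower bound by a saturating step. The following is a minimal sketch of that generic model, with illustrative parameters not fitted to the Ag/GO/FTO device:

```python
# Nonlinear conductance-update model often used for memristive synapses:
# each potentiating (depressing) pulse moves the conductance G toward
# G_MAX (G_MIN) by an exponentially saturating step. Parameters are
# illustrative and not fitted to the Ag/GO/FTO device in the paper.

G_MIN, G_MAX, ALPHA = 1e-6, 1e-4, 0.1   # siemens; ALPHA sets nonlinearity

def potentiate(g):
    return g + ALPHA * (G_MAX - g)       # step shrinks as g nears G_MAX

def depress(g):
    return g - ALPHA * (g - G_MIN)       # step shrinks as g nears G_MIN

g, trace = G_MIN, []
for _ in range(50):                      # 50 potentiating pulses...
    g = potentiate(g)
    trace.append(g)
for _ in range(50):                      # ...then 50 depressing pulses
    g = depress(g)
    trace.append(g)
```

The resulting saturating rise and fall of `trace` is the analog-memory signature (potentiation/depression curve) that such papers typically report, and the intermediate conductance values correspond to the multilevel switching states.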
PyCARL: A PyNN Interface for Hardware-Software Co-Simulation of Spiking Neural Network
We present PyCARL, a PyNN-based common Python programming interface for
hardware-software co-simulation of spiking neural networks (SNNs). Through
PyCARL, we make the following two key contributions. First, we provide an
interface from PyNN to CARLsim, a computationally efficient, GPU-accelerated,
and biophysically detailed SNN simulator. PyCARL facilitates joint
development of machine learning models and code sharing between CARLsim and
PyNN users, promoting an integrated and larger neuromorphic community.
Second, we integrate cycle-accurate models of state-of-the-art neuromorphic
hardware such as TrueNorth, Loihi, and DynapSE in PyCARL, to accurately
model hardware latencies that delay spikes between communicating neurons
and degrade performance. PyCARL allows users to analyze and optimize the
performance difference between software-only simulation and
hardware-software co-simulation of their machine learning models. We show
that system designers can also use PyCARL to perform design-space
exploration early in the product development stage, facilitating faster
time-to-deployment of neuromorphic products. We evaluate the memory usage
and simulation time of PyCARL using functionality tests, synthetic SNNs,
and realistic applications. Our results demonstrate that for large SNNs,
PyCARL does not lead to any significant overhead compared to CARLsim. We
also use PyCARL to analyze these SNNs for state-of-the-art neuromorphic
hardware and demonstrate a significant performance deviation from
software-only simulations. PyCARL allows users to evaluate and minimize
such differences early during model development.
Comment: 10 pages, 25 figures. Accepted for publication at the International
Joint Conference on Neural Networks (IJCNN) 202
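The effect PyCARL models (hardware latencies delaying spikes between communicating neurons, so hardware timing diverges from a software-only simulation) can be illustrated with a toy event-delivery sketch. This is purely illustrative; PyCARL's cycle-accurate hardware models are far more detailed:

```python
# Toy spike delivery: each spike travelling between neurons incurs a fixed
# hardware latency, so downstream spike times shift relative to an ideal
# software-only simulation. Purely illustrative, not PyCARL's actual model.

def deliver(spikes, connections, latency=0.0):
    """spikes: list of (time, src_neuron); connections: {src: [dst, ...]}.
    Returns sorted (time, dst) delivery events, delayed by `latency` per hop.
    """
    events = [(t + latency, dst)
              for t, src in spikes
              for dst in connections.get(src, [])]
    return sorted(events)

conns = {0: [1, 2], 1: [2]}                        # tiny 3-neuron network
ideal = deliver([(1.0, 0), (2.0, 1)], conns, latency=0.0)
hw = deliver([(1.0, 0), (2.0, 1)], conns, latency=0.5)
skew = [th - ti for (th, _), (ti, _) in zip(hw, ideal)]  # per-event delay
```

Comparing the `ideal` and `hw` event streams is, in miniature, the software-only versus hardware-software comparison that PyCARL lets designers analyze and optimize early in development.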