617 research outputs found
A differential memristive synapse circuit for on-line learning in neuromorphic computing systems
Spike-based learning with memristive devices in neuromorphic computing
architectures typically uses learning circuits that require overlapping pulses
from pre- and post-synaptic nodes. This imposes severe constraints on the
length of the pulses transmitted in the network, and on the network's
throughput. Furthermore, most of these circuits do not decouple the currents
flowing through the memristive devices from the current stimulating the target neuron.
This can be a problem when using devices with high conductance values, because
of the resulting large currents. In this paper we propose a novel circuit that
decouples the current produced by the memristive device from the one used to
stimulate the post-synaptic neuron, using a differential scheme based
on the Gilbert normalizer circuit. We show how this circuit is useful for
reducing the effect of variability in the memristive devices, and how it is
ideally suited for spike-based learning mechanisms that do not require
overlapping pre- and post-synaptic pulses. We demonstrate the features of the
proposed synapse circuit with SPICE simulations, and validate its learning
properties with high-level behavioral network simulations which use a
stochastic gradient descent learning rule in two classification tasks.
Comment: 18 pages main text, 9 pages of supplementary text, 19 figures.
Patent
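To make the differential scheme concrete, below is a minimal numerical sketch of the underlying idea: reading the weight as the normalized difference of two memristor conductances, in the spirit of a Gilbert normalizer. The function and device values are illustrative assumptions, not the paper's transistor-level circuit.

    import numpy as np

    # Assumed abstraction of the differential synapse: the effective weight is
    # the normalized difference of two memristor conductances, so common-mode
    # (shared) device variability cancels in the ratio.
    def effective_weight(g_plus, g_minus):
        return (g_plus - g_minus) / (g_plus + g_minus)

    rng = np.random.default_rng(0)
    common_mode = 1.0 + 0.2 * rng.standard_normal(1000)  # shared 20% spread
    g_plus = 50e-6 * common_mode                         # siemens
    g_minus = 30e-6 * common_mode

    w = effective_weight(g_plus, g_minus)
    print(w.std())  # ~0: normalization removes the shared variability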
Highly Scalable Neuromorphic Hardware with 1-bit Stochastic nano-Synapses
Thermodynamically driven filament formation in redox-based resistive memory
and the impact of thermal fluctuations on the switching probability of
emerging magnetic switches are inherently probabilistic phenomena. Binary
switching in these nonvolatile memories is therefore stochastic, varying from
cycle to cycle in the same device and from device to device, providing a rich
in-situ source of spatiotemporal stochasticity.
This work presents highly scalable neuromorphic hardware based on a crossbar
array of 1-bit resistive crosspoints acting as distributed stochastic
synapses. The network robustly emulates the selectivity of synaptic
potentials in neurons of primary visual cortex to the orientation of a visual
image. The proposed model can be configured to accommodate a wide range of
nanodevices.
Comment: 9 pages, 6 figures
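As a rough software stand-in for the 1-bit stochastic synapse, the sketch below programs a crossbar of binary devices whose switching probability grows with pulse amplitude, so the expected ON-fraction encodes an analog quantity. The sigmoid model and its parameters are assumptions for illustration, not the measured device statistics.

    import numpy as np

    rng = np.random.default_rng(1)

    # Each crosspoint is a 1-bit synapse that switches ON with a probability
    # that increases with programming-pulse amplitude (assumed sigmoid model).
    def program(state, v_pulse, v50=1.0, beta=5.0):
        p_switch = 1.0 / (1.0 + np.exp(-beta * (v_pulse - v50)))
        flips = rng.random(state.shape) < p_switch
        return np.where(flips, 1, state)

    synapses = np.zeros((128, 128), dtype=int)  # crossbar, all bits OFF
    synapses = program(synapses, v_pulse=1.1)
    print(synapses.mean())  # ON-fraction ~ sigmoid(0.5) ~ 0.62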
Pavlov's dog associative learning demonstrated on synaptic-like organic transistors
In this letter, we present an original demonstration of an associative
learning neural network inspired by Pavlov's famous dog experiment. A
single nanoparticle organic memory field-effect transistor (NOMFET) is used
to implement each synapse. We show how the physical properties of this
dynamic memristive device can be used to perform low-power write operations
during learning and to implement short-term association through temporal
coding and spike-timing-dependent plasticity. An electronic circuit was built
to validate the proposed learning scheme with packaged devices, showing good
reproducibility despite the complex synaptic-like dynamics of the NOMFET in
the pulse regime.
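As a rough software analogue of the associative protocol, the sketch below pairs a conditioned input (bell) with an unconditioned one (food) under a pair-based STDP rule; all constants are illustrative assumptions, and the NOMFET realizes the equivalent behavior directly in device physics.

    import math

    # Pavlovian pairing with a pair-based STDP rule (illustrative constants).
    w_bell, w_food = 0.1, 1.0      # food alone already drives the output neuron
    threshold = 0.9                # weight needed for an input to fire it alone
    a_plus, tau = 0.15, 20.0       # potentiation amplitude, STDP window (ms)

    for trial in range(10):
        t_bell, t_food = 0.0, 5.0  # bell precedes food by 5 ms
        dt = t_food - t_bell       # pre-before-post timing -> potentiation
        w_bell = min(1.0, w_bell + a_plus * math.exp(-dt / tau))

    print(w_bell > threshold)      # True: the bell alone now triggers a response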
Memristors for the Curious Outsiders
We present both an overview and a perspective of recent experimental advances
and proposed new approaches to performing computation using memristors. A
memristor is a 2-terminal passive component with a dynamic resistance depending
on an internal parameter. We provide a brief historical introduction, as well
as an overview of the physical mechanisms that lead to memristive behavior.
This review is meant to guide nonpractitioners in the field of memristive
circuits and their connection to machine learning and neural computation.
Comment: Perspective paper for MDPI Technologies; 43 pages
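The definition above can be written compactly as I = G(x)·V with dx/dt = f(x, V) for some internal state x. A toy Euler integration, with f and G chosen purely for illustration, shows the resulting pinched hysteresis:

    import numpy as np

    # Toy memristive element: I(t) = G(x) * V(t), dx/dt = f(x, V).
    # G interpolates between OFF and ON conductances; the x*(1-x) window keeps
    # the state inside (0, 1). All parameter values are assumed.
    def simulate(v_of_t, dt=1e-5, steps=40000, g_on=1e-3, g_off=1e-5, mu=2e3):
        x, v_hist, i_hist = 0.5, [], []
        for k in range(steps):
            v = v_of_t(k * dt)
            g = g_off + (g_on - g_off) * x
            v_hist.append(v)
            i_hist.append(g * v)
            x += dt * mu * v * x * (1.0 - x)  # internal-state dynamics
        return np.array(v_hist), np.array(i_hist)

    v, i = simulate(lambda t: np.sin(2 * np.pi * 50 * t))  # 50 Hz sine drive
    # Plotting i against v traces the pinched hysteresis loop that is the
    # signature of memristive behavior.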
Algorithm and Hardware Co-design for Learning On-a-chip
Machine learning has achieved remarkable progress in recent years, rivalling or exceeding human performance in many intellectual tasks, including image recognition, face detection, and the game of Go. Many machine learning algorithms require enormous amounts of computation, such as the multiplication of large matrices. As silicon technology has scaled into the sub-14 nm regime, simply scaling down devices can no longer provide sufficient speed-up. New device technologies and system architectures are needed to improve computing capacity. Hardware designed specifically for machine learning is in high demand, and effort must go into the joint design and optimization of both hardware and algorithms.
For machine learning acceleration, traditional SRAM- and DRAM-based systems suffer from low capacity, high latency, and high standby power. Emerging memories, such as Phase Change Random Access Memory (PRAM), Spin-Transfer Torque Magnetic Random Access Memory (STT-MRAM), and Resistive Random Access Memory (RRAM), are promising candidates offering low standby power, high data density, fast access, and excellent scalability. This dissertation proposes a hierarchical memory modeling framework and models PRAM and STT-MRAM at four levels of abstraction. With the proposed models, simulations are conducted to investigate performance, optimization, variability, reliability, and scalability.
Emerging memory devices such as RRAM can be arranged as a 2-D crosspoint array to speed up the multiplication and accumulation operations in machine learning algorithms. This dissertation proposes a new parallel programming scheme that achieves in-memory learning with an RRAM crosspoint array. The programming circuitry is designed and simulated in TSMC 65 nm technology, showing a 900X speedup on a dictionary learning task compared to a CPU implementation.
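For intuition on where the speedup comes from: in a crosspoint array, applying a voltage vector to the rows produces, by Ohm's and Kirchhoff's laws, column currents equal to a full matrix-vector product in a single analog read. A small numerical stand-in (sizes and value ranges are arbitrary):

    import numpy as np

    # Analog crossbar multiply-accumulate: I_j = sum_i G_ij * V_i, i.e. one
    # matrix-vector product per read cycle. Sizes and ranges are arbitrary.
    rng = np.random.default_rng(42)
    G = rng.uniform(1e-6, 1e-4, size=(64, 32))  # crosspoint conductances (S)
    v = rng.uniform(0.0, 0.2, size=64)          # row read voltages (V)

    i_cols = G.T @ v                            # column currents (A)
    assert i_cols.shape == (32,)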
From the algorithm perspective, inspired by the high accuracy and low power consumption of the brain, this dissertation proposes a bio-plausible feedforward inhibition spiking neural network with a Spike-Rate-Dependent Plasticity (SRDP) learning rule. It achieves more than 95% accuracy on the MNIST dataset, comparable to the sparse coding algorithm, while requiring far fewer computations. The role of inhibition in this network is systematically studied and shown to improve hardware efficiency in learning.
Doctoral Dissertation, Electrical Engineering, 201
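A hedged sketch of one SRDP-style update, with feedforward inhibition folded in as a flat subtraction from the excitatory drive; the rule and every constant below are stand-ins for illustration, not the dissertation's exact formulation.

    import numpy as np

    # Assumed SRDP-style rule: inputs coinciding with a high postsynaptic rate
    # are potentiated, others depressed; inhibition subtracts from the drive.
    def srdp_step(w, pre_rates, gain=0.1, inhibition=5.0, theta=20.0, eta=1e-3):
        post_rate = max(0.0, gain * float(pre_rates @ w) - inhibition)  # Hz
        dw = eta * pre_rates * (post_rate - theta)  # rate-dependent term
        return np.clip(w + dw, 0.0, 1.0)

    rng = np.random.default_rng(3)
    w = rng.uniform(0.2, 0.4, size=100)              # initial weights
    pre = rng.poisson(10.0, size=100).astype(float)  # input rates (Hz)
    w = srdp_step(w, pre)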
- …