Multiple-Step Quantized Triplet STDP Implemented with Memristive Synapse
As an extension of the pairwise spike-timing-dependent plasticity (STDP)
learning rule, the triplet STDP offers greater capability in
characterizing synaptic changes in biological neurons. In this
work, a novel mixed-signal circuit scheme, called multiple-step quantized
triplet STDP, is designed to provide a precise and flexible implementation of
the coactivation triplet STDP learning rule in a memristive-synapse spiking neural
network.
network. The robustness of the circuit is greatly improved through the
utilization of pulse-width encoded weight modulation signals. The circuit
performance is studied through simulations carried out in MATLAB
Simulink & Simscape, and is assessed by comparing the circuit results
against those of the algorithmic approaches.
Comment: 5 pages, 10 figures
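The triplet rule this abstract extends can be sketched in software. Below is a minimal, hedged implementation of a pair-and-triplet trace STDP update in the style of the standard triplet rule, not the paper's quantized circuit; all amplitude and time-constant values are illustrative assumptions, not taken from the paper.

```python
import math

# Minimal triplet STDP sketch: each side keeps a fast "pair" trace and a
# slow "triplet" trace; weight updates read the trace values *before* the
# current spike increments them. All parameters are assumed for illustration.
A2P, A3P = 0.005, 0.006      # pair / triplet potentiation amplitudes
A2M, A3M = 0.007, 0.0        # pair / triplet depression amplitudes
TAU_P, TAU_M = 16.8, 33.7    # pair trace time constants (ms)
TAU_X, TAU_Y = 101.0, 125.0  # triplet trace time constants (ms)

def triplet_stdp(pre_spikes, post_spikes, w=0.5):
    """Evolve weight w given lists of pre/post spike times in ms."""
    events = sorted([(t, 'pre') for t in pre_spikes] +
                    [(t, 'post') for t in post_spikes])
    r1 = r2 = o1 = o2 = 0.0  # pre pair, pre triplet, post pair, post triplet
    t_last = 0.0
    for t, kind in events:
        dt = t - t_last
        # exponential decay of all traces since the previous event
        r1 *= math.exp(-dt / TAU_P); r2 *= math.exp(-dt / TAU_X)
        o1 *= math.exp(-dt / TAU_M); o2 *= math.exp(-dt / TAU_Y)
        if kind == 'pre':
            w -= o1 * (A2M + A3M * r2)  # depression, gated by post pair trace
            r1 += 1.0; r2 += 1.0
        else:
            w += r1 * (A2P + A3P * o2)  # potentiation, boosted by triplet term
            o1 += 1.0; o2 += 1.0
        t_last = t
    return w

# pre->post pairings at +10 ms should net-potentiate the synapse
w_pot = triplet_stdp([0.0, 50.0], [10.0, 60.0])
```

The second potentiation event is larger than the first because the slow post-triplet trace from the earlier post spike is still nonzero, which is precisely the triplet interaction the pairwise rule cannot express.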
An On-chip Trainable and Clock-less Spiking Neural Network with 1R Memristive Synapses
Spiking neural networks (SNNs) are being explored in an attempt to mimic
the brain's capability to learn and recognize at low power. A crossbar
architecture, with a highly scalable resistive RAM (RRAM) array serving as
synaptic weights and neuronal drivers in the periphery, is an attractive
option for SNNs.
Recognition (akin to reading the synaptic weight) requires small amplitude bias
applied across the RRAM to minimize conductance change. Learning (akin to
writing or updating the synaptic weight) requires large amplitude bias pulses
to produce a conductance change. The contradictory bias-amplitude
requirements for performing reading and writing simultaneously and
asynchronously, as in biology, pose a major challenge. Solutions suggested
in the literature rely either on clock-based time-division multiplexing of
read and write operations, or on approximations that ignore reads
coinciding with writes. In this
work, we overcome this challenge and present a clock-less approach wherein
reading and writing are performed in different frequency domains. This enables
learning and recognition simultaneously on an SNN. We validate our scheme in
SPICE circuit simulator by translating a two-layered feed-forward Iris
classifying SNN to demonstrate software-equivalent performance. The system
performance is not adversely affected by the voltage dependence of
conductance in realistic RRAMs, despite its departure from linearity.
Overall, our approach enables direct implementation of biological SNN
algorithms in hardware.
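The core idea, placing reads and writes in separate frequency bands on the same line, can be illustrated numerically. The sketch below superposes a small high-frequency read tone on a large low-frequency write waveform and recovers each with a simple moving-average filter; all frequencies, amplitudes, and the filter itself are illustrative assumptions, not the paper's circuit.

```python
import math

# Illustrative frequency-domain separation: the read signal lives at a much
# higher frequency than the write waveform, so a low-pass filter recovers
# the write component, and subtraction recovers the read component, from
# their superposition on one line. Values are assumed, not from the paper.
FS = 10000.0                    # sample rate (Hz)
F_WRITE, F_READ = 10.0, 1000.0  # write and read frequencies (Hz)
A_WRITE, A_READ = 1.0, 0.05     # large write, small read amplitudes

n = 2000
t = [i / FS for i in range(n)]
write = [A_WRITE * math.sin(2 * math.pi * F_WRITE * x) for x in t]
read = [A_READ * math.sin(2 * math.pi * F_READ * x) for x in t]
line = [w + r for w, r in zip(write, read)]  # superposed on the device line

# moving-average low-pass whose window spans one full read period:
# it averages the read tone to ~0 while barely distorting the slow write wave
win = int(FS / F_READ)
low = [sum(line[max(0, i - win + 1): i + 1]) / min(win, i + 1)
       for i in range(n)]
high = [s - l for s, l in zip(line, low)]    # residual = recovered read tone
```

Because the window covers exactly one read period, the read tone averages to zero and `low` tracks the write waveform closely after the filter's start-up transient.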
A Software-equivalent SNN Hardware using RRAM-array for Asynchronous Real-time Learning
Spiking neural networks (SNNs) naturally inspire hardware implementation,
as they are based on biology. For learning, spike-timing-dependent
plasticity (STDP) may be implemented using an energy-efficient waveform
superposition on a memristor-based synapse. However, system-level
implementation has three challenges.
First, a classic dilemma is that recognition requires reading currents with
short voltage spikes, which is disturbed by the large voltage waveforms
simultaneously applied to the same memristor for real-time learning, i.e.,
the simultaneous read-write dilemma. Second, the hardware needs to exactly
replicate the software implementation for easy adaptation of algorithms to
hardware.
Third, the devices used in hardware simulations must be realistic. In this
paper, we present an approach to address the above concerns. First,
learning and recognition occur simultaneously in separate arrays, in real
time and asynchronously, avoiding non-biomimetic, clock-based complex
signal management. Second, we show that the hardware emulates the software
at every stage by comparing SPICE (circuit-simulator) implementations with
MATLAB (mathematical SNN algorithm) implementations. As an example, the
hardware shows 97.5 per cent classification accuracy, equivalent to
software, on the Fisher Iris dataset. Third, the STDP is
implemented using a synaptic device model based on an HfO2 memristor. We
show that an increasingly realistic memristor model slightly reduces the
hardware performance (to 85 per cent), which highlights the need to
engineer RRAM characteristics specifically for SNNs.
Comment: Eight pages, ten figures and two tables
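The waveform-superposition STDP mentioned above can be sketched in a few lines. The following is a hedged, minimal model in which a pre-spike leaves a decaying voltage tail, a post-spike adds a narrow pulse, and the memristor only switches when their superposition crosses a write threshold; all waveform shapes and thresholds are illustrative assumptions, not the paper's HfO2 device model.

```python
import math

# Waveform-superposition STDP sketch on a threshold memristor (illustrative
# parameters). Neither the tail (0.6 V) nor the pulse (0.8 V) alone crosses
# the 1.0 V write threshold, so reads are undisturbed; only their overlap
# switches the device, and the tail height at overlap encodes the timing.
V_TH = 1.0     # memristor write threshold (assumed)
V_PULSE = 0.8  # post-spike pulse height (assumed)
V_TAIL = 0.6   # pre-spike tail peak (assumed)
TAU = 20.0     # tail decay constant in ms (assumed)
ETA = 0.1      # conductance update gain (assumed)

def delta_w(dt_ms):
    """Weight change for a post spike arriving dt_ms after the pre spike."""
    if dt_ms < 0:
        return 0.0  # depression branch omitted in this minimal sketch
    tail = V_TAIL * math.exp(-dt_ms / TAU)  # pre waveform value at overlap
    v_net = tail + V_PULSE                  # superposed voltage on device
    overdrive = v_net - V_TH
    return ETA * overdrive if overdrive > 0 else 0.0

# closer pre->post pairings produce larger potentiation
```

The exponential tail is what gives the update its timing dependence: as `dt_ms` grows, the overdrive shrinks and eventually falls below threshold, so distant pairings change nothing.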
Memcapacitive Devices in Logic and Crossbar Applications
Over the last decade, memristive devices have been widely adopted in
computing for various conventional and unconventional applications. While the
integration density, memory property, and nonlinear characteristics have many
benefits, reducing the energy consumption is limited by the resistive nature of
the devices. Memcapacitors would address that limitation while still having all
the benefits of memristors. Recent work has shown that with adjusted parameters
during the fabrication process, a metal-oxide device can indeed exhibit a
memcapacitive behavior. We introduce novel memcapacitive logic gates and
memcapacitive crossbar classifiers as a proof of concept that such applications
can outperform memristor-based architectures. The results illustrate that,
compared to memristive logic gates, our memcapacitive gates consume about 7x
less power. The memcapacitive crossbar classifier achieves similar
classification performance but reduces the power consumption by a factor of
about 1,500x for the MNIST dataset and a factor of about 1,000x for the
CIFAR-10 dataset compared to a memristive crossbar. Our simulation results
demonstrate that memcapacitive devices have great potential for both Boolean
logic and analog low-power applications.
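The operation a memcapacitive crossbar classifier performs is a charge-domain multiply-accumulate: each column integrates charge Q_j = sum_i C_ij * V_i, with the capacitances acting as trained weights. The sketch below shows that arithmetic only; the device values are illustrative and the paper's fabricated-device model is not reproduced here.

```python
# Charge-domain crossbar multiply-accumulate sketch: capacitances are the
# weights, input voltages drive the rows, and each column's accumulated
# charge serves as a class score. All numbers are illustrative assumptions.

def crossbar_mac(C, V):
    """C: rows x cols capacitance matrix (farads); V: row voltages (volts).
    Returns the per-column accumulated charge (coulombs)."""
    cols = len(C[0])
    return [sum(C[i][j] * V[i] for i in range(len(V))) for j in range(cols)]

# a hypothetical 3-input, 2-column array
C = [[1e-12, 2e-12],
     [3e-12, 1e-12],
     [2e-12, 3e-12]]
V = [1.0, 0.5, 0.25]
scores = crossbar_mac(C, V)  # column charges act as class scores
```

The power advantage claimed in the abstract follows from this charge-based readout: unlike a memristive crossbar, no static current flows through the weights once the column capacitances are charged.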
An Adaptive Memory Management Strategy Towards Energy Efficient Machine Inference in Event-Driven Neuromorphic Accelerators
Spiking neural networks are viable alternatives to classical neural networks for edge processing in low-power embedded and IoT devices. To reap their benefits, neuromorphic network accelerators that tend to support deep networks still have to expend great effort in fetching synaptic states from a large remote memory. Since local computation in these networks is event-driven, memory becomes the major part of the system's energy consumption. In this paper, we explore various opportunities for data reuse that can help mitigate the redundant traffic for retrieval of neuron metadata and post-synaptic weights. We describe CyNAPSE, a baseline neural processing unit, and its accompanying software simulation as a general template for exploration on various levels. We then investigate the memory access patterns of three spiking neural network benchmarks that have significantly different topology and activity. With a detailed study of locality in memory traffic, we establish the factors that hinder conventional cache management philosophies from working efficiently for these applications. To that end, we propose and evaluate a domain-specific management policy that takes advantage of the forward visibility of events in a queue-based event-driven simulation framework. Subsequently, we propose network-adaptive enhancements to make it robust to network variations. As a result, we achieve a 13-44% reduction in system power consumption and an 8-23% improvement over conventional replacement policies.
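The "forward visibility" idea above can be sketched concretely: because an event-driven simulator holds upcoming spikes in a queue, the memory manager can evict the cached synaptic row whose next use lies farthest in the visible future (a Belady-style choice) rather than guessing from past accesses as LRU does. The code below is a toy illustration of that policy, not the CyNAPSE design; the cache and event-queue structures are assumptions.

```python
# Toy forward-visibility replacement: on a miss with a full cache, evict the
# entry whose next reference in the pending event queue is farthest away.
# Illustrative only; not the CyNAPSE memory hierarchy.

def next_use(neuron, queue, start):
    """Index of the neuron's next event at or after `start` (inf if none)."""
    for d in range(start, len(queue)):
        if queue[d] == neuron:
            return d
    return float('inf')

def process_events(queue, cache_size):
    """Replay the event queue and count synaptic-row fetch misses."""
    cache, misses = set(), 0
    for i, neuron in enumerate(queue):
        if neuron not in cache:
            misses += 1
            if len(cache) >= cache_size:
                # evict the row re-used farthest in the visible future
                victim = max(cache, key=lambda n: next_use(n, queue, i + 1))
                cache.discard(victim)
            cache.add(neuron)
    return misses

events = ['a', 'b', 'c', 'a', 'b', 'd', 'a', 'b']
fv_misses = process_events(events, 2)
```

On this trace a two-entry cache takes 6 misses under the forward-visibility policy, versus 8 under LRU, which evicts `a` just before it is reused; that gap is the kind of improvement over conventional replacement policies the abstract reports.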