A Software-equivalent SNN Hardware using RRAM-array for Asynchronous Real-time Learning
Spiking Neural Networks (SNNs) naturally invite hardware implementation, as they
are based on biology. For learning, spike-timing-dependent plasticity (STDP) may
be implemented using energy-efficient waveform superposition on memristor-based
synapses. However, system-level implementation faces three challenges.
First, a classic dilemma is that recognition requires reading the current of short
voltage spikes, which is disturbed by the large voltage waveforms that are
simultaneously applied to the same memristor for real-time learning, i.e., the
simultaneous read-write dilemma. Second, the hardware needs to exactly
replicate the software implementation for easy adaptation of the algorithm to hardware.
Third, the devices used in hardware simulations must be realistic. In this
paper, we present an approach that addresses the above concerns. First,
learning and recognition occur in separate arrays simultaneously and in
real time, asynchronously, avoiding the complex signal management required by
non-biomimetic clocking. Second, we show that the hardware emulates the software
at every stage by comparing SPICE (circuit simulator) and MATLAB
(mathematical SNN algorithm implementation in software) implementations. As an
example, the hardware achieves 97.5 per cent classification accuracy on the
Fisher Iris dataset, equivalent to the software. Third, STDP is
implemented using a synaptic device model based on an HfO2 memristor.
We show that an increasingly realistic memristor model slightly reduces the
hardware performance (85 per cent), which highlights the need to engineer RRAM
characteristics specifically for SNNs.
Comment: Eight pages, ten figures and two tables
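As a point of reference for the software side of the comparison, and not the paper's waveform-superposition circuit or its MATLAB code, a minimal pair-based STDP weight update of the kind such SNN models typically use can be sketched as follows; the function name, time constants, and learning rates are illustrative assumptions.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic spike, depress otherwise (illustrative parameters, times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:
        dw = a_plus * np.exp(-dt / tau_plus)    # causal pairing -> potentiation
    else:
        dw = -a_minus * np.exp(dt / tau_minus)  # anti-causal pairing -> depression
    return float(np.clip(w + dw, w_min, w_max)) # synaptic weight stays bounded

# Example: a pre-spike at 10 ms followed by a post-spike at 15 ms strengthens the synapse.
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))
```

In a memristive implementation, such an exponential timing dependence is obtained from the overlap of pre- and post-synaptic voltage waveforms across the device rather than from an explicit weight-update equation.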
GraphR: Accelerating Graph Processing Using ReRAM
This paper presents GRAPHR, the first ReRAM-based graph processing
accelerator. GRAPHR follows the principle of near-data processing and explores
the opportunity of performing massively parallel analog operations with low
hardware and energy cost. Analog computation is suitable for graph
processing because: 1) the algorithms are iterative and can inherently
tolerate imprecision; 2) both probability calculations (e.g., PageRank and
Collaborative Filtering) and typical graph algorithms involving integers (e.g.,
BFS/SSSP) are resilient to errors. The key insight of GRAPHR is that if the
vertex program of a graph algorithm can be expressed as sparse matrix-vector
multiplication (SpMV), it can be performed efficiently by a ReRAM crossbar. We
show that this assumption is generally true for a large set of graph
algorithms. GRAPHR is a novel accelerator architecture consisting of two
components: memory ReRAM and graph engines (GEs). The core graph computations are
performed in sparse matrix format in the GEs (ReRAM crossbars).
Vector/matrix-based graph computation is not new, but ReRAM offers the unique
opportunity to realize massive parallelism with unprecedented energy
efficiency and low hardware cost. With small subgraphs processed by GEs, the
gain from performing parallel operations outweighs the waste due to sparsity.
The experimental results show that GRAPHR achieves a 16.01x (up to 132.67x)
speedup and a 33.82x energy saving in geometric mean compared to a CPU baseline
system. Compared to a GPU, GRAPHR achieves a 1.69x to 2.19x speedup and consumes
4.77x to 8.91x less energy. GRAPHR gains a speedup of 1.16x to 4.12x, and is
3.67x to 10.96x more energy efficient, compared to a PIM-based architecture.
Comment: Accepted to HPCA 201
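To make the key insight concrete, independently of the GE hardware, here is a minimal sketch of one vertex program, PageRank, expressed as the sparse matrix-vector multiplication that a ReRAM crossbar evaluates in the analog domain; the damping factor, iteration count, and function name are standard textbook choices assumed for illustration, not details taken from the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix

def pagerank_spmv(edges, n, d=0.85, iters=20):
    """One SpMV per iteration: rank = d * M @ rank + (1 - d) / n,
    where M is the column-stochastic adjacency matrix of the graph."""
    src, dst = zip(*edges)
    out_deg = np.bincount(src, minlength=n).astype(float)
    vals = 1.0 / out_deg[list(src)]              # column-normalize by out-degree
    M = csr_matrix((vals, (dst, src)), shape=(n, n))
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = d * (M @ rank) + (1.0 - d) / n    # the SpMV a crossbar would perform
    return rank

# Tiny 4-vertex example graph given as (source, destination) edges.
print(pagerank_spmv([(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)], n=4))
```

The multiply-accumulate inside the loop is the operation a ReRAM crossbar performs in parallel; in GRAPHR the matrix is partitioned into small subgraph blocks that are mapped onto the GEs.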