
    An On-chip Trainable and Clock-less Spiking Neural Network with 1R Memristive Synapses

    Spiking neural networks (SNNs) are being explored in an attempt to mimic the brain's ability to learn and recognize at low power. A crossbar architecture, with a highly scalable Resistive RAM (RRAM) array serving as the synaptic weights and neuronal drivers in the periphery, is an attractive option for SNNs. Recognition (akin to reading the synaptic weight) requires a small-amplitude bias applied across the RRAM to minimize conductance change. Learning (akin to writing or updating the synaptic weight) requires large-amplitude bias pulses to produce a conductance change. The contradictory bias-amplitude requirements for performing reading and writing simultaneously and asynchronously, as in biology, are a major challenge. Solutions suggested in the literature rely on clock-based time-division multiplexing of read and write operations, or on approximations that ignore reading when it coincides with writing. In this work, we overcome this challenge and present a clock-less approach in which reading and writing are performed in different frequency domains. This enables simultaneous learning and recognition on an SNN. We validate our scheme in a SPICE circuit simulator by translating a two-layered feed-forward Iris-classifying SNN and demonstrate software-equivalent performance. System performance is not adversely affected by the voltage dependence of conductance in realistic RRAMs, despite the departure from linearity. Overall, our approach enables direct implementation of biological SNN algorithms in hardware.
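
    To make the frequency-domain idea concrete, here is a minimal, purely illustrative sketch (not the authors' circuit): a small high-frequency read tone is superposed on large low-frequency write pulses across the same device, and the read current is recovered lock-in style by projecting onto the read frequency. All waveform parameters and the constant-conductance assumption are placeholders.

# Illustrative sketch of frequency-domain separation of read and write on one
# RRAM device. Parameter values are arbitrary assumptions, not from the paper.
import numpy as np

fs = 1e6                                         # sampling rate (Hz), assumed
t = np.arange(0, 0.02, 1 / fs)

v_read = 0.05 * np.sin(2 * np.pi * 100e3 * t)           # small, high-frequency read tone
v_write = 1.0 * (np.sin(2 * np.pi * 1e3 * t) > 0.9)     # sparse, large, low-frequency write pulses
v_total = v_read + v_write                               # both applied to the same device

g = 1e-4                       # device conductance (S), held constant for this illustration
i_total = g * v_total

# Recover the read component by projecting the total current onto the read tone
# (lock-in style). The write pulses leak only weakly into the read band, so the
# estimate is approximate.
ref = np.sin(2 * np.pi * 100e3 * t)
i_read_amplitude = 2 * np.mean(i_total * ref)
print(f"estimated read current amplitude: {i_read_amplitude:.2e} A "
      f"(ideal: {g * 0.05:.2e} A)")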

    Analytical Modeling of Metal Gate Granularity based Threshold Voltage Variability in NWFET

    Estimation of threshold voltage (V_T) variability for NWFETs has been computationally expensive due to the lack of analytical models, yet such estimation is essential for designing next-generation logic circuits. Among process-induced sources of variability, Metal Gate Granularity (MGG) is of paramount importance because of its large impact on V_T variability. Here, an analytical model is proposed to estimate the V_T variability caused by MGG. We extend our earlier FinFET-based MGG model to a cylindrical NWFET by satisfying three additional requirements. First, the gate dielectric layer is replaced by silicon of electrostatically equivalent thickness using a long-cylinder approximation; second, metal grains in NWFETs satisfy a periodic boundary condition in the azimuthal direction; third, the electrostatics is solved analytically in cylindrical polar coordinates with the gate boundary condition defined by the MGG. We show that quantum effects only shift the mean of the V_T distribution without significantly affecting the variability estimated by our electrostatics-based model. The V_T distribution estimated by our model matches TCAD simulations. The model quantitatively captures the grain-size dependence of σ(V_T) with excellent accuracy (6% error) compared to stochastic 3D TCAD simulations, a significant improvement over the state-of-the-art model, which fails to produce even qualitative agreement. The proposed model is 63 times faster than commercial TCAD simulations.
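
    For intuition on how MGG turns into a V_T spread, the following Monte Carlo sketch uses the simple area-weighted work-function average rather than the electrostatic model proposed in the paper. Grain work functions, probabilities, and geometry are assumed values of the kind often quoted for TiN gates, not numbers from this work.

# Monte Carlo sketch of MGG-induced V_T spread for a cylindrical nanowire gate,
# using an area-weighted work-function average. All values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

L_g, R = 20e-9, 5e-9           # gate length and nanowire radius (assumed)
grain = 5e-9                   # average metal grain size (assumed)
wf = np.array([4.6, 4.4])      # grain work functions in eV (assumed)
prob = np.array([0.6, 0.4])    # grain-orientation probabilities (assumed)

# Number of (roughly equal-area) grains tiling the cylindrical gate surface
n_grains = int(np.ceil(L_g / grain)) * int(np.ceil(2 * np.pi * R / grain))

def sample_vt_shift():
    # Each grain independently picks a work function; the device sees the
    # area-weighted average. The V_T shift is taken relative to the mean.
    wfs = rng.choice(wf, size=n_grains, p=prob)
    return wfs.mean() - np.dot(wf, prob)

shifts = np.array([sample_vt_shift() for _ in range(10000)])
print(f"sigma(V_T) from area-weighted MGG sketch: {shifts.std() * 1e3:.1f} mV")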

    A Software-equivalent SNN Hardware using RRAM-array for Asynchronous Real-time Learning

    Spiking Neural Networks (SNNs), being inspired by biology, naturally lend themselves to hardware implementation. For learning, spike-timing-dependent plasticity (STDP) may be implemented using energy-efficient waveform superposition on memristor-based synapses. However, system-level implementation faces three challenges. First, a classic dilemma: recognition requires reading the current from short voltage spikes, which is disturbed by the large voltage waveforms simultaneously applied to the same memristor for real-time learning, i.e., the simultaneous read-write dilemma. Second, the hardware needs to exactly replicate the software implementation for easy adaptation of algorithms to hardware. Third, the devices used in hardware simulations must be realistic. In this paper, we present an approach that addresses these concerns. First, learning and recognition occur in separate arrays simultaneously, in real time and asynchronously, avoiding non-biomimetic clock-based complex signal management. Second, we show that the hardware emulates the software at every stage by comparing SPICE (circuit-simulator) and MATLAB (mathematical SNN algorithm implementation in software) results. As an example, the hardware shows 97.5 per cent classification accuracy on the Fisher-Iris dataset, equivalent to the software. Third, STDP is implemented using a synaptic device model based on an HfO2 memristor. We show that an increasingly realistic memristor model slightly reduces the hardware performance (85 per cent), which highlights the need to engineer RRAM characteristics specifically for SNNs.
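
    The behavioral effect of STDP on a memristive conductance can be sketched as below. This abstracts away the waveform-superposition mechanism described in the abstract and simply applies a pair-based exponential update; the time constants, learning rates, and conductance bounds are illustrative assumptions.

# Behavioral sketch of pair-based STDP acting on a memristive conductance.
# Constants are assumed for illustration and do not come from the paper.
import numpy as np

A_plus, A_minus = 0.01, 0.012        # potentiation / depression amplitudes (assumed)
tau_plus, tau_minus = 20e-3, 20e-3   # STDP time constants in seconds (assumed)
g_min, g_max = 1e-6, 1e-4            # conductance bounds in siemens (assumed)

def stdp_update(g, t_pre, t_post):
    """Update conductance g for one pre-spike / post-spike pairing."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post -> potentiate, bounded by g_max
        dg = A_plus * np.exp(-dt / tau_plus) * (g_max - g)
    else:         # post before pre -> depress, bounded by g_min
        dg = -A_minus * np.exp(dt / tau_minus) * (g - g_min)
    return float(np.clip(g + dg, g_min, g_max))

g = 5e-5
g = stdp_update(g, t_pre=0.000, t_post=0.005)   # causal pairing strengthens the synapse
print(f"conductance after causal pair: {g:.3e} S")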

    A temporally and spatially local spike-based backpropagation algorithm to enable training in hardware

    Spiking Neural Networks (SNNs) have emerged as a hardware-efficient architecture for classification tasks. The challenge for spike-based encoding has been the lack of a universal training mechanism performed entirely with spikes. There have been several attempts to adopt the powerful backpropagation (BP) technique used in non-spiking artificial neural networks (ANNs): (1) SNNs can be trained by externally computed numerical gradients. (2) A major advance towards native spike-based learning has been approximate backpropagation using spike-timing-dependent plasticity (STDP) with phased forward/backward passes. However, transferring information between these phases for gradient and weight-update calculation requires external memory and computation, which is a challenge for standard neuromorphic hardware implementations. In this paper, we propose a stochastic-SNN-based backpropagation (SSNN-BP) algorithm that uses a composite neuron to compute the forward-pass activations and backward-pass gradients simultaneously and explicitly with spikes. Although signed gradient values are a challenge for spike-based representation, we tackle this by splitting the gradient signal into positive and negative streams. We show that our method approaches the BP-trained ANN baseline with sufficiently long spike trains. Finally, we show that the well-performing softmax cross-entropy loss function can be implemented through inhibitory lateral connections enforcing a winner-take-all (WTA) rule. Our 2-layer SNN shows excellent generalization, with performance comparable to ANNs of equivalent architecture and regularization on static image datasets such as MNIST, Fashion-MNIST, and Extended MNIST, and on temporally encoded datasets such as Neuromorphic MNIST. Thus, SSNN-BP enables backpropagation compatible with purely spike-based neuromorphic hardware.
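
    The positive/negative stream idea can be illustrated with a small sketch: a signed gradient is rate-coded as two unsigned Bernoulli spike trains and read back as a rate difference. The rates, window length, and encoding below are assumptions for illustration, not the paper's neuron model.

# Sketch of carrying a signed gradient as two unsigned spike streams
# (positive part and negative part) and decoding it as a rate difference.
import numpy as np

rng = np.random.default_rng(1)

def encode_signed(value, max_rate=200.0, T=1.0, dt=1e-3):
    """Return (positive_stream, negative_stream) Bernoulli spike trains for value in [-1, 1]."""
    rate_pos = max_rate * max(value, 0.0)
    rate_neg = max_rate * max(-value, 0.0)
    n = int(T / dt)
    pos = rng.random(n) < rate_pos * dt
    neg = rng.random(n) < rate_neg * dt
    return pos, neg

def decode_signed(pos, neg, max_rate=200.0, T=1.0):
    # Signed value is recovered as the difference of the two spike counts
    return (pos.sum() - neg.sum()) / (max_rate * T)

pos, neg = encode_signed(-0.3)
print(f"decoded gradient: {decode_signed(pos, neg):+.2f} (true value -0.30)")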