
    Significance Driven Hybrid 8T-6T SRAM for Energy-Efficient Synaptic Storage in Artificial Neural Networks

    Multilayered artificial neural networks (ANNs) have found widespread utility in classification and recognition applications. The scale and complexity of such networks, together with the inadequacies of general-purpose computing platforms, have led to significant interest in the development of efficient hardware implementations. In this work, we focus on designing energy-efficient on-chip storage for the synaptic weights. In order to minimize the power consumption of typical digital CMOS implementations of such large-scale networks, the digital neurons can be operated reliably at scaled voltages by reducing the clock frequency. In contrast, on-chip synaptic storage designed using conventional 6T SRAM is susceptible to bitcell failures at reduced voltages. However, the intrinsic error resiliency of NNs to small synaptic weight perturbations enables us to scale the operating voltage of the 6T SRAM. Our analysis on a widely used digit recognition dataset indicates that the voltage can be scaled down by 200mV from the nominal operating voltage (950mV) for practically no loss (less than 0.5%) in accuracy (22nm predictive technology). Scaling beyond that causes substantial performance degradation owing to the increased probability of failures in the MSBs of the synaptic weights. We therefore propose a significance-driven hybrid 8T-6T SRAM, wherein the sensitive MSBs are stored in 8T bitcells that are robust at scaled voltages due to decoupled read and write paths. To further minimize the area penalty, we present a synaptic-sensitivity-driven hybrid memory architecture consisting of multiple 8T-6T SRAM banks. Our circuit-to-system-level simulation framework shows that the proposed synaptic-sensitivity-driven architecture provides a 30.91% reduction in memory access power with a 10.41% area overhead, for less than 1% loss in classification accuracy. Comment: Accepted at the Design, Automation and Test in Europe (DATE) 2016 conference.
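
    To make the significance-driven idea concrete, here is a minimal Python sketch of a hybrid memory word in which the MSBs of a fixed-point synaptic weight sit in 8T cells (modeled as failure-free at scaled voltage) while the LSBs sit in 6T cells that flip with some probability. The word width, MSB split, and failure rates are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    WORD_BITS = 8      # assumed fixed-point weight width
    MSB_BITS = 3       # assumed count of significant bits kept in robust 8T cells
    P_FAIL_6T = 1e-2   # assumed 6T bitcell failure probability at scaled voltage
    P_FAIL_8T = 0.0    # 8T cells modeled as failure-free (decoupled read/write paths)

    def read_weight(word):
        """Read an unsigned fixed-point weight through the hybrid memory,
        flipping each bit with its cell type's failure probability."""
        out = 0
        for bit in range(WORD_BITS):
            p = P_FAIL_8T if bit >= WORD_BITS - MSB_BITS else P_FAIL_6T
            value = (word >> bit) & 1
            if rng.random() < p:
                value ^= 1  # a bitcell failure flips the stored bit
            out |= value << bit
        return out

    weights = rng.integers(0, 2**WORD_BITS, size=10_000)
    read_back = np.array([read_weight(int(w)) for w in weights])
    print("mean |weight error|:", np.abs(read_back - weights).mean())
    ```

    Because the failure-prone 6T cells hold only the low-order bits, the worst-case perturbation of any weight is bounded by 2^(WORD_BITS - MSB_BITS) - 1, which is what allows the network to tolerate aggressive voltage scaling.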

    Reward Modulated Spike Timing Dependent Plasticity Based Learning Mechanism in Spiking Neural Networks

    Spiking Neural Networks (SNNs) are one of the recent advances in machine learning that aim to further emulate the computations performed in the human brain. The efficiency of such networks stems from the fact that information is encoded as spikes, which is a paradigm shift from the computing model of traditional neural networks. Spike Timing Dependent Plasticity (STDP), wherein the synaptic weights interconnecting the neurons are modulated based on pairs of pre- and post-synaptic spikes, is widely used to achieve synaptic learning. The learning mechanism is extremely sensitive to the parameters governing the neuron dynamics, the extent of lateral inhibition among the neurons, and the spike frequency adaptation parameters. Hence, we explore a reward-modulated learning methodology to further improve the synaptic learning efficiency. In our work, we define a target spiking pattern a priori for each neuron in the network. The primary objective is to cause the actual neuronal spiking pattern to converge to the desired pattern during the training phase. The STDP-driven synaptic updates are modulated by a reward metric, which captures the distance between the actual and target spike trains. We estimate the reward using the difference between averaged versions of the actual and desired spike trains. The reward-based semi-supervised learning scheme is implemented on a two-layered SNN trained to classify handwritten digits from the MNIST image set. With 100 spiking neurons, we obtained an accuracy of 73.16% on the test image set, indicating that the reward-based supervision improved learning.
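
    A minimal sketch of the reward computation and modulated STDP update described above follows; the exponential averaging filter, the pair-based STDP kernel, and the constants (tau, learning rate, spike rates) are assumptions chosen for illustration rather than the paper's exact formulation.

    ```python
    import numpy as np

    T, tau, lr = 200, 20.0, 0.01   # time steps, trace constant, learning rate (assumed)
    rng = np.random.default_rng(1)

    target = (rng.random(T) < 0.10).astype(float)  # desired spike train (assumed rate)
    actual = (rng.random(T) < 0.05).astype(float)  # observed spike train

    def smooth(spikes, tau):
        """Exponentially averaged (low-pass filtered) spike train."""
        trace = np.zeros_like(spikes)
        for t in range(1, len(spikes)):
            trace[t] = trace[t - 1] * np.exp(-1.0 / tau) + spikes[t]
        return trace

    # Reward: negative distance between the averaged actual and target trains,
    # so the reward grows as the actual pattern converges to the target.
    reward = -np.mean(np.abs(smooth(actual, tau) - smooth(target, tau)))

    # Pair-based STDP change for one synapse, scaled by the reward.
    dt = 5.0                                     # post-minus-pre spike time (ms)
    stdp = np.sign(dt) * np.exp(-abs(dt) / tau)  # potentiate when post follows pre
    dw = lr * reward * stdp
    print(f"reward = {reward:.4f}, dw = {dw:.6f}")
    ```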

    ReStoCNet: Residual Stochastic Binary Convolutional Spiking Neural Network for Memory-Efficient Neuromorphic Computing

    In this work, we propose ReStoCNet, a residual stochastic multilayer convolutional Spiking Neural Network (SNN) composed of binary kernels, to reduce the synaptic memory footprint and enhance the computational efficiency of SNNs for complex pattern recognition tasks. ReStoCNet consists of an input layer followed by stacked convolutional layers for hierarchical input feature extraction, pooling layers for dimensionality reduction, and a fully-connected layer for inference. In addition, we introduce residual connections between the stacked convolutional layers to improve the hierarchical feature learning capability of deep SNNs. We propose a Spike Timing Dependent Plasticity (STDP) based probabilistic learning algorithm, referred to as Hybrid-STDP (HB-STDP), incorporating Hebbian and anti-Hebbian learning mechanisms, to train the binary kernels forming ReStoCNet in a layer-wise unsupervised manner. We demonstrate the efficacy of ReStoCNet and the presented HB-STDP based unsupervised training methodology on the MNIST and CIFAR-10 datasets. We show that residual connections enable the deeper convolutional layers to self-learn useful high-level input features and mitigate the accuracy loss observed in deep SNNs devoid of residual connections. The proposed ReStoCNet offers >20× kernel memory compression compared to a full-precision (32-bit) SNN while yielding sufficiently high classification accuracy on the chosen pattern recognition tasks.
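
    The sketch below illustrates one plausible reading of a probabilistic binary-kernel update in the spirit of HB-STDP: Hebbian switching toward +1 where pre- and post-synaptic activity coincide, and anti-Hebbian switching toward -1 where the post-synaptic neuron fires without pre-synaptic input. The switching probabilities, bipolar weight encoding, and rule shapes are assumptions, not the paper's exact algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    kernel = rng.choice([-1.0, 1.0], size=(3, 3))   # binary (bipolar) kernel weights

    def hb_stdp_update(kernel, pre_spikes, post_spiked, p_pot=0.10, p_dep=0.05):
        """Probabilistically flip binary weights: Hebbian potentiation toward +1
        where pre and post spike together, anti-Hebbian depression toward -1
        where the post-synaptic neuron spikes without pre-synaptic activity."""
        if post_spiked:
            flips = rng.random(kernel.shape)
            kernel[(pre_spikes == 1) & (flips < p_pot)] = 1.0    # Hebbian switch
            kernel[(pre_spikes == 0) & (flips < p_dep)] = -1.0   # anti-Hebbian switch
        return kernel

    pre = rng.integers(0, 2, size=(3, 3))            # binary pre-synaptic spike map
    kernel = hb_stdp_update(kernel, pre, post_spiked=True)
    print(kernel)
    ```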