
    Significance Driven Hybrid 8T-6T SRAM for Energy-Efficient Synaptic Storage in Artificial Neural Networks

    Multilayered artificial neural networks (ANNs) have found widespread utility in classification and recognition applications. The scale and complexity of such networks, together with the inadequacies of general-purpose computing platforms, have led to significant interest in the development of efficient hardware implementations. In this work, we focus on designing energy-efficient on-chip storage for the synaptic weights. To minimize the power consumption of typical digital CMOS implementations of such large-scale networks, the digital neurons can be operated reliably at scaled voltages by reducing the clock frequency. In contrast, on-chip synaptic storage designed using a conventional 6T SRAM is susceptible to bitcell failures at reduced voltages. However, the intrinsic resiliency of neural networks to small synaptic weight perturbations enables us to scale the operating voltage of the 6T SRAM. Our analysis on a widely used digit recognition dataset indicates that the voltage can be scaled by 200 mV from the nominal operating voltage (950 mV) for practically no loss (less than 0.5%) in accuracy (22 nm predictive technology). Scaling beyond that causes substantial performance degradation owing to the increased probability of failures in the MSBs of the synaptic weights. We therefore propose a significance-driven hybrid 8T-6T SRAM, wherein the sensitive MSBs are stored in 8T bitcells that are robust at scaled voltages due to decoupled read and write paths. To further minimize the area penalty, we present a synaptic-sensitivity-driven hybrid memory architecture consisting of multiple 8T-6T SRAM banks. Our circuit-to-system-level simulation framework shows that the proposed synaptic-sensitivity-driven architecture provides a 30.91% reduction in memory access power with a 10.41% area overhead, for less than 1% loss in classification accuracy. (Accepted at the Design, Automation and Test in Europe conference, DATE 2016.)
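    A minimal sketch of the idea behind significance-driven storage, assuming 8-bit fixed-point weights, a hypothetical per-bit failure probability p_fail for 6T cells at scaled voltage, and that the top n_msb bits sit in failure-free 8T cells; all names and values here are illustrative, not taken from the paper's framework:

    import numpy as np

    def apply_6t_failures(weights, n_bits=8, n_msb=3, p_fail=1e-3, rng=None):
        """Emulate voltage-scaled 6T bitcell failures on quantized weights.

        The n_msb most significant bits are assumed to live in robust 8T
        cells and never flip; each remaining (6T) bit flips independently
        with probability p_fail. Values are placeholders for illustration.
        """
        rng = np.random.default_rng() if rng is None else rng
        w = weights.astype(np.int64)
        for bit in range(n_bits - n_msb):          # LSB positions held in 6T cells
            flips = rng.random(w.shape) < p_fail   # which cells fail this bit
            w ^= flips.astype(np.int64) << bit     # flip the failed bit
        return w

    # Example: perturb unsigned 8-bit synaptic weights and inspect the damage.
    weights = np.random.randint(0, 256, size=1000)
    noisy = apply_6t_failures(weights, n_msb=3, p_fail=1e-2)
    print("max perturbation:", np.max(np.abs(noisy - weights)))  # at most 2**5 - 1

    Because the MSBs never flip, the worst-case weight perturbation is bounded by 2^(n_bits - n_msb) - 1, which is the property a hybrid array of this kind exploits.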

    Reward Modulated Spike Timing Dependent Plasticity Based Learning Mechanism in Spiking Neural Networks

    Spiking Neural Networks (SNNs) are one of the recent advances in machine learning that aim to further emulate the computations performed in the human brain. The efficiency of such networks stems from the fact that information is encoded as spikes, which is a paradigm shift from the computing model of traditional neural networks. Spike Timing Dependent Plasticity (STDP), wherein the synaptic weights interconnecting the neurons are modulated based on pairs of pre- and post-synaptic spikes, is widely used to achieve synaptic learning. The learning mechanism is extremely sensitive to the parameters governing the neuron dynamics, the extent of lateral inhibition among the neurons, and the spike frequency adaptation parameters. Hence, we explore a reward-modulated learning methodology to further improve synaptic learning efficiency. In our work, we define a target spiking pattern a priori for each neuron in the network. The primary objective is to cause the actual neuronal spiking pattern to converge to the desired pattern during the training phase. The STDP-driven synaptic updates are modulated by a reward metric that captures the distance between the actual and target spike trains; we estimate the reward from the difference between averaged versions of the actual and desired spike trains. This reward-based semi-supervised learning scheme is implemented on a two-layer SNN trained to classify handwritten digits from the MNIST image set. We obtained an accuracy of 73.16% on the test images with 100 spiking neurons, indicating that the added supervision improves learning.
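    A minimal numpy sketch of the reward computation, assuming binary spike trains, an exponential moving average as the "averaged version" of each train, and a hypothetical scalar learning rate; the kernels and constants are stand-ins, not the paper's values:

    import numpy as np

    def smoothed(spikes, alpha=0.1):
        """Exponential moving average of a binary spike train."""
        out = np.zeros_like(spikes, dtype=float)
        acc = 0.0
        for t, s in enumerate(spikes):
            acc = (1 - alpha) * acc + alpha * s
            out[t] = acc
        return out

    def reward(actual, target, alpha=0.1):
        """High when the smoothed actual and target trains agree."""
        dist = np.mean((smoothed(actual, alpha) - smoothed(target, alpha)) ** 2)
        return 1.0 - dist  # 1.0 means perfect agreement

    # Reward-modulated STDP: scale the plain STDP update by the reward.
    def rstdp_update(w, stdp_dw, actual, target, lr=0.01):
        return np.clip(w + lr * reward(actual, target) * stdp_dw, 0.0, 1.0)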

    ‘All That Glitters Is Not Gold’: High-Resolution Crystal Structures of Ligand-Protein Complexes Need Not Always Represent Confident Binding Poses

    Our understanding of the structure–function relationships of biomolecules, and thereby its application to drug discovery programs, depends substantially on the availability of structural information for ligand–protein complexes. However, correctly interpreting the electron density of a small molecule bound to a crystal structure of a macromolecule is not trivial. Our quality assessment of ~0.28 million small molecule–protein binding site pairs, derived from crystal structures corresponding to ~66,000 PDB entries, indicates that the majority (65%) of the pairs need little (54%) or no (11%) attention. Of the remaining 35% of pairs that need attention, 11% (including structures with high or moderate resolution) pose serious concerns. Unfortunately, most users of crystal structures lack the training to evaluate the quality of a crystal structure against its experimental data and, in general, rely on resolution as a ‘gold standard’ quality metric. Our work aims to sensitize non-crystallographers to the fact that resolution, a global quality metric, need not be an accurate indicator of local structural quality. In this article, we demonstrate the use of several freely available tools that quantify local structural quality and are easy to use from a non-crystallographer’s perspective. We further propose a few solutions for consideration by the scientific community to promote quality research in structural biology and applied areas.
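    To illustrate "local over global": a toy triage that flags ligand-site pairs by a per-ligand local density-fit metric such as the real-space correlation coefficient (RSCC). The 0.8 cutoff is a commonly used convention, and the record layout and PDB entries are hypothetical, chosen for illustration rather than taken from this article:

    from dataclasses import dataclass

    @dataclass
    class LigandSite:
        pdb_id: str
        ligand: str
        resolution: float  # global metric (angstroms)
        rscc: float        # local metric: real-space correlation coefficient

    def needs_attention(site: LigandSite, rscc_cutoff: float = 0.8) -> bool:
        """A well-resolved structure can still hold a poorly supported ligand."""
        return site.rscc < rscc_cutoff

    sites = [  # hypothetical entries
        LigandSite("1abc", "ATP", resolution=1.4, rscc=0.95),  # high-res, good fit
        LigandSite("2xyz", "LIG", resolution=1.6, rscc=0.62),  # high-res, poor fit
    ]
    for s in sites:
        print(s.pdb_id, s.ligand, "flagged" if needs_attention(s) else "ok")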

    Turbulence closure with small, local neural networks: Forced two-dimensional and β-plane flows

    We parameterize sub-grid scale (SGS) fluxes in sinusoidally forced two-dimensional turbulence on the β-plane at high Reynolds numbers (Re ~ 25000) using simple two-layer Convolutional Neural Networks (CNNs) having only O(1000) parameters, two orders of magnitude smaller than recent studies employing deeper CNNs with 8-10 layers; we obtain stable, accurate, long-term online (a posteriori) solutions at 16× downscaling factors. Our methodology significantly improves training efficiency and the speed of online Large Eddy Simulation (LES) runs, while offering insights into the physics of closure in such turbulent flows. Our approach benefits from extensive hyperparameter searches over the learning rate and weight decay coefficient, as well as the use of cyclical learning rate annealing, which leads to more robust and accurate online solutions than fixed learning rates. Our CNNs take either the coarse velocity or the vorticity and strain fields as inputs, and output the two components of the deviatoric stress tensor. We minimize a loss between the SGS vorticity flux divergence (computed from the high-resolution solver) and that obtained from the CNN-modeled deviatoric stress tensor, without requiring energy- or enstrophy-preserving constraints. The success of shallow CNNs in accurately parameterizing this class of turbulent flows implies that the SGS stresses have a weak non-local dependence on the coarse fields; it also aligns with our physical conception that small scales are locally controlled by larger scales such as vortices and their strained filaments. Furthermore, two-layer CNN parameterizations are more likely to be interpretable and generalizable because of their intrinsic low dimensionality. (27 pages, 13 figures.)
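    A minimal PyTorch sketch of a two-layer CNN of roughly this size, mapping coarse vorticity and strain inputs to the two deviatoric stress components; the channel count, kernel width, optimizer settings, and cyclical learning-rate schedule are illustrative guesses, not the paper's configuration:

    import torch
    import torch.nn as nn

    # Two-layer CNN: (vorticity, strain) -> two deviatoric stress components.
    model = nn.Sequential(
        nn.Conv2d(2, 16, kernel_size=5, padding=2, padding_mode="circular"),
        nn.ReLU(),
        nn.Conv2d(16, 2, kernel_size=5, padding=2, padding_mode="circular"),
    )
    print(sum(p.numel() for p in model.parameters()))  # ~1.6k parameters, O(1000)

    opt = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
    sched = torch.optim.lr_scheduler.CyclicLR(
        opt, base_lr=1e-5, max_lr=1e-3, step_size_up=500, cycle_momentum=False
    )

    def step(coarse_fields, sgs_flux_div_true, flux_divergence):
        """One training step. `flux_divergence` stands for a differentiable
        operator (e.g., spectral derivatives) turning the modeled stress into
        an SGS vorticity flux divergence; it is assumed here, not shown."""
        opt.zero_grad()
        stress = model(coarse_fields)  # shape (batch, 2, ny, nx)
        loss = nn.functional.mse_loss(flux_divergence(stress), sgs_flux_div_true)
        loss.backward()
        opt.step()
        sched.step()
        return loss.item()

    The circular padding reflects the doubly periodic domain typical of such simulations; with only two layers the receptive field stays small, consistent with the paper's point that the closure is essentially local.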

    ReStoCNet: Residual Stochastic Binary Convolutional Spiking Neural Network for Memory-Efficient Neuromorphic Computing

    In this work, we propose ReStoCNet, a residual stochastic multilayer convolutional Spiking Neural Network (SNN) composed of binary kernels, to reduce the synaptic memory footprint and enhance the computational efficiency of SNNs for complex pattern recognition tasks. ReStoCNet consists of an input layer followed by stacked convolutional layers for hierarchical input feature extraction, pooling layers for dimensionality reduction, and a fully connected layer for inference. In addition, we introduce residual connections between the stacked convolutional layers to improve the hierarchical feature learning capability of deep SNNs. We propose a Spike Timing Dependent Plasticity (STDP) based probabilistic learning algorithm, referred to as Hybrid-STDP (HB-STDP), incorporating Hebbian and anti-Hebbian learning mechanisms, to train the binary kernels forming ReStoCNet in a layer-wise unsupervised manner. We demonstrate the efficacy of ReStoCNet and the presented HB-STDP based unsupervised training methodology on the MNIST and CIFAR-10 datasets. We show that residual connections enable the deeper convolutional layers to self-learn useful high-level input features and mitigate the accuracy loss observed in deep SNNs devoid of residual connections. The proposed ReStoCNet offers >20× kernel memory compression compared to a full-precision (32-bit) SNN while yielding sufficiently high classification accuracy on the chosen pattern recognition tasks.
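    A toy numpy sketch of probabilistic learning on a single binary weight, in the spirit of stochastic STDP: the switching probability decays with the pre/post spike timing gap, and the direction of the transition follows Hebbian or anti-Hebbian causality. The time constant and probability are placeholders, not the HB-STDP constants:

    import numpy as np

    rng = np.random.default_rng(0)

    def stochastic_stdp_step(w, t_pre, t_post, p_max=0.1, tau=20.0):
        """Probabilistically switch one binary weight (0 or 1).

        A causal pairing (pre before post) potentiates; an anti-causal
        pairing depresses. The switch probability decays with the gap.
        """
        dt = t_post - t_pre
        p_switch = p_max * np.exp(-abs(dt) / tau)
        if rng.random() < p_switch:
            return 1 if dt > 0 else 0  # Hebbian set, anti-Hebbian reset
        return w

    # Example: a causal pair (dt = +5 ms) eventually sets the weight.
    w = 0
    for _ in range(50):
        w = stochastic_stdp_step(w, t_pre=10.0, t_post=15.0)
    print("final binary weight:", w)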