
    Inherent Weight Normalization in Stochastic Neural Networks

    Multiplicative stochasticity such as Dropout improves the robustness and generalizability of deep neural networks. Here, we further demonstrate that always-on multiplicative stochasticity combined with simple threshold neurons are sufficient operations for building deep neural networks. We call such models Neural Sampling Machines (NSMs). We find that the probability of activation of the NSM exhibits a self-normalizing property that mirrors Weight Normalization, a previously studied mechanism that fulfills many of the features of Batch Normalization in an online fashion. The normalization of activities during training speeds up convergence by preventing the internal covariate shift caused by changes in the input distribution. The always-on stochasticity of the NSM confers the following advantages: the network is identical in the inference and learning phases, making the NSM suitable for online learning; it can exploit stochasticity inherent to a physical substrate, such as analog non-volatile memories, for in-memory computing; and it is suitable for Monte Carlo sampling, while requiring almost exclusively addition and comparison operations. We demonstrate NSMs on standard classification benchmarks (MNIST and CIFAR) and event-based classification benchmarks (N-MNIST and DVS Gestures). Our results show that NSMs perform comparably to or better than conventional artificial neural networks with the same architecture.
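
    To make the mechanism concrete, below is a minimal sketch of an NSM-style layer: always-on multiplicative Bernoulli noise on the weights followed by a simple threshold neuron, with a Monte Carlo estimate of the resulting activation probability. The layer sizes, the noise probability, and the use of NumPy are illustrative assumptions, not the authors' reference implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def nsm_layer(x, W, b, p=0.5):
            """One stochastic forward pass: multiplicative Bernoulli mask on the
            weights, followed by a simple threshold (Heaviside) neuron."""
            mask = rng.binomial(1, p, size=W.shape)   # always-on multiplicative noise
            pre = (mask * W) @ x + b                  # noisy pre-activation
            return (pre > 0).astype(float)            # threshold neuron: fire if positive

        def activation_probability(x, W, b, p=0.5, n_samples=1000):
            """Monte Carlo estimate of P(neuron fires | x); per the paper, this
            probability self-normalizes in a way that mirrors Weight Normalization."""
            return np.mean([nsm_layer(x, W, b, p) for _ in range(n_samples)], axis=0)

        x = rng.standard_normal(784)                  # illustrative input
        W = 0.01 * rng.standard_normal((100, 784))    # illustrative weight matrix
        b = np.zeros(100)
        print(activation_probability(x, W, b)[:5])    # firing probabilities of 5 neurons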

    A Soft-Pruning Method Applied During Training of Spiking Neural Networks for In-memory Computing Applications

    Inspired by the computational efficiency of the biological brain, spiking neural networks (SNNs) emulate biological neural networks, neural codes, dynamics, and circuitry. SNNs show great potential for the implementation of unsupervised learning using in-memory computing. Here, we report an algorithmic optimization that improves the energy efficiency of online learning with SNNs on emerging non-volatile memory (eNVM) devices. We develop a pruning method for SNNs that exploits the output firing characteristics of neurons. Our pruning method can be applied during network training, in contrast to previous approaches in the literature that prune already-trained networks, and it prevents unnecessary updates of network parameters during training. This algorithmic optimization complements the energy efficiency of eNVM technology, which offers a unique in-memory computing platform for the parallelization of neural network operations. Our SNN maintains ~90% classification accuracy on the MNIST dataset with up to ~75% pruning, significantly reducing the number of weight updates. The SNN and pruning scheme developed in this work can pave the way toward applications of eNVM-based neuro-inspired systems for energy-efficient online learning in low-power applications.
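
    Below is a minimal sketch of the soft-pruning idea under stated assumptions: neurons whose output firing counts stay below a threshold are masked during training so their synapses receive no further updates, saving programming operations on the eNVM devices. The Hebbian update rule, thresholds, and array shapes are illustrative, not the paper's exact algorithm.

        import numpy as np

        rng = np.random.default_rng(1)

        n_in, n_out = 784, 100
        W = 0.1 * rng.random((n_in, n_out))          # synaptic weights (e.g., eNVM conductances)
        active = np.ones(n_out, dtype=bool)          # soft-pruning mask: True = still trainable

        def hebbian_update(W, pre_spikes, post_spikes, active, lr=0.01):
            """Simplified Hebbian update, applied only to neurons that are not pruned."""
            dW = lr * np.outer(pre_spikes, post_spikes)
            dW[:, ~active] = 0.0                      # pruned neurons get no weight updates
            return W + dW

        def soft_prune(spike_counts, active, min_spikes=5):
            """Freeze neurons whose output firing count is below a threshold.
            Unlike post-hoc pruning, this runs during training."""
            return active & (spike_counts >= min_spikes)

        # Toy training loop: accumulate output spike counts, then prune periodically.
        spike_counts = np.zeros(n_out)
        for step in range(100):
            pre = (rng.random(n_in) < 0.05).astype(float)     # Poisson-like input spikes
            post = (pre @ W > 1.0).astype(float)              # crude integrate-and-fire proxy
            spike_counts += post
            W = hebbian_update(W, pre, post, active)
            if step % 20 == 19:                               # prune every 20 steps
                active = soft_prune(spike_counts, active)
                spike_counts[:] = 0.0
        print(f"fraction pruned: {1 - active.mean():.2f}")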

    Reduced Footprint Probabilistic Inference Networks Using Novel Hybrid SHE-MTJ/CMOS Based Majority Gate

    In recent years, innovations in machine learning using artificial neural networks (ANNs) have increased significantly and led to applications such as image recognition, text classification, machine translation, and sequence recognition. Earlier research focused on software-based deep belief networks (DBNs) implemented on conventional von Neumann architectures, which provide flexibility but have several limitations. Recent studies have turned to hardware-based designs, including FPGA-based, CMOS-based, RRAM-based, and MRAM-based implementations, to overcome these limitations. Hybrid CMOS/MTJ-based RBMs provide significant area and energy improvements over other techniques. Here, we implement spatially and temporally redundant probabilistic inference networks that improve accuracy and provide fault tolerance with the help of a low-power, area-efficient novel SHE-MTJ-based majority gate. We also propose a progressive modular redundant network that further reduces the footprint compared with the spatial modular redundant network. Results show that the SHE-MTJ-based majority gate provides a 32.1% area reduction and a 54.5% energy reduction compared with a conventional CMOS-based design. Simulation results also show that the proposed model improves the error rate by 36%, in addition to latency improvements, compared with baseline models. An accuracy comparison of all the redundant models for two topologies, 784x200x10 and 784x200x200x10, and for different activation functions, including sigmoid, square root, and square, indicates the viability of the developed methods with respect to area and energy metrics.
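
    The algorithmic role of the majority gate can be sketched in software: run several independent stochastic evaluations of a binary network (spatial redundancy) and take a bitwise majority vote, which is the operation the hybrid SHE-MTJ/CMOS gate performs in hardware. The network size, redundancy factor, and stochastic-neuron model below are illustrative assumptions, not the paper's circuit.

        import numpy as np

        rng = np.random.default_rng(2)

        def stochastic_layer(x, W, b):
            """Stochastic binary neurons: fire with sigmoid probability."""
            p = 1.0 / (1.0 + np.exp(-(W @ x + b)))
            return (rng.random(p.shape) < p).astype(np.uint8)

        def majority(bits):
            """Odd-N bitwise majority vote over redundant copies, the operation
            implemented in hardware by the hybrid SHE-MTJ/CMOS gate."""
            return (bits.sum(axis=0) > bits.shape[0] // 2).astype(np.uint8)

        W = rng.standard_normal((10, 784))            # illustrative weights
        b = np.zeros(10)
        x = rng.standard_normal(784)

        # Spatial redundancy: three independent stochastic evaluations, then vote.
        copies = np.stack([stochastic_layer(x, W, b) for _ in range(3)])
        print(majority(copies))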

    Training a Probabilistic Graphical Model with Resistive Switching Electronic Synapses

    Current large-scale implementations of deep learning and data mining require thousands of processors, massive amounts of off-chip memory, and consume gigajoules of energy. New memory technologies, such as nanoscale two-terminal resistive switching memory devices, offer a compact, scalable, and low-power alternative that permits on-chip co-located processing and memory in a fine-grain distributed parallel architecture. Here, we report the first use of resistive memory devices for implementing and training a restricted Boltzmann machine (RBM), a generative probabilistic graphical model that is a key component of unsupervised learning in deep networks. We experimentally demonstrate a 45-synapse RBM realized with 90 resistive phase change memory (PCM) elements trained with a bio-inspired variant of the contrastive divergence algorithm, implementing Hebbian and anti-Hebbian weight updates. The resistive PCM devices show a twofold to tenfold reduction in error rate in a missing-pixel pattern completion task trained over 30 epochs, compared with the untrained case. The measured programming energy consumption is 6.1 nJ per epoch with the PCM devices, about 150 times lower than conventional processor-memory systems. We analyze and discuss the dependence of learning performance on cycle-to-cycle variations and the number of gradual levels in the PCM analog memory devices.
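
    The training scheme can be sketched as one-step contrastive divergence (CD-1) with sign-only Hebbian/anti-Hebbian updates quantized to a limited number of conductance steps, loosely mimicking gradual PCM programming. The visible/hidden split, the number of levels, and the update details below are illustrative assumptions, not measured device behavior.

        import numpy as np

        rng = np.random.default_rng(3)

        n_visible, n_hidden = 9, 5            # 9 x 5 = 45 synapses; split is an illustrative choice
        n_levels = 16                         # assumed number of gradual conductance levels
        W = np.zeros((n_visible, n_hidden))   # effective weight, abstracting a G+ / G- device pair

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def sample(p):
            return (rng.random(p.shape) < p).astype(float)

        def cd1_step(v0, W, step=2.0 / n_levels):
            """One CD-1 step with sign-only updates (biases omitted for brevity):
            Hebbian (+step) where data correlations exceed model correlations,
            anti-Hebbian (-step) otherwise."""
            h0 = sample(sigmoid(v0 @ W))
            v1 = sample(sigmoid(h0 @ W.T))
            h1 = sigmoid(v1 @ W)
            grad = np.outer(v0, h0) - np.outer(v1, h1)
            W = W + step * np.sign(grad)      # gradual SET/RESET-like programming pulses
            return np.clip(W, -1.0, 1.0)      # bounded conductance range

        data = (rng.random((100, n_visible)) < 0.5).astype(float)  # toy binary patterns
        for epoch in range(30):
            for v in data:
                W = cd1_step(v, W)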
