
    In-situ Stochastic Training of MTJ Crossbar based Neural Networks

    Owing to high device density, scalability and non-volatility, Magnetic Tunnel Junction (MTJ)-based crossbars have garnered significant interest for implementing the weights of an artificial neural network. The existence of only two stable states in MTJs implies a high overhead of obtaining optimal binary weights in software. We illustrate that the inherent parallelism in the crossbar structure makes it highly appropriate for in-situ training, wherein the network is taught directly on the hardware. This leads to significantly smaller training overhead, as the training time is independent of the size of the network, while also circumventing the effects of alternate current paths in the crossbar and accounting for manufacturing variations in the device. We show how the stochastic switching characteristics of MTJs can be leveraged to perform probabilistic weight updates using the gradient descent algorithm. We describe how the update operations can be performed on crossbars both with and without access transistors, and perform simulations on them to demonstrate the effectiveness of our techniques. The results reveal that stochastically trained MTJ-crossbar NNs achieve a classification accuracy nearly the same as that of real-valued-weight networks trained in software, and exhibit immunity to device variations. Comment: Accepted for poster presentation at the 2018 ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED).
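The probabilistic update the abstract describes can be sketched as follows. This is a minimal NumPy illustration assuming the switching probability is simply the clipped gradient magnitude; the `scale` knob and the exact gradient-to-probability mapping are illustrative assumptions, not the paper's circuit model:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_update(weights, grad, scale=1.0):
    """Probabilistic binary weight update: each MTJ switches toward the
    state opposite the gradient's sign with probability proportional to
    the gradient magnitude (hypothetical mapping, clipped to [0, 1])."""
    p_switch = np.clip(scale * np.abs(grad), 0.0, 1.0)
    target = -np.sign(grad)            # desired binary state in {-1, +1}
    # switch only devices that are not already in the target state
    flip = (rng.random(weights.shape) < p_switch) & (weights != target)
    new_w = weights.copy()
    new_w[flip] = target[flip]
    return new_w

# toy example: 4 binary weights; a large gradient makes switching certain,
# a zero gradient leaves the device untouched
w = np.array([1.0, 1.0, -1.0, 1.0])
g = np.array([0.9, 0.0, 0.5, 1.0])
w_new = stochastic_update(w, g)
```

Because updates stay binary, no high-precision weight storage is needed on the crossbar; repeated stochastic updates approximate the real-valued gradient step in expectation.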

    Robust smoothing of left-censored time series data with a dynamic linear model to infer SARS-CoV-2 RNA concentrations in wastewater

    Wastewater sampling for the detection and monitoring of SARS-CoV-2 has been developed and applied at an unprecedented pace; however, uncertainty remains when interpreting the measured viral RNA signals and their spatiotemporal variation. The proliferation of measurements that are below a quantifiable threshold, usually during non-endemic periods, poses a further challenge to interpretation and time-series analysis of the data. Inspired by research in the use of a custom Kalman smoother model to estimate the true level of SARS-CoV-2 RNA concentrations in wastewater, we propose an alternative left-censored dynamic linear model. Cross-validation of both models alongside a simple moving average, using data from 286 sewage treatment works across England, allows for a comprehensive validation of the proposed approach. The presented dynamic linear model is more parsimonious, has a faster computational time, and is represented by a more flexible modelling framework than the equivalent Kalman smoother. Furthermore, we show how the use of wastewater data, transformed by such models, correlates more closely with regional case rate positivity as published by the Office for National Statistics (ONS) Coronavirus (COVID-19) Infection Survey. The modelled output is more robust and is therefore capable of better complementing traditional surveillance than untransformed data or a simple moving average, providing additional confidence and utility for public health decision making.
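As a rough illustration of the model class involved, here is a minimal local-level dynamic linear model with a Kalman-style update, where observations below the limit of quantification (LOQ) are simply treated as missing (prediction-only step). The paper's left-censored likelihood is more principled than this simplification, and all parameter values below are illustrative:

```python
def local_level_filter(y, loq, q=1.0, r=1.0, m0=0.0, c0=10.0):
    """Local-level DLM: state follows a random walk with variance q,
    observations have noise variance r. Values below `loq` (or None)
    are skipped -- a crude stand-in for a proper censored likelihood."""
    m, c = m0, c0
    out = []
    for obs in y:
        c = c + q                      # predict step: variance grows
        if obs is not None and obs >= loq:
            k = c / (c + r)            # Kalman gain
            m = m + k * (obs - m)      # update toward the observation
            c = (1 - k) * c
        out.append(m)                  # filtered state estimate
    return out

# toy series with one below-LOQ (missing) observation
out = local_level_filter([5.0, 6.0, None, 7.0], loq=1.0)
```

During censored stretches the state estimate carries forward (with growing uncertainty) instead of being dragged toward an arbitrary substitution value, which is the basic advantage over naive imputation.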

    Efficient FPGA implementation of local binary convolutional neural network

    Binarized Neural Networks (BNNs) have shown the capability to perform various classification tasks while taking advantage of computational simplicity and memory savings. The problem with BNNs, however, is low accuracy on large convolutional neural networks (CNNs). The Local Binary Convolutional Neural Network (LBCNN) compensates for the accuracy loss of BNNs by using standard convolutional layers together with binary convolutional layers, and can achieve accuracy as high as the standard AlexNet CNN. For the first time, we propose an FPGA hardware design architecture for LBCNN and address its unique challenges. We present a performance and resource usage predictor along with a design space exploration framework. Our architecture for LBCNN AlexNet shows 76.6% higher performance in terms of GOPS, and 2.6X and 2.7X higher performance density in terms of GOPS/Slice and GOPS/DSP respectively, compared to a previous FPGA implementation of the standard AlexNet CNN.
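The LBCNN building block referred to above can be sketched in NumPy: fixed, sparse, random binary anchor filters (never trained) followed by a learnable 1x1 combination of the response maps. Filter size, sparsity, and channel counts here are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def lbc_layer(x, n_anchor=8, sparsity=0.5):
    """Local binary convolution on a single-channel input:
    fixed sparse {-1, 0, +1} 3x3 anchor filters, ReLU, then a
    learnable 1x1 combination (the only trained parameters)."""
    h, w = x.shape
    anchors = rng.choice([-1.0, 0.0, 1.0], size=(n_anchor, 3, 3),
                         p=[sparsity / 2, 1 - sparsity, sparsity / 2])
    maps = np.zeros((n_anchor, h - 2, w - 2))
    for k in range(n_anchor):
        for i in range(h - 2):
            for j in range(w - 2):
                maps[k, i, j] = np.sum(x[i:i + 3, j:j + 3] * anchors[k])
    maps = np.maximum(maps, 0.0)             # non-linearity on binary responses
    weights_1x1 = rng.normal(size=n_anchor)  # learnable 1x1 weights (random here)
    return np.tensordot(weights_1x1, maps, axes=1)

y = lbc_layer(rng.normal(size=(8, 8)))
```

Because the binary anchor filters are fixed, only the cheap 1x1 weights need training and storage, which is what makes the layer attractive for an FPGA mapping.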

    In‐Memory Binary Vector–Matrix Multiplication Based on Complementary Resistive Switches

    In article number 2000134, Stephan Menzel and co-workers explore a computation-in-memory concept for binary vector-matrix multiplications based on complementary resistive switches. Experimental results on a small-scale demonstrator are shown, and the influence of device variability is investigated. The simulated inference of a 1-layer fully connected binary neural network trained on the MNIST data set resulted in an accuracy of nearly 86%.
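The complementary encoding can be illustrated with a small numeric model: each ±1 weight maps to a pair of low/high-resistance conductances on two lines, and the signed dot product is recovered from the difference of the two line currents. The conductance values below are illustrative, not device data from the article:

```python
import numpy as np

G_LRS, G_HRS = 1e-4, 1e-6  # illustrative low/high-resistance conductances (S)

def binary_vmm(x, W):
    """Binary vector-matrix multiply with complementary device pairs:
    weight +1 -> (LRS, HRS), weight -1 -> (HRS, LRS). The output is the
    difference of the positive- and negative-line column currents,
    normalized back to the ideal dot product."""
    Gp = np.where(W == 1, G_LRS, G_HRS)   # positive-line conductances
    Gn = np.where(W == 1, G_HRS, G_LRS)   # negative-line conductances
    i_pos = x @ Gp                        # Kirchhoff current summation
    i_neg = x @ Gn
    return (i_pos - i_neg) / (G_LRS - G_HRS)

x = np.array([1.0, 0.0, 1.0, 1.0])        # binary input voltages
W = np.array([[ 1, -1],
              [ 1,  1],
              [-1,  1],
              [ 1, -1]])
y = binary_vmm(x, W)
```

The differential read cancels the common-mode contribution of the high-resistance state, which is why the pair encoding tolerates a finite HRS conductance.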

    XoMA: Exclusive on-chip memory architecture for energy-efficient deep learning acceleration

    State-of-the-art deep neural networks (DNNs) require hundreds of millions of multiply-accumulate (MAC) computations to perform inference, e.g. in image-recognition tasks. To improve performance and energy efficiency, deep learning accelerators have been proposed, realized both on FPGAs and as custom ASICs. Generally, such accelerators comprise many parallel processing elements, capable of executing large numbers of concurrent MAC operations. From the energy perspective, however, most consumption arises from memory accesses, both to off-chip external memory and to on-chip buffers. In this paper, we propose an on-chip DNN co-processor architecture in which minimizing memory accesses is the primary design objective. To the maximum possible extent, off-chip memory accesses are eliminated, providing the lowest possible energy consumption for inference. Compared to a state-of-the-art ASIC, our architecture requires 36% fewer external memory accesses and 53% less energy consumption for low-latency image classification.
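A back-of-envelope calculation shows why memory access, not arithmetic, dominates the energy budget. The per-access costs below are rough, commonly cited 45 nm-class estimates, not measurements from this paper, and the access count is hypothetical:

```python
# Illustrative energy accounting: off-chip traffic vs. compute.
# All per-operation costs are rough literature-style figures (pJ),
# not numbers from the paper.
E_MAC  = 3.1     # pJ per 32-bit fixed-point multiply-accumulate (illustrative)
E_SRAM = 5.0     # pJ per 32-bit on-chip SRAM read (illustrative)
E_DRAM = 640.0   # pJ per 32-bit off-chip DRAM read (illustrative)

macs = 724e6            # AlexNet-scale inference: ~724 M MACs
dram_accesses = 50e6    # hypothetical off-chip word reads without reuse

e_compute = macs * E_MAC
e_offchip = dram_accesses * E_DRAM
ratio = e_offchip / e_compute   # off-chip traffic can dwarf compute energy
```

Even a modest number of DRAM reads costs an order of magnitude more than all the MACs combined, which is why eliminating off-chip accesses is a sensible primary design objective.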

    IMC: Energy-Efficient In-Memory Convolver for Accelerating Binarized Deep Neural Networks

    Deep Convolutional Neural Networks (CNNs) are widely employed in modern AI systems due to their unprecedented accuracy in object recognition and detection. However, it has been shown that the main bottleneck limiting the performance of large-scale deep CNN hardware implementations is massive data communication between processing units and off-chip memory. In this paper, we pave the way towards a novel concept of an in-memory convolver (IMC) that implements the dominant convolution computation within main memory, based on our proposed Spin-Orbit Torque Magnetic Random Access Memory (SOT-MRAM) array architecture, to greatly reduce data communication and thus accelerate Binary CNNs (BCNNs). The proposed architecture can simultaneously work as non-volatile memory and as reconfigurable in-memory logic (AND, OR) without add-on logic circuits to the memory chip, as in conventional logic-in-memory designs. The computed logic output can also simply be read out like a normal MRAM bit-cell using the shared memory peripheral circuits. We employ this intrinsic in-memory processing architecture to efficiently process data within memory, greatly reducing the power-hungry, long-distance data communication of state-of-the-art BCNN hardware. The hardware mapping results show that IMC can process the binarized AlexNet on the ImageNet data set favorably at 134.27 J/img, achieving ∼16× lower energy and 9× lower area, respectively, compared to an RRAM-based BCNN. Furthermore, a 21.5% reduction in data movement in terms of main memory accesses is observed compared to a CPU/DRAM baseline.
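Functionally, a binary convolution over {0,1} bit planes reduces to a bitwise AND followed by a popcount, which matches the in-memory (AND, OR) primitives described. A plain-Python sketch of that reduction, ignoring the analog array details and any {-1,+1} remapping a full BCNN would also need:

```python
import numpy as np

def bcnn_conv_and(x_bits, w_bits):
    """Binary convolution expressed as bitwise AND + popcount, the
    operations an in-memory (AND, OR) array can evaluate in place.
    Inputs are {0,1} arrays; no padding, stride 1."""
    h, w = x_bits.shape
    kh, kw = w_bits.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=int)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x_bits[i:i + kh, j:j + kw]
            out[i, j] = int(np.sum(patch & w_bits))  # popcount of AND
    return out

x = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 1]])
k = np.array([[1, 0],
              [0, 1]])
y = bcnn_conv_and(x, k)
```

Because every partial product is a single AND, the multiply-accumulate collapses into operations the memory array already supports, which is the core of the data-movement savings claimed above.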