
    In-memory computing with emerging memory devices: Status and outlook

    Supporting data for "In-memory computing with emerging memory devices: status and outlook", submitted to APL Machine Learning

    On-Chip Learning and Inference Acceleration of Sparse Representations

    The past decade has seen a tremendous surge in running machine learning (ML) functions on mobile devices, from mere novelty applications to now indispensable features of the next generation of devices. While mobile platform capabilities vary widely, long battery life and reliability are common design concerns that are crucial to remaining competitive. Consequently, state-of-the-art mobile platforms have become highly heterogeneous, combining powerful CPUs with GPUs to accelerate the computation of deep neural networks (DNNs), the most common structures used to perform ML operations. But traditional von Neumann architectures are not optimized for the high memory bandwidth and massively parallel computation that DNNs demand, which has propelled research into non-von Neumann architectures. Re-imagining computer architectures for efficient DNN computation requires focusing on the prohibitive demands DNNs present and alleviating them. The two central challenges for efficient computation are (1) the large memory storage and data movement due to DNN weights and (2) the massively parallel multiplications needed to compute the DNN output. Introducing sparsity into DNNs, where a certain percentage of either the weights or the outputs is zero, greatly helps with both challenges. Together with algorithm-hardware co-design to compress the DNNs, this is demonstrated to greatly reduce the power consumption of hardware that computes DNNs. Additionally, exploring emerging technologies such as non-volatile memories and 3-D stacking of silicon in conjunction with algorithm-hardware co-design will pave the way for the next generation of mobile devices. Toward these objectives, the specific contributions include (a) an architecture based on a resistive crosspoint array that can update all stored values and compute a matrix-vector multiplication in parallel within a single cycle, (b) a framework for training DNNs with block-wise sparsity to drastically reduce the memory storage and total number of computations required to compute the DNN output, (c) an exploration of hardware implementations of sparse DNNs and architectural guidelines to reduce power consumption for implementations in monolithic 3D integrated circuits, and (d) a prototype accelerator chip in 65 nm CMOS for long short-term memory networks trained with the proposed block-wise sparsity scheme. (Doctoral Dissertation, Electrical Engineering)
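
    To make the block-wise sparsity idea concrete, the minimal sketch below (not the dissertation's code; the block size, sparsity target, and magnitude-based tile scoring are assumptions) zeroes out whole weight tiles so that a crossbar or accelerator can skip entire sub-arrays of storage and multiply-accumulate work.

    # Illustrative block-wise pruning sketch: prune whole (block x block) tiles
    # of a weight matrix, keeping only the tiles with the largest L1 norm.
    import numpy as np

    def blockwise_prune(weights, block=(4, 4), sparsity=0.75):
        """Zero the fraction `sparsity` of tiles with the smallest L1 norm;
        return the pruned matrix and its binary mask."""
        rows, cols = weights.shape
        assert rows % block[0] == 0 and cols % block[1] == 0
        tiles = weights.reshape(rows // block[0], block[0], cols // block[1], block[1])
        norms = np.abs(tiles).sum(axis=(1, 3))              # one score per tile
        cutoff = np.quantile(norms, sparsity)               # prune the weakest tiles
        mask_tiles = (norms > cutoff).astype(weights.dtype)
        mask = np.kron(mask_tiles, np.ones(block, dtype=weights.dtype))
        return weights * mask, mask

    W = np.random.randn(16, 16)
    W_sparse, mask = blockwise_prune(W)
    print("kept fraction:", mask.mean())

    Pruning whole tiles rather than individual weights is what lets hardware skip work at the granularity of a crossbar sub-array instead of tracking arbitrary zero patterns.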

    Reconfigurable writing architecture for reliable RRAM operation in wide temperature ranges

    Resistive switching memories [resistive RAM (RRAM)] are an attractive alternative for nonvolatile storage and nonconventional computing systems, but their behavior strongly depends on the cell features, the driver circuit, and the working conditions. In particular, the circuit temperature and the writing voltage scheme become critical issues that determine the performance of resistive switching memories. These dependencies usually force a design-time tradeoff among reliability, device endurance, and power consumption, thereby imposing inflexible functioning schemes and limiting system performance. In this paper, we present a writing architecture that ensures correct operation regardless of the working temperature and allows application-oriented writing profiles to be loaded dynamically. Thus, by taking advantage of more efficient configurations, the system can be dynamically adapted to overcome intrinsic RRAM challenges. Several profiles are analyzed regarding power consumption, protection against temperature variations, and operation speed, showing speedups of nearly 700x compared with other published drivers.
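
    A minimal sketch of the general idea of dynamically loaded writing profiles, assuming a simple write-verify loop and a toy cell model; the profile entries, pulse parameters, and device interface below are illustrative assumptions rather than the paper's actual driver design.

    class ToyCell:
        """Toy RRAM cell: each SET pulse nudges resistance down; not a physical model."""
        def __init__(self, r_kohm=100.0):
            self.r = r_kohm
        def read_kohm(self):
            return self.r
        def apply_pulse(self, v_write, width_ns):
            self.r -= 0.4 * v_write * (width_ns / 100.0)   # crude, monotone SET behavior

    def select_pulse(profile, temp_c):
        """Pick the first profile entry whose temperature range covers temp_c."""
        for t_min, t_max, v_write, width_ns in profile:
            if t_min <= temp_c < t_max:
                return v_write, width_ns
        raise ValueError("no profile entry covers this temperature")

    def write_verify(cell, target_kohm, tol_kohm, profile, temp_c, max_pulses=16):
        """Apply pulses from the loaded profile until the read-back resistance verifies."""
        v_write, width_ns = select_pulse(profile, temp_c)
        for attempt in range(max_pulses):
            if abs(cell.read_kohm() - target_kohm) <= tol_kohm:
                return attempt                             # verified within tolerance
            cell.apply_pulse(v_write, width_ns)
            v_write *= 1.05                                # gentle amplitude ramp on retries
        raise RuntimeError("write-verify did not converge")

    # Example profile: colder cells get stronger pulses (purely illustrative numbers).
    low_power_profile = [(-40, 0, 2.2, 100), (0, 60, 1.8, 80), (60, 125, 1.5, 60)]
    cell = ToyCell()
    print("pulses used:", write_verify(cell, target_kohm=95.0, tol_kohm=1.0,
                                       profile=low_power_profile, temp_c=25.0))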

    High-Density Solid-State Memory Devices and Technologies

    This Special Issue aims to examine high-density solid-state memory devices and technologies from various standpoints in an attempt to foster their continued success in the future. Considering that a broadening range of applications will likely give different types of solid-state memories their chance in the spotlight, the Special Issue is not focused on a specific storage solution but rather embraces all the most relevant solid-state memory devices and technologies currently on stage. The subjects dealt with are correspondingly wide-ranging, spanning process and design issues and innovations, experimental and theoretical analysis of device operation, the performance and reliability of memory devices and arrays, and the exploitation of solid-state memories to pursue new computing paradigms.

    Analog Spiking Neuromorphic Circuits and Systems for Brain- and Nanotechnology-Inspired Cognitive Computing

    Human society now faces grand challenges in satisfying the growing demand for computing power while keeping energy consumption sustainable. With CMOS technology scaling coming to an end, innovations are required to tackle these challenges in a radically different way. Inspired by the emerging understanding of computing in the brain and by nanotechnology-enabled, biologically plausible synaptic plasticity, neuromorphic computing architectures are being investigated. A neuromorphic chip that combines CMOS analog spiking neurons with nanoscale resistive random-access memory (RRAM) used as electronic synapses can provide massive neural-network parallelism, high density, and online learning capability, and hence paves the path toward a promising solution for future energy-efficient real-time computing systems. However, existing silicon neuron approaches are designed to faithfully reproduce biological neuron dynamics and are therefore incompatible with RRAM synapses, or they require extensive peripheral circuitry to modulate a synapse and are thus deficient in learning capability. As a result, they eliminate most of the density advantages gained by adopting nanoscale devices and fail to realize a functional computing system. This dissertation describes novel hardware architectures and neuron circuit designs that synergistically assemble the fundamental and significant elements for brain-inspired computing. Versatile CMOS spiking neurons that combine integrate-and-fire behavior, drive capability for dense passive RRAM synapses, dynamic biasing for adaptive power consumption, in situ spike-timing-dependent plasticity (STDP), and competitive learning in compact integrated circuit modules are presented. Real-world pattern learning and recognition tasks using the proposed architecture were demonstrated with circuit-level simulations. A test chip was implemented and fabricated to verify the proposed CMOS neuron and hardware architecture, and the subsequent chip measurements successfully proved the idea. The work described in this dissertation realizes a key building block for large-scale integration of spiking neural network hardware and serves as a stepping stone toward next-generation energy-efficient brain-inspired cognitive computing systems.
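
    As an illustration of the kind of neuron-synapse interaction described above, the following sketch simulates a leaky integrate-and-fire neuron with pair-based STDP on weights standing in for RRAM conductances; all time constants, learning rates, and the software formulation itself are assumptions, not the chip's circuits or measured parameters.

    # Minimal leaky integrate-and-fire neuron with pair-based STDP (illustrative).
    import numpy as np

    def simulate_lif_stdp(spikes_in, w, dt=1e-3, tau_m=20e-3, v_th=1.0,
                          tau_pre=20e-3, tau_post=20e-3, a_plus=0.01, a_minus=0.012):
        """spikes_in: (T, N) binary input spike trains; w: (N,) synaptic weights."""
        v, post_trace = 0.0, 0.0
        pre_trace = np.zeros_like(w)            # recent presynaptic activity
        out_spikes = []
        for x in spikes_in:
            v += dt / tau_m * (-v) + np.dot(w, x)        # leak + weighted input
            pre_trace += -dt / tau_pre * pre_trace + x
            post_trace += -dt / tau_post * post_trace
            if v >= v_th:                                # postsynaptic spike
                out_spikes.append(1)
                v = 0.0
                post_trace += 1.0
                w += a_plus * pre_trace                  # potentiate recently active inputs
            else:
                out_spikes.append(0)
            w -= a_minus * post_trace * x                # depress inputs arriving after a spike
            np.clip(w, 0.0, 1.0, out=w)                  # keep "conductances" in a bounded range
        return np.array(out_spikes), w

    rng = np.random.default_rng(0)
    spikes = (rng.random((500, 32)) < 0.05).astype(float)
    out, w_final = simulate_lif_stdp(spikes, rng.random(32) * 0.2)
    print("output spike rate:", out.mean())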

    Efficient hardware implementations of bio-inspired networks

    The human brain, with its massive computational capability and power efficiency in a small form factor, continues to inspire the ultimate goal of building machines that can perform tasks without being explicitly programmed. In an effort to mimic the natural information-processing paradigms observed in the brain, several neural network generations have been proposed over the years. Among the neural networks inspired by biology, second-generation Artificial or Deep Neural Networks (ANNs/DNNs) use memoryless neuron models and have shown unprecedented success, surpassing humans in a wide variety of tasks. Unlike ANNs, third-generation Spiking Neural Networks (SNNs) closely mimic biological neurons by operating on discrete, sparse events in time called spikes, which are obtained by the time integration of previous inputs. Implementation of data-intensive neural network models on computers based on the von Neumann architecture is mainly limited by the continuous data transfer between the physically separated memory and processing units. Hence, non-von Neumann architectural solutions are essential for processing these memory-intensive bio-inspired neural networks in an energy-efficient manner. Among the non-von Neumann architectures, implementations employing non-volatile memory (NVM) devices are the most promising due to their compact size and low operating power. However, it is non-trivial to integrate these nanoscale devices on conventional computational substrates due to their non-idealities, such as limited dynamic range, finite bit resolution, and programming variability. This dissertation demonstrates architectural and algorithmic optimizations for implementing bio-inspired neural networks using emerging nanoscale devices. The first half of the dissertation focuses on the hardware acceleration of DNN implementations. A 4-layer stochastic DNN in a crossbar architecture with memristive devices at the cross points is analyzed for accelerating DNN training. This network is then used as a baseline to explore the impact of experimental memristive device behavior on network performance. Programming variability is found to play a critical role in determining network performance compared with the other non-ideal characteristics of the devices. In addition, noise-resilient inference engines are demonstrated using stochastic memristive DNNs with 100 bits for stochastic encoding during inference and 10 bits for the expensive training. The second half of the dissertation focuses on a novel probabilistic framework for SNNs that uses Generalized Linear Model (GLM) neurons to capture neuronal behavior. This work demonstrates that probabilistic SNNs achieve performance comparable to equivalent ANNs on two popular benchmarks: handwritten-digit classification and human activity recognition. Considering the potential of SNNs for energy-efficient implementations, a hardware accelerator for inference is proposed, termed the Spintronic Accelerator for Probabilistic SNNs (SpinAPS). The learning algorithm is optimized for a hardware-friendly implementation and uses a first-to-spike decoding scheme for low-latency inference. With binary spintronic synapses and digital CMOS logic neurons for computation, SpinAPS achieves a performance improvement of 4x in terms of GSOPS/W/mm^2 compared to a conventional SRAM-based design. Collectively, this work demonstrates the potential of emerging memory technologies in building energy-efficient hardware architectures for deep and spiking neural networks. The design strategies adopted in this work can be extended to other spike- and non-spike-based systems to build embedded solutions with power/energy constraints.
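
    To illustrate why programming variability matters for crossbar-based DNN hardware, the sketch below maps a signed weight matrix onto a differential pair of conductance columns and perturbs each programmed conductance with log-normal variability; the conductance range and the sigma values are illustrative assumptions, not the dissertation's measured device data.

    # Crossbar matrix-vector multiply with programming variability (illustrative).
    import numpy as np

    def crossbar_mvm(weights, x, g_min=1e-6, g_max=1e-4, sigma=0.1, rng=None):
        """Map signed weights onto a differential conductance pair, perturb each
        programmed conductance log-normally, and recover the dot product from the
        difference of the column currents."""
        rng = rng or np.random.default_rng()
        w_max = np.abs(weights).max()
        g_pos = g_min + (g_max - g_min) * np.clip(weights, 0, None) / w_max
        g_neg = g_min + (g_max - g_min) * np.clip(-weights, 0, None) / w_max
        noise = lambda g: g * rng.lognormal(mean=0.0, sigma=sigma, size=g.shape)
        i_pos, i_neg = noise(g_pos) @ x, noise(g_neg) @ x    # column currents
        return (i_pos - i_neg) * w_max / (g_max - g_min)     # rescale to weight units

    rng = np.random.default_rng(1)
    W, x = rng.standard_normal((64, 128)), rng.standard_normal(128)
    exact = W @ x
    for s in (0.01, 0.1):
        approx = crossbar_mvm(W, x, sigma=s, rng=rng)
        err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
        print(f"sigma={s:.2f}  relative error={err:.3f}")

    The differential mapping cancels the g_min offset exactly in the noiseless case, so any residual error in the printout comes from the device-to-device programming spread alone.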

    Memristive System Based Image Processing Technology: A Review and Perspective

    As the acquisition, transmission, storage and conversion of images become more efficient, image data are increasing explosively. At the same time, the limitations of conventional computational processing systems based on the von Neumann architecture continue to emerge, and thus improving the efficiency of image processing has become a key issue that has long concerned scholars working on images. Memristors, with their non-volatile, synapse-like, and integrated storage-and-computation properties, can be used to build intelligent processing systems that are closer to the structure and function of biological brains. They are also of great significance for constructing new intelligent image processing systems with non-von Neumann architectures and for achieving integrated storage and computation of image data. On this basis, this paper analyses the mathematical models of memristors and discusses their applications in conventional image processing based on memristive systems as well as image processing based on memristive neural networks, to investigate the potential of memristive systems in image processing. In addition, recent advances and implications of memristive system-based image processing are presented comprehensively, and its development opportunities and challenges in different major areas are explored as well. By establishing a complete spectrum of image processing technologies based on memristive systems, this review attempts to provide a reference for future studies in the field, and it is hoped that scholars can promote its development through interdisciplinary academic exchanges and cooperation. Funding: National Natural Science Foundation of China (Grants U1909201 and 62001149); Natural Science Foundation of Zhejiang Province (Grant LQ21F010009).
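
    One of the standard mathematical memristor models that reviews of this kind commonly start from is the HP linear ion-drift model (Strukov et al., 2008); the short sketch below integrates it under a sinusoidal drive to reproduce the characteristic pinched hysteresis loop. The parameter values are typical textbook numbers chosen only so the simulation runs, not values taken from this review.

    # Linear ion-drift memristor model, integrated with forward Euler (illustrative).
    import numpy as np

    def linear_drift_memristor(v_t, dt=1e-6, D=10e-9, mu_v=1e-14,
                               r_on=100.0, r_off=16e3, x0=0.5):
        """Integrate the state x = w/D under the voltage waveform v_t and return
        the resulting current waveform."""
        x = x0
        current = np.empty_like(v_t)
        for k, v in enumerate(v_t):
            r = r_on * x + r_off * (1.0 - x)     # series combination of doped/undoped regions
            i = v / r
            x += dt * mu_v * r_on / D**2 * i     # linear drift of the doped-region boundary
            x = min(max(x, 0.0), 1.0)            # hard state bounds
            current[k] = i
        return current

    t = np.arange(0, 2e-2, 1e-6)
    v = 1.2 * np.sin(2 * np.pi * 100 * t)        # 100 Hz sinusoidal drive
    i = linear_drift_memristor(v)
    print(f"peak current {i.max()*1e3:.2f} mA, min current {i.min()*1e3:.2f} mA")

    Plotting i against v would show the pinched hysteresis loop that is the signature memristor fingerprint exploited by the image-processing circuits surveyed above.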

    Investigating ferroelectric and metal-insulator phase transition devices for neuromorphic computing

    Neuromorphic computing has been proposed to accelerate the computation of deep neural networks (DNNs). The objective of this thesis is to investigate ferroelectric and metal-insulator phase-transition devices for neuromorphic computing. The thesis proposes and experimentally demonstrates a drain-erase scheme in FeFETs that enables individual-cell program/erase/inhibit for in-situ training in a 3D NAND-like FeFET array. To achieve multi-level states for analog in-memory computing, the ferroelectric thin film needs to be partially switched. The thesis identifies a new challenge of ferroelectric partial switching, namely a "history effect" in the minor-loop dynamics. Experimental characterization of both FeCaps and FeFETs validates the history effect, suggesting that the programming condition for intermediate states depends on the prior states the device has gone through. A phase-field model was constructed to understand its origin. The history effect was then incorporated into FeFET-based neural network simulations to analyze its negative impact on training accuracy, and a possible mitigation strategy is proposed. Apart from using FeFETs as synaptic devices, using a metal-insulator phase-transition device as a neuron was also explored experimentally: a NbOx metal-insulator phase-transition threshold switch was integrated at the edge of the crossbar array as an oscillation neuron. One promising application of the FeFET+NbOx neuromorphic system is implementing quantum error correction (QEC) circuitry at 4 K. Cryo-NeuroSim, a device-to-system modeling framework calibrated with data at cryogenic temperatures, was developed to benchmark the performance of the FeFET+NbOx neuromorphic system. (Ph.D. Dissertation)
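
    As a rough illustration of the oscillation-neuron concept mentioned above, the sketch below models a threshold switch (a two-state stand-in for a NbOx device, with threshold and hold voltages) in parallel with a membrane capacitor charged through a crossbar column resistance, so that the number of oscillation cycles encodes the column current; all device values and the two-state switch model are assumptions, not the thesis's measured NbOx characteristics.

    # Oscillation-neuron sketch: threshold switch + capacitor charged through a column.
    def oscillation_neuron(r_column, v_dd=2.0, c=1e-12, r_off=1e6, r_on=1e3,
                           v_th=1.2, v_hold=0.4, dt=1e-9, t_end=5e-6):
        """Return the number of oscillation cycles for a given column resistance."""
        v_c, switch_on, cycles = 0.0, False, 0
        for _ in range(int(t_end / dt)):
            r_sw = r_on if switch_on else r_off
            i_in = (v_dd - v_c) / r_column        # charge through the crossbar column
            i_sw = v_c / r_sw                     # discharge through the threshold switch
            v_c += dt / c * (i_in - i_sw)
            if not switch_on and v_c >= v_th:
                switch_on = True
                cycles += 1                       # one firing event per threshold crossing
            elif switch_on and v_c <= v_hold:
                switch_on = False
        return cycles

    # Smaller column resistance (larger dot-product current) -> more cycles.
    for r in (50e3, 100e3, 200e3):
        print(f"R_column = {r/1e3:.0f} kOhm -> {oscillation_neuron(r)} cycles")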