210 research outputs found
Accelerate & Actualize: Can 2D Materials Bridge the Gap Between Neuromorphic Hardware and the Human Brain?
Two-dimensional (2D) materials present an exciting opportunity for devices
and systems beyond the von Neumann computing architecture paradigm due to their
diversity of electronic structure, physical properties, and atomically-thin,
van der Waals structures that enable ease of integration with conventional
electronic materials and silicon-based hardware. All major classes of
non-volatile memory (NVM) devices have been demonstrated using 2D materials,
including their operation as synaptic devices for applications in neuromorphic
computing hardware. Their atomically-thin structure, superior physical
properties, i.e., mechanical strength, electrical and thermal conductivity, as
well as gate-tunable electronic properties provide performance advantages and
novel functionality in NVM devices and systems. However, device performance and
variability as compared to incumbent materials and technology remain major
concerns for real applications. Ultimately, the progress of 2D materials as a
novel class of electronic materials and specifically their application in the
area of neuromorphic electronics will depend on their scalable synthesis in
thin-film form with desired crystal quality, defect density, and phase purity.
Comment: Neuromorphic Computing, 2D Materials, Heterostructures, Emerging Memory Devices, Resistive, Phase-Change, Ferroelectric, Ferromagnetic, Crossbar Array, Machine Learning, Deep Learning, Spiking Neural Network
Accuracy and Resiliency of Analog Compute-in-Memory Inference Engines
Recently, analog compute-in-memory (CIM) architectures based on emerging
analog non-volatile memory (NVM) technologies have been explored for deep
neural networks (DNN) to improve energy efficiency. Such architectures,
however, leverage charge conservation, an operation with infinite resolution,
and are thus susceptible to errors. Computations in DNNs realized with analog
NVM therefore carry high uncertainty due to device stochasticity. Several reports
have demonstrated the use of analog NVM for CIM at a limited scale, and it
remains unclear whether these computational uncertainties will prohibit large-scale
DNNs. To explore this critical issue of scalability, this paper first presents
a simulation framework to evaluate the feasibility of large-scale DNNs based on
CIM architecture and analog NVM. Simulation results show that DNNs trained for
high-precision digital computing engines are not resilient against the
uncertainty of the analog NVM devices. To avoid such catastrophic failures,
this paper introduces an analog floating-point representation for DNNs and
the Hessian-Aware Stochastic Gradient Descent (HA-SGD) training algorithm to
enhance the inference accuracy of trained DNNs. As a result of such
enhancements, DNNs such as Wide ResNets for the CIFAR-100 image recognition
problem are demonstrated to achieve significant improvements in accuracy
without adding cost to the inference hardware.
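As a rough illustration of why analog NVM stochasticity threatens inference accuracy, the sketch below injects multiplicative noise into the weights of a single layer at inference time. This is a minimal, hypothetical model in Python, not the paper's simulation framework; the noise form and the sigma values are assumptions chosen for illustration only.

```python
# Minimal sketch (not the paper's framework): modeling analog-NVM device
# stochasticity as multiplicative noise injected into DNN weights at
# inference time. The noise model and sigma values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def noisy_forward(x, W, b, sigma):
    """One linear+ReLU layer whose weights are perturbed per inference pass,
    mimicking cycle-to-cycle conductance variation of analog NVM cells."""
    W_dev = W * (1.0 + sigma * rng.standard_normal(W.shape))
    return np.maximum(x @ W_dev + b, 0.0)

# Toy comparison: output drift relative to the noise-free layer grows
# with the assumed device-noise level.
x = rng.standard_normal((8, 64))
W = rng.standard_normal((64, 10)) / np.sqrt(64)
b = np.zeros(10)
clean = np.maximum(x @ W + b, 0.0)
for sigma in (0.01, 0.05, 0.2):
    drift = np.mean([np.abs(noisy_forward(x, W, b, sigma) - clean).mean()
                     for _ in range(100)])
    print(f"sigma={sigma:.2f}  mean output drift={drift:.4f}")
```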
An Analog VLSI Deep Machine Learning Implementation
Machine learning systems provide automated data processing and see a wide range of applications. Direct processing of raw high-dimensional data such as images and video by machine learning systems is impractical, both due to prohibitive power consumption and the "curse of dimensionality," which makes learning tasks exponentially more difficult as dimension increases. Deep machine learning (DML) mimics the hierarchical representation of information in the human brain to achieve robust automated feature extraction, reducing the dimension of such data. However, the computational complexity of DML systems limits large-scale implementations in standard digital computers. Custom analog signal processing (ASP) can yield much higher energy efficiency than digital signal processing (DSP), presenting a means of overcoming these limitations.
The purpose of this work is to develop an analog implementation of a DML system.
First, an analog memory is proposed as an essential component of the learning system. It uses charge trapped on a floating gate to store analog values in a non-volatile way. The memory is compatible with a standard digital CMOS process and allows random-accessible, bi-directional updates without the need for an on-chip charge pump or high-voltage switch.
Second, architecture and circuits are developed to realize an online k-means clustering algorithm in analog signal processing. The system achieves automatic recognition of underlying data patterns and online extraction of their statistical parameters. This unsupervised learning system constitutes the computation node in the deep machine learning hierarchy.
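For reference, a minimal algorithmic sketch of the online k-means update that such a computation node realizes is given below. This is the standard sequential (MacQueen-style) k-means rule written in Python; the 1/count learning rate is a common textbook choice and is not a description of the actual analog circuit implementation.

```python
# Sketch of online (sequential) k-means: each incoming sample moves its
# nearest centroid toward itself by a decreasing learning rate.
import numpy as np

def online_kmeans(stream, k, dim, rng=np.random.default_rng(0)):
    centroids = rng.standard_normal((k, dim))  # random initialization (assumed)
    counts = np.zeros(k)
    for x in stream:
        j = np.argmin(((centroids - x) ** 2).sum(axis=1))  # nearest centroid
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]      # incremental mean update
    return centroids

# Toy usage: three Gaussian clusters presented one sample at a time.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(c, 0.1, size=(200, 2))
                  for c in ((0, 0), (3, 3), (0, 3))])
rng.shuffle(data)
print(online_kmeans(iter(data), k=3, dim=2))
```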
Third, a 3-layer, 7-node analog deep machine learning engine is designed, featuring online unsupervised trainability and non-volatile floating-gate analog storage. It utilizes a massively parallel, reconfigurable, current-mode analog architecture to realize efficient computation, and algorithm-level feedback is leveraged to provide robustness to circuit imperfections in analog signal processing. At a processing speed of 8300 input vectors per second, it achieves a peak energy efficiency of 1×10^12 operations per second per watt.
In addition, an ultra-low-power tunable bump circuit is presented to provide similarity measures in analog signal processing. It incorporates a novel wide-input-range tunable pseudo-differential transconductor. The circuit demonstrates tunability of bump center, width, and height with a power consumption significantly lower than that of previous works.
Neuromorphic computing using non-volatile memory
Dense crossbar arrays of non-volatile memory (NVM) devices represent one possible path for implementing massively parallel and highly energy-efficient neuromorphic computing systems. We first review recent advances in the application of NVM devices to three computing paradigms: spiking neural networks (SNNs), deep neural networks (DNNs), and "Memcomputing". In SNNs, NVM synaptic connections are updated by a local learning rule such as spike-timing-dependent plasticity, a computational approach directly inspired by biology. For DNNs, NVM arrays can represent matrices of synaptic weights, implementing the matrix-vector multiplication needed for algorithms such as backpropagation in an analog yet massively parallel fashion. This approach could provide significant improvements in power and speed compared to GPU-based DNN training, for applications of commercial significance. We then survey recent research in which different types of NVM devices, including phase-change memory, conductive-bridging RAM, filamentary and non-filamentary RRAM, and other NVMs, have been proposed, either as a synapse or as a neuron, for use within a neuromorphic computing application. The relevant virtues and limitations of these devices are assessed in terms of properties such as conductance dynamic range, (non)linearity and (a)symmetry of conductance response, retention, endurance, required switching power, and device variability.
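As background for how NVM crossbars realize matrix-vector multiplication, the sketch below shows the idealized computation: input voltages drive the rows, each cell contributes a current by Ohm's law, and column currents sum by Kirchhoff's current law, with a differential pair of conductances encoding each signed weight. This is a generic, idealized Python model (ignoring IR drop, device nonlinearity, and variability, which the survey discusses), not any specific device demonstration.

```python
# Idealized analog matrix-vector multiplication on an NVM crossbar:
# I = G * V per cell (Ohm's law), columns sum currents (Kirchhoff's law),
# and signed weights use differential conductances W = G_plus - G_minus.
import numpy as np

def crossbar_mvm(V, G_plus, G_minus):
    """V: row voltages (n,); G_plus/G_minus: conductances (n, m).
    Returns the m differential column currents, proportional to W^T V."""
    I_plus = V @ G_plus    # each positive column sums its cell currents
    I_minus = V @ G_minus  # each negative column sums its cell currents
    return I_plus - I_minus

# Check the crossbar result against a digital matrix-vector product.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
G_plus, G_minus = np.maximum(W, 0), np.maximum(-W, 0)  # signed weights -> conductance pair
V = rng.standard_normal(4)
assert np.allclose(crossbar_mvm(V, G_plus, G_minus), V @ W)
print(crossbar_mvm(V, G_plus, G_minus))
```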
- …