14 research outputs found

    In-Memory Principal Component Analysis by Analogue Closed-Loop Eigendecomposition

    Machine learning (ML) techniques such as principal component analysis (PCA) have become pivotal in enabling efficient processing of big data in a growing number of applications. However, the data-intensive computation in PCA causes large energy consumption in conventional von Neumann computers. In-memory computing (IMC) significantly improves throughput and energy efficiency by eliminating the physical separation between memory and processing units. Here, we present a novel closed-loop IMC circuit that computes the real eigenvalues and eigenvectors of a target matrix, enabling IMC-based acceleration of PCA. We benchmark its performance against a commercial GPU, achieving comparable accuracy and throughput while securing a ×10,000 improvement in energy efficiency and a ×100 to ×10,000 improvement in area efficiency. These results support IMC as a leading candidate architecture for energy-efficient ML accelerators.
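
    The numerical operation behind this abstract is standard PCA via eigendecomposition of a covariance matrix. The NumPy sketch below is illustrative only: the dataset, sizes, and variable names are assumptions, and nothing in it models the analogue closed-loop circuit itself; it simply shows the computation that the circuit performs in the analogue domain.

```python
# Illustrative sketch, not the authors' circuit model: the task accelerated by
# the closed-loop IMC circuit is the eigendecomposition of a covariance matrix,
# whose eigenvectors are the principal components used in PCA.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))        # toy dataset: 500 samples, 8 features
Xc = X - X.mean(axis=0)                  # center the data
C = Xc.T @ Xc / (len(Xc) - 1)            # sample covariance matrix (8 x 8)

# Real eigenvalues/eigenvectors of the symmetric covariance matrix; the analog
# circuit converges to these pairs in closed loop instead of computing them
# digitally.
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]        # rank components by explained variance
top2 = eigvecs[:, order[:2]]             # keep the two leading principal axes

scores = Xc @ top2                       # project the data onto those axes
print(scores.shape)                      # (500, 2)
```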

    Time complexity of in-memory solution of linear systems

    In-memory computing with crosspoint resistive memory arrays has been shown to accelerate data-centric computations such as the training and inference of deep neural networks, thanks to the high parallelism endowed by the physical rules of the electrical circuits. By connecting crosspoint arrays with negative-feedback amplifiers, it is possible to solve linear algebraic problems such as linear systems and matrix eigenvectors in just one step. Based on the theory of feedback circuits, we study the dynamics of the solution of linear systems within a memory array, showing that the time complexity of the solution has no direct dependence on the problem size N; rather, it is governed by the minimal eigenvalue of a matrix associated with the coefficient matrix. We show that, when the linear system is modeled by a covariance matrix, the time complexity is O(log N) or O(1). In the case of sparse positive-definite linear systems, the time complexity is determined solely by the minimal eigenvalue of the coefficient matrix. These results demonstrate the high speed of the circuit for solving linear systems in a wide range of applications, thus supporting in-memory computing as a strong candidate for future big data and machine learning accelerators.

    Comment: Accepted by IEEE Trans. Electron Devices. The authors thank Scott Aaronson for helpful discussion about time complexity.
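
    The headline claim, that settling time is governed by the smallest eigenvalue rather than by the problem size N, can be reproduced with a toy first-order model. The sketch below assumes idealized dynamics dv/dt = (b - A v)/tau; this assumption, along with the settle_time helper and the test matrices, is invented for illustration and does not reproduce the paper's circuit equations.

```python
# Toy model (an assumption, not the paper's exact circuit): idealized feedback
# dynamics dv/dt = (b - A v) / tau converge to v = A^{-1} b, with a settling
# time set by the minimal eigenvalue of A rather than by the size N.
import numpy as np

def settle_time(A, b, tau=1.0, dt=1e-3, tol=1e-6, t_max=200.0):
    """Integrate dv/dt = (b - A v)/tau with explicit Euler and return the time
    at which the residual ||A v - b|| drops below tol."""
    v = np.zeros_like(b)
    t = 0.0
    while t < t_max:
        v += dt / tau * (b - A @ v)
        t += dt
        if np.linalg.norm(A @ v - b) < tol:
            return t
    return t_max

rng = np.random.default_rng(1)
for N in (8, 64, 256):
    M = rng.standard_normal((N, N))
    A = M @ M.T / N + np.eye(N)          # positive definite, lambda_min >= 1
    b = rng.standard_normal(N)
    print(N, settle_time(A, b))          # settling time stays roughly flat in N
```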

    Reservoir Computing with Charge-Trap Memory Based on a MoS2 Channel for Neuromorphic Engineering

    Novel memory devices are essential for developing low-power, fast, and accurate in-memory computing and neuromorphic engineering concepts that can compete with conventional complementary metal-oxide-semiconductor (CMOS) digital processors. 2D semiconductors provide a novel device platform with atomic thickness, low-current operation, and capability of 3D integration. This work presents a charge-trap memory (CTM) device with a MoS2 channel, where memory operation arises from electron trapping/detrapping at interface states. Transistor operation, memory characteristics, and synaptic potentiation/depression for neuromorphic applications are demonstrated. The CTM device shows outstanding linearity of potentiation under applied drain pulses of equal amplitude. Finally, pattern recognition is demonstrated by reservoir computing, where the input pattern is applied as a stimulation of the MoS2-based CTMs and the output current after stimulation is processed by a feedforward readout network. The good accuracy, the low-current operation, and the robustness to random input bit flips make the CTM device a promising technology for future high-density neuromorphic computing concepts.
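
    The pipeline described in the last sentences (stimulate devices with an input pattern, then classify the resulting currents with a feedforward readout) is standard reservoir computing. The sketch below is a minimal software analogue under stated assumptions: the CTM current response is replaced by a fixed random tanh map, and the task, layer sizes, and flip probability are invented for illustration; only the trained linear readout mirrors the described readout network.

```python
# Minimal reservoir-computing sketch (assumed stand-in: the MoS2 CTM current
# response is replaced by a fixed random nonlinear map; only the trained
# readout layer mirrors the paper's feedforward readout network).
import numpy as np

rng = np.random.default_rng(2)
n_in, n_res, n_classes = 16, 64, 4

W_res = rng.standard_normal((n_res, n_in))        # fixed, untrained "reservoir"

def reservoir_state(pattern):
    # Stand-in for the output current of the stimulated CTM devices.
    return np.tanh(W_res @ pattern)

# Toy task: classify 4 reference bit patterns under random bit flips.
prototypes = rng.integers(0, 2, size=(n_classes, n_in)).astype(float)

def make_batch(n_per_class, flip_p=0.1):
    X, y = [], []
    for c, proto in enumerate(prototypes):
        for _ in range(n_per_class):
            flips = rng.random(n_in) < flip_p     # random input bit flips
            X.append(reservoir_state(np.abs(proto - flips)))
            y.append(c)
    return np.array(X), np.array(y)

X_train, y_train = make_batch(100)
T = np.eye(n_classes)[y_train]                    # one-hot targets
# Linear readout trained by ridge regression (least squares, small penalty).
W_out = np.linalg.solve(X_train.T @ X_train + 1e-3 * np.eye(n_res),
                        X_train.T @ T)

X_test, y_test = make_batch(50)
acc = (np.argmax(X_test @ W_out, axis=1) == y_test).mean()
print(f"test accuracy: {acc:.2f}")
```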

    A Generalized Block-Matrix Circuit for Closed-Loop Analog In-Memory Computing

    Matrix-based computing is ubiquitous in present-day machine learning applications such as neural networks, regression, and 5G communications. Conventional systems based on the von Neumann architecture are limited by the energy and latency bottleneck induced by the physical separation of the processing and memory units. In-memory computing (IMC) is a novel paradigm where computation is performed directly within the memory, thus eliminating the need for constant data transfer. IMC has shown exceptional throughput and energy efficiency when coupled with crosspoint arrays of resistive memory devices in open-loop matrix-vector multiplication and closed-loop inverse-matrix-vector multiplication (IMVM) accelerators. However, each application results in a different circuit topology, thus complicating the development of reconfigurable, general-purpose IMC systems. In this article, we present a generalized closed-loop IMVM circuit capable of performing any linear matrix operation by proper memory remapping. We derive closed-form equations for the ideal input-output transfer functions, static error, and dynamic behavior, introducing a novel continuous-time analytical model that allows orders-of-magnitude simulation speedup with respect to SPICE-based solvers. The proposed circuit represents an ideal candidate for general-purpose accelerators of machine learning.
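
    The idea of obtaining different linear operations from a single circuit by remapping the memory contents can be illustrated purely numerically. The sketch below is not the authors' remapping scheme; under that caveat, it shows how one augmented block matrix can embed a least-squares problem so that a single inverse-matrix-vector multiplication returns the regression solution.

```python
# Purely numerical illustration (not the authors' specific remapping): one
# block matrix can embed more than one linear operation. Here a least-squares
# problem min ||A x - b|| is solved by inverting a single augmented block
# matrix, the kind of reformulation a generalized IMVM circuit could host.
import numpy as np

rng = np.random.default_rng(3)
m, n = 6, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Block system:  [ I   A ] [r]   [b]        r = b - A x  (residual)
#                [ A^T 0 ] [x] = [0]   =>   A^T r = 0    (normal equations)
M = np.block([[np.eye(m), A],
              [A.T, np.zeros((n, n))]])
rhs = np.concatenate([b, np.zeros(n)])
sol = np.linalg.solve(M, rhs)
x_block = sol[m:]                           # least-squares solution

# Cross-check against the standard least-squares routine.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_block, x_ref))          # True
```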

    A Spiking Recurrent Neural Network with Phase Change Memory Synapses for Decision Making

    Pedretti G, Milo V, Hashemkhani S, et al. A Spiking Recurrent Neural Network with Phase Change Memory Synapses for Decision Making. Presented at the 2020 IEEE International Symposium on Circuits and Systems (ISCAS), Seville, Spain.