59 research outputs found
In-memory eigenvector computation in time O(1)
In-memory computing with crosspoint resistive memory arrays has attracted enormous attention as a means of accelerating the matrix-vector multiplications at the core of data-centric applications. By combining a crosspoint array with feedback amplifiers, it is possible to compute matrix eigenvectors in a single step, without algorithmic iterations. In this work, the time complexity of the eigenvector computation is investigated based on a feedback analysis of the crosspoint circuit. The results show that the computing time of the circuit is determined by the degree of mismatch of the eigenvalues implemented in the circuit, which controls the rising speed of the output voltages. For a dataset of random matrices, the time for computing the dominant eigenvector in the circuit is constant across matrix sizes, i.e., the time complexity is O(1). The O(1) time complexity is further supported by simulations of PageRank on real-world datasets. This work paves the way for fast, energy-efficient accelerators for eigenvector computation in a wide range of practical applications.
Comment: Accepted by Adv. Intell. Syst.
Time complexity of in-memory solution of linear systems
In-memory computing with crosspoint resistive memory arrays has been shown to accelerate data-centric computations, such as the training and inference of deep neural networks, thanks to the high parallelism endowed by the physical laws governing the electrical circuits. By connecting crosspoint arrays with negative feedback amplifiers, it is possible to solve linear algebraic problems, such as linear systems and matrix eigenvectors, in just one step. Based on the theory of feedback circuits, we study the dynamics of the solution of linear systems within a memory array, showing that the time complexity of the solution has no direct dependence on the problem size N; rather, it is governed by the minimal eigenvalue of a matrix associated with the coefficient matrix. We show that, when the linear system is modeled by a covariance matrix, the time complexity is O(log N) or O(1). In the case of sparse positive-definite linear systems, the time complexity is determined solely by the minimal eigenvalue of the coefficient matrix. These results demonstrate the high speed of the circuit for solving linear systems in a wide range of applications, thus supporting in-memory computing as a strong candidate for future big-data and machine learning accelerators.
Comment: Accepted by IEEE Trans. Electron Devices. The authors thank Scott Aaronson for helpful discussion about time complexity
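The size-independence claim can likewise be sketched numerically. The toy model below (an explicit-Euler gradient flow in Python; names are illustrative, and this is not the feedback circuit itself) integrates dx/dt = b - Ax: for positive-definite A the error relative to the solution A⁻¹b decays as exp(-λ_min t), so the settling time is governed by the minimal eigenvalue rather than by N.

```python
import numpy as np

def circuit_linear_solve(A, b, dt=0.05, tol=1e-8, max_steps=200_000):
    """Toy continuous-time model of in-memory linear-system solution:
    integrate dx/dt = b - A x (explicit Euler). For positive-definite
    A the error decays as exp(-lambda_min * t); dt must satisfy
    dt < 2 / lambda_max for the discrete iteration to be stable."""
    x = np.zeros_like(b)
    for step in range(max_steps):
        r = b - A @ x                  # residual drives the dynamics
        if np.linalg.norm(r) < tol:    # output has settled
            return x, step
        x = x + dt * r
    return x, max_steps
```

In this sketch, doubling λ_min (e.g., by adding a multiple of the identity to A) roughly halves the settling step count, while changing N at a fixed spectrum leaves it essentially unchanged, consistent with the analysis summarized above.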
Resistive Switching Device Technology Based on Silicon Oxide for Improved ON-OFF Ratio--Part II: Select Devices
The cross-point architecture for memory arrays is widely considered one of the most attractive solutions for storage and memory circuits thanks to its simplicity, scalability, and small cell size, and consequently its high density and low cost. Cost-scalable vertical 3-D cross-point architectures, in particular, offer the opportunity to challenge Flash memory with comparable density and cost. To develop scalable cross-point arrays, however, select devices with sufficient ON-OFF ratio, current capability, and endurance must be available. This paper presents a select device technology based on volatile resistive switching with Cu and Ag top electrodes and silicon oxide (SiOₓ) switching materials. The select device displays an ultrahigh resistance window and good current capability exceeding 2 MA cm⁻². A retention study shows a stochastic, voltage-dependent ON-OFF transition time in the 10 μs-1 ms range, which must be further optimized for fast memory operation in storage-class memory arrays.