CMOS + stochastic nanomagnets: heterogeneous computers for probabilistic inference and learning
Extending Moore's law by augmenting complementary metal-oxide-semiconductor (CMOS) transistors with emerging nanotechnologies (X) has become increasingly important. Accelerating Monte Carlo algorithms that rely on random sampling with such CMOS+X technologies could have a significant impact on fields ranging from probabilistic machine learning and optimization to quantum simulation. In this paper, we combine stochastic magnetic tunnel junction (sMTJ)-based probabilistic bits (p-bits) with versatile Field Programmable Gate Arrays (FPGAs) to design a CMOS + X (X = sMTJ) prototype. Our approach enables the high-quality true randomness that is essential for Monte Carlo-based probabilistic sampling and learning. Our heterogeneous computer successfully performs probabilistic inference and asynchronous Boltzmann learning despite device-to-device variations in sMTJs. A comprehensive comparison using a CMOS predictive process design kit (PDK) reveals that compact sMTJ-based p-bits replace 10,000 transistors while dissipating two orders of magnitude less energy (2 fJ per random bit) compared to digital CMOS p-bits. Scaled and integrated versions of our CMOS + stochastic nanomagnet approach can significantly advance probabilistic computing and its applications in various domains by providing massively parallel and truly random numbers with extremely high throughput and energy efficiency.
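The p-bit sampling the abstract refers to follows a standard formulation: each bit updates as m_i = sgn(tanh(beta * I_i) - r), where I_i = sum_j J_ij m_j + h_i and r is uniform in (-1, 1), which realizes Gibbs sampling of a Boltzmann distribution. The sketch below is a software stand-in for intuition only, not the paper's sMTJ/FPGA hardware; the function name and the two-p-bit example network are illustrative.

```python
import numpy as np

def pbit_gibbs_sample(J, h, steps, beta=1.0, rng=None):
    """Sequential Gibbs sampling with bipolar (+/-1) p-bits.

    Each p-bit updates as m_i = sgn(tanh(beta * I_i) - r), where
    I_i = J[i] @ m + h[i] and r is uniform in (-1, 1), so that
    P(m_i = +1) = sigmoid(2 * beta * I_i).
    J: symmetric coupling matrix (zero diagonal); h: bias vector.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(h)
    m = rng.choice([-1.0, 1.0], size=n)  # random bipolar initial state
    samples = []
    for _ in range(steps):
        for i in range(n):            # sweep the p-bits one at a time
            I = J[i] @ m + h[i]       # local input to p-bit i
            m[i] = np.sign(np.tanh(beta * I) - rng.uniform(-1.0, 1.0))
        samples.append(m.copy())
    return np.array(samples)

# Two ferromagnetically coupled p-bits: the aligned states (+1, +1)
# and (-1, -1) should dominate the Boltzmann distribution.
J = np.array([[0.0, 1.0], [1.0, 0.0]])
h = np.zeros(2)
S = pbit_gibbs_sample(J, h, steps=2000)
agree = np.mean(S[:, 0] == S[:, 1])
print(f"fraction of aligned samples: {agree:.2f}")
```

With this coupling, theory gives an aligned-state probability of e / (e + 1/e), roughly 0.88, so the empirical fraction should land near that value; the sequential sweep here is one valid update order among many.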
CMOS plus stochastic nanomagnets enabling heterogeneous computers for probabilistic inference and learning.
Extending Moore's law by augmenting complementary metal-oxide-semiconductor (CMOS) transistors with emerging nanotechnologies (X) has become increasingly important. One important class of problems involves sampling-based Monte Carlo algorithms used in probabilistic machine learning, optimization, and quantum simulation. Here, we combine stochastic magnetic tunnel junction (sMTJ)-based probabilistic bits (p-bits) with Field Programmable Gate Arrays (FPGAs) to create an energy-efficient CMOS + X (X = sMTJ) prototype. This setup shows how asynchronously driven CMOS circuits controlled by sMTJs can perform probabilistic inference and learning by leveraging the algorithmic update-order invariance of Gibbs sampling. We show how the stochasticity of sMTJs can augment low-quality random number generators (RNGs). Detailed transistor-level comparisons reveal that sMTJ-based p-bits can replace up to 10,000 CMOS transistors while dissipating two orders of magnitude less energy. Integrated versions of our approach can advance probabilistic computing involving deep Boltzmann machines and other energy-based learning algorithms with extremely high throughput and energy efficiency.
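The RNG-augmentation idea can be illustrated in software. One common whitening approach (an assumption here; the paper's exact circuit may differ) is to XOR a deterministic pseudorandom stream with the true-random sMTJ stream: the sMTJ injects physical entropy into the deterministic generator, while XOR-combining suppresses the bias introduced by sMTJ device-to-device variation. The LFSR, the Bernoulli sMTJ model, and all names below are illustrative.

```python
import numpy as np

def lfsr16_bits(n, seed=0xACE1):
    """Low-quality RNG: deterministic 16-bit Fibonacci LFSR bit stream
    (taps at bits 16, 14, 13, 11; maximal period 65535)."""
    state, out = seed, []
    for _ in range(n):
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        out.append(state & 1)
    return np.array(out)

def smtj_bits(n, p_one=0.58, rng=None):
    """Model an sMTJ as a Bernoulli source, deliberately biased
    (p_one != 0.5) to mimic device-to-device variation."""
    if rng is None:
        rng = np.random.default_rng(1)
    return (rng.random(n) < p_one).astype(int)

n = 100_000
weak = lfsr16_bits(n)       # balanced but fully deterministic
device = smtj_bits(n)       # truly random but biased
whitened = weak ^ device    # XOR-combine the two streams

# The XOR output's bias is the product of the input biases (scaled by 2),
# so it is far closer to 0.5 than the biased sMTJ stream alone.
for name, stream in [("lfsr", weak), ("smtj", device), ("xor", whitened)]:
    print(f"{name}: mean = {stream.mean():.4f}")
```

The complementarity is the point: the LFSR alone would repeat after 65535 bits, and the sMTJ alone is biased, but the XOR-combined stream is both unpredictable and nearly unbiased.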