Realizing In-Memory Baseband Processing for Ultra-Fast and Energy-Efficient 6G
To support emerging applications ranging from holographic communications to
extended reality, next-generation mobile wireless communication systems require
ultra-fast and energy-efficient baseband processors. Traditional complementary
metal-oxide-semiconductor (CMOS)-based baseband processors face two challenges:
the limits of transistor scaling and the von Neumann bottleneck. To address these
challenges, in-memory computing-based baseband processors using resistive
random-access memory (RRAM) present an attractive solution. In this paper, we
propose and demonstrate RRAM-implemented in-memory baseband processing for the
widely adopted multiple-input-multiple-output orthogonal frequency division
multiplexing (MIMO-OFDM) air interface. Its key feature is the one-step execution
of core operations, including the discrete Fourier transform (DFT) and MIMO
detection based on linear minimum mean square error (L-MMSE) and zero forcing (ZF).
In addition, an RRAM-based channel estimation module is proposed and discussed. By
prototyping and simulations, we demonstrate the feasibility of a full-fledged
RRAM-based communication system in hardware, and large-scale simulations reveal
that it can outperform state-of-the-art baseband processors with gains of 91.2x
in latency and 671x in energy efficiency. Our results pave a
potential pathway for RRAM-based in-memory computing to be implemented in the
era of the sixth generation (6G) mobile communications.
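For illustration, the one-step operations above are all matrix computations, which
is what makes them natural fits for an RRAM crossbar. Below is a minimal NumPy
sketch of the ZF and L-MMSE detectors (with the DFT noted in a comment), evaluated
digitally; the antenna counts, noise variance, and QPSK constellation are
illustrative assumptions, not parameters from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    Nt, Nr = 4, 8    # transmit/receive antenna counts (illustrative assumption)
    sigma2 = 0.01    # noise variance (illustrative assumption)

    # Random Rayleigh channel, QPSK transmit vector, and noisy observation
    H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
    x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), Nt)
    n = np.sqrt(sigma2 / 2) * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))
    y = H @ x + n

    # Zero forcing: x_zf = (H^H H)^{-1} H^H y, i.e. the channel pseudo-inverse
    x_zf = np.linalg.pinv(H) @ y

    # L-MMSE: x_mmse = (H^H H + sigma2 I)^{-1} H^H y, regularized by the noise power
    G = H.conj().T @ H + sigma2 * np.eye(Nt)
    x_mmse = np.linalg.solve(G, H.conj().T @ y)

    # The N-point DFT is likewise a fixed matrix-vector product: with
    # F[k, m] = exp(-2j * pi * k * m / N), the spectrum is F @ s, so the DFT
    # matrix can be programmed once into crossbar conductances.

On an RRAM crossbar, each of these matrix operations is carried out in the analog
domain in a single step rather than by iterative digital arithmetic, which is the
source of the latency and energy gains reported above.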
X-TIME: An in-memory engine for accelerating machine learning on tabular data with CAMs
Structured, or tabular, data is the most common format in data science. While
deep learning models have proven formidable in learning from unstructured data
such as images or speech, they are less accurate than simpler approaches when
learning from tabular data. In contrast, modern tree-based Machine Learning
(ML) models shine in extracting relevant information from structured data. An
essential requirement in data science is to reduce model inference latency in
cases where, for example, models are used in a closed loop with simulation to
accelerate scientific discovery. However, the hardware acceleration community
has mostly focused on deep neural networks and largely ignored other forms of
machine learning. Previous work has described the use of an analog content
addressable memory (CAM) component for efficiently mapping random forests. In
this work, we focus on an overall analog-digital architecture implementing a
novel increased-precision analog CAM and a programmable network-on-chip, enabling
inference of state-of-the-art tree-based ML models such as XGBoost and CatBoost.
Results evaluated for a single chip in 16 nm technology show 119x lower latency
and 9740x higher throughput compared with a state-of-the-art GPU, at a peak power
consumption of 19 W.
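As a rough illustration of the CAM mapping idea referenced above, each
root-to-leaf path of a decision tree can be stored as one CAM row holding a
per-feature value range; a lookup then activates the row whose ranges all match
the input, returning the leaf in a single parallel search. The sketch below
emulates this digitally for a hypothetical two-feature tree; the thresholds, leaf
values, and the serial loop standing in for the parallel analog match are all
assumptions for illustration.

    import numpy as np

    NEG, POS = -np.inf, np.inf

    # Hypothetical tree: if x0 <= 0.5: (if x1 <= 0.3: 0.0 else: 1.0) else: 0.5
    # Each row: (per-feature lower bounds, per-feature upper bounds, leaf value)
    cam_rows = [
        (np.array([NEG, NEG]), np.array([0.5, 0.3]), 0.0),
        (np.array([NEG, 0.3]), np.array([0.5, POS]), 1.0),
        (np.array([0.5, NEG]), np.array([POS, POS]), 0.5),
    ]

    def cam_lookup(x):
        # An analog CAM compares all rows in parallel; a loop emulates it here.
        for lo, hi, leaf in cam_rows:
            if np.all((x > lo) & (x <= hi)):
                return leaf
        return None  # unreachable for a well-formed tree: the ranges tile the space

    print(cam_lookup(np.array([0.2, 0.7])))  # matches the second row -> 1.0

An ensemble such as a random forest or XGBoost repeats this lookup once per tree
and aggregates the matched leaf values, the kind of routing and reduction a
programmable network-on-chip can provide.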