PiDRAM: A Holistic End-to-end FPGA-based Framework for Processing-in-DRAM
Processing-using-memory (PuM) techniques leverage the analog operation of
memory cells to perform computation. Several recent works have demonstrated PuM
techniques in off-the-shelf DRAM devices. Since DRAM is the dominant memory
technology as main memory in current computing systems, these PuM techniques
represent an opportunity for alleviating the data movement bottleneck at very
low cost. However, system integration of PuM techniques imposes non-trivial
challenges that are yet to be solved. Design space exploration of potential
solutions to the PuM integration challenges requires appropriate tools to
develop necessary hardware and software components. Unfortunately, current
specialized DRAM-testing platforms, or system simulators do not provide the
flexibility and/or the holistic system view that is necessary to deal with PuM
integration challenges.
We design and develop PiDRAM, the first flexible end-to-end framework that
enables system integration studies and evaluation of real PuM techniques.
PiDRAM provides software and hardware components to rapidly integrate PuM
techniques across the whole system software and hardware stack (e.g., necessary
modifications in the operating system, memory controller). We implement PiDRAM
on an FPGA-based platform along with an open-source RISC-V system. Using
PiDRAM, we implement and evaluate two state-of-the-art PuM techniques: in-DRAM
(i) copy and initialization, (ii) true random number generation. Our results
show that the in-memory copy and initialization techniques can improve the
performance of bulk copy operations by 12.6x and bulk initialization operations
by 14.6x on a real system. Implementing the true random number generator
requires only 190 lines of Verilog and 74 lines of C code using PiDRAM's
software and hardware components.
Comment: To appear in ACM Transactions on Architecture and Code Optimization
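The in-DRAM copy mechanism this abstract evaluates works, at a high level, by latching a source row into the bank's row buffer and then writing that buffer into a destination row, so the data never crosses the memory bus. The following Python sketch models that behavior at row granularity; the `Bank` class and its method names are illustrative stand-ins, not PiDRAM's actual hardware or software interface:

```python
# Toy model of one DRAM bank: fixed-size rows sharing a single row buffer.
ROW_SIZE = 8  # bytes per row in this toy model (real DRAM rows are ~8 KiB)

class Bank:
    def __init__(self, num_rows):
        self.rows = [bytearray(ROW_SIZE) for _ in range(num_rows)]
        self.row_buffer = bytearray(ROW_SIZE)

    def activate(self, r):
        # ACTIVATE latches the addressed row into the row buffer.
        self.row_buffer[:] = self.rows[r]

    def write_back(self, r):
        # Models the destination row being overwritten from the row buffer.
        self.rows[r][:] = self.row_buffer

    def copy_row(self, src, dst):
        # An entire row is copied inside the bank, with no data moving
        # over the memory bus -- the source of the bulk-copy speedup
        # reported in the abstract.
        self.activate(src)
        self.write_back(dst)

    def init_row(self, zero_row, dst):
        # Bulk initialization: copy from a reserved all-zeros row.
        self.copy_row(zero_row, dst)

bank = Bank(num_rows=4)
bank.rows[0][:] = b"ABCDEFGH"
bank.copy_row(0, 2)
print(bytes(bank.rows[2]))  # b'ABCDEFGH'
```

Note that a real system must also handle virtual-to-physical mapping and cache coherence for the copied region, which is exactly the kind of end-to-end integration challenge PiDRAM is built to study.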
Accelerating Time Series Analysis via Processing using Non-Volatile Memories
Time Series Analysis (TSA) is a critical workload for consumer-facing
devices. Accelerating TSA is vital for many domains as it enables the
extraction of valuable information and the prediction of future events. The
state-of-the-art algorithm in TSA is the subsequence Dynamic Time Warping
(sDTW) algorithm. However, sDTW's computational complexity increases
quadratically with the time series' length, which has two performance
implications. First, the amount of data parallelism available far exceeds
the small number of processing units offered by commodity systems
(e.g., CPUs). Second, sDTW is memory-bound because it 1) has low
arithmetic intensity and 2) incurs a large memory footprint. To tackle these
two challenges, we leverage Processing-using-Memory (PuM) by performing in-situ
computation where data resides, using the memory cells. PuM provides a
promising solution to alleviate data movement bottlenecks and exposes immense
parallelism.
In this work, we present MATSA, the first MRAM-based Accelerator for Time
Series Analysis. The key idea is to exploit magneto-resistive memory crossbars
to enable energy-efficient and fast time series computation in memory. MATSA
provides the following key benefits: 1) it leverages high levels of parallelism
in the memory substrate by exploiting column-wise arithmetic operations, and 2)
it significantly reduces data movement costs by performing computation using
the memory cells. We evaluate three versions of MATSA to match the requirements
of different environments (e.g., embedded, desktop, or HPC computing) based on
MRAM technology trends. We perform a design space exploration and demonstrate
that our HPC version of MATSA can improve performance by 7.35x/6.15x/6.31x and
energy efficiency by 11.29x/4.21x/2.65x over server CPU, GPU, and PNM
architectures, respectively.
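The quadratic cost the abstract refers to comes from sDTW's dynamic-programming matrix: every cell combines a pointwise distance with the minimum of its left, upper, and diagonal neighbors. A minimal Python sketch of subsequence DTW (a standard textbook formulation, not MATSA's implementation):

```python
def sdtw(query, series):
    """Subsequence DTW: minimum-cost alignment of `query` against any
    subsequence of `series`. Runs in O(len(query) * len(series)) time,
    which is the quadratic scaling noted in the abstract."""
    n, m = len(query), len(series)
    INF = float("inf")
    # prev/cur each hold one DP column. The top row stays 0 so a match
    # may begin anywhere in the series (the "subsequence" part).
    prev = [0.0] + [INF] * n
    best = INF
    for j in range(m):
        cur = [0.0] * (n + 1)
        for i in range(1, n + 1):
            cost = abs(query[i - 1] - series[j])
            # Cells on the same anti-diagonal are mutually independent,
            # so a PuM substrate can update many of them concurrently
            # with column-wise arithmetic in the memory array.
            cur[i] = cost + min(prev[i], prev[i - 1], cur[i - 1])
        best = min(best, cur[n])
        prev = cur
    return best
```

For example, `sdtw([1, 2, 3], [5, 1, 2, 3, 5])` returns `0.0`, since the query appears exactly as a subsequence. The two-column rolling buffer keeps the working set small, but the full matrix of distance computations is what MATSA parallelizes in the crossbar.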
Accelerating Neural Network Inference with Processing-in-DRAM: From the Edge to the Cloud
Neural networks (NNs) are growing in importance and complexity. A neural
network's performance (and energy efficiency) can be bound either by
computation or memory resources. The processing-in-memory (PIM) paradigm, where
computation is placed near or within memory arrays, is a viable solution to
accelerate memory-bound NNs. However, PIM architectures come in many forms,
and different approaches lead to different trade-offs. Our goal is to analyze,
discuss, and contrast DRAM-based PIM architectures for NN performance and
energy efficiency. To do so, we analyze three state-of-the-art PIM
architectures: (1) UPMEM, which integrates processors and DRAM arrays into a
single 2D chip; (2) Mensa, a 3D-stack-based PIM architecture tailored for edge
devices; and (3) SIMDRAM, which uses the analog principles of DRAM to execute
bit-serial operations. Our analysis reveals that PIM greatly benefits
memory-bound NNs: (1) UPMEM provides 23x the performance of a high-end GPU when
the GPU requires memory oversubscription for a general matrix-vector
multiplication kernel; (2) Mensa improves energy efficiency and throughput by
3.0x and 3.1x over the Google Edge TPU for 24 Google edge NN models; and (3)
SIMDRAM outperforms a CPU/GPU by 16.7x/1.4x for three binary NNs. We conclude
that the ideal PIM architecture for NN models depends on a model's distinct
attributes, due to the inherent architectural design choices.
Comment: This is an extended and updated version of a paper published in IEEE
Micro, pp. 1-14, 29 Aug. 2022. arXiv admin note: text overlap with
arXiv:2109.1432
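The reason general matrix-vector multiplication (GEMV) favors PIM, as in the UPMEM result above, is its low arithmetic intensity: each matrix element is read once and used for exactly one multiply and one add, so caches cannot amortize the traffic. A back-of-the-envelope calculation (the formula is the standard flops-per-byte roofline metric, not taken from the paper):

```python
def gemv_arithmetic_intensity(m, n, bytes_per_elem=4):
    """FLOPs per byte moved for y = A @ x with an m x n matrix A.
    Intensity approaches 2 / bytes_per_elem regardless of matrix size,
    which is why GEMV is memory-bound and why placing compute inside
    DRAM pays off once the matrix no longer fits in GPU memory."""
    flops = 2 * m * n                               # one mul + one add per A element
    bytes_moved = (m * n + n + m) * bytes_per_elem  # A read once, plus x and y
    return flops / bytes_moved
```

With 4-byte elements the intensity is just under 0.5 FLOPs/byte, far below the roughly 10+ FLOPs/byte that modern GPUs need to reach peak compute throughput.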
Implementation and Evaluation of Deep Neural Networks in Commercially Available Processing in Memory Hardware
Deep Neural Networks (DNNs), specifically Convolutional Neural Networks (CNNs), are often associated with a large number of data-parallel computations. Therefore, data-centric computing paradigms, such as Processing in Memory (PIM), are being widely explored for CNN acceleration applications. A recent PIM architecture, developed and commercialized by the UPMEM company, has demonstrated an impressive performance boost over traditional CPU-based systems for a wide range of data-parallel applications. However, the application domain of CNN acceleration has yet to be explored on this PIM platform. In this work, successful implementations of CNNs on the UPMEM PIM system are presented. Furthermore, multiple operation mapping schemes with different optimization goals are explored. Based on the data obtained from the physical implementation of the CNNs on the UPMEM system, key takeaways for future implementations and further UPMEM improvements are presented. Finally, to compare UPMEM's performance with other PIMs, a model is proposed that can produce estimated performance results for PIM architectures given their architectural parameters. The creation and usage of the model are covered in this work.
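An operation-mapping scheme of the kind this abstract explores must decide how to split a CNN layer's work across UPMEM's many independent processing units (DPUs), each with private memory. One simple and common choice is to partition output channels as evenly as possible; the sketch below is an illustrative partitioner only, not the paper's actual mapping or the UPMEM SDK API:

```python
def partition_channels(num_out_channels, num_dpus):
    """Split a conv layer's output channels across PIM units as evenly
    as possible. Each unit then computes its channel slice using only
    locally held weights, with no inter-unit communication -- the
    data-parallel pattern that makes CNNs attractive PIM workloads."""
    base, extra = divmod(num_out_channels, num_dpus)
    chunks, start = [], 0
    for d in range(num_dpus):
        size = base + (1 if d < extra else 0)  # first `extra` units get one more
        chunks.append(range(start, start + size))
        start += size
    return chunks
```

For instance, 10 output channels over 4 units yield slices of sizes 3, 3, 2, 2. Different optimization goals (load balance vs. minimizing weight duplication vs. keeping activations local) lead to different partitionings, which is presumably what the abstract's multiple mapping schemes trade off.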