
Exploring the potential for accelerating sparse matrix-vector product on a Processing-in-Memory architecture

By Annahita Youssefi

Abstract

As the impact of memory access delays on performance has grown over the past few decades, researchers have begun exploring Processing-in-Memory (PIM) technology, which offers higher memory bandwidth, lower memory latency, and lower power consumption. In this study, we investigate whether an emerging PIM design from Sandia National Laboratories can boost performance for sparse matrix-vector product (SMVP). While SMVP is, in the best case, bandwidth-bound, factors related to matrix structure and representation also limit performance. We analyze SMVP in the context of both an AMD Opteron processor and the Sandia PIM, exploring the performance limiters for each and the degree to which these can be ameliorated by data and code transformations. Over a range of sparse matrices, SMVP on the PIM outperformed the Opteron by a factor of 1.82. On the PIM, computational kernel and data structure transformations improved performance by almost 40% over conventional implementations using compressed-sparse-row format.
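For context, the conventional baseline the abstract refers to is a compressed-sparse-row (CSR) SMVP kernel. A minimal C sketch of such a kernel is given below; the function name, signature, and index types are illustrative assumptions, not taken from the paper.

    #include <stddef.h>

    /* Conventional CSR sparse matrix-vector product: y = A * x.
     * row_ptr has n_rows + 1 entries; col_idx and vals hold the nonzeros
     * of each row contiguously. Illustrative sketch, not the paper's code. */
    void spmv_csr(size_t n_rows,
                  const size_t *row_ptr,
                  const size_t *col_idx,
                  const double *vals,
                  const double *x,
                  double *y)
    {
        for (size_t i = 0; i < n_rows; i++) {
            double sum = 0.0;
            /* Streaming reads of vals and col_idx plus irregular, indexed
             * reads of x are what make this kernel bandwidth-bound. */
            for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; k++) {
                sum += vals[k] * x[col_idx[k]];
            }
            y[i] = sum;
        }
    }

The low ratio of arithmetic to memory traffic in this loop, together with the indirect accesses through col_idx, is why matrix structure and representation matter as much as raw bandwidth.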

Topics: Computer science, Applied sciences
Year: 2009
OAI identifier: oai:scholarship.rice.edu:1911/61946
