Optimizing Sparse Matrix-Vector Multiplication on Emerging Many-Core Architectures
Sparse matrix-vector multiplication (SpMV) is one of the most common
operations in scientific and high-performance applications, and is often the
performance bottleneck of such applications. While the sparse matrix
representation has a significant impact on the resulting application
performance, choosing the right representation typically relies on expert
knowledge and trial and error. This paper provides the first comprehensive
study on the impact of sparse matrix representations on two emerging many-core
architectures: Intel's Knights Landing (KNL) Xeon Phi and the ARM-based
FT-2000Plus (FTP). Our large-scale experiments involved over 9,500 distinct
profiling runs across 956 sparse datasets and five mainstream SpMV
representations. We show that the best sparse matrix representation depends on
the underlying architecture and the program input. To help developers choose
the optimal matrix representation, we employ machine learning to develop a
predictive model. Our model is first trained offline using a set of training
examples. The learned model can be used to predict the best matrix
representation for any unseen input on a given architecture. We show that our
model delivers on average 95% and 91% of the best available performance on KNL
and FTP, respectively, and it achieves this with no runtime profiling overhead.

Comment: Accepted to be published at HPCC 201
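To make the operation concrete, below is a minimal Python sketch of SpMV over the CSR (compressed sparse row) format. CSR is one widely used sparse representation; the abstract does not name the five formats the study covers, so the format choice, function name, and example matrix here are illustrative assumptions.

```python
# A minimal sketch of sparse matrix-vector multiplication (y = A @ x)
# using the CSR (compressed sparse row) representation. CSR is one
# common SpMV format; the five formats studied are not named in the abstract.
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """Multiply a CSR-encoded sparse matrix by a dense vector x."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        # The nonzeros of row i live in values[row_ptr[i]:row_ptr[i+1]].
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# Example: the 3x3 matrix [[4, 0, 9], [0, 7, 0], [5, 0, 0]]
values  = np.array([4.0, 9.0, 7.0, 5.0])
col_idx = np.array([0, 2, 1, 0])
row_ptr = np.array([0, 2, 3, 4])
x = np.array([1.0, 2.0, 3.0])
print(spmv_csr(values, col_idx, row_ptr, x))  # -> [31. 14. 5.]
```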
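The offline-training/online-prediction workflow described in the abstract could look roughly like the following hypothetical sketch, which trains a decision-tree classifier on per-matrix features to predict the fastest representation. The feature set, labels, and choice of classifier are assumptions for illustration; the abstract does not specify the paper's actual features or model.

```python
# Hypothetical sketch of the offline-training / online-prediction workflow
# from the abstract. The features, labels, and classifier are illustrative
# assumptions, not the model the paper actually uses.
from sklearn.tree import DecisionTreeClassifier

# Offline: each training example pairs matrix features (e.g. row count,
# nonzeros per row) with the representation that profiled fastest.
train_features = [
    [10_000, 5.2, 0.8],      # [rows, mean nnz/row, row-length variance]
    [2_000_000, 64.0, 310.5],
    [50_000, 3.1, 0.1],
]
best_format = ["CSR", "ELL", "COO"]  # fastest format per matrix (illustrative)

model = DecisionTreeClassifier().fit(train_features, best_format)

# Online: extract the same features from an unseen matrix and predict which
# representation to use -- no runtime profiling needed.
unseen = [[120_000, 4.8, 0.6]]
print(model.predict(unseen))  # e.g. ['CSR']
```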