A computational study of sparse matrix storage schemes
xi, 76 leaves : ill. ; 29 cm.
The efficiency of linear algebra operations on sparse matrices on modern high-performance
computing systems is often constrained by the available memory bandwidth. We are interested
in sparse matrices whose sparsity pattern is unknown. In this thesis, we study the
efficiency of the major storage schemes for sparse matrices during multiplication with a dense
vector. A proper reordering of columns or rows usually reduces memory traffic
through improved data reuse. This thesis also proposes an efficient column-ordering
algorithm based on the binary reflected Gray code. Computational experiments show that this
ordering increases performance when computing the product of a sparse matrix with
a dense vector.
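The abstract names the two ingredients but not the exact algorithm, so the following is only a sketch: a standard CSR sparse matrix-vector product, plus a hypothetical Gray-code column ordering that sorts columns by the Gray-code rank of their row-pattern bitmask, so columns with similar sparsity patterns become adjacent and entries of the dense vector are reused while still cached. The function names and the ordering criterion are illustrative assumptions, not taken from the thesis.

```python
def csr_spmv(values, col_idx, row_ptr, x):
    """y = A @ x for a matrix A stored in CSR (compressed sparse row) format."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(row_ptr) - 1):
        # Nonzeros of row i occupy values[row_ptr[i]:row_ptr[i+1]].
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

def gray_rank(g):
    """Position of bit pattern g in the binary reflected Gray code sequence
    (the inverse Gray transform: b = g ^ (g >> 1) ^ (g >> 2) ^ ...)."""
    r = 0
    while g:
        r ^= g
        g >>= 1
    return r

def gray_column_order(col_patterns):
    """col_patterns: per-column collections of row indices holding nonzeros.
    Returns a column permutation sorted by Gray-code rank of each column's
    row-pattern bitmask (a hypothetical ordering criterion, used here only
    to illustrate grouping columns with similar patterns)."""
    def mask(rows):
        m = 0
        for r in rows:
            m |= 1 << r
        return m
    return sorted(range(len(col_patterns)),
                  key=lambda c: gray_rank(mask(col_patterns[c])))
```

For example, with column patterns `[{0}, {1}, {0, 1}]` the ordering places column 2 (pattern 11) between columns 0 (01) and 1 (10), since consecutive Gray-code ranks differ in few bits.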
On Tuning the Symmetric Sparse Matrix-Vector Multiplication with CSR and TJDS
In this work we present a heuristic to select the appropriate compressed storage format when computing the symmetric SpMV multiplication sequentially. A subset of symmetric sparse matrices was selected from the SPARSITY benchmark suite and extended with other matrices that we consider complementary. All matrices were collected from Matrix Market and the UF Sparse Matrix Collection. Experimental evidence shows that, given a symmetric sparse matrix, it may be possible to predict the more suitable format for computing the symmetric SpMV multiplication. According to our findings, a good rule of thumb is: if the average number of nonzero coefficients per column (row) is less than 3.5, then the symmetric SpMV multiplication runs up to 1.6× faster using the TJDS format compared to CSR.
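The rule of thumb above reduces to a one-line decision. A minimal sketch, with illustrative names not taken from the paper:

```python
def choose_format(nnz, n, threshold=3.5):
    """Suggest a storage format for symmetric SpMV following the reported
    rule of thumb: TJDS when the average number of stored nonzero
    coefficients per row (or column) is below the threshold, else CSR.

    nnz: number of stored nonzero coefficients
    n:   matrix dimension (number of rows/columns)
    """
    avg_per_row = nnz / n
    return "TJDS" if avg_per_row < threshold else "CSR"
```

For instance, a 10,000 × 10,000 symmetric matrix with 30,000 stored nonzeros averages 3.0 per row, so the heuristic suggests TJDS; with 50,000 stored nonzeros (5.0 per row) it suggests CSR.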