588 research outputs found

    Compression and load balancing for efficient sparse matrix-vector product on multicore processors and graphics processing units

    Full text link
    [EN] We contribute to the optimization of the sparse matrix-vector product by introducing a variant of the coordinate sparse matrix format that balances the workload distribution and compresses both the indexing arrays and the numerical information. Our approach is multi-platform, in the sense that the realizations for (general-purpose) multicore processors as well as graphics accelerators (GPUs) are built upon common principles, but differ in the implementation details, which are adapted to avoid thread divergence in the GPU case or to maximize element-wise compression (i.e., for each matrix entry) on multicore architectures. Our evaluation on the two last generations of NVIDIA GPUs as well as Intel and AMD processors demonstrates the benefits of the new kernels when compared with the optimized implementations of the sparse matrix-vector product in NVIDIA's cuSPARSE and Intel's MKL, respectively.

    J. I. Aliaga, E. S. Quintana-Ortí, and A. E. Tomás were supported by TIN2017-82972-R of the Spanish MINECO. H. Anzt and T. Grützmacher were supported by the Impuls und Vernetzungsfond of the Helmholtz Association under grant VH-NG-1241 and by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. The authors would like to thank the Steinbuch Centre for Computing (SCC) of the Karlsruhe Institute of Technology for providing access to an NVIDIA A100 GPU.

    Aliaga, JI.; Anzt, H.; Grützmacher, T.; Quintana-Ortí, ES.; Tomás Domínguez, AE. (2022). Compression and load balancing for efficient sparse matrix-vector product on multicore processors and graphics processing units. Concurrency and Computation: Practice and Experience. 34(14):1-13. https://doi.org/10.1002/cpe.6515
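    The load-balancing idea above can be illustrated with a short sketch: instead of assigning whole rows to workers (which skews the work toward dense rows), the COO nonzero stream is split into equally sized chunks, one per worker. This is a simplified, sequential emulation using plain Python lists, not the paper's compressed multi-platform kernels.

```python
def balanced_coo_spmv(rows, cols, vals, x, n_rows, n_chunks):
    """Compute y = A @ x from COO arrays, distributing nonzeros evenly.

    Each of the n_chunks workers processes ~nnz/n_chunks entries,
    regardless of how the nonzeros are distributed across rows.
    """
    y = [0.0] * n_rows
    nnz = len(vals)
    chunk = (nnz + n_chunks - 1) // n_chunks      # ceiling division
    for c in range(n_chunks):                     # one pass per "worker"
        lo, hi = c * chunk, min((c + 1) * chunk, nnz)
        for k in range(lo, hi):
            y[rows[k]] += vals[k] * x[cols[k]]
    return y
```

    In a real parallel kernel the chunks run concurrently, so two chunks may touch the same output row; the paper's GPU and CPU realizations handle that (and the index/value compression) in ways this sketch does not model.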

    Performance modeling of the sparse matrix-vector product via convolutional neural networks

    Full text link
    [EN] Modeling the execution time of the sparse matrix-vector multiplication (SpMV) on a current CPU architecture is especially complex due to (i) irregular memory accesses; (ii) indirect memory referencing; and (iii) low arithmetic intensity. While analytical models may yield accurate estimates for the total number of cache hits/misses, they often fail to accurately predict the total execution time. In this paper, we depart from the analytic approach to instead leverage convolutional neural networks (CNNs) in order to provide an effective estimation of the performance of the SpMV operation. For this purpose, we present a high-level abstraction of the sparsity pattern of the problem matrix and propose a blockwise strategy to feed the CNN models by blocks of nonzero elements. The experimental evaluation on a representative subset of the matrices from the SuiteSparse Matrix collection demonstrates the robustness of the CNN models for predicting the SpMV performance on an Intel Haswell core. Furthermore, we show how to generalize the network models to other target architectures to estimate the performance of SpMV on an ARM A57 core.

    This work was supported by project TIN2017-82972-R from the MINECO, Spain. Manuel F. Dolz was also supported by the Plan GenT project CDEIGENT/2018/014 from the Generalitat Valenciana, Spain. Maria Barreda was also supported by the POSDOC-A/2017/11 project from the Universitat Jaume I.

    Barreda, M.; Dolz, MF.; Castaño Alvarez, MA.; Alonso-Jordá, P.; Quintana-Ortí, ES. (2020). Performance modeling of the sparse matrix-vector product via convolutional neural networks. The Journal of Supercomputing (Online). 76(11):8883-8900. https://doi.org/10.1007/s11227-020-03186-1
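    One plausible reading of the blockwise strategy described above is to rasterize the sparsity pattern into a fixed-size grid of per-block nonzero densities, giving the CNN a size-independent, image-like input. The block size and the density feature below are illustrative assumptions, not the paper's exact encoding.

```python
def blockwise_density(coords, n, block=4):
    """Map an n x n sparsity pattern onto an (n//block) x (n//block)
    grid of nonzero densities, one cell per block of entries."""
    g = n // block
    grid = [[0] * g for _ in range(g)]
    for i, j in coords:                       # count nonzeros per block
        grid[min(i // block, g - 1)][min(j // block, g - 1)] += 1
    area = block * block
    return [[c / area for c in row] for row in grid]
```

    Feeding fixed-size tiles like these (rather than the raw pattern) is what lets a single CNN architecture handle matrices of arbitrary dimension.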

    From Sparse Matrix to Optimal GPU CUDA Sparse Matrix Vector Product Implementation

    Get PDF
    The CUDA model for GPUs presents the programmer with a plethora of different programming options. These include different memory types, different memory access methods, and different data types. Identifying which options to use, and when, is a non-trivial task.

    Balanced and Compressed Coordinate Layout for the Sparse Matrix-Vector Product on GPUs

    Get PDF
    We contribute to the optimization of the sparse matrix-vector product on graphics processing units by introducing a variant of the coordinate sparse matrix layout that compresses the integer representation of the matrix indices. In addition, we employ a look-ahead table to avoid the storage of repeated numerical values in the sparse matrix, yielding a more compact data representation that is easier to maintain in the cache. Our evaluation on the two most recent generations of NVIDIA GPUs, the V100 and the A100 architectures, shows considerable performance improvements over the kernels for the sparse matrix-vector product in cuSPARSE (CUDA 11.0.167).

    This work was partially sponsored by the EU H2020 project 732631 OPRECOMP and project TIN2017-82972-R of the Spanish MINECO. Hartwig Anzt and Yuhsiang M. Tsai were supported by the “Impuls und Vernetzungsfond” of the Helmholtz Association under grant VH-NG-1241 and by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. The authors would like to thank the Steinbuch Centre for Computing (SCC) of the Karlsruhe Institute of Technology for providing access to an NVIDIA A100 GPU.
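    A minimal sketch of the value-deduplication idea: repeated numerical values are stored once in a small table, and each nonzero keeps only an index into it. The function name and the dict-based construction are illustrative; the paper's look-ahead table and the integer-index compression in the GPU kernels are not modeled here.

```python
def compress_values(vals):
    """Return (table, idx) such that [table[i] for i in idx] == vals.

    Matrices with few distinct values (e.g. stencil discretizations)
    shrink to a short table plus narrow integer indices.
    """
    table, pos, idx = [], {}, []
    for v in vals:
        if v not in pos:              # first occurrence: append to table
            pos[v] = len(table)
            table.append(v)
        idx.append(pos[v])
    return table, idx
```

    When the table is small enough to stay resident in cache, each nonzero effectively costs only its (compressed) index rather than a full floating-point value.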

    Parallel structurally-symmetric sparse matrix-vector products on multi-core processors

    Full text link
    We consider the problem of developing an efficient multi-threaded implementation of the matrix-vector multiplication algorithm for sparse matrices with structural symmetry. Matrices are stored using the compressed sparse row-column format (CSRC), designed to profit from the symmetric non-zero pattern observed in global finite element matrices. Unlike classical compressed storage formats, performing the sparse matrix-vector product using the CSRC requires thread-safe access to the destination vector. To avoid race conditions, we have implemented two partitioning strategies. In the first one, each thread allocates an array for storing its contributions, which are later combined in an accumulation step. We analyze how to perform this accumulation in four different ways. The second strategy employs a coloring algorithm for grouping rows that can be concurrently processed by threads. Our results indicate that, although incurring an increase in the working set size, the former approach leads to the best performance improvements for most matrices.

    Comment: 17 pages, 17 figures, reviewed related work section, fixed typo
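    The first partitioning strategy can be sketched as follows, emulated sequentially: each "thread" accumulates into a private copy of the destination vector, and the copies are summed in a final reduction step. A symmetric kernel applies each stored entry a_ij to both y[i] and y[j], which is exactly why private buffers (or the coloring strategy) are needed. The entry layout and round-robin distribution are illustrative assumptions.

```python
def symmetric_spmv_private(entries, diag, x, n_threads=2):
    """y = A @ x for symmetric A, given the diagonal and the
    strictly-lower entries as (i, j, v) triples with i > j."""
    n = len(diag)
    bufs = [[0.0] * n for _ in range(n_threads)]  # one buffer per thread
    for k, (i, j, v) in enumerate(entries):
        b = bufs[k % n_threads]       # round-robin work distribution
        b[i] += v * x[j]              # contribution of a_ij
        b[j] += v * x[i]              # symmetric contribution of a_ji
    # accumulation step: combine diagonal and all per-thread buffers
    return [diag[i] * x[i] + sum(b[i] for b in bufs) for i in range(n)]
```

    The working-set increase the abstract mentions is visible here: the private buffers cost n_threads extra copies of y, traded against lock-free updates.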

    Solving Lattice QCD systems of equations using mixed precision solvers on GPUs

    Full text link
    Modern graphics hardware is designed for highly parallel numerical tasks and promises significant cost and performance benefits for many scientific applications. One such application is lattice quantum chromodynamics (lattice QCD), where the main computational challenge is to efficiently solve the discretized Dirac equation in the presence of an SU(3) gauge field. Using NVIDIA's CUDA platform we have implemented a Wilson-Dirac sparse matrix-vector product that performs at up to 40 Gflops, 135 Gflops and 212 Gflops for double, single and half precision respectively on NVIDIA's GeForce GTX 280 GPU. We have developed a new mixed precision approach for Krylov solvers using reliable updates which allows for full double precision accuracy while using only single or half precision arithmetic for the bulk of the computation. The resulting BiCGstab and CG solvers run in excess of 100 Gflops and, in terms of iterations until convergence, perform better than the usual defect-correction approach for mixed precision.

    Comment: 30 pages, 7 figures
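    The reliable-updates idea — iterate in low precision, but periodically recompute the true residual in full precision — can be sketched with a toy solver. To stay self-contained this uses a simple Jacobi iteration rather than BiCGstab/CG, and it emulates single precision by rounding through IEEE float32; only the precision pattern, not the cost or the actual solver, mirrors the paper.

```python
import struct

def f32(v):
    """Round a Python double to the nearest IEEE single-precision value."""
    return struct.unpack('f', struct.pack('f', v))[0]

def reliable_update_solve(A, b, iters=50, period=10):
    """Solve Ax = b (A diagonally dominant) with Jacobi corrections in
    emulated single precision; every `period` steps the residual is
    recomputed in full double precision (the 'reliable update')."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                            # initial residual (x = 0)
    for k in range(1, iters + 1):
        for i in range(n):                 # low-precision correction
            x[i] += f32(r[i] / A[i][i])
        if k % period == 0:                # reliable update, full precision
            r = [b[i] - sum(A[i][j] * x[j] for j in range(n))
                 for i in range(n)]
        else:                              # residual kept in low precision
            r = [f32(b[i] - sum(f32(A[i][j] * x[j]) for j in range(n)))
                 for i in range(n)]
    return x
```

    The periodic full-precision residual prevents the rounding error of the low-precision arithmetic from accumulating, which is what lets the overall solve reach double-precision accuracy.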

    Convolutional neural nets for estimating the run time and energy consumption of the sparse matrix-vector product

    Get PDF
    Modeling the performance and energy consumption of the sparse matrix-vector product (SpMV) is essential to perform off-line analysis and, for example, choose a target computer architecture that delivers the best performance-energy consumption ratio. However, this task is especially complex given the memory-bounded nature and irregular memory accesses of the SpMV, mainly dictated by the input sparse matrix. In this paper, we propose a Machine Learning (ML)-driven approach that leverages Convolutional Neural Networks (CNNs) to provide accurate estimations of the performance and energy consumption of the SpMV kernel. The proposed CNN-based models use a blockwise approach to make the CNN architecture independent of the matrix size. These models are trained to estimate execution time as well as total, package, and DRAM energy consumption at different processor frequencies. The experimental results reveal that the overall relative error ranges between 0.5% and 14%, while at the matrix level it does not exceed 10%. To demonstrate the applicability and accuracy of the SpMV CNN-based models, this study is complemented with an ad-hoc time-energy model for the PageRank algorithm, a popular algorithm for web information retrieval used by search engines, which internally realizes the SpMV kernel.
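    PageRank is a natural validation workload because each iteration is essentially one SpMV with the (damped, column-normalized) link matrix. The sketch below is the standard textbook formulation, assuming every node has at least one outgoing link; it is not taken from the paper's time-energy model.

```python
def pagerank(links, n, d=0.85, tol=1e-10, max_iter=100):
    """Rank n nodes from a list of (src, dst) edges.

    The inner loop over edges is exactly a sparse matrix-vector
    product with the column-normalized adjacency matrix.
    """
    out_deg = [0] * n
    for s, _ in links:
        out_deg[s] += 1
    r = [1.0 / n] * n
    for _ in range(max_iter):
        nxt = [(1.0 - d) / n] * n          # teleportation term
        for s, t in links:                 # the SpMV step
            nxt[t] += d * r[s] / out_deg[s]
        if sum(abs(a - b) for a, b in zip(nxt, r)) < tol:
            return nxt
        r = nxt
    return r
```

    Since the per-iteration cost is dominated by the SpMV step, a time-energy model for SpMV extends almost directly to the whole algorithm.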

    NUMA-Aware Strategies for the Heterogeneous Execution of SPMV on Modern Supercomputers

    Get PDF
    The sparse matrix-vector product is a widespread operation amongst the scientific computing community. It represents the dominant computational cost in many large-scale simulations relying on iterative methods, and its performance is sensitive to the sparsity pattern, the storage format and kernel implementation, and the target computing architecture. In this work, we focus on the efficient execution of the sparse matrix-vector product on (potentially hybrid) modern supercomputers with non-uniform memory access configurations. A hierarchical parallel implementation is proposed to minimize the number of processes participating in distributed-memory parallelization. As a result, a single process per computing node is enough to engage all its hardware and ensure efficient memory access on manycore platforms. The benefits of this approach have been demonstrated on up to 9,600 cores of the MareNostrum 4 supercomputer at the Barcelona Supercomputing Center.

    The work of A. Gorobets has been funded by the Russian Science Foundation, project 19-11-00299. The work of X. Álvarez-Farré, F. X. Trias, and A. Oliva has been financially supported by the ANUMESOL project (ENE2017-88697-R) of the Spanish Research Agency (Ministerio de Economía y Competitividad, Secretaría de Estado de Investigación, Desarrollo e Innovación), and the FusionCAT project (001-P-001722) of the Government of Catalonia (RIS3CAT FEDER). The studies of this work have been carried out using the MareNostrum 4 supercomputer of the Barcelona Supercomputing Center (projects IM-2020-2-0029 and IM-2020-3-0030); the TSUBAME3.0 supercomputer of the Global Scientific Information and Computing Center at Tokyo Institute of Technology; the Lomonosov-2 supercomputer of the shared research facilities of HPC computing resources at Lomonosov Moscow State University; and the K-60 hybrid cluster of the collective use center of the Keldysh Institute of Applied Mathematics. The authors thankfully acknowledge these institutions for the compute time and technical support.

    Postprint (published version)

    Runtime sparse matrix format selection

    No full text
    There exist many storage formats for the in-memory representation of sparse matrices. Choosing the format that yields the quickest processing of any given sparse matrix requires considering the exact non-zero structure of the matrix, as well as the current execution environment. Each of these factors can change at runtime. The matrix structure can vary as computation progresses, while the environment can change due to varying system load, the live migration of jobs across a heterogeneous cluster, etc. This paper describes an algorithm that learns at runtime how to map sparse matrices onto the format which provides the quickest sparse matrix-vector product calculation, and which can adapt to the hardware platform changing underfoot. We show multiplication times reduced by over 10% compared with the best non-adaptive format selection.
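    The measure-and-pick core of runtime format selection can be sketched as below: time the SpMV on the live matrix under each candidate format and keep the winner. The kernel signatures and the trial count are illustrative assumptions; the paper's algorithm additionally learns over time and re-adapts as the matrix and machine change.

```python
import time

def spmv_coo(fmt, x, n):
    """SpMV from a list of (row, col, value) triples."""
    y = [0.0] * n
    for i, j, v in fmt:
        y[i] += v * x[j]
    return y

def spmv_csr(fmt, x, n):
    """SpMV from CSR arrays (row pointers, column indices, values)."""
    ptr, col, val = fmt
    y = [0.0] * n
    for i in range(n):
        y[i] = sum(val[k] * x[col[k]] for k in range(ptr[i], ptr[i + 1]))
    return y

def pick_format(candidates, x, n, trials=3):
    """candidates: {name: (kernel, data)}; return the fastest format."""
    best, best_t = None, float('inf')
    for name, (kernel, data) in candidates.items():
        t0 = time.perf_counter()
        for _ in range(trials):        # measure on the live matrix
            kernel(data, x, n)
        t = time.perf_counter() - t0
        if t < best_t:
            best, best_t = name, t
    return best
```

    Because the measurement runs on the actual matrix and the actual hardware, a selection made this way automatically reflects whatever the environment looks like at that moment.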