Adaptively restarted block Krylov subspace methods with low-synchronization skeletons
With the recent realization of exascale performance by Oak Ridge National
Laboratory's Frontier supercomputer, reducing communication in kernels like QR
factorization has become even more imperative. Low-synchronization Gram-Schmidt
methods, first introduced in [K. Świrydowicz, J. Langou, S. Ananthan, U.
Yang, and S. Thomas, Low Synchronization Gram-Schmidt and Generalized Minimum
Residual Algorithms, Numer. Lin. Alg. Appl., Vol. 28(2), e2343, 2020], have
been shown to improve the scalability of the Arnoldi method in high-performance
distributed computing. Block versions of low-synchronization Gram-Schmidt show
further potential for speeding up algorithms, as column-batching allows for
maximizing cache usage with matrix-matrix operations. In this work,
low-synchronization block Gram-Schmidt variants from [E. Carson, K. Lund, M.
Rozložník, and S. Thomas, Block Gram-Schmidt algorithms and their
stability properties, Lin. Alg. Appl., 638, pp. 150--195, 2022] are transformed
into block Arnoldi variants for use in block full orthogonalization methods
(BFOM) and block generalized minimal residual methods (BGMRES). An adaptive
restarting heuristic is developed to handle instabilities that arise with the
increasing condition number of the Krylov basis. The performance, accuracy, and
stability of these methods are assessed via a flexible benchmarking tool
written in MATLAB. The modularity of the tool additionally permits generalized
block inner products, such as the global inner product.
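To give a concrete sense of the generalized block inner products the abstract refers to, the following is a minimal NumPy sketch (Python here for illustration; the paper's tool is written in MATLAB) contrasting the classical block inner product with a global-style one. The 1/s scaling used below is an assumption, as normalization conventions vary in the literature.

```python
import numpy as np

def classical_block_inner(X, Y):
    # Classical block inner product: the full s x s matrix of pairwise
    # column inner products between the block vectors X and Y.
    return X.T @ Y

def global_block_inner(X, Y):
    # Global-style block inner product: a scalar multiple of the s x s
    # identity, here (trace(X^T Y) / s) * I_s.  The 1/s scaling is one
    # common convention, assumed for this sketch.
    s = X.shape[1]
    return (np.trace(X.T @ Y) / s) * np.eye(s)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))
Y = rng.standard_normal((100, 4))
print(classical_block_inner(X, Y).shape)  # a dense 4 x 4 matrix
print(global_block_inner(X, Y))           # a scaled 4 x 4 identity
```

Because the global inner product collapses the s x s block to a single scalar times the identity, it trades some orthogonality information for cheaper reductions, which is one reason modular benchmarking across inner products is useful.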
An overview of block Gram-Schmidt methods and their stability properties
Block Gram-Schmidt algorithms serve as essential kernels in many scientific
computing applications, but for many commonly used variants, a rigorous
treatment of their stability properties remains open. This survey provides a
comprehensive categorization of block Gram-Schmidt algorithms, particularly
those used in Krylov subspace methods to build orthonormal bases one block
vector at a time. All known stability results are assembled, and new results
are summarized or conjectured for important communication-reducing variants.
Additionally, new block versions of low-synchronization variants are derived,
and their efficacy and stability are demonstrated for a wide range of
challenging examples. Low-synchronization variants appear remarkably stable for
s-step-like matrices built with Newton polynomials, pointing towards a new
stable and efficient backbone for Krylov subspace methods. Numerical examples
are computed with a versatile MATLAB package hosted at
https://github.com/katlund/BlockStab, and scripts for reproducing all results
in the paper are provided. Block Gram-Schmidt implementations in popular
software packages are discussed, along with a number of open problems. An
appendix containing all algorithms typeset in a uniform fashion is provided.
Comment: 42 pages, 5 tables, 17 figures, 20 algorithms
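The basic pattern the survey builds on, orthonormalizing one block vector at a time, can be sketched as a plain block classical Gram-Schmidt (BCGS) loop. This is a simplified NumPy illustration of the family of algorithms categorized in the paper, not a low-synchronization or reorthogonalized variant, and the helper name `bcgs` is invented for this sketch.

```python
import numpy as np

def bcgs(X_blocks):
    # Block classical Gram-Schmidt: for each incoming block vector,
    # project out all previously computed blocks (inter-orthogonalization),
    # then orthonormalize within the block via a thin QR
    # (intra-orthogonalization).
    Q_blocks = []
    for X in X_blocks:
        W = X.copy()
        for Q in Q_blocks:
            W -= Q @ (Q.T @ W)   # remove components along earlier blocks
        Qk, _ = np.linalg.qr(W)  # thin QR of the updated block
        Q_blocks.append(Qk)
    return np.hstack(Q_blocks)

rng = np.random.default_rng(1)
blocks = [rng.standard_normal((50, 3)) for _ in range(4)]
Q = bcgs(blocks)
# Loss of orthogonality; small for well-conditioned random inputs,
# but it grows with the condition number of the input, which is what
# motivates the stability analysis surveyed in the paper.
print(np.linalg.norm(Q.T @ Q - np.eye(12)))
```

A single-pass BCGS like this is cheap but can lose orthogonality badly on ill-conditioned inputs; the reorthogonalized and low-synchronization variants studied in the survey trade extra work or restructured reductions for better stability.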