Adaptively restarted block Krylov subspace methods with low-synchronization skeletons
With the recent realization of exascale performance by Oak Ridge National
Laboratory's Frontier supercomputer, reducing communication in kernels like QR
factorization has become even more imperative. Low-synchronization Gram-Schmidt
methods, first introduced in [K. Świrydowicz, J. Langou, S. Ananthan, U.
Yang, and S. Thomas, Low Synchronization Gram-Schmidt and Generalized Minimum
Residual Algorithms, Numer. Lin. Alg. Appl., Vol. 28(2), e2343, 2020], have
been shown to improve the scalability of the Arnoldi method in high-performance
distributed computing. Block versions of low-synchronization Gram-Schmidt show
further potential for speeding up algorithms, as column-batching allows for
maximizing cache usage with matrix-matrix operations. In this work,
low-synchronization block Gram-Schmidt variants from [E. Carson, K. Lund, M.
Rozložník, and S. Thomas, Block Gram-Schmidt algorithms and their
stability properties, Lin. Alg. Appl., 638, pp. 150--195, 2022] are transformed
into block Arnoldi variants for use in block full orthogonalization methods
(BFOM) and block generalized minimal residual methods (BGMRES). An adaptive
restarting heuristic is developed to handle instabilities that arise with the
increasing condition number of the Krylov basis. The performance, accuracy, and
stability of these methods are assessed via a flexible benchmarking tool
written in MATLAB. The modularity of the tool additionally permits generalized
block inner products, such as the global inner product.
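To illustrate the generalized block inner products mentioned above, the following is a minimal sketch of a block Gram-Schmidt sweep with a pluggable block inner product, including the global inner product (trace-based, a scalar multiple of the identity). Function names and the overall structure are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def classical_ip(X, Y):
    # Standard block inner product: the full s x s matrix X^T Y.
    return X.T @ Y

def global_ip(X, Y):
    # Global block inner product: trace(X^T Y) times the s x s identity.
    return np.trace(X.T @ Y) * np.eye(X.shape[1])

def block_gs(blocks, ip):
    # One block Gram-Schmidt sweep under a pluggable block inner product
    # (illustrative sketch, not the paper's low-synchronization variants).
    Q = []
    for X in blocks:
        W = X.copy()
        for Qk in Q:
            W = W - Qk @ ip(Qk, W)  # project out earlier blocks
        if ip is global_ip:
            # The "norm" induced by the global inner product is the
            # Frobenius norm of the block.
            W = W / np.linalg.norm(W, 'fro')
        else:
            W, _ = np.linalg.qr(W)  # intra-block orthonormalization
        Q.append(W)
    return Q
```

Under the global inner product, each block is normalized to unit Frobenius norm and distinct blocks satisfy trace(Q_i^T Q_j) = 0, whereas the classical inner product yields a fully orthonormal concatenated basis.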
Randomized block Gram-Schmidt process for solution of linear systems and eigenvalue problems
We propose a block version of the randomized Gram-Schmidt process for
computing a QR factorization of a matrix. Our algorithm inherits the major
properties of its single-vector analogue from [Balabanov and Grigori, 2020]
such as higher efficiency than the classical Gram-Schmidt algorithm and
stability comparable to that of the modified Gram-Schmidt algorithm, which can
be refined even further by using multi-precision arithmetic. As in [Balabanov
and Grigori, 2020], our algorithm has the advantage of performing the standard
high-dimensional operations that dominate the overall computational cost with a
unit roundoff
independent of the dominant dimension of the matrix. This unique feature makes
the methodology especially useful for large-scale problems computed on
low-precision arithmetic architectures. Block algorithms are advantageous in
terms of performance as they are mainly based on cache-friendly matrix-wise
operations, and can reduce communication cost in high-performance computing.
The block Gram-Schmidt orthogonalization is the key element in the block
Arnoldi procedure for the construction of the Krylov basis, which in turn is
used in GMRES and Rayleigh-Ritz methods for the solution of linear systems and
clustered eigenvalue problems. In this article, we develop randomized versions
of these methods, based on the proposed randomized Gram-Schmidt algorithm, and
validate them on nontrivial numerical examples.
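The key idea described above, obtaining projection coefficients from a small least-squares problem on sketched vectors so that only the basis updates are high-dimensional, can be sketched as follows. This is a simplified single-vector illustration in the spirit of [Balabanov and Grigori, 2020]; the function name, the Gaussian sketch, and the parameter choices are assumptions for this example.

```python
import numpy as np

def randomized_gs(W, k, seed=0):
    # Simplified randomized Gram-Schmidt sketch: Theta is a k x n Gaussian
    # sketching matrix, and projection coefficients come from a small
    # k x j least-squares solve on sketched vectors, so the only
    # high-dimensional operations are the basis updates Q[:, :j] @ r.
    n, m = W.shape
    rng = np.random.default_rng(seed)
    Theta = rng.standard_normal((k, n)) / np.sqrt(k)
    Q = np.zeros((n, m))
    S = np.zeros((k, m))          # sketched basis, S = Theta @ Q
    R = np.zeros((m, m))
    for j in range(m):
        w = W[:, j].copy()
        if j > 0:
            r = np.linalg.lstsq(S[:, :j], Theta @ w, rcond=None)[0]
            R[:j, j] = r
            w -= Q[:, :j] @ r     # dominant-cost, high-dimensional update
        R[j, j] = np.linalg.norm(Theta @ w)   # sketched norm
        Q[:, j] = w / R[j, j]
        S[:, j] = Theta @ Q[:, j]
    return Q, R
```

The factorization W = QR holds exactly by construction, while Q is only approximately orthonormal: its conditioning is controlled by the embedding quality of Theta rather than by the dominant dimension n.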
An overview of block Gram-Schmidt methods and their stability properties
Block Gram-Schmidt algorithms serve as essential kernels in many scientific
computing applications, but for many commonly used variants, a rigorous
treatment of their stability properties remains open. This survey provides a
comprehensive categorization of block Gram-Schmidt algorithms, particularly
those used in Krylov subspace methods to build orthonormal bases one block
vector at a time. All known stability results are assembled, and new results
are summarized or conjectured for important communication-reducing variants.
Additionally, new block versions of low-synchronization variants are derived,
and their efficacy and stability are demonstrated for a wide range of
challenging examples. Low-synchronization variants appear remarkably stable for
s-step-like matrices built with Newton polynomials, pointing towards a new
stable and efficient backbone for Krylov subspace methods. Numerical examples
are computed with a versatile MATLAB package hosted at
https://github.com/katlund/BlockStab, and scripts for reproducing all results
in the paper are provided. Block Gram-Schmidt implementations in popular
software packages are discussed, along with a number of open problems. An
appendix containing all algorithms typeset in a uniform fashion is provided.
Comment: 42 pages, 5 tables, 17 figures, 20 algorithms
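To make concrete how block Gram-Schmidt builds an orthonormal Krylov basis one block vector at a time, here is a minimal block Arnoldi sketch. It uses block classical Gram-Schmidt with a second (reorthogonalization) pass for stability in this illustration; it is not one of the low-synchronization variants surveyed, and the function name is an assumption.

```python
import numpy as np

def block_arnoldi(A, V, m):
    # Minimal block Arnoldi sketch: builds an orthonormal Krylov basis one
    # block vector (n x s) at a time, using two Gram-Schmidt passes per
    # step.  Returns (basis, H) satisfying A @ basis[:, :s*m] ~ basis @ H.
    n, s = V.shape
    Q = [np.linalg.qr(V)[0]]                 # orthonormalize the starting block
    H = np.zeros((s * (m + 1), s * m))       # block upper Hessenberg matrix
    for j in range(m):
        W = A @ Q[j]
        for _ in range(2):                   # CGS with reorthogonalization
            for k, Qk in enumerate(Q):
                Hk = Qk.T @ W
                H[k*s:(k+1)*s, j*s:(j+1)*s] += Hk
                W = W - Qk @ Hk
        Qnew, Hnew = np.linalg.qr(W)         # intra-block orthonormalization
        H[(j+1)*s:(j+2)*s, j*s:(j+1)*s] = Hnew
        Q.append(Qnew)
    return np.hstack(Q), H
```

The returned basis satisfies the block Arnoldi relation, which is what BFOM- and BGMRES-type solvers exploit when projecting onto the block Krylov subspace.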