Some observations on weighted GMRES
We investigate the convergence of the weighted GMRES method for solving linear systems. Two different weighting variants are compared with unweighted GMRES for three model problems, giving a phenomenological explanation of cases where weighting improves convergence and of a case where weighting has no effect. We also present new alternative implementations of the weighted Arnoldi algorithm which may be favorable in terms of computational complexity, and we examine stability issues connected with these implementations. Two implementations of weighted GMRES are compared for a large number of examples. We find that weighted GMRES may outperform unweighted GMRES for some problems, but more often the method is not competitive with other Krylov subspace methods such as GMRES with deflated restarting or BiCGSTAB, in particular when a preconditioner is used.
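The weighting idea can be made concrete with a small sketch. Weighted GMRES runs the Arnoldi process in the inner product ⟨x, y⟩_D = yᵀDx for a diagonal D > 0, which is equivalent to applying standard GMRES to the symmetrically scaled system D^{1/2} A D^{-1/2} y = D^{1/2} b and mapping back. The function name, the dense-matrix setting, and the weight choice below are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch (not the paper's code): weighted GMRES realized via the
# similarity transform D^{1/2} A D^{-1/2}, equivalent to GMRES in the D-inner
# product <x, y>_D = y^T D x for diagonal D with positive entries.
import numpy as np
from scipy.sparse.linalg import gmres

def weighted_gmres(A, b, d):
    """Solve Ax = b with GMRES in the D-inner product, D = diag(d), d > 0.

    Equivalent to standard GMRES on D^{1/2} A D^{-1/2} y = D^{1/2} b,
    followed by x = D^{-1/2} y.  Dense A assumed for simplicity.
    """
    s = np.sqrt(d)                       # entries of D^{1/2}
    At = (A * (1.0 / s)) * s[:, None]    # scale columns by 1/s_j, rows by s_i
    y, info = gmres(At, s * b)
    return y / s, info

rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
d = np.abs(b) + 1.0                      # illustrative weight choice only
x, info = weighted_gmres(A, b, d)
print(np.linalg.norm(A @ x - b))         # residual of the recovered solution
```

The transform-based route is the simplest way to experiment with weighting; the alternative implementations discussed in the abstract instead modify the Arnoldi recurrence itself, which changes the cost and stability trade-offs.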
A flexible and adaptive Simpler GMRES with deflated restarting for shifted linear systems
In this paper, two efficient iterative algorithms based on the simpler GMRES
method are proposed for solving shifted linear systems. To make full use of the
shifted structure, the proposed algorithms employ a deflated restarting
strategy and flexible preconditioning, which significantly reduces the number
of matrix-vector products and the elapsed CPU time. Numerical experiments are
reported to illustrate the performance and effectiveness of the proposed
algorithms.
Comment: 17 pages, 9 tables, 1 figure; updated with new numerical results and corrections of typos and syntax errors.
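The "shifted structure" being exploited is the shift-invariance of Krylov subspaces: K_m(A, b) = K_m(A + σI, b), so one Arnoldi factorization A V_m = V_{m+1} H̄ can serve every shift σ at once. The toy code below demonstrates only this invariance with a plain projected GMRES solve per shift; it is not the simpler-GMRES variant of the paper, and the matrix and shifts are made-up examples:

```python
# Illustrative sketch: one Arnoldi factorization A V_m = V_{m+1} H serves all
# shifted systems (A + sigma*I) x = b, since K_m(A, b) = K_m(A + sigma*I, b).
# For each shift, GMRES reduces to a small (m+1) x m least squares problem.
import numpy as np

def arnoldi(A, b, m):
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H, beta

rng = np.random.default_rng(1)
n, m = 100, 40
A = np.diag(np.linspace(1.0, 2.0, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
V, H, beta = arnoldi(A, b, m)           # built ONCE, reused for every shift

E = np.vstack([np.eye(m), np.zeros((1, m))])  # embeds I_m into (m+1) x m
e1 = np.zeros(m + 1); e1[0] = beta
for sigma in [0.0, 0.5, 1.0]:
    # (A + sigma I) V_m = V_{m+1} (H + sigma E), so the GMRES residual
    # minimization collapses to a small least squares problem per shift.
    y, *_ = np.linalg.lstsq(H + sigma * E, e1, rcond=None)
    x = V[:, :m] @ y
    print(sigma, np.linalg.norm((A + sigma * np.eye(n)) @ x - b))
```

Because the expensive matrix-vector products all happen once inside `arnoldi`, each extra shift costs only a small dense solve, which is why shift-aware algorithms can cut the matrix-vector product count so sharply.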
A fast semi-direct least squares algorithm for hierarchically block separable matrices
We present a fast algorithm for linear least squares problems governed by
hierarchically block separable (HBS) matrices. Such matrices are generally
dense but data-sparse and can describe many important operators including those
derived from asymptotically smooth radial kernels that are not too oscillatory.
The algorithm is based on a recursive skeletonization procedure that exposes
this sparsity and solves the dense least squares problem as a larger,
equality-constrained, sparse one. It relies on a sparse QR factorization
coupled with iterative weighted least squares methods. In essence, our scheme
consists of a direct component, comprising matrix compression and
factorization, followed by an iterative component to enforce certain equality
constraints. At most two iterations are typically required for problems that
are not too ill-conditioned. For an HBS matrix of order N having bounded
off-diagonal block rank, the algorithm has optimal O(N) complexity. If the
rank increases with the spatial dimension, as is common for operators that are
singular at the origin, then this becomes O(N) in 1D, O(N^{3/2}) in 2D, and
O(N^2) in 3D. We illustrate the performance of the method on
both over- and underdetermined systems in a variety of settings, with an
emphasis on radial basis function approximation and efficient updating and
downdating.
Comment: 24 pages, 8 figures, 6 tables; to appear in SIAM J. Matrix Anal. Appl.
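The iterative component that "enforces certain equality constraints" can be illustrated in miniature with the classical weighting trick for equality-constrained least squares: min ‖Cx − d‖ subject to Bx = e is approximated by the unconstrained problem min ‖[τB; C]x − [τe; d]‖ for a large weight τ, refined by a deferred-correction sweep. The toy dense problem below is an assumption for illustration; the paper applies this idea with a sparse QR factorization at much larger scale:

```python
# Illustrative sketch of equality-constraint enforcement by weighting plus one
# correction sweep: min ||C x - d|| s.t. B x = e, approximated by the
# unconstrained problem min ||[tau*B; C] x - [tau*e; d]|| with large tau.
import numpy as np

rng = np.random.default_rng(2)
m, n, p = 30, 10, 4
C = rng.standard_normal((m, n)); d = rng.standard_normal(m)
B = rng.standard_normal((p, n)); e = rng.standard_normal(p)

tau = 1e6
M = np.vstack([tau * B, C])              # weighted stacked system (factor once)

def wls(r_top, r_bot):
    """Solve the weighted least squares problem for given block right sides."""
    rhs = np.concatenate([tau * r_top, r_bot])
    return np.linalg.lstsq(M, rhs, rcond=None)[0]

x = wls(e, d)                            # initial weighted solve
x += wls(e - B @ x, d - C @ x)           # one deferred-correction sweep
print(np.linalg.norm(B @ x - e))         # constraint residual after correction
```

A single correction sweep typically drives the constraint residual close to machine precision, mirroring the abstract's observation that at most two iterations are needed for problems that are not too ill-conditioned.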