The Tortoise and the Hare restart GMRES
When solving large nonsymmetric systems of linear equations with the restarted GMRES algorithm, one is inclined to select a relatively large restart parameter in the hope of mimicking the full GMRES process. Surprisingly, cases exist where small values of the restart parameter yield convergence in fewer iterations than larger values. Here, two simple examples are presented where GMRES(1) converges exactly in three iterations, while GMRES(2) stagnates. One of these examples reveals that GMRES(1) convergence can be extremely sensitive to small changes in the initial residual.
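The restart parameter m is the number of Arnoldi steps taken before each restart. As a minimal sketch (not the paper's specific 3-by-3 examples; all names are illustrative), a plain restarted GMRES(m) can be written as follows, so that cycle counts for different m can be compared directly:

```python
import numpy as np

def gmres_restarted(A, b, m, tol=1e-10, max_restarts=500):
    """Minimal GMRES(m): m Arnoldi steps per cycle, then restart."""
    n = len(b)
    x = np.zeros(n)
    for cycle in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta <= tol * np.linalg.norm(b):
            return x, cycle                  # converged after `cycle` restarts
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        Q[:, 0] = r / beta
        for j in range(m):                   # Arnoldi, modified Gram-Schmidt
            w = A @ Q[:, j]
            for i in range(j + 1):
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:          # (happy) breakdown
                break
            Q[:, j + 1] = w / H[j + 1, j]
        # Correction from the least-squares problem min ||beta*e1 - H y||.
        e1 = np.zeros(m + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H, e1, rcond=None)
        x = x + Q[:, :m] @ y
    return x, max_restarts
```

Calling this twice with m=1 and m=2 on the same system is exactly the tortoise-and-hare comparison the abstract describes; which restart value wins depends on the problem.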
Some observations on weighted GMRES
We investigate the convergence of the weighted GMRES method for solving linear systems. Two different weighting variants are compared with unweighted GMRES for three model problems, giving a phenomenological explanation of cases where weighting improves convergence, and a case where weighting has no effect on the convergence. We also present new alternative implementations of the weighted Arnoldi algorithm which may be favorable in terms of computational complexity, and examine stability issues connected with these implementations. Two implementations of weighted GMRES are compared for a large number of examples. We find that weighted GMRES may outperform unweighted GMRES for some problems, but more often this method is not competitive with other Krylov subspace methods like GMRES with deflated restarting or BICGSTAB, in particular when a preconditioner is used.
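Weighted GMRES replaces the Euclidean inner product in the Arnoldi process by a weighted one, <u, v>_D = u^T D v with D = diag(d), d_i > 0. As a sketch of just this weighted Arnoldi step (the function name and interface are illustrative, not one of the paper's implementations):

```python
import numpy as np

def weighted_arnoldi(A, r0, m, d):
    """Arnoldi process in the D-inner product <u, v>_D = sum_i d_i u_i v_i."""
    n = len(r0)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    norm_d = lambda v: np.sqrt(v @ (d * v))   # D-norm
    Q[:, 0] = r0 / norm_d(r0)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ (d * w)       # D-inner product
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = norm_d(w)
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H
```

The columns of Q are then D-orthonormal rather than orthonormal, and the usual Arnoldi relation A Q_m = Q_{m+1} H still holds; a common weighting choice in the literature takes d from the entries of the initial residual.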
Mixed precision GMRES-based iterative refinement with recycling
With the emergence of mixed precision hardware, mixed precision GMRES-based iterative refinement schemes for solving linear systems have recently been developed. However, in certain settings, GMRES may require too many iterations per refinement step, making it potentially more expensive than the alternative of recomputing the LU factors in a higher precision. In this work, we incorporate the idea of Krylov subspace recycling, a well-known technique for reusing information across sequential invocations of a Krylov subspace method, into a mixed precision GMRES-based iterative refinement solver. The insight is that in each refinement step, we call preconditioned GMRES on a linear system with the same coefficient matrix. In this way, the GMRES solves in subsequent refinement steps can be accelerated by recycling information obtained from previous steps. We perform numerical experiments on various random dense problems, Toeplitz problems, and problems from real applications, which confirm the benefits of the recycling approach.
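A minimal sketch of the underlying iterative refinement loop may help fix ideas. This version performs the correction solve directly in low precision, whereas the GMRES-based scheme described above would instead run preconditioned GMRES on the correction equation A d = r, which is where recycling between refinement steps pays off. All names are illustrative:

```python
import numpy as np

def mixed_precision_ir(A, b, max_steps=10, tol=1e-12):
    """Mixed precision iterative refinement (sketch).

    The correction solve runs in float32; in practice one would factorize
    A once in low precision and reuse the LU factors, and GMRES-based IR
    would replace this direct solve with preconditioned GMRES on A d = r.
    """
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_steps):
        r = b - A @ x                        # residual in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x = x + d                            # apply the correction
    return x
```

Each refinement step recovers roughly the digits lost to the low precision solve, so a few steps typically reach double precision accuracy for well-conditioned problems.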
Linear Asymptotic Convergence of Anderson Acceleration: Fixed-Point Analysis
We study the asymptotic convergence of AA(m), i.e., Anderson acceleration with window size m for accelerating fixed-point methods x_{k+1} = q(x_k), x_k in R^n. Convergence acceleration by AA(m) has been widely observed but is not well understood. We consider the case where the fixed-point iteration function q(x) is differentiable and the convergence of the fixed-point method itself is root-linear. We identify numerically several conspicuous properties of AA(m) convergence: First, AA(m) sequences {x_k} converge root-linearly, but the root-linear convergence factor depends strongly on the initial condition. Second, the AA(m) acceleration coefficients beta^(k) do not converge but oscillate as {x_k} converges to x^*. To shed light on these observations, we write the AA(m) iteration as an augmented fixed-point iteration z_{k+1} = Psi(z_k), and analyze the continuity and differentiability properties of Psi(z) and beta(z). We find that the vector of acceleration coefficients beta(z) is not continuous at the fixed point z^*. However, we show that, despite the discontinuity of beta(z), the iteration function Psi(z) is Lipschitz continuous and directionally differentiable at z^* for AA(1), and we generalize this to AA(m) with m > 1 for most cases. Furthermore, we find that Psi(z) is not differentiable at z^*. We then discuss how these theoretical findings relate to the observed convergence behaviour of AA(m). The discontinuity of beta(z) at z^* allows the coefficients beta^(k) to oscillate as {x_k} converges to x^*, and the non-differentiability of Psi allows AA(m) sequences to converge with root-linear convergence factors that strongly depend on the initial condition. Additional numerical results illustrate our findings.
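As a sketch of the method under study, Anderson acceleration with window size 1 can be written with a single least-squares mixing coefficient per step; gamma below plays the role of the acceleration coefficients in the abstract. The interface is illustrative, not the paper's code:

```python
import numpy as np

def aa1(q, x0, max_iter=50, tol=1e-12):
    """Anderson acceleration with window size 1 for x_{k+1} = q(x_k)."""
    x_prev = np.asarray(x0, dtype=float)
    qx_prev = q(x_prev)
    f_prev = qx_prev - x_prev          # residual f(x) = q(x) - x
    x = qx_prev                        # first step is a plain fixed-point step
    for _ in range(max_iter):
        qx = q(x)
        f = qx - x
        if np.linalg.norm(f) < tol:
            break
        df = f - f_prev
        denom = df @ df
        # Least-squares mixing coefficient; fall back to a plain step if
        # consecutive residuals coincide.
        gamma = (f @ df) / denom if denom > 0 else 0.0
        x_next = qx - gamma * (qx - qx_prev)
        x_prev, qx_prev, f_prev = x, qx, f
        x = x_next
    return x
```

Tracking gamma along the iteration is one way to observe the oscillation of the acceleration coefficients that the paper analyzes.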