    The Tortoise and the Hare restart GMRES

    When solving large nonsymmetric systems of linear equations with the restarted GMRES algorithm, one is inclined to select a relatively large restart parameter in the hope of mimicking the full GMRES process. Surprisingly, cases exist where small values of the restart parameter yield convergence in fewer iterations than larger values. Here, two simple examples are presented where GMRES(1) converges exactly in three iterations, while GMRES(2) stagnates. One of these examples reveals that GMRES(1) convergence can be extremely sensitive to small changes in the initial residual.
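
    As a point of reference, here is a minimal sketch of the GMRES(1) cycle in Python: each restart minimizes the residual over the one-dimensional Krylov space span{r}, which reduces to the single step length alpha = <Ar, r>/<Ar, Ar>. The SPD test matrix below is a hypothetical stand-in, not one of the examples from the article.

        import numpy as np

        def gmres1(A, b, x0, tol=1e-10, maxiter=5000):
            # GMRES(1): each cycle minimizes ||b - A(x + alpha*r)|| over
            # alpha, giving alpha = <Ar, r> / <Ar, Ar>.
            x = x0.copy()
            for k in range(maxiter):
                r = b - A @ x
                if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                    return x, k
                Ar = A @ r
                x = x + ((Ar @ r) / (Ar @ Ar)) * r
            return x, maxiter

        # Hypothetical SPD test matrix (1D Laplacian), not the paper's example.
        n = 10
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x, cycles = gmres1(A, b, np.zeros(n))
        print(f"GMRES(1): {cycles} cycles, residual = {np.linalg.norm(b - A @ x):.2e}")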

    Some observations on weighted GMRES

    We investigate the convergence of the weighted GMRES method for solving linear systems. Two different weighting variants are compared with unweighted GMRES for three model problems, giving a phenomenological explanation of cases where weighting improves convergence, and a case where weighting has no effect on the convergence. We also present new alternative implementations of the weighted Arnoldi algorithm which may be favorable in terms of computational complexity, and examine stability issues connected with these implementations. Two implementations of weighted GMRES are compared for a large number of examples. We find that weighted GMRES may outperform unweighted GMRES for some problems, but more often this method is not competitive with other Krylov subspace methods like GMRES with deflated restarting or BICGSTAB, in particular when a preconditioner is used.
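
    To make the weighting concrete, here is a sketch of the weighted Arnoldi process that weighted GMRES is built on, assuming the D-inner product <u, v>_D = sum_i d_i u_i v_i. Because the basis is D-orthonormal, the small least-squares problem then minimizes the residual in the D-norm rather than the 2-norm. The weight choice below (d proportional to |r0|, scaled to sum to n) is a common one in the weighted-GMRES literature and an assumption here, as is the random test matrix.

        import numpy as np

        def weighted_arnoldi(A, r0, d, m):
            # Arnoldi with the D-inner product <u, v>_D = sum_i d_i*u_i*v_i.
            # Returns a D-orthonormal basis V and the (m+1) x m Hessenberg
            # matrix H satisfying A V[:, :m] = V H.
            n = len(r0)
            V = np.zeros((n, m + 1))
            H = np.zeros((m + 1, m))
            V[:, 0] = r0 / np.sqrt(d @ (r0 * r0))
            for j in range(m):
                w = A @ V[:, j]
                for i in range(j + 1):      # modified Gram-Schmidt in the D-norm
                    H[i, j] = d @ (w * V[:, i])
                    w = w - H[i, j] * V[:, i]
                H[j + 1, j] = np.sqrt(d @ (w * w))
                V[:, j + 1] = w / H[j + 1, j]
            return V, H

        # Hypothetical usage with weights d = n*|r0| / ||r0||_1.
        n = 50
        rng = np.random.default_rng(0)
        A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
        r0 = rng.standard_normal(n)
        d = n * np.abs(r0) / np.abs(r0).sum()
        V, H = weighted_arnoldi(A, r0, d, m=10)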

    Mixed precision GMRES-based iterative refinement with recycling

    With the emergence of mixed precision hardware, mixed precision GMRES-based iterative refinement schemes for solving linear systems Ax = b have recently been developed. However, in certain settings, GMRES may require too many iterations per refinement step, making it potentially more expensive than the alternative of recomputing the LU factors in a higher precision. In this work, we incorporate the idea of Krylov subspace recycling, a well-known technique for reusing information across sequential invocations of a Krylov subspace method, into a mixed precision GMRES-based iterative refinement solver. The insight is that in each refinement step, we call preconditioned GMRES on a linear system with the same coefficient matrix A. In this way, the GMRES solves in subsequent refinement steps can be accelerated by recycling information obtained from previous steps. We perform numerical experiments on various random dense problems, Toeplitz problems, and problems from real applications, which confirm the benefits of the recycling approach.
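
    Below is a minimal sketch of the plain refinement loop that the recycling variant builds on; the recycled subspace itself is omitted, and the precision choices (single-precision LU factors, double-precision residuals) and SciPy routines are illustrative assumptions rather than the paper's setup.

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve
        from scipy.sparse.linalg import LinearOperator, gmres

        def gmres_ir(A, b, steps=5):
            # GMRES-based iterative refinement: low-precision LU factors are
            # reused as a preconditioner for GMRES on each correction equation
            # A d = r.  The paper accelerates the repeated GMRES calls further
            # by recycling Krylov information between refinement steps.
            lu = lu_factor(A.astype(np.float32))      # factorize in low precision
            apply_M = lambda v: lu_solve(lu, v.astype(np.float32)).astype(np.float64)
            M = LinearOperator(A.shape, matvec=apply_M)
            x = apply_M(b)                            # initial low-precision solve
            for _ in range(steps):
                r = b - A @ x                         # residual in working precision
                d, info = gmres(A, r, M=M)            # preconditioned correction solve
                x = x + d
            return x

        # Hypothetical well-conditioned test problem.
        rng = np.random.default_rng(1)
        A = rng.standard_normal((200, 200)) + 200 * np.eye(200)
        b = rng.standard_normal(200)
        x = gmres_ir(A, b)
        print(f"final residual norm: {np.linalg.norm(b - A @ x):.2e}")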

    Linear Asymptotic Convergence of Anderson Acceleration: Fixed-Point Analysis

    We study the asymptotic convergence of AA(m), i.e., Anderson acceleration with window size m for accelerating fixed-point methods x_{k+1} = q(x_k), x_k ∈ R^n. Convergence acceleration by AA(m) has been widely observed but is not well understood. We consider the case where the fixed-point iteration function q(x) is differentiable and the convergence of the fixed-point method itself is root-linear. We identify numerically several conspicuous properties of AA(m) convergence: first, AA(m) sequences {x_k} converge root-linearly, but the root-linear convergence factor depends strongly on the initial condition; second, the AA(m) acceleration coefficients β^{(k)} do not converge but oscillate as {x_k} converges to x^*. To shed light on these observations, we write the AA(m) iteration as an augmented fixed-point iteration z_{k+1} = Ψ(z_k), z_k ∈ R^{n(m+1)}, and analyze the continuity and differentiability properties of Ψ(z) and β(z). We find that the vector of acceleration coefficients β(z) is not continuous at the fixed point z^*. However, we show that, despite the discontinuity of β(z), the iteration function Ψ(z) is Lipschitz continuous and directionally differentiable at z^* for AA(1), and we generalize this to AA(m) with m > 1 for most cases. Furthermore, we find that Ψ(z) is not differentiable at z^*. We then discuss how these theoretical findings relate to the observed convergence behaviour of AA(m): the discontinuity of β(z) at z^* allows β^{(k)} to oscillate as {x_k} converges to x^*, and the non-differentiability of Ψ(z) allows AA(m) sequences to converge with root-linear convergence factors that strongly depend on the initial condition. Additional numerical results illustrate our findings.
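
    For concreteness, here is a minimal sketch of the AA(m) iteration in its common unconstrained least-squares form; the coefficients solved for below (often written gamma) are related to the β^{(k)} above by a simple affine transformation. The test map and all parameter choices are hypothetical, not taken from the article.

        import numpy as np

        def anderson(q, x0, m=2, tol=1e-10, maxiter=100):
            # AA(m): at step k solve min_gamma ||f_k - dF @ gamma||_2 over the
            # differences of the last (up to) m+1 residuals, then combine the
            # corresponding evaluations of q with the same coefficients.
            x = np.asarray(x0, dtype=float).copy()
            G, F = [], []                    # histories of q(x_k) and residuals
            for k in range(maxiter):
                g = q(x)
                f = g - x                    # fixed-point residual
                if np.linalg.norm(f) <= tol:
                    return x, k
                G.append(g); F.append(f)
                G, F = G[-(m + 1):], F[-(m + 1):]
                if len(F) == 1:
                    x = g                    # plain fixed-point step
                else:
                    dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
                    dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
                    gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                    x = g - dG @ gamma
            return x, maxiter

        # Hypothetical test: componentwise fixed point of cos (the Dottie number).
        x, its = anderson(np.cos, np.full(3, 0.5), m=2)
        print(f"AA(2) reached the fixed point in {its} iterations: {x[0]:.10f}")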