
    Multipreconditioned GMRES for Shifted Systems

    An implementation of GMRES with multiple preconditioners (MPGMRES) is proposed for solving shifted linear systems with shift-and-invert preconditioners. With this type of preconditioner, the Krylov subspace can be built without requiring matrix-vector products with the shifted matrix. Furthermore, the multipreconditioned search space is shown to grow only linearly with the number of preconditioners, which allows for a more efficient implementation of the algorithm. The proposed implementation is tested on shifted systems that arise in computational hydrology and in the evaluation of different matrix functions, and the numerical results indicate the effectiveness of the proposed approach. Supported by the U.S. National Science Foundation under grant DMS–1418882 and by the Department of Energy under grant DE–SC00165.
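    The claim that no products with the shifted matrix are needed follows from a standard identity; a minimal sketch in generic notation (shift \(\sigma\), preconditioner shift \(\tau\), not necessarily the paper's notation):
    \[
      (A + \sigma I)(A + \tau I)^{-1} \;=\; I + (\sigma - \tau)\,(A + \tau I)^{-1},
    \]
    so for \(\sigma \neq \tau\) the right-preconditioned Krylov subspace \(\mathcal{K}_k\big((A + \sigma I)(A + \tau I)^{-1},\, b\big)\) coincides with \(\mathcal{K}_k\big((A + \tau I)^{-1},\, b\big)\); the basis can therefore be built from applications of the shift-and-invert preconditioner alone and reused for all shifts.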

    Block GMRES method with inexact breakdowns and deflated restarting

    International audience.

    How descriptive are GMRES convergence bounds?

    Eigenvalues with the eigenvector condition number, the field of values, and pseudospectra have all been suggested as the basis for convergence bounds for minimum residual Krylov subspace methods applied to non-normal coefficient matrices. This paper analyzes and compares these bounds, illustrating with six examples the success and failure of each one. Refined bounds based on eigenvalues and the field of values are suggested to handle low-dimensional non-normality. It is observed that pseudospectral bounds can capture multiple convergence stages. Unfortunately, computation of pseudospectra can be rather expensive. This motivates an adaptive technique for estimating GMRES convergence based on approximate pseudospectra taken from the Arnoldi process that is the basis for GMRES.
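    For reference, the three families of bounds under comparison have the following standard forms (constants and precise assumptions vary across the literature); here \(r_k\) is the GMRES residual at step \(k\) and the minimum is over polynomials \(p\) of degree at most \(k\) with \(p(0)=1\):
    \[
      \frac{\|r_k\|}{\|r_0\|} \;\le\; \kappa_2(V)\, \min_{p} \max_{\lambda \in \Lambda(A)} |p(\lambda)| \qquad (A = V \Lambda V^{-1}),
    \]
    \[
      \frac{\|r_k\|}{\|r_0\|} \;\le\; (1+\sqrt{2})\, \min_{p} \max_{z \in W(A)} |p(z)|,
    \]
    \[
      \frac{\|r_k\|}{\|r_0\|} \;\le\; \frac{L_\varepsilon}{2\pi\varepsilon}\, \min_{p} \max_{z \in \sigma_\varepsilon(A)} |p(z)|,
    \]
    where \(\Lambda(A)\), \(W(A)\), and \(\sigma_\varepsilon(A)\) denote the spectrum, the field of values, and the \(\varepsilon\)-pseudospectrum, \(\kappa_2(V)\) is the eigenvector condition number, and \(L_\varepsilon\) is the arc length of the pseudospectral boundary.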

    70 years of Krylov subspace methods: The journey continues

    Using computed examples for the Conjugate Gradient method and GMRES, we recall important building blocks in the understanding of Krylov subspace methods over the last 70 years. Each example consists of a description of the setup and the numerical observations, followed by an explanation of the observed phenomena, where we keep technical details to a minimum. Our goal is to show the mathematical beauty and hidden intricacies of the methods, and to point out some persistent misunderstandings as well as important open problems. We hope that this work initiates further investigations of Krylov subspace methods, which are efficient computational tools and exciting mathematical objects that are far from being fully understood.
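    A computed example of the kind the paper describes can be set up in a few lines; the following sketch (using SciPy, with illustrative matrices and sizes that are not taken from the paper) records residual histories for CG on a symmetric positive definite model problem and for full GMRES on a non-normal perturbation of it:

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import cg, gmres

        n = 200
        b = np.random.default_rng(0).standard_normal(n)

        # SPD model problem (1D Laplacian) for CG; record true residual norms per iteration.
        A_spd = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
        res_cg = []
        x, info = cg(A_spd, b, callback=lambda xk: res_cg.append(np.linalg.norm(b - A_spd @ xk)))

        # Non-normal variant (Laplacian plus a one-sided coupling) for full GMRES;
        # callback_type="pr_norm" reports a residual norm at every inner iteration.
        A_nonnormal = A_spd + diags([0.5], [1], shape=(n, n), format="csr")
        res_gmres = []
        x, info = gmres(A_nonnormal, b, restart=n, callback=res_gmres.append, callback_type="pr_norm")

        print("CG iterations:", len(res_cg), "  GMRES iterations:", len(res_gmres))

    Plotting res_cg and res_gmres against the iteration index gives the kind of convergence curves that such computed examples discuss.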

    Inner-outer Iterative Methods for Eigenvalue Problems - Convergence and Preconditioning

    Many methods for computing eigenvalues of a large sparse matrix involve shift-invert transformations which require the solution of a shifted linear system at each step. This thesis deals with shift-invert iterative techniques for solving eigenvalue problems where the arising linear systems are solved inexactly using a second iterative technique. This approach leads to an inner-outer type algorithm. We provide convergence results for the outer iterative eigenvalue computation as well as techniques for efficient inner solves. In particular, eigenvalue computations using inexact inverse iteration, the Jacobi-Davidson method without subspace expansion, and the shift-invert Arnoldi method as a subspace method are investigated in detail. A general convergence result for inexact inverse iteration for the non-Hermitian generalised eigenvalue problem is given, using only minimal assumptions. This convergence result is obtained in two different ways; on the one hand, we use an equivalence result between inexact inverse iteration applied to the generalised eigenproblem and modified Newton's method; on the other hand, a splitting method is used which generalises the idea of orthogonal decomposition. Both approaches also include an analysis of the convergence theory of a version of the inexact Jacobi-Davidson method, where equivalences between Newton's method, inverse iteration and the Jacobi-Davidson method are exploited. To improve the efficiency of the inner iterative solves, we introduce a new tuning strategy which can be applied to any standard preconditioner. We give a detailed analysis of this new preconditioning idea and show how the number of iterations for the inner iterative method, and hence the total number of iterations, can be reduced significantly by the application of this tuning strategy. The analysis of the tuned preconditioner is carried out for both Hermitian and non-Hermitian eigenproblems. We show how the preconditioner can be implemented efficiently and illustrate its performance using various numerical examples. An equivalence result between the preconditioned simplified Jacobi-Davidson method and inexact inverse iteration with the tuned preconditioner is given. Finally, we discuss the shift-invert Arnoldi method in both the standard and the restarted fashion. First, existing relaxation strategies for the outer iterative solves are extended to implicitly restarted Arnoldi's method. Second, we apply the idea of tuning the preconditioner to the inner iterative solve. As for inexact inverse iteration, the tuned preconditioner for inexact Arnoldi's method is shown to provide significant savings in the number of inner solves. The theory in this thesis is supported by many numerical examples.
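    As a rough illustration of the tuning idea (one common rank-one formulation; the thesis may state it differently): given a standard preconditioner \(P\) and the current normalized eigenvector approximation \(x_k\),
    \[
      \mathcal{P}_k \;=\; P + (A x_k - P x_k)\, x_k^{H}, \qquad \|x_k\|_2 = 1,
    \]
    so that \(\mathcal{P}_k x_k = A x_k\): the tuned preconditioner agrees with \(A\) in the direction of the current approximation, which makes the inner right-hand side close to an eigenvector of the preconditioned system and is what keeps the number of inner iterations roughly constant as the outer iteration converges.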

    An Ensemble-Based Projection Method and Its Numerical Investigation

    In many cases, partial differential equation (PDE) models involve a set of parameters whose values may vary over a wide range in application problems, such as optimization, control and uncertainty quantification. Performing multiple numerical simulations in large-scale settings often leads to tremendous demands on computational resources. Thus, the ensemble method has been developed for accelerating a sequence of numerical simulations. In this work, we first consider numerical solutions of the Navier-Stokes equations under different conditions and introduce the ensemble-based projection method to reduce the computational cost. In particular, we incorporate a sparse grad-div stabilization into the method as a nonzero penalty term in the discretization that does not strongly enforce mass conservation, and derive the long-time stability and error estimates. Numerical tests are presented to illustrate the theoretical results. A simple way to solve the linear system generated in the ensemble method is to use a direct solver. Compared with individual simulations of the same problems, the ensemble method is more efficient because only one linear system needs to be solved for the whole ensemble. However, for large-scale problems, iterative linear solvers have to be used. Therefore, in the second part of this work we investigate the numerical performance of the ensemble method with block iterative solvers for two typical evolution problems: the heat equation and the Navier-Stokes equations. Numerical results are provided to demonstrate the effectiveness and efficiency of the ensemble method when used together with block iterative solvers.
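    The computational structure being exploited is a single shared coefficient matrix with one right-hand side per ensemble member. A minimal sketch of the direct-solver variant (the matrix below is a generic placeholder, not the Navier-Stokes discretization from the paper): factor once, then reuse the factorization for every member.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import splu

        n, J = 1000, 20                       # unknowns per member and ensemble size (illustrative)
        A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # shared coefficient matrix
        B = np.random.default_rng(1).standard_normal((n, J))                  # one right-hand side per member

        lu = splu(A)        # one factorization serves the whole ensemble
        X = lu.solve(B)     # all J ensemble solutions from the same factors

    For problems too large to factor, the same block of right-hand sides is instead handed to a block iterative solver, which is the setting investigated in the second part of the work.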