
    Shift for nonsymmetric generalised eigenvalue problems


    Improving Efficiency of Rational Krylov Subspace Methods

    This thesis studies two classes of numerical linear algebra problems: approximating the product of a function of a matrix with a vector, and solving the linear eigenvalue problem $Av = \lambda Bv$ for a small number of eigenvalues. These problems are solved by rational Krylov subspace methods (RKSM). We present several improvements in two directions: pole selection and the use of inexact methods.

    In Chapter 3, a flexible extended Krylov subspace method ($\mathcal{F}$-EKSM) is considered for numerical approximation of the action of a matrix function $f(A)$ on a vector $b$, where the function $f$ is of Markov type. $\mathcal{F}$-EKSM has the same framework as the extended Krylov subspace method (EKSM), but replaces the zero pole in EKSM with a properly chosen fixed nonzero pole. For symmetric positive definite matrices, the optimal fixed pole is derived for $\mathcal{F}$-EKSM to achieve the lowest possible upper bound on the asymptotic convergence factor, which is lower than that of EKSM. The analysis is based on properties of Faber polynomials of $A$ and $(I - A/s)^{-1}$. For large sparse matrices that can be handled efficiently by LU factorizations, numerical experiments show that $\mathcal{F}$-EKSM and a variant of RKSM based on a small number of fixed poles outperform EKSM in both storage and runtime, and they usually have an advantage over adaptive RKSM in runtime.

    Chapter 4 concerns the theory and development of inexact RKSM for approximating the action of a matrix function $f(A)$ on a column vector $b$. At each step of RKSM, a shifted linear system must be solved to enlarge the subspace. For large-scale problems arising from discretizations of PDEs on 3D domains, such a linear system is usually solved approximately by an iterative method. The main question is how far the accuracy of these linear solves can be relaxed without negatively affecting the convergence of the approximation to $f(A)b$. Our insight into this issue comes from residual bounds on the rational Krylov subspace approximations to $f(A)b$, based on the decaying behavior of the entries in the first column of the matrix function of the block Rayleigh quotient of $A$ with respect to the rational Krylov subspace. The decay bounds on these entries, for both analytic functions and Markov functions, can be evaluated efficiently and accurately by appropriate quadrature rules. A heuristic based on these bounds is proposed to relax the tolerances of the linear solves arising at each step of RKSM. As the algorithm progresses toward convergence, the linear solves can be performed with increasingly lower accuracy and computational cost. Numerical experiments on large nonsymmetric matrices show the effectiveness of the tolerance relaxation strategy for the inexact linear solves of RKSM.

    In Chapter 5, inexact RKSM are studied for solving large-scale nonsymmetric eigenvalue problems. As in Chapter 4, each iteration (outer step) of RKSM requires the solution of a shifted linear system to enlarge the subspace, but solving these systems by direct methods is prohibitively expensive at this problem scale. Errors are introduced at each outer step if these linear systems are solved approximately by iterative methods (inner steps), and these errors accumulate in the rational Krylov subspace. We derive an upper bound on the error that can be introduced at each outer step while maintaining the same convergence as exact RKSM for approximating an invariant subspace. Since this bound is inversely proportional to the current eigenresidual norm of the desired invariant subspace, the tolerance of the iterative linear solves at each outer step can be relaxed as the outer iteration progresses. A restarted variant of the inexact RKSM is also proposed. Numerical experiments show the effectiveness of relaxing the inner tolerance to save computational cost.
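    The relaxation idea from Chapter 4 of this abstract can be pictured with a short sketch. The Python below is a minimal illustration under stated assumptions, not the thesis's algorithm: it takes $f = \exp(-\cdot)$ as the matrix function, a hand-picked list of real poles, SciPy 1.12 or later (for the rtol keyword of gmres), and a crude relative-change estimate of the outer error in place of the quadrature-evaluated decay bounds described above; the function name inexact_rksm_expAb is hypothetical.

        import numpy as np
        from scipy.linalg import expm
        from scipy.sparse import diags, identity
        from scipy.sparse.linalg import gmres

        def inexact_rksm_expAb(A, b, poles, max_rtol=1e-2, target=1e-10):
            # Approximate exp(-A) b from a rational Krylov subspace, solving each
            # shifted system inexactly by GMRES. The inner tolerance starts tight
            # and is relaxed as the (estimated) outer error shrinks.
            n = b.size
            V = np.zeros((n, len(poles) + 1))
            V[:, 0] = b / np.linalg.norm(b)
            I = identity(n, format='csr')
            f_old = np.zeros(n)
            err_est = 1.0  # crude stand-in for the outer error estimate
            for j, s in enumerate(poles):
                # Relaxation heuristic: the allowable inner tolerance grows as
                # the outer error estimate decreases (capped at max_rtol).
                rtol = min(max_rtol, target / err_est)
                w, _ = gmres((A - s * I).tocsr(), V[:, j], rtol=rtol)
                # Two Gram-Schmidt passes against the current orthonormal basis.
                for _ in range(2):
                    w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
                V[:, j + 1] = w / np.linalg.norm(w)
                # Projected approximation: ||b|| * V * f(V^T A V) * e_1.
                Vj = V[:, :j + 2]
                Am = Vj.T @ (A @ Vj)
                f_new = np.linalg.norm(b) * (Vj @ expm(-Am)[:, 0])
                err_est = np.linalg.norm(f_new - f_old) / np.linalg.norm(f_new)
                if err_est < target:
                    return f_new
                f_old = f_new
            return f_old

        # Example: 1D Laplacian (SPD), a few fixed negative poles, repeated.
        n = 400
        A = diags([2 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
                  [0, -1, 1], format='csr')
        y = inexact_rksm_expAb(A, np.ones(n), poles=[-0.5, -2.0, -8.0, -32.0] * 4)

    Repeating a small set of fixed poles, as in this toy, echoes the $\mathcal{F}$-EKSM and fixed-pole RKSM variants of Chapter 3: there, a single LU factorization per pole is reused across outer steps, which is where the reported storage and runtime savings come from; the GMRES calls here merely stand in for that inner solver.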

    Inner-outer Iterative Methods for Eigenvalue Problems - Convergence and Preconditioning

    Many methods for computing eigenvalues of a large sparse matrix involve shift-invert transformations, which require the solution of a shifted linear system at each step. This thesis deals with shift-invert iterative techniques for solving eigenvalue problems where the arising linear systems are solved inexactly using a second iterative technique. This approach leads to an inner-outer type algorithm. We provide convergence results for the outer iterative eigenvalue computation as well as techniques for efficient inner solves. In particular, eigenvalue computations using inexact inverse iteration, the Jacobi-Davidson method without subspace expansion, and the shift-invert Arnoldi method as a subspace method are investigated in detail.

    A general convergence result for inexact inverse iteration for the non-Hermitian generalised eigenvalue problem is given, using only minimal assumptions. This convergence result is obtained in two different ways: on the one hand, we use an equivalence result between inexact inverse iteration applied to the generalised eigenproblem and a modified Newton's method; on the other hand, a splitting method is used which generalises the idea of orthogonal decomposition. Both approaches also include a convergence analysis for a version of the inexact Jacobi-Davidson method, where equivalences between Newton's method, inverse iteration and the Jacobi-Davidson method are exploited.

    To improve the efficiency of the inner iterative solves we introduce a new tuning strategy, which can be applied to any standard preconditioner. We give a detailed analysis of this new preconditioning idea and show how the number of iterations for the inner iterative method, and hence the total number of iterations, can be reduced significantly by applying this tuning strategy. The analysis of the tuned preconditioner is carried out for both Hermitian and non-Hermitian eigenproblems. We show how the preconditioner can be implemented efficiently and illustrate its performance using various numerical examples. An equivalence result between the preconditioned simplified Jacobi-Davidson method and inexact inverse iteration with the tuned preconditioner is given.

    Finally, we discuss the shift-invert Arnoldi method, both in its standard and restarted forms. First, existing relaxation strategies for the inner iterative solves are extended to the implicitly restarted Arnoldi method. Second, we apply the idea of tuning the preconditioner to the inner iterative solve; as for inexact inverse iteration, the tuned preconditioner for the inexact Arnoldi method is shown to provide significant savings in the number of inner iterations. The theory in this thesis is supported by many numerical examples.
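    The tuning idea can be made concrete with a small sketch. The Python below is an illustrative toy under stated assumptions, not the thesis's implementation: a Jacobi (diagonal) preconditioner stands in for a general one, the tuning condition used here ($\mathbb{P}x = (A - \sigma I)x$, imposed by a rank-one update and applied through the Sherman-Morrison formula) is one common variant of the idea, the inner-tolerance rule is a simplified stand-in for the thesis's convergence theory, and SciPy 1.12 or later is assumed for the rtol keyword of gmres; the names tuned_prec and inexact_inverse_iteration are hypothetical.

        import numpy as np
        from scipy.sparse import diags, identity
        from scipy.sparse.linalg import LinearOperator, gmres

        def tuned_prec(S, x):
            # Rank-one "tuning" of the Jacobi preconditioner P = diag(S): the
            # tuned operator P + u x^T satisfies (P + u x^T) x = S x, and its
            # inverse is applied cheaply via the Sherman-Morrison formula.
            # x is assumed to have unit norm.
            d = S.diagonal()
            u = S @ x - d * x
            Pinv_u = u / d
            denom = 1.0 + x @ Pinv_u
            def apply_inv(r):
                Pinv_r = r / d
                return Pinv_r - Pinv_u * ((x @ Pinv_r) / denom)
            return LinearOperator(S.shape, matvec=apply_inv)

        def inexact_inverse_iteration(A, sigma, x0, outer_tol=1e-10, max_outer=50):
            # Inexact inverse iteration: each outer step solves (A - sigma*I) y = x
            # by preconditioned GMRES, with the inner tolerance tightened in
            # proportion to the current eigenresidual norm.
            S = (A - sigma * identity(A.shape[0], format='csr')).tocsr()
            x = x0 / np.linalg.norm(x0)
            for _ in range(max_outer):
                lam = x @ (A @ x)                       # Rayleigh quotient
                res = np.linalg.norm(A @ x - lam * x)   # eigenresidual norm
                if res < outer_tol:
                    break
                y, _ = gmres(S, x, M=tuned_prec(S, x), rtol=min(1e-1, 0.1 * res))
                x = y / np.linalg.norm(y)
            return lam, x

        # Example: smallest eigenpair of the 1D Laplacian (shift sigma = 0).
        n = 500
        A = diags([2 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
                  [0, -1, 1], format='csr')
        lam, x = inexact_inverse_iteration(
            A, 0.0, np.random.default_rng(0).standard_normal(n))

    Applying the tuned preconditioner costs one ordinary preconditioner solve plus a rank-one correction, so the per-application cost is essentially unchanged while, as the abstract notes, the number of inner iterations can drop significantly.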