On Inner Iterations in the Shift-Invert Residual Arnoldi Method and the Jacobi--Davidson Method
Using a new analysis approach, we establish a general convergence theory of
the Shift-Invert Residual Arnoldi (SIRA) method for computing a simple
eigenvalue nearest to a given target and the associated eigenvector.
In SIRA, the subspace expansion vector at each step is obtained by solving a
certain inner linear system. We prove that the inexact SIRA method mimics the
exact SIRA method well: the former needs almost the same number of outer
iterations to achieve convergence as the latter, provided that all the inner
linear systems are solved iteratively with {\em low} or {\em modest} accuracy
during the outer iterations. Based on this theory, we design practical stopping
criteria for the inner solves. Our analysis concerns a single subspace
expansion step, and the approach applies equally to the Jacobi--Davidson (JD)
method with a fixed target, for which a similar general convergence theory is
obtained.
Numerical experiments confirm our theory and demonstrate that the inexact SIRA
and JD methods are similarly effective and considerably superior to the inexact
shift-invert Arnoldi (SIA) method.
Comment: 20 pages, 8 figures
A FEAST SVDsolver based on Chebyshev--Jackson series for computing partial singular triplets of large matrices
The FEAST eigensolver is extended to the computation of the singular triplets
of a large matrix with the singular values in a given interval. The
resulting FEAST SVDsolver is subspace iteration applied to an approximate
spectral projector associated with the desired singular values in the given
interval; it constructs approximate left and right singular subspaces
corresponding to those singular values, onto which the matrix is projected to
obtain Ritz approximations. In contrast to the commonly used
contour-integral-based FEAST solver, we propose a robust alternative that
constructs approximate spectral projectors using the Chebyshev--Jackson
polynomial series; these projectors are symmetric positive semi-definite. We
prove the pointwise convergence of this series and give compact estimates for
its pointwise error relative to the step function that corresponds to the
exact spectral projector. We present error bounds for the approximate
spectral projector and reliable estimates for the number of desired singular
triplets, establish numerous convergence results on the resulting FEAST
SVDsolver, and propose practical strategies for selecting the series degree
and for reliably determining the subspace dimension. The solver and the
results carry over directly, or adapt readily, to real symmetric and complex
Hermitian eigenvalue problems. Numerical experiments illustrate that our
FEAST SVDsolver is at least competitive with the contour-integral-based FEAST
SVDsolver when the desired singular values are extreme, much more efficient
when they are interior, and also more robust than the latter.
Comment: 33 pages, 5 figures
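The series construction can be illustrated generically (a standard kernel-polynomial-method sketch, not necessarily the authors' exact coefficients): the indicator (step) function of an interval is expanded in Chebyshev polynomials, and Jackson damping keeps the partial sums between 0 and 1, which is what makes the resulting projector approximations positive semi-definite.

```python
import numpy as np

def jackson_cheb_step(a, b, degree, t):
    """Evaluate a Jackson-damped Chebyshev expansion of the indicator
    function of [a, b] at points t in [-1, 1]."""
    k = np.arange(degree + 1)
    # Chebyshev coefficients of the indicator of [a, b] (via t = cos(theta)).
    ta, tb = np.arccos(a), np.arccos(b)     # note ta > tb for a < b
    c = np.empty(degree + 1)
    c[0] = (ta - tb) / np.pi
    c[1:] = 2.0 * (np.sin(k[1:] * ta) - np.sin(k[1:] * tb)) / (k[1:] * np.pi)
    # Jackson damping factors (kernel polynomial method form); the damping
    # corresponds to a nonnegative kernel, keeping the sums in [0, 1].
    N = degree + 1
    g = ((N - k + 1) * np.cos(np.pi * k / (N + 1))
         + np.sin(np.pi * k / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)
    # p(t) = sum_k g_k c_k T_k(t), with T_k(cos(theta)) = cos(k*theta).
    theta = np.arccos(np.clip(t, -1.0, 1.0))
    return np.cos(np.outer(k, theta)).T @ (g * c)

# Degree-80 filter for [0.2, 0.8]: close to 1 inside, close to 0 outside.
t = np.linspace(-0.99, 0.99, 201)
p = jackson_cheb_step(0.2, 0.8, 80, t)
```

Applying such a polynomial filter to a symmetric matrix yields the approximate spectral projector that the subspace iteration works with.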
Inner-outer Iterative Methods for Eigenvalue Problems - Convergence and Preconditioning
Many methods for computing eigenvalues of a large sparse matrix involve shift-invert transformations, which require the solution of a shifted linear system at each step. This thesis deals with shift-invert iterative techniques for solving eigenvalue problems where the arising linear systems are solved inexactly by a second, inner iterative method, leading to an inner-outer type algorithm. We provide convergence results for the outer iterative eigenvalue computation as well as techniques for efficient inner solves. In particular, eigenvalue computations using inexact inverse iteration, the Jacobi-Davidson method without subspace expansion, and the shift-invert Arnoldi method as a subspace method are investigated in detail.

A general convergence result for inexact inverse iteration for the non-Hermitian generalised eigenvalue problem is given, using only minimal assumptions. This result is obtained in two different ways: on the one hand, we use an equivalence between inexact inverse iteration applied to the generalised eigenproblem and a modified Newton's method; on the other hand, a splitting method is used which generalises the idea of orthogonal decomposition. Both approaches also yield a convergence theory for a version of the inexact Jacobi-Davidson method, where equivalences between Newton's method, inverse iteration and the Jacobi-Davidson method are exploited.

To improve the efficiency of the inner iterative solves, we introduce a new tuning strategy which can be applied to any standard preconditioner. We give a detailed analysis of this new preconditioning idea and show how the number of inner iterations, and hence the total number of iterations, can be reduced significantly by applying this tuning strategy. The analysis of the tuned preconditioner is carried out for both Hermitian and non-Hermitian eigenproblems.
We show how the preconditioner can be implemented efficiently and illustrate its performance on various numerical examples. An equivalence result between the preconditioned simplified Jacobi-Davidson method and inexact inverse iteration with the tuned preconditioner is given. Finally, we discuss the shift-invert Arnoldi method in both its standard and restarted forms. First, existing relaxation strategies for the outer iterative solves are extended to the implicitly restarted Arnoldi method. Second, we apply the idea of tuning the preconditioner to the inner iterative solve. As for inexact inverse iteration, the tuned preconditioner for the inexact Arnoldi method is shown to provide significant savings in the number of inner iterations. The theory in this thesis is supported by many numerical examples.
EThOS - Electronic Theses Online Service, United Kingdom
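The tuning idea admits a compact sketch: a standard preconditioner P is modified by a rank-one term so that the tuned version acts like A on the current eigenvector approximation x, i.e. P_T x = A x, and its inverse action follows from the Sherman-Morrison formula. A minimal dense illustration (the toy matrix and the diagonal preconditioner are made up, and this is the general tuning mechanism rather than the thesis's full algorithm):

```python
import numpy as np

# Rank-one "tuning" of a preconditioner P so that the tuned P_T satisfies
#     P_T x = A x,   with   P_T = P + (A x - P x) x^T   (x normalized).
rng = np.random.default_rng(1)
n = 100
A = np.diag(np.arange(1.0, n + 1)) + 0.01 * rng.standard_normal((n, n))
P = np.diag(np.diag(A))                  # toy preconditioner: diagonal of A

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
u = A @ x - P @ x                        # rank-one correction direction

def apply_tuned_inv(v):
    """Solve P_T w = v with P_T = P + u x^T via the Sherman-Morrison formula."""
    Pinv_v = v / np.diag(P)              # P is diagonal here, so P^{-1} is cheap
    Pinv_u = u / np.diag(P)
    return Pinv_v - Pinv_u * (x @ Pinv_v) / (1.0 + x @ Pinv_u)
```

By construction P_T x = P x + u = A x, so applying the tuned inverse to A x returns x exactly; this is the property that accelerates the inner solves.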
Improving Efficiency of Rational Krylov Subspace Methods
This thesis studies two classes of numerical linear algebra problems: approximating the product of a matrix function with a vector, and solving the linear eigenvalue problem for a small number of eigenvalues. Both are treated with rational Krylov subspace methods (RKSM). We present several improvements in two directions: pole selection and the use of inexact solves.
In Chapter 3, a flexible extended Krylov subspace method (F-EKSM) is considered for the numerical approximation of the action of a Markov-type matrix function on a vector. F-EKSM has the same framework as the extended Krylov subspace method (EKSM), but replaces the zero pole in EKSM with a properly chosen fixed nonzero pole. For symmetric positive definite matrices, the optimal fixed pole is derived for F-EKSM to achieve the lowest possible upper bound on the asymptotic convergence factor, which is lower than that of EKSM. The analysis is based on properties of Faber polynomials. For large sparse matrices that can be handled efficiently by LU factorizations, numerical experiments show that F-EKSM and a variant of RKSM based on a small number of fixed poles outperform EKSM in both storage and runtime, and usually have an advantage over adaptive RKSM in runtime.
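The fixed-pole idea can be illustrated generically: an extended-Krylov-type basis mixes multiplications with A and solves with (A - sigma*I) for one fixed nonzero pole sigma, and f(A)b is then approximated by Galerkin projection onto that basis. A toy sketch for the Markov function f(t) = t^{-1/2} on an SPD diagonal matrix; the pole value below is illustrative, not the optimal one derived in the thesis:

```python
import numpy as np

n = 300
d = np.linspace(1.0, 100.0, n)          # SPD spectrum in [1, 100]
A = np.diag(d)
rng = np.random.default_rng(2)
b = rng.standard_normal(n)

sigma = -10.0                            # fixed nonzero pole on the negative axis
m = 30                                   # number of basis vectors
V = np.zeros((n, m))
V[:, 0] = b / np.linalg.norm(b)
for j in range(1, m):
    # Alternate a multiplication with A and a solve with (A - sigma*I).
    if j % 2:
        w = A @ V[:, j - 1]
    else:
        w = np.linalg.solve(A - sigma * np.eye(n), V[:, j - 1])
    w -= V[:, :j] @ (V[:, :j].T @ w)     # Gram-Schmidt
    w -= V[:, :j] @ (V[:, :j].T @ w)     # reorthogonalize
    V[:, j] = w / np.linalg.norm(w)

# Projected approximation:  f(A) b  ~  V f(V^T A V) V^T b  for f(t) = t^{-1/2}.
H = V.T @ A @ V
lam, U = np.linalg.eigh(H)
fAb_approx = V @ (U @ ((lam ** -0.5) * (U.T @ (V.T @ b))))
fAb_exact = (d ** -0.5) * b
rel_err = np.linalg.norm(fAb_approx - fAb_exact) / np.linalg.norm(fAb_exact)
```

With only 30 basis vectors the projected approximation is already accurate to several digits, which is the storage/runtime advantage the chapter quantifies.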
Chapter 4 concerns the theory and development of inexact RKSM for approximating the action of a matrix function on a column vector. At each step of RKSM, a shifted linear system of equations must be solved to enlarge the subspace. For large-scale problems, such as those arising from discretizations of PDEs on 3D domains, this system is usually solved approximately by an iterative method. The main question is how far the accuracy of these linear solves can be relaxed without negatively affecting the convergence of the matrix function approximation. Our insight into this issue comes from residual bounds on the rational Krylov subspace approximations, based on the decaying behavior of the entries in the first column of the matrix function of the block Rayleigh quotient with respect to the rational Krylov subspaces. The decay bounds on these entries, for both analytic functions and Markov functions, can be evaluated efficiently and accurately by appropriate quadrature rules. A heuristic based on these bounds is proposed to relax the tolerances of the linear solves arising at each step of RKSM. As the algorithm progresses toward convergence, the linear solves can be performed with increasingly lower accuracy and computational cost. Numerical experiments with large nonsymmetric matrices show the effectiveness of this tolerance relaxation strategy.
In Chapter 5, inexact RKSM are studied for solving large-scale nonsymmetric eigenvalue problems. As in Chapter 4, each iteration (outer step) of RKSM requires the solution of a shifted linear system to enlarge the subspace, but solving these systems by direct methods is prohibitive at this problem scale. Errors are introduced at each outer step when the systems are solved approximately by an iterative method (inner step), and these errors accumulate in the rational Krylov subspace. We derive an upper bound on the error that can be tolerated at each outer step while maintaining the same convergence as exact RKSM for approximating an invariant subspace. Since this bound is inversely proportional to the current eigenresidual norm of the desired invariant subspace, the tolerance of the iterative linear solves can be relaxed as the outer iteration progresses. A restarted variant of the inexact RKSM is also proposed. Numerical experiments show the effectiveness of relaxing the inner tolerance to save computational cost.
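The relaxation mechanism common to Chapters 4 and 5 is simple to state: the inner solve tolerance at each outer step is allowed to grow as the outer residual shrinks. An illustrative schedule only; the constant and the clipping bounds below are made-up choices, not the bound derived in the thesis:

```python
import numpy as np

def inner_tolerance(eigres_norm, target_tol, c=0.1, tol_min=1e-12, tol_max=1e-2):
    """Tolerance for the inner linear solve at the current outer step: grows
    inversely with the current eigenresidual norm (clipped to a sane range)."""
    return float(np.clip(c * target_tol / eigres_norm, tol_min, tol_max))

# As the outer iteration converges, the inner solves may be done ever more loosely.
residuals = [1e0, 1e-2, 1e-4, 1e-6]
tols = [inner_tolerance(r, target_tol=1e-8) for r in residuals]
# tols increase monotonically from 1e-9 toward the cap of 1e-2
```

The inverse proportionality mirrors the bound in the chapter: early outer steps need accurate solves, while late ones tolerate large inner errors.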
Computing Singular Values of Large Matrices with an Inverse-Free Preconditioned Krylov Subspace Method
We present an efficient algorithm for computing a few extreme singular values of a large sparse m×n matrix C. Our algorithm is based on reformulating the singular value problem as an eigenvalue problem for C^T C. To address the clustering of the singular values, we develop an inverse-free preconditioned Krylov subspace method to accelerate convergence. We consider preconditioning based on robust incomplete factorizations, and we discuss various implementation issues. Extensive numerical tests demonstrate the efficiency and robustness of the new algorithm.
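The reformulation can be sketched with standard tools: the extreme singular values of C are the square roots of the extreme eigenvalues of C^T C, which can be computed without forming C^T C explicitly. Here SciPy's lobpcg stands in as an inverse-free, preconditionable eigensolver; this is a generic sketch with a made-up test matrix, not the authors' algorithm:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lobpcg

rng = np.random.default_rng(3)
m, n = 500, 80

# Test matrix with known, mildly separated singular values (geometric spacing).
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.geomspace(1e-2, 10.0, n)
C = (U * s) @ V.T

# C^T C as an operator: never form the n-by-n cross-product matrix explicitly.
CtC = LinearOperator((n, n), matvec=lambda v: C.T @ (C @ v), dtype=np.float64)

# Three largest eigenvalues of C^T C -> three largest singular values of C.
X0 = rng.standard_normal((n, 3))
vals, vecs = lobpcg(CtC, X0, largest=True, tol=1e-9, maxiter=1000)
sigma_top = np.sqrt(np.sort(vals)[::-1])
```

A preconditioner (e.g. from an incomplete factorization, as the abstract suggests) could be passed to lobpcg via its `M` argument to handle clustered singular values.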