
    Theoretical and Computable Optimal Subspace Expansions for Matrix Eigenvalue Problems

    Consider the optimal subspace expansion problem for the matrix eigenvalue problem $Ax=\lambda x$: {\em which vector $w_{opt}$ in the current subspace $\mathcal{V}$, after being multiplied by $A$, provides an optimal subspace expansion for approximating a desired eigenvector $x$, in the sense that $x$ has the smallest angle with the expanded subspace $\mathcal{V}_w=\mathcal{V}+{\rm span}\{Aw\}$, i.e., $w_{opt}=\arg\max_{w\in\mathcal{V}}\cos\angle(\mathcal{V}_w,x)$}? This problem is important, as many iterative methods construct nested subspaces that successively expand $\mathcal{V}$ to $\mathcal{V}_w$. Ye ({\em Linear Algebra Appl.}, 428 (2008), pp. 911--918) derives an expression for $w_{opt}$ for general $A$, but it cannot be exploited to construct a computable (nearly) optimally expanded subspace. He turns to deriving a maximization characterization of $\cos\angle(\mathcal{V}_w,x)$ for a {\em given} $w\in\mathcal{V}$ when $A$ is Hermitian, but his proof and analysis do not extend to the non-Hermitian case. We generalize Ye's maximization characterization to the general case and find its maximizer. Our main contributions are explicit expressions for $w_{opt}$, $(I-P_V)Aw_{opt}$, and the optimally expanded subspace $\mathcal{V}_{w_{opt}}$ for general $A$, where $P_V$ is the orthogonal projector onto $\mathcal{V}$. These results can be fully exploited to obtain computable optimally expanded subspaces $\mathcal{V}_{\widetilde{w}_{opt}}$ within the framework of the standard, harmonic, refined, and refined harmonic Rayleigh--Ritz methods.
    Comment: 20 pages, 3 figures
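    The expansion objective is easy to probe numerically. Below is a minimal NumPy sketch, not from the paper: the random test matrix, the sample size, and the brute-force sampled search standing in for the closed-form $w_{opt}$ are all illustrative assumptions. It measures $\cos\angle(\mathcal{V}_w,x)$ directly for candidate $w\in\mathcal{V}$:

        import numpy as np

        # Illustrative sketch only, not the paper's method.
        rng = np.random.default_rng(0)
        n, k = 50, 5
        A = rng.standard_normal((n, n))

        # Desired eigenvector x: here, the one for the eigenvalue of largest modulus.
        evals, evecs = np.linalg.eig(A)
        x = evecs[:, np.argmax(np.abs(evals))]

        # Orthonormal basis of the current subspace V (complex dtype for generality).
        V, _ = np.linalg.qr(rng.standard_normal((n, k)) + 0j)

        def cos_angle(Q, x):
            # cos of the angle between span(Q) (orthonormal columns) and x.
            return np.linalg.norm(Q.conj().T @ x) / np.linalg.norm(x)

        def expanded_cos(c):
            # cos angle(V_w, x) for w = V c and V_w = V + span{A w}.
            u = A @ (V @ c)
            u -= V @ (V.conj().T @ u)          # (I - P_V) A w
            nu = np.linalg.norm(u)
            if nu < 1e-14:                     # A w already lies in V: no expansion
                return cos_angle(V, x)
            return cos_angle(np.hstack([V, (u / nu)[:, None]]), x)

        # Crude sampled search over w in V, standing in for the closed-form w_opt.
        samples = rng.standard_normal((500, k)) + 0j
        print("before expansion:", cos_angle(V, x))
        print("best sampled w  :", max(expanded_cos(c) for c in samples))

    The sampled maximum is only a lower bound on $\cos\angle(\mathcal{V}_{w_{opt}},x)$; the paper's contribution is the exact maximizer and its computable approximations.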

    Harmonic and Refined Harmonic Shift-Invert Residual Arnoldi and Jacobi--Davidson Methods for Interior Eigenvalue Problems

    This paper concerns the harmonic shift-invert residual Arnoldi (HSIRA) and harmonic Jacobi--Davidson (HJD) methods, as well as their refined variants RHSIRA and RHJD, for the interior eigenvalue problem. Each method solves an inner linear system at every step to expand the subspace. When the linear systems are solved only approximately, we are led to the inexact methods. We prove that the inexact HSIRA, RHSIRA, HJD and RHJD methods mimic their exact counterparts well when the inner linear systems are solved with only low or modest accuracy. We show that (i) the exact HSIRA and HJD expand subspaces better than the exact SIRA and JD, and (ii) the exact RHSIRA and RHJD expand subspaces better than the exact HSIRA and HJD. Based on the theory, we design stopping criteria for the inner solves. To be practical, we present restarted HSIRA, HJD, RHSIRA and RHJD algorithms. Numerical results demonstrate that these algorithms are much more efficient than the restarted standard SIRA and JD algorithms, and that the refined harmonic algorithms substantially outperform the harmonic ones.
    Comment: 15 pages, 4 figures
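    For orientation, the following is a minimal sketch of the textbook harmonic and refined Rayleigh--Ritz extractions that the HSIRA/HJD family builds on. It is not the paper's HSIRA/RHSIRA algorithms; the function name, test matrix, subspace dimension and target are illustrative assumptions:

        import numpy as np
        import scipy.linalg as sla

        def harmonic_refined_extraction(A, V, sigma):
            # Harmonic Rayleigh-Ritz w.r.t. target sigma: with B = (A - sigma I) V,
            # impose B^H (A V c - theta V c) = 0, i.e.
            #   (B^H B) c = (theta - sigma) (B^H V) c.
            B = A @ V - sigma * V
            mu, C = sla.eig(B.conj().T @ B, B.conj().T @ V)
            j = np.argmin(np.abs(mu))              # harmonic Ritz value nearest sigma
            theta = sigma + mu[j]
            z = V @ C[:, j]
            z /= np.linalg.norm(z)                 # harmonic Ritz vector
            # Refined variant: the unit c minimizing ||(A - theta I) V c||, via the SVD.
            _, _, Wh = np.linalg.svd(A @ V - theta * V, full_matrices=False)
            zr = V @ Wh.conj().T[:, -1]            # refined (harmonic) Ritz vector
            return theta, z, zr

        # Toy usage: interior target sigma for a random nonsymmetric matrix.
        rng = np.random.default_rng(2)
        n = 80
        A = rng.standard_normal((n, n))
        V, _ = np.linalg.qr(rng.standard_normal((n, 10)) + 0j)
        theta, z, zr = harmonic_refined_extraction(A, V, sigma=0.3)
        print(theta, np.linalg.norm(A @ z - theta * z), np.linalg.norm(A @ zr - theta * zr))

    The refined vector minimizes the residual norm over the subspace for the given $\theta$, which is why the refined variants can only improve on the harmonic extraction from the same subspace.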

    On Inner Iterations in the Shift-Invert Residual Arnoldi Method and the Jacobi--Davidson Method

    Using a new analysis approach, we establish a general convergence theory of the shift-invert residual Arnoldi (SIRA) method for computing a simple eigenvalue nearest to a given target $\sigma$ and the associated eigenvector. In SIRA, the subspace expansion vector at each step is obtained by solving a certain inner linear system. We prove that the inexact SIRA method mimics the exact SIRA well; that is, the former uses almost the same number of outer iterations to achieve convergence as the latter does, provided that all the inner linear systems are solved iteratively with {\em low} or {\em modest} accuracy during the outer iterations. Based on the theory, we design practical stopping criteria for the inner solves. Our analysis concerns a single subspace expansion step, and the approach applies to the Jacobi--Davidson (JD) method with the fixed target $\sigma$ as well; a similar general convergence theory is obtained for it. Numerical experiments confirm our theory and demonstrate that the inexact SIRA and JD are similarly effective and considerably superior to the inexact shift-invert Arnoldi (SIA) method.
    Comment: 20 pages, 8 figures
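    A minimal sketch of one inexact SIRA-style expansion step follows, assuming the standard SIRA inner system $(A-\sigma I)u=r$ with $r$ the current eigenresidual. The inexact inner solve is mimicked here by perturbing an exact solve to relative accuracy $\tau$, and plain Rayleigh--Ritz extraction stands in for the paper's full algorithm:

        import numpy as np

        def inexact_sira_step(A, sigma, V, tau, rng):
            # Rayleigh-Ritz on V: take the Ritz pair of V^H A V nearest the target.
            H = V.conj().T @ (A @ V)
            th, C = np.linalg.eig(H)
            j = np.argmin(np.abs(th - sigma))
            theta, z = th[j], V @ C[:, j]
            z /= np.linalg.norm(z)
            r = A @ z - theta * z                  # eigenresidual
            # Inner solve (A - sigma I) u = r, done only to relative accuracy tau:
            # mimicked by perturbing the exact solution with noise of relative size tau.
            u = np.linalg.solve(A - sigma * np.eye(A.shape[0]), r)
            e = rng.standard_normal(u.shape) + 1j * rng.standard_normal(u.shape)
            u += tau * np.linalg.norm(u) / np.linalg.norm(e) * e
            u -= V @ (V.conj().T @ u)              # orthogonalize against V
            u /= np.linalg.norm(u)
            return np.hstack([V, u[:, None]]), theta, np.linalg.norm(r)

        # Toy driver; tau = 1e-3 plays the role of a "modest" inner accuracy.
        rng = np.random.default_rng(3)
        n, sigma = 100, 0.5
        A = rng.standard_normal((n, n))
        V, _ = np.linalg.qr(rng.standard_normal((n, 2)) + 0j)
        for it in range(40):
            V, theta, res = inexact_sira_step(A, sigma, V, tau=1e-3, rng=rng)
            if res < 1e-10 * np.linalg.norm(A):
                break
        print(it, theta, res)

    In the spirit of the abstract's claim, rerunning with $\tau$ several orders of magnitude smaller should change the outer iteration count little, which is what makes the inexact method attractive.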