
    Implicitly Restarted Generalized Second-order Arnoldi Type Algorithms for the Quadratic Eigenvalue Problem

    We investigate the generalized second-order Arnoldi (GSOAR) method, a generalization of the SOAR method proposed by Bai and Su [SIAM J. Matrix Anal. Appl., 26 (2005): 640--659], and the refined GSOAR (RGSOAR) method for the quadratic eigenvalue problem (QEP). Both methods use the GSOAR procedure to generate an orthonormal basis of a given generalized second-order Krylov subspace; with such a basis they project the QEP onto the subspace and compute the Ritz pairs and the refined Ritz pairs, respectively. We develop implicitly restarted GSOAR and RGSOAR algorithms, for which we propose certain exact and refined shifts for use within the two algorithms. Numerical experiments on real-world problems illustrate the efficiency of the restarted algorithms and the superiority of the restarted RGSOAR over the restarted GSOAR. The experiments also demonstrate that both IGSOAR and IRGSOAR generally perform much better than the implicitly restarted Arnoldi method applied to the corresponding linearizations, in terms of both accuracy and computational efficiency.
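
    As a hedged illustration of the projection step described above (this is not the authors' GSOAR/RGSOAR code), the following NumPy/SciPy sketch projects a QEP defined by matrices M, C, K onto a subspace with orthonormal basis Q and extracts Ritz pairs by linearizing the small projected QEP; the function name and interface are assumptions for illustration.

```python
# A minimal sketch, assuming the QEP (lam^2*M + lam*C + K) x = 0 and an
# orthonormal basis Q of the projection subspace. It forms the projected QEP,
# linearizes it, and lifts the Ritz vectors back to the full space.
import numpy as np
import scipy.linalg as sla

def qep_ritz_pairs(M, C, K, Q):
    """Rayleigh-Ritz extraction for the QEP from the subspace spanned by Q."""
    Mk, Ck, Kk = (Q.T @ X @ Q for X in (M, C, K))   # k-by-k projected matrices
    k = Q.shape[1]
    # Companion linearization of the projected QEP:
    #   [-Ck -Kk] [lam*y]       [Mk 0] [lam*y]
    #   [ I   0 ] [  y  ] = lam [0  I] [  y  ]
    A = np.block([[-Ck, -Kk], [np.eye(k), np.zeros((k, k))]])
    B = np.block([[Mk, np.zeros((k, k))], [np.zeros((k, k)), np.eye(k)]])
    lams, Y = sla.eig(A, B)                          # Ritz values of the QEP
    X = Q @ Y[k:, :]                                 # lift Ritz vectors to R^n
    return lams, X / np.linalg.norm(X, axis=0)
```

    The refined variant would, instead of taking the eigenvector block directly, choose for each Ritz value the unit vector in the subspace that minimizes the corresponding QEP residual norm.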

    Augmented Block Householder Arnoldi Method

    Computing the eigenvalues and eigenvectors of a large sparse nonsymmetric matrix arises in many applications and can be a very computationally challenging problem. In this paper we propose the Augmented Block Householder Arnoldi (ABHA) method, which combines the advantages of a block routine with those of an augmented Krylov routine. A public-domain MATLAB code, ahbeigs, has been developed, and numerical experiments indicate that it is competitive with other publicly available codes.
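
    To make the block Krylov ingredient concrete, here is a minimal sketch (unrelated to the ahbeigs code itself) of a plain block Arnoldi iteration that orthonormalizes each new block with NumPy's Householder-based QR; the function name and interface are assumptions for illustration, and an augmented method would restart this process with a basis enriched by approximate eigenvectors.

```python
# A minimal block Arnoldi sketch, assuming A is an n-by-n array (or operator
# supporting @) and V1 an n-by-b starting block. Not the ABHA/ahbeigs code.
import numpy as np

def block_arnoldi(A, V1, m):
    """Build an orthonormal basis of the block Krylov subspace K_m(A, V1)."""
    n, b = V1.shape
    V, _ = np.linalg.qr(V1)                 # Householder QR of the start block
    H = np.zeros((b * (m + 1), b * m))      # block Hessenberg matrix
    for j in range(m):
        W = A @ V[:, -b:]                   # multiply the newest block by A
        Hj = V.T @ W                        # block Gram-Schmidt coefficients
        W -= V @ Hj
        Hj2 = V.T @ W                       # one reorthogonalization pass
        W -= V @ Hj2
        H[:V.shape[1], j * b:(j + 1) * b] = Hj + Hj2
        Q, R = np.linalg.qr(W)              # Householder QR of the new block
        H[V.shape[1]:V.shape[1] + b, j * b:(j + 1) * b] = R
        V = np.hstack([V, Q])
    return V, H
```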

    On Inner Iterations in the Shift-Invert Residual Arnoldi Method and the Jacobi--Davidson Method

    Using a new analysis approach, we establish a general convergence theory of the shift-invert residual Arnoldi (SIRA) method for computing a simple eigenvalue nearest to a given target σ and the associated eigenvector. In SIRA, the subspace expansion vector at each step is obtained by solving a certain inner linear system. We prove that the inexact SIRA method mimics the exact SIRA method well: the former needs almost the same number of outer iterations to achieve convergence as the latter, provided all the inner linear systems are solved iteratively with only low or modest accuracy during the outer iterations. Based on this theory, we design practical stopping criteria for the inner solves. Our analysis concerns a single subspace expansion step, and the approach applies equally to the Jacobi--Davidson (JD) method with the fixed target σ, for which a similar general convergence theory is obtained. Numerical experiments confirm the theory and demonstrate that the inexact SIRA and JD methods are similarly effective and considerably superior to the inexact shift-invert Arnoldi (SIA) method.
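
    The following sketch illustrates one SIRA-style expansion step under the assumptions stated in its comments; it is not the authors' implementation, and for simplicity it solves the inner system directly, whereas the paper's point is that a low- or modest-accuracy iterative inner solve would serve the same purpose.

```python
# A minimal SIRA-style expansion sketch, assuming A is a square SciPy sparse
# matrix, V has orthonormal columns, and sigma is the fixed target.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def sira_expand(A, V, sigma):
    """One shift-invert residual Arnoldi expansion of the basis V."""
    Ak = V.T @ (A @ V)                        # projected (Rayleigh-Ritz) matrix
    thetas, Y = np.linalg.eig(Ak)
    i = np.argmin(np.abs(thetas - sigma))     # Ritz value nearest the target
    theta, y = thetas[i], Y[:, i]
    x = V @ y                                 # Ritz vector
    r = A @ x - theta * x                     # eigenresidual
    shifted = sp.csc_matrix(A - sigma * sp.identity(A.shape[0]))
    u = spla.spsolve(shifted, r)              # inner solve (exact here; the
                                              # paper allows modest accuracy)
    u = u - V @ (V.T @ u)                     # orthogonalize against V
    u = u - V @ (V.T @ u)                     # reorthogonalize for safety
    return np.hstack([V, (u / np.linalg.norm(u))[:, None]])
```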