On the Convergence of Ritz Pairs and Refined Ritz Vectors for Quadratic Eigenvalue Problems
For a given subspace, the Rayleigh-Ritz method projects the large quadratic
eigenvalue problem (QEP) onto it and produces a small dense QEP. Similar
to the Rayleigh-Ritz method for the linear eigenvalue problem, the
Rayleigh-Ritz method defines the Ritz values and the Ritz vectors of the QEP
with respect to the projection subspace. We analyze the convergence of the
method when the angle between the subspace and the desired eigenvector
converges to zero. We prove that there is a Ritz value that converges to the
desired eigenvalue unconditionally, but the Ritz vector converges only
conditionally and may fail to converge. To remedy this possible non-convergence of
the Ritz vector, we propose a refined Ritz vector that is mathematically
different from the Ritz vector and is proved to converge unconditionally. We
construct examples to illustrate our theory.
Comment: 20 pages
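A small numerical sketch makes the contrast between the Ritz vector and the refined Ritz vector concrete. The snippet below is not the paper's construction; all matrices and sizes are illustrative stand-ins. It projects a QEP onto an arbitrary subspace by the Rayleigh-Ritz method, solves the small QEP via a companion linearization, and then forms the refined Ritz vector as the residual-norm minimizer over the subspace, i.e. the smallest right singular vector of (theta^2 M + theta C + K)V.

```python
import numpy as np
import scipy.linalg as sla

rng = np.random.default_rng(0)
n, m = 50, 6                     # illustrative problem and subspace dimensions

# Hypothetical QEP (lambda^2 M + lambda C + K) x = 0 with random coefficients
M = np.eye(n)
C = 0.1 * rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

# Orthonormal basis V of an (arbitrary) projection subspace
V, _ = np.linalg.qr(rng.standard_normal((n, m)))

# Rayleigh-Ritz: project the large QEP onto span(V) -> small dense QEP
Mh, Ch, Kh = V.T @ M @ V, V.T @ C @ V, V.T @ K @ V

# Solve the small QEP via the companion linearization
#   [-Ch -Kh] [y]         [Mh 0] [y]
#   [ I   0 ] [z] = theta [0  I] [z],   with y = theta z
A1 = np.block([[-Ch, -Kh], [np.eye(m), np.zeros((m, m))]])
B1 = np.block([[Mh, np.zeros((m, m))], [np.zeros((m, m)), np.eye(m)]])
theta, Y = sla.eig(A1, B1)       # 2m Ritz values

i = np.argmin(np.abs(theta))     # pick some Ritz value theta_i
th = theta[i]
ritz = V @ Y[m:, i]              # Ritz vector: V times the small eigenvector z
ritz = ritz / np.linalg.norm(ritz)

# Refined Ritz vector: minimize ||(th^2 M + th C + K) V z||_2 over unit z,
# i.e. the smallest right singular vector of (th^2 M + th C + K) V
Qth = th**2 * M + th * C + K
_, _, Vh = np.linalg.svd(Qth @ V)
refined = V @ Vh[-1].conj()      # unit norm, since V has orthonormal columns
```

By construction the refined vector's residual norm is never larger than the Ritz vector's, which is the sense in which refinement remedies the possible non-convergence of the Ritz vector.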
Jacobi-Davidson methods for polynomial two-parameter eigenvalue problems
We propose Jacobi-Davidson type methods for polynomial two-parameter eigenvalue problems (PMEP). Such problems can be linearized as singular two-parameter eigenvalue problems, whose matrices are of dimension k(k+1)n/2, where k is the degree of the polynomial and n is the size of the matrix coefficients in the PMEP. When k^2n is relatively small, the problem can be solved numerically by computing the common regular part of the related pair of singular pencils. For large k^2n, computing all solutions is not feasible and iterative methods are required. When k is large, we propose to linearize the problem first and then apply Jacobi-Davidson to the obtained singular two-parameter eigenvalue problem. The resulting method may, for instance, be used for computing zeros of a system of scalar bivariate polynomials close to a given target. On the other hand, when k is small, we can apply a Jacobi-Davidson type approach directly to the original matrices: these are projected onto a low-dimensional subspace, and the projected polynomial two-parameter eigenvalue problems are solved by a linearization.
Keywords: polynomial two-parameter eigenvalue problem (PMEP), quadratic two-parameter eigenvalue problem (QMEP), Jacobi-Davidson, correction equation, singular generalized eigenvalue problem, bivariate polynomial equations, determinantal representation, delay differential equations (DDEs), critical delays
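The dimension formula in the abstract is easy to make concrete. The helpers below (hypothetical names, not from the paper) evaluate the size k(k+1)n/2 of the linearized singular two-parameter pencils, together with the quantity k^2n that the abstract uses to separate the directly solvable regime from the one requiring iterative methods.

```python
def linearization_size(k: int, n: int) -> int:
    """Dimension k(k+1)n/2 of the singular two-parameter eigenvalue
    problems obtained by linearizing a degree-k PMEP whose matrix
    coefficients are n x n."""
    return k * (k + 1) * n // 2

def feasibility_measure(k: int, n: int) -> int:
    """The quantity k^2 n used to separate the small (directly solvable)
    regime from the large (iterative) regime."""
    return k * k * n

# A quadratic problem (k = 2) with 100 x 100 coefficients gives
# 300-dimensional pencils and a feasibility measure of 400.
print(linearization_size(2, 100), feasibility_measure(2, 100))
```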
Preconditioned Locally Harmonic Residual Method for Computing Interior Eigenpairs of Certain Classes of Hermitian Matrices
We propose a Preconditioned Locally Harmonic Residual (PLHR) method for
computing several interior eigenpairs of a generalized Hermitian eigenvalue
problem, without traditional spectral transformations, matrix factorizations,
or inversions. PLHR is based on a short-term recurrence and is easily extended
to a block form that computes several eigenpairs simultaneously. PLHR can take advantage of
Hermitian positive definite preconditioning, e.g., based on an approximate
inverse of an absolute value of a shifted matrix, introduced in [SISC, 35
(2013), pp. A696-A718]. Our numerical experiments demonstrate that PLHR is
efficient and robust for certain classes of large-scale interior eigenvalue
problems, involving Laplacian and Hamiltonian operators, especially if memory
requirements are tight.
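To give a flavor of this class of solvers, here is a generic preconditioned locally harmonic residual iteration for a Hermitian matrix, written from the abstract's description alone; it is a sketch, not the paper's PLHR algorithm. The matrix A, shift sigma, diagonal preconditioner T, and all sizes are illustrative assumptions. Each step expands a three-term trial subspace span{x, Tr, p} (current iterate, preconditioned residual, previous direction) and extracts the harmonic Ritz pair closest to the shift, so no factorization or inversion of A - sigma*I is needed.

```python
import numpy as np
import scipy.linalg as sla

rng = np.random.default_rng(1)
n = 200
A = np.diag(np.arange(1.0, n + 1))             # toy Hermitian matrix, eigenvalues 1..n
sigma = 50.3                                   # interior target
T = np.diag(1.0 / np.abs(np.diag(A) - sigma))  # SPD preconditioner ~ |A - sigma I|^{-1}

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
p = np.zeros(n)                                # previous search direction

for _ in range(300):
    rho = x @ A @ x                            # Rayleigh quotient
    r = A @ x - rho * x                        # eigenresidual
    if np.linalg.norm(r) < 1e-9:
        break
    w = T @ r                                  # preconditioned residual
    # short-term recurrence: (at most) three-dimensional trial subspace
    cols = [x, w] + ([p] if p.any() else [])
    S, _ = np.linalg.qr(np.column_stack(cols))
    # harmonic Rayleigh-Ritz at sigma: with AS = (A - sigma I) S, solve
    # (AS^T AS) y = mu (AS^T S) y and keep the harmonic value closest to sigma
    AS = (A - sigma * np.eye(n)) @ S
    mu, Yh = sla.eig(AS.T @ AS, AS.T @ S)
    j = np.argmin(np.where(np.isfinite(mu), np.abs(mu), np.inf))
    x_new = S @ Yh[:, j].real
    x_new /= np.linalg.norm(x_new)
    p = x_new - (x @ x_new) * x                # component orthogonal to old iterate
    x = x_new
```

On this toy diagonal matrix the iteration should settle on an eigenpair near the shift; a practical method would add blocking, deflation, and more careful orthogonalization.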
Implicitly Restarted Generalized Second-order Arnoldi Type Algorithms for the Quadratic Eigenvalue Problem
We investigate the generalized second-order Arnoldi (GSOAR) method, a
generalization of the SOAR method proposed by Bai and Su [{\em SIAM J. Matrix
Anal. Appl.}, 26 (2005): 640--659.], and the Refined GSOAR (RGSOAR) method for
the quadratic eigenvalue problem (QEP). The two methods use the GSOAR procedure
to generate an orthonormal basis of a given generalized second-order Krylov
subspace, and with such basis they project the QEP onto the subspace and
compute the Ritz pairs and the refined Ritz pairs, respectively. We develop
implicitly restarted GSOAR (IGSOAR) and RGSOAR (IRGSOAR) algorithms, in which
we propose certain exact and refined shifts for use within the two algorithms.
Numerical experiments on real-world problems illustrate the efficiency of the
restarted algorithms and the superiority of the restarted RGSOAR to the
restarted GSOAR. The experiments also demonstrate that both IGSOAR and IRGSOAR
generally perform much better than the implicitly restarted Arnoldi method
applied to the corresponding linearization problems, in terms of the accuracy
and the computational efficiency.
Comment: 30 pages, 6 figures
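The basic objects in these methods can be illustrated compactly. The sketch below is not the SOAR/GSOAR recurrence itself (which maintains auxiliary vectors for numerical stability and memory efficiency): it simply builds the second-order Krylov sequence r_j = A r_{j-1} + B r_{j-2} with A = -M^{-1}C and B = -M^{-1}K, orthonormalizes it by Gram-Schmidt, projects the QEP onto the resulting subspace, and reads off the Ritz values from a companion linearization. All matrices and sizes are small random stand-ins.

```python
import numpy as np
import scipy.linalg as sla

rng = np.random.default_rng(2)
n, m = 40, 8                                # illustrative sizes

# Hypothetical QEP (lambda^2 M + lambda C + K) x = 0
M = np.eye(n)
C = 0.1 * rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

# Second-order Krylov sequence r_j = A r_{j-1} + B r_{j-2},
# with A = -M^{-1} C and B = -M^{-1} K
A = -np.linalg.solve(M, C)
B = -np.linalg.solve(M, K)

r_prev = np.zeros(n)
r = rng.standard_normal(n)
Q = np.zeros((n, m))
Q[:, 0] = r / np.linalg.norm(r)
for j in range(1, m):
    r, r_prev = A @ r + B @ r_prev, r       # next sequence vector
    q = r - Q[:, :j] @ (Q[:, :j].T @ r)     # Gram-Schmidt ...
    q -= Q[:, :j] @ (Q[:, :j].T @ q)        # ... with reorthogonalization
    Q[:, j] = q / np.linalg.norm(q)

# Project the QEP onto span(Q) and compute its Ritz values via linearization
Mh, Ch, Kh = Q.T @ M @ Q, Q.T @ C @ Q, Q.T @ K @ Q
L1 = np.block([[-Ch, -Kh], [np.eye(m), np.zeros((m, m))]])
L2 = np.block([[Mh, np.zeros((m, m))], [np.zeros((m, m)), np.eye(m)]])
theta, _ = sla.eig(L1, L2)                  # 2m Ritz values of the projected QEP
```

The refined variant (RGSOAR) would additionally replace each Ritz vector by the residual-minimizing singular vector over span(Q), in the spirit of the refined Ritz vectors analyzed in the first abstract above.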