On pole-swapping algorithms for the eigenvalue problem
Pole-swapping algorithms, which are generalizations of the QZ algorithm for
the generalized eigenvalue problem, are studied. A new modular (and therefore
more flexible) convergence theory that applies to all pole-swapping algorithms
is developed. A key component of all such algorithms is a procedure that swaps
two adjacent eigenvalues in a triangular pencil. An improved swapping routine
is developed, and its superiority over existing methods is demonstrated by a
backward error analysis and numerical tests. The modularity of the new
convergence theory and the generality of the pole-swapping approach shed new
light on bi-directional chasing algorithms, optimally packed shifts, and bulge
pencils, and allow the design of novel algorithms.
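The key step named above, swapping two adjacent eigenvalues of a triangular pencil, can be illustrated in the 2x2 case. The sketch below is a minimal, classical swap built from an eigenvector of the pencil and two unitary equivalences; it is not the improved routine of the paper, and the function name and test pencil are illustrative only.

```python
# Minimal 2x2 eigenvalue swap for an upper triangular pencil (A, B):
# find unitary Q, Z so that Q^* A Z and Q^* B Z are again triangular,
# with the two eigenvalues a11/b11 and a22/b22 interchanged.
import numpy as np

def swap_triangular_pencil_2x2(A, B):
    # Eigenvector v of the pencil for the trailing eigenvalue a22/b22,
    # i.e. a null vector of b22*A - a22*B (eigenvalues assumed distinct).
    a22, b22 = A[1, 1], B[1, 1]
    v = np.array([a22 * B[0, 1] - b22 * A[0, 1],
                  b22 * A[0, 0] - a22 * B[0, 0]])
    v = v / np.linalg.norm(v)
    # Z moves v into the first coordinate direction; Q does the same for
    # A @ v (which is parallel to B @ v), so triangularity is preserved.
    Z = np.column_stack([v, [-np.conj(v[1]), np.conj(v[0])]])
    w = A @ v if np.linalg.norm(A @ v) >= np.linalg.norm(B @ v) else B @ v
    w = w / np.linalg.norm(w)
    Q = np.column_stack([w, [-np.conj(w[1]), np.conj(w[0])]])
    return Q, Z

A = np.array([[2.0, 5.0], [0.0, 3.0]])   # eigenvalues 2/1 = 2 and 3/4 = 0.75
B = np.array([[1.0, 1.0], [0.0, 4.0]])
Q, Z = swap_triangular_pencil_2x2(A, B)
AA, BB = Q.conj().T @ A @ Z, Q.conj().T @ B @ Z
print(np.diag(AA) / np.diag(BB))          # ~ [0.75, 2.0], order swapped
print(abs(AA[1, 0]), abs(BB[1, 0]))       # both ~ machine precision
```

A backward error analysis, as in the paper, would then measure how far the exactly triangular pencil obtained by zeroing these tiny subdiagonal entries is from a unitary equivalence of the original (A, B).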
Computing approximate (block) rational Krylov subspaces without explicit inversion with extensions to symmetric matrices
It has been shown that approximate extended Krylov subspaces can be computed, under certain assumptions, without any explicit inversion or system solves. Instead, the vectors spanning the extended Krylov space are retrieved in an implicit way, via unitary similarity transformations, from an enlarged Krylov subspace. In this paper this approach is generalized to rational Krylov subspaces, which, aside from poles at infinity and zero, also contain finite non-zero poles. Furthermore, the algorithms are generalized to deal with block rational Krylov subspaces, and techniques to exploit the symmetry when working with Hermitian matrices are also presented. For each variant of the algorithm, numerical experiments illustrate the power of the new approach. The experiments involve matrix functions, Ritz-value computations, and the solutions of matrix equations.
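For context, the sketch below builds a small rational Krylov basis in the classical way, with one shifted system solve per finite pole; this explicit-inversion baseline is precisely what the implicit approach described above avoids. The recurrence is a simple rational Arnoldi-style variant, and the function name, poles, and test matrix are illustrative.

```python
# Classical (explicit-solve) rational Krylov basis, for comparison only.
import numpy as np

def rational_krylov_basis(A, v, poles):
    """Orthonormal basis of a rational Krylov space of A: each finite pole
    contributes one shifted solve, a pole at infinity a multiplication by A."""
    n = A.shape[0]
    V = np.zeros((n, len(poles) + 1), dtype=complex)
    V[:, 0] = v / np.linalg.norm(v)
    for j, xi in enumerate(poles, start=1):
        if np.isinf(xi):
            w = A @ V[:, j - 1]                                   # pole at infinity
        else:
            w = np.linalg.solve(A - xi * np.eye(n), V[:, j - 1])  # explicit solve
        for i in range(j):                                        # modified Gram-Schmidt
            w = w - (V[:, i].conj() @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
V = rational_krylov_basis(A, rng.standard_normal(50), [np.inf, 2.0, -1.0 + 0.5j])
print(np.linalg.norm(V.conj().T @ V - np.eye(V.shape[1])))        # ~ machine precision
```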
Fast and backward stable computation of roots of polynomials
A stable algorithm to compute the roots of polynomials is presented. The roots are found by computing the eigenvalues of the associated companion matrix by Francis's implicitly shifted QR algorithm. A companion matrix is an upper Hessenberg matrix that is unitary-plus-rank-one, that is, it is the sum of a unitary matrix and a rank-one matrix. These properties are preserved by iterations of Francis's algorithm, and it is these properties that are exploited here. The matrix is represented as a product of 3n - 1 Givens rotators plus the rank-one part, so only O(n) storage space is required. In fact, the information about the rank-one part is also encoded in the rotators, so it is not necessary to store the rank-one part explicitly. Francis's algorithm implemented on this representation requires only O(n) flops per iteration and thus O(n^2) flops overall. The algorithm is described, normwise backward stability is proved, and an extensive set of numerical experiments is presented. The algorithm is shown to be about as accurate as the (slow) Francis QR algorithm applied to the companion matrix without exploiting the structure. It is faster than other fast methods that have been proposed, and its accuracy is comparable or better.
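The underlying companion-matrix idea is easy to demonstrate without the structured O(n) representation. The sketch below forms the (upper Hessenberg, unitary-plus-rank-one) companion matrix of a scalar polynomial and feeds it to a dense eigenvalue solver; this is essentially the slow, unstructured baseline the fast algorithm is compared against, and the helper name and test polynomial are illustrative.

```python
# Roots of a polynomial as eigenvalues of its companion matrix (dense baseline).
import numpy as np

def companion(coeffs):
    """Companion matrix of p(x) = c[0]*x^n + c[1]*x^(n-1) + ... + c[n]."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                         # make the polynomial monic
    n = len(c) - 1
    C = np.zeros((n, n))
    C[0, :] = -c[1:]                     # first row carries the coefficients
    C[1:, :-1] = np.eye(n - 1)           # subdiagonal of ones
    return C                             # upper Hessenberg, unitary-plus-rank-one

coeffs = [1.0, -6.0, 11.0, -6.0]         # (x - 1)(x - 2)(x - 3)
print(np.sort(np.linalg.eigvals(companion(coeffs)).real))   # ~ [1, 2, 3]
print(np.sort(np.roots(coeffs).real))                        # same roots
```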
Yet another algorithm for the symmetric eigenvalue problem
In this paper we present a new algorithm for solving the symmetric matrix eigenvalue problem that
works by first using a Cayley transformation to convert the symmetric matrix into a unitary one and then using
Gragg’s implicitly shifted unitary QR algorithm to solve the resulting unitary eigenvalue problem. We prove that
under reasonable assumptions on the symmetric matrix this algorithm is backward stable and also demonstrate that
this algorithm is comparable with other well-known implementations in terms of both speed and accuracy.
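A minimal sketch of the Cayley-transform step is given below, assuming the standard map U = (A - iI)(A + iI)^{-1}; the exact transform used in the paper may differ, and the resulting unitary eigenvalue problem is solved here with a generic dense solver rather than Gragg's structured unitary QR algorithm.

```python
# Symmetric eigenvalues via a Cayley transform to a unitary matrix (sketch).
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((6, 6))
A = (S + S.T) / 2                         # real symmetric test matrix
n = A.shape[0]
I = np.eye(n)

# U = (A + iI)^{-1} (A - iI); the two factors commute, so this equals
# (A - iI)(A + iI)^{-1}, and U is unitary whenever A is Hermitian.
U = np.linalg.solve(A + 1j * I, A - 1j * I)

mu = np.linalg.eigvals(U)                 # eigenvalues on the unit circle
lam = (1j * (1 + mu) / (1 - mu)).real     # invert mu = (lam - i)/(lam + i)

print(np.sort(lam))
print(np.linalg.eigvalsh(A))              # should agree to machine precision
```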
Fast and stable unitary QR algorithm
A fast Fortran implementation of a variant of Gragg's unitary Hessenberg QR algorithm is presented. It is proved, moreover, that all QR- and QZ-like algorithms for the unitary eigenvalue problem are equivalent. The algorithm is backward stable. Numerical experiments are presented that confirm the backward stability and compare the speed and accuracy of this algorithm with other methods.
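The structure such methods exploit can be made concrete with the usual Schur parametrization, in which a unitary upper Hessenberg matrix is a product of n-1 Givens-like factors and one diagonal factor. The construction below only illustrates this representation (the parameter convention and names are assumptions, not code from the Fortran implementation).

```python
# Build a unitary upper Hessenberg matrix from Schur parameters (sketch).
import numpy as np

def unitary_hessenberg(gammas):
    """H = G_1(g_1) ... G_{n-1}(g_{n-1}) * diag(1, ..., 1, -g_n),
    with |g_k| < 1 for k < n and |g_n| = 1."""
    n = len(gammas)
    H = np.eye(n, dtype=complex)
    for k in range(n - 1):
        g = gammas[k]
        s = np.sqrt(1 - abs(g) ** 2)                 # complementary parameter
        G = np.eye(n, dtype=complex)
        G[k:k + 2, k:k + 2] = [[-g, s], [s, np.conj(g)]]
        H = H @ G
    D = np.eye(n, dtype=complex)
    D[-1, -1] = -gammas[-1]
    return H @ D

gam = [0.3 + 0.4j, -0.5, 0.2j, np.exp(0.7j)]          # |gam[-1]| = 1
H = unitary_hessenberg(gam)
print(np.linalg.norm(H.conj().T @ H - np.eye(4)))     # ~ 0: unitary
print(np.linalg.norm(np.tril(H, -2)))                 # ~ 0: upper Hessenberg
print(np.abs(np.linalg.eigvals(H)))                   # all ~ 1: on the unit circle
```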
Fast and backward stable computation of eigenvalues and eigenvectors of matrix polynomials
In the last decade matrix polynomials have been investigated with the primary focus on adequate linearizations and good scaling techniques for computing their eigenvalues and eigenvectors. In this article we propose a new method for computing a factored Schur form of the associated companion pencil. The algorithm has a quadratic cost in the degree of the polynomial and a cubic one in the size of the coefficient matrices. The eigenvectors can also be computed at the same cost. The algorithm is a variant of Francis's implicitly shifted QR algorithm applied to the companion pencil. A preprocessing unitary equivalence is executed on the matrix polynomial to simultaneously bring the leading matrix coefficient and the constant matrix term to triangular form before forming the companion pencil. The resulting structure allows us to stably factor each matrix of the pencil as a product of k matrices of unitary-plus-rank-one form, admitting cheap and numerically reliable storage. The problem is then solved as a product core chasing eigenvalue problem. A backward error analysis is included, implying normwise backward stability after a proper scaling. Computing the eigenvectors via reordering the Schur form is discussed as well. Numerical experiments illustrate the stability and efficiency of the proposed methods.
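As a dense point of reference, the sketch below forms one common block companion linearization of a matrix polynomial and hands it to a generic QZ solver. It ignores both the preprocessing step and the unitary-plus-rank-one factorization described above, and the helper name and layout are illustrative.

```python
# Block companion pencil for P(lambda) = sum_i lambda^i P_i (dense baseline).
import numpy as np
from scipy.linalg import block_diag, eig

def companion_pencil(P):
    """P = [P_0, P_1, ..., P_d], each k-by-k. Returns (A, B) such that the
    generalized eigenvalues of A x = lambda B x are the eigenvalues of P."""
    d, k = len(P) - 1, P[0].shape[0]
    B = block_diag(P[d], *([np.eye(k)] * (d - 1)))
    A = np.zeros((d * k, d * k), dtype=complex)
    A[:k, :] = -np.hstack([P[i] for i in range(d - 1, -1, -1)])   # -P_{d-1} ... -P_0
    A[k:, :-k] = np.eye((d - 1) * k)                              # identity blocks
    return A, B

rng = np.random.default_rng(1)
P = [rng.standard_normal((3, 3)) for _ in range(4)]   # degree 3, 3-by-3 coefficients
A, B = companion_pencil(P)
lam = eig(A, B, right=False)
# Residual check: smallest singular value of P(lambda) at each eigenvalue.
res = [np.linalg.svd(sum(l ** i * P[i] for i in range(len(P))),
                     compute_uv=False)[-1] for l in lam]
print(max(res))                                       # tiny for all 9 eigenvalues
```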
Roots of Polynomials: on twisted QR methods for companion matrices and pencils
Two generalizations of the companion QR algorithm by J.L. Aurentz, T. Mach, R. Vandebril, and D.S. Watkins (SIAM Journal on Matrix Analysis and Applications, 36(3): 942-973, 2015) for computing the roots of a polynomial are presented. First, we show how the fast and backward stable QR algorithm for companion matrices can be generalized to a QZ algorithm for companion pencils. Companion pencils admit greater flexibility in scaling the polynomial and distributing the matrix coefficients over both matrices in the pencil. This allows for enhanced stability for polynomials with largely varying coefficients. Second, we generalize the pencil approach further to a twisted QZ algorithm. Whereas in the classical QZ case Krylov spaces govern the convergence, the convergence of the twisted case is determined by a rational Krylov space. A backward error analysis, mapping the error back to the original pencil and to the polynomial coefficients, shows that in both cases the error scales quadratically with the input. An extensive set of numerical experiments supports the theoretical backward error, confirms the numerical stability, and shows that the computing time depends quadratically on the problem size.
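The extra freedom a companion pencil offers can already be seen in the scalar case: the leading coefficient can stay in the second matrix of the pencil instead of being divided out. The toy sketch below uses one such splitting, which is merely one simple option and not the scaling strategy analyzed in the paper.

```python
# Scalar companion pencil: keep the leading coefficient in B (sketch).
import numpy as np
from scipy.linalg import eig

def companion_pencil_scalar(c):
    """c = [c_n, ..., c_1, c_0] for p(z) = c_n z^n + ... + c_1 z + c_0.
    Returns (A, B) whose generalized eigenvalues are the roots of p."""
    n = len(c) - 1
    A = np.diag(np.ones(n - 1), -1)            # subdiagonal of ones
    A[:, -1] = -np.asarray(c[:0:-1], float)    # last column: -c_0, ..., -c_{n-1}
    B = np.eye(n)
    B[-1, -1] = c[0]                           # leading coefficient stays in B
    return A, B

c = [1e-8, 3.0, -2.0, 5.0]                     # widely varying coefficients
A, B = companion_pencil_scalar(c)
print(np.sort_complex(eig(A, B, right=False)))
print(np.sort_complex(np.roots(c)))            # monic companion matrix, for comparison
```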
Fast and backward stable computation of the eigenvalues of matrix polynomials
In the last decade matrix polynomials have been investigated with the primary focus on adequate linearizations and good scaling techniques for computing their eigenvalues. In this article we propose a new backward stable method for computing a factored Schur form of the associated companion pencil. The algorithm has a quadratic cost in the degree of the polynomial and a cubic one in the size of the coefficient matrices. The algorithm is a variant of Francis's implicitly shifted QR algorithm applied to the associated companion pencil. A preprocessing unitary equivalence is executed on the matrix polynomial to simultaneously bring the leading matrix coefficient and the constant matrix term to triangular form before forming the companion pencil. The resulting structure allows us to stably factor both matrices of the pencil into matrices of unitary-plus-rank-one form, admitting cheap and numerically reliable storage. The problem is then solved as a product core chasing eigenvalue problem. Numerical experiments illustrate the stability and efficiency of the proposed methods.