10 research outputs found

    Solving polynomial eigenvalue problems by means of the Ehrlich-Aberth method

    Full text link
    Given the n×n matrix polynomial P(x) = ∑_{i=0}^{k} P_i x^i, we consider the associated polynomial eigenvalue problem. This problem, viewed in terms of computing the roots of the scalar polynomial det P(x), is treated in polynomial form rather than in matrix form by means of the Ehrlich-Aberth iteration. The main computational issues are discussed, namely, the choice of the starting approximations needed to start the Ehrlich-Aberth iteration, the computation of the Newton correction, the halting criterion, and the treatment of eigenvalues at infinity. We arrive at an effective implementation which provides more accurate approximations to the eigenvalues than methods based on the QZ algorithm. The case of polynomials having special structures, like palindromic, Hamiltonian, symplectic, etc., where the eigenvalues have special symmetries in the complex plane, is considered. A general way to adapt the Ehrlich-Aberth iteration to structured matrix polynomials is introduced. Numerical experiments which confirm the effectiveness of this approach are reported. Comment: submitted to Linear Algebra Appl.
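The Ehrlich-Aberth iteration at the heart of this line of work refines approximations to all roots of a scalar polynomial simultaneously: each approximation takes a Newton step damped by a "repulsion" term involving the other approximations. A minimal sketch for a scalar polynomial (starting points on a circle with an arbitrary phase offset; this is an illustration, not the paper's implementation):

```python
import numpy as np

def ehrlich_aberth(coeffs, tol=1e-12, max_iter=100):
    """Simultaneously approximate all roots of a polynomial.

    coeffs: coefficients, highest degree first (numpy.polyval convention).
    """
    n = len(coeffs) - 1
    # Simple starting points: a circle, with a phase offset to break symmetry.
    z = 0.5 * np.exp(1j * (2 * np.pi * np.arange(n) / n + 0.4))
    dcoeffs = np.polyder(coeffs)
    for _ in range(max_iter):
        newton = np.polyval(coeffs, z) / np.polyval(dcoeffs, z)
        corr = np.empty_like(z)
        for j in range(n):
            # Aberth correction: repulsion from the other approximations.
            s = np.sum(1.0 / (z[j] - np.delete(z, j)))
            corr[j] = newton[j] / (1.0 - newton[j] * s)
        z -= corr
        if np.max(np.abs(corr)) < tol:
            break
    return z

# Roots of x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3).
roots = ehrlich_aberth([1.0, -6.0, 11.0, -6.0])
print(np.sort_complex(roots))
```

The papers in this list replace the explicit Newton correction p(z)/p'(z) with evaluations tailored to det P(x), but the outer iteration is the same.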

    The Ehrlich-Aberth method for palindromic matrix polynomials represented in the Dickson basis

    Full text link
    An algorithm based on the Ehrlich-Aberth root-finding method is presented for the computation of the eigenvalues of a T-palindromic matrix polynomial. A structured linearization of the polynomial represented in the Dickson basis is introduced in order to exploit the symmetry of the roots by halving the total number of the required approximations. The rank structure properties of the linearization allow the design of a fast and numerically robust implementation of the root-finding iteration. Numerical experiments that confirm the effectiveness and the robustness of the approach are provided. Comment: in press in Linear Algebra Appl. (2011).

    Preconditioning For Matrix Computation

    Full text link
    Preconditioning is a classical subject in the numerical solution of linear systems of equations. The goal is to turn a linear system into another one which is easier to solve. The two central subjects of numerical matrix computations are LIN-SOLVE, that is, the solution of linear systems of equations, and EIGEN-SOLVE, that is, the approximation of the eigenvalues and eigenvectors of a matrix. We focus on the former subject of LIN-SOLVE and show an application to EIGEN-SOLVE. We achieve our goal by applying randomized additive and multiplicative preconditioning. We facilitate the numerical solution by decreasing the condition number of the coefficient matrix of the linear system, which enables reliable numerical solution of LIN-SOLVE. After the introduction in Chapter 1, we recall the definitions and auxiliary results in Chapter 2. Then in Chapter 3 we precondition linear systems of equations solved at every iteration of the Inverse Power Method applied to EIGEN-SOLVE. These systems are ill conditioned, that is, have large condition numbers, and we decrease them by applying randomized additive preconditioning. This is our first subject. Our second subject is randomized multiplicative preconditioning for LIN-SOLVE. In this way we support the application of GENP, that is, Gaussian elimination with no pivoting, and block Gaussian elimination. We prove that the proposed preconditioning methods are efficient when we apply Gaussian random matrices as preconditioners. We confirm these results with our extensive numerical tests. The tests also show that the same methods work as efficiently on average when we use random structured, in particular circulant, preconditioners instead, but we show both formally and experimentally that these preconditioners fail in the case of LIN-SOLVE for the unitary matrix of the discrete Fourier transform, for which Gaussian preconditioners work efficiently.
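The effect of randomized additive preconditioning can be illustrated on a toy problem: adding a scaled random matrix of small rank to a matrix whose last few singular values are tiny typically drops the condition number by many orders of magnitude. A minimal numerical sketch (not the thesis's algorithms; all names and the scaling choice are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 2

# Build a matrix with "numerical nullity" r: its last r singular values are tiny.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.ones(n)
s[-r:] = 1e-12
A = U @ np.diag(s) @ V.T  # condition number ~ 1e12

# Rank-r Gaussian additive preconditioner, scaled to the norm of A.
G = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
C = A + (np.linalg.norm(A, 2) / np.linalg.norm(G, 2)) * G

print(np.linalg.cond(A))  # huge
print(np.linalg.cond(C))  # typically moderate
```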

    The Ehrlich-Aberth Method for the Nonsymmetric Tridiagonal Eigenvalue Problem

    No full text
    An algorithm based on the Ehrlich-Aberth iteration is presented for the computation of the zeros of p(λ) = det(T − λI), where T is a real irreducible nonsymmetric tridiagonal matrix. The algorithm requires the evaluation of p(λ)/p′(λ) = −1/trace((T − λI)^{-1}), which is done by exploiting the QR factorization of T − λI and the semiseparable structure of (T − λI)^{-1}. The choice of initial approximations relies on a divide-and-conquer strategy, and some results motivating this strategy are given. Guaranteed a posteriori error bounds based on a running error analysis are proved. A Fortran 95 module implementing the algorithm is provided and numerical experiments that confirm the effectiveness and the robustness of the approach are presented. In particular, comparisons with the LAPACK subroutine dhseqr show that our algorithm is faster for large dimensions.
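The trace identity used here follows from p′(λ)/p(λ) = −trace((T − λI)^{-1}), valid whenever λ is not an eigenvalue of T. A small numerical check of that identity, using a dense inverse in place of the paper's QR/semiseparable machinery:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Random real nonsymmetric tridiagonal matrix.
T = np.diag(rng.standard_normal(n)) \
  + np.diag(rng.standard_normal(n - 1), 1) \
  + np.diag(rng.standard_normal(n - 1), -1)

lam = 0.3 + 0.2j  # evaluation point (generically not an eigenvalue)

# Newton correction from the trace identity (dense inverse for illustration;
# the paper exploits the QR factorization and semiseparable structure instead).
M = T - lam * np.eye(n)
newton_trace = -1.0 / np.trace(np.linalg.inv(M))

# Cross-check against p and p' from the characteristic polynomial coefficients.
coeffs = np.poly(T)  # coefficients of det(xI - T); the ratio p/p' is unchanged
p = np.polyval(coeffs, lam)
dp = np.polyval(np.polyder(coeffs), lam)
print(abs(newton_trace - p / dp))  # small: agreement up to rounding
```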

    The Ehrlich-Aberth Method For The Nonsymmetric Tridiagonal Eigenvalue Problem

    No full text
    An algorithm based on the Ehrlich-Aberth iteration is presented for the computation of the zeros of p(λ) = det(T − λI), where T is an irreducible tridiagonal matrix. The algorithm requires the evaluation of p(λ)/p′(λ) = −1/trace((T − λI)^{-1}), which is done here by exploiting the QR factorization of T − λI and the semiseparable structure of (T − λI)^{-1}. Two choices of the initial approximations are considered; the most effective relies on a divide-and-conquer strategy, and some results motivating this strategy are given. A Fortran 95 module implementing the algorithm is provided and numerical experiments that confirm the effectiveness and the robustness of the approach are presented. In particular, comparisons with the LAPACK subroutine dhseqr show that our algorithm is faster for large dimensions.

    Polynomial Eigenproblems: a Root-Finding Approach

    Get PDF
    A matrix polynomial, also known as a polynomial matrix, is a polynomial whose coefficients are matrices; or, equivalently, a matrix whose elements are polynomials. If the matrix polynomial P(x) is regular, that is, if p(x) := det(P(x)) is not identically zero, the polynomial eigenvalue problem (PEP) associated with P(x) is equivalent to the computation of the roots of the polynomial p(x); such roots are called the eigenvalues of the regular matrix polynomial P(x). Sometimes, one is also interested in computing the corresponding (left and right) eigenvectors. Recently, much of the literature has been devoted to the polynomial eigenvalue problem. This line of research is currently very active: the theoretical properties of PEPs are studied, and fast and numerically stable methods are sought for their numerical solution. The most commonly encountered case is that of degree-2 polynomials, but there exist applications where higher degree polynomials appear. More generally, PEPs are special cases belonging to the wider class of nonlinear eigenvalue problems. Amongst nonlinear eigenvalue problems, rational eigenvalue problems can be immediately brought to polynomial form by multiplying them by their least common denominator; truly nonlinear eigenvalue problems may be approximated with PEPs, by truncating some matrix power series, or with rational eigenproblems, using rational approximants such as Padé approximants. To approximate numerically the solutions of PEPs, several algorithms have been introduced based on the technique of linearization, where the polynomial problem is replaced by a linear pencil of larger size and the customary methods for the generalised eigenvalue problem, for instance the QZ algorithm, are applied. This thesis is devoted to the design and analysis of algorithms for the polynomial eigenvalue problem based on a root-finding approach: a root-finder is applied to the characteristic equation p(x) = 0.
In particular, we discuss algorithms based on the Ehrlich-Aberth iteration (EAI), a method that simultaneously approximates all the roots of a (scalar) polynomial. In order to adapt the EAI to the numerical solution of a PEP, we propose a method based on the Jacobi formula; two implementations of the EAI are discussed, one of which uses a linearization while the other works directly on the matrix polynomial. The algorithm that we propose has quadratic computational complexity with respect to the degree k of the matrix polynomial. This leads to a computational advantage when the ratio k^2/n, where n is the dimension of the matrix coefficients, is large. Cases of this kind can be encountered, for instance, in the truncation of matrix power series. If k^2/n is small, the EAI can be implemented in such a way that its asymptotic complexity is cubic (or slightly supercubic) in nk, but QZ-based methods appear to be faster in this case. Nevertheless, experiments suggest that the EAI can improve the approximations of the QZ in terms of forward error, so that even when it is not as fast as other algorithms it is still suitable as a refinement method. The EAI does not compute the eigenvectors. If they are needed, the EAI can be combined with other methods such as the SVD or the inverse iteration. In the experiments we performed, eigenvectors were computed in this way, and they were approximated with higher accuracy than with the QZ. Another root-finding approach to PEPs, similar to the EAI, is to apply the Newton method to each eigenvalue in sequence, using an implicit deflation of the previously computed roots of the determinant in order to avoid approximating the same eigenvalue twice. Our numerical experience suggests that in terms of efficiency the EAI is superior to the sequential Newton method with deflation.
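The Jacobi-formula idea mentioned above can be sketched: for p(x) = det P(x), Jacobi's formula gives p′(x)/p(x) = trace(P(x)^{-1} P′(x)), so the Newton correction needed by the EAI is available from one linear solve with P(x), with no explicit determinant expansion. A hedged illustration with random coefficients (not the thesis implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 4, 3
# Random dense coefficients P_0, ..., P_k of a degree-k matrix polynomial.
P_coeffs = [rng.standard_normal((n, n)) for _ in range(k + 1)]

def P(x):
    return sum(Pi * x**i for i, Pi in enumerate(P_coeffs))

def dP(x):
    return sum(i * Pi * x**(i - 1) for i, Pi in enumerate(P_coeffs) if i > 0)

def newton_correction(x):
    # Jacobi's formula: (det P)'(x) = det P(x) * trace(P(x)^{-1} P'(x)),
    # so the Newton correction p(x)/p'(x) is 1 / trace(P(x)^{-1} P'(x)).
    return 1.0 / np.trace(np.linalg.solve(P(x), dP(x)))

x0 = 0.7 + 0.1j
# Cross-check against a central finite difference of det P(x).
h = 1e-6
fd = (np.linalg.det(P(x0 + h)) - np.linalg.det(P(x0 - h))) \
     / (2 * h * np.linalg.det(P(x0)))  # approximates p'(x0)/p(x0)
print(abs(newton_correction(x0) - 1.0 / fd))
```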
Specific attention is devoted to structured problems where the matrix coefficients have some additional feature which is reflected in structural properties of the roots. For instance, in the case of T-palindromic polynomials, the roots come in pairs (x, 1/x). In this case the goal is to design algorithms which take advantage of this additional information about the eigenvalues and deliver approximations that respect these symmetries independently of the rounding errors. Within this setting, we study the case of polynomials endowed with specific properties, for instance palindromic, T-palindromic, Hamiltonian, symplectic, even/odd, etc., whose eigenvalues have special symmetries in the complex plane. In general, we may consider structures where the roots can be grouped in pairs as (x, f(x)), where f(x) is any self-inverse analytic function, that is, one such that f(f(x)) = x. We propose a unifying treatment of structured polynomials belonging to this class and show how the EAI can be adapted to deal with them in a very effective way. Several structured variants of the EAI are available to this end, and they are described in this thesis. All such variants enable one to compute only a subset of the eigenvalues and to recover the remaining part of the spectrum by means of the symmetries satisfied by the eigenvalues. By exploiting the structure of the problem, this approach leads to savings in the number of floating point operations and provides algorithms whose numerical approximations fulfil the symmetry properties. Our research on the structured EAI can of course be applied also to scalar polynomials: in the near future, we plan to exploit our results and design new features for the software MPSolve.
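The (x, 1/x) pairing for T-palindromic polynomials can be observed numerically: for a quadratic with P_0 = P_2^T and P_1 = P_1^T one has x^2 P(1/x) = P(x)^T, so det P vanishes at 1/x whenever it vanishes at x. A small demonstration using a plain, unstructured companion linearization (not the structured linearizations developed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
B = B + B.T  # symmetric middle coefficient

# T-palindromic quadratic P(x) = P0 + P1*x + P2*x^2 with P0 = P2^T, P1 = P1^T,
# so that x^2 * P(1/x) = P(x)^T and the spectrum is closed under x -> 1/x.
P0, P1, P2 = A.T, B, A

# Plain companion linearization (assumes P2 nonsingular, true generically).
P2inv = np.linalg.inv(P2)
C = np.block([[np.zeros((n, n)), np.eye(n)],
              [-P2inv @ P0, -P2inv @ P1]])
eigs = np.linalg.eigvals(C)

# Each eigenvalue x should have a partner close to 1/x.
pairing_error = max(np.min(np.abs(e - 1.0 / eigs)) for e in eigs)
print(pairing_error)  # small: the pairing holds up to rounding
```

The structured EAI variants described in the thesis go further: they compute only one eigenvalue per pair and obtain the partner from the symmetry exactly.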
When studying the theoretical properties of the change of variable, useful in the design of one of the structured EAI methods, we had the chance to discover some theorems on the behaviour of the complete eigenstructure of a matrix polynomial under a rational change of variable. Such results are discussed in this thesis. Some, but not all, of the different structured versions of the EAI algorithm have a drawback: accuracy is lost for eigenvalues that are close to a finite number of critical values, called exceptional eigenvalues. On the other hand, it turns out that, at least for some specific structures, the versions that suffer from this problem are also the most efficient ones; thus, it is desirable to circumvent the loss of accuracy. This can be done by designing a structured Newton refinement algorithm. Besides its application to structured PEPs, this algorithm can find further application in the computation of the roots of scalar polynomials whose roots appear in pairs. In this thesis, we also present the results of several numerical experiments performed in order to test the effectiveness of our approach in terms of speed and accuracy. We have compared the Ehrlich-Aberth iteration with the Matlab functions polyeig and quadeig. In the structured case, we have also considered, when available, other structured methods, such as the URV algorithm by Schroeder. Moreover, the different versions of our algorithm are compared with one another.