47 research outputs found

    On the computation of eigenvectors of a symmetric tridiagonal matrix: comparison of accuracy improvements of Givens and inverse iteration methods

    15 pages

    The aim of this paper is to compare recent improvements of two methods for computing the eigenvectors of a symmetric tridiagonal matrix once its eigenvalues are known. The first is the Givens method, which is based on the use of Sturm sequences. This method suffers from a lack of accuracy when an approximate value (even a very accurate one) of the eigenvalue is used in the computational process. In [Godunov, S.K., Antonov, A.G., Kirilyuk, O.P., and Kostin, V.I., Guaranteed Accuracy in Numerical Linear Algebra, Mathematics and its Applications, Kluwer Academic Publishers, 1993] the authors introduce a modification of the Givens method that ensures the computation of an accurate eigenvector from a good approximation of the corresponding eigenvalue. The second improvement concerns the inverse iteration method. In [Parlett, B.N. and Dhillon, I.S., Fernando's solution to Wilkinson's problem: An application of double factorization, Linear Algebra Appl., 267:247--279, 1997] the authors present a way to determine the best initial vector for the iterations. Although the two methods and their improvements seem very different from a computational point of view, there exist striking analogies between them: in both methods, for instance, one looks for an optimal index and has to minimize a residual. In this paper we briefly present the two methods and investigate the connections between them.
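As a rough illustration of the inverse iteration side of this comparison, the following NumPy sketch recovers an eigenvector of a symmetric tridiagonal matrix from an approximate eigenvalue. The test matrix and all parameter choices (shift perturbation, iteration count) are ours, not the paper's:

```python
import numpy as np

def inverse_iteration(T, lam, iters=3):
    """Inverse iteration for an eigenvector of a symmetric tridiagonal
    matrix T given an approximate eigenvalue lam (illustrative sketch)."""
    n = T.shape[0]
    # Shifted system; a tiny perturbation guards against exact singularity.
    M = T - (lam + 1e-14) * np.eye(n)
    v = np.ones(n) / np.sqrt(n)        # generic start vector
    for _ in range(iters):
        v = np.linalg.solve(M, v)      # one solve per iteration
        v /= np.linalg.norm(v)
    return v

# Example: T = tridiag(1, 2, 1) of order 5; its eigenvalues are
# 2 + 2*cos(k*pi/6), k = 1..5.
n = 5
T = (np.diag(2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
lam = 2.0 + 2.0 * np.cos(np.pi / 6)    # largest eigenvalue
v = inverse_iteration(T, lam)
residual = np.linalg.norm(T @ v - lam * v)
```

With an accurate shift, a couple of iterations already give a residual near machine precision; the papers under comparison address what happens when the start vector or the shift is less favorable.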

    Exact and inexact breakdowns in the block GMRES method

    This paper addresses the issue of breakdowns in the block GMRES method for solving linear systems with multiple right-hand sides of the form AX = B. An exact (respectively inexact) breakdown occurs at iteration j of this method when the block Krylov matrix (B, AB, …, A^{j−1}B) is singular (respectively almost singular). Exact breakdowns signal that part of the exact solution lies in the range of the Krylov matrix; they are primarily of theoretical interest, since from a computational point of view inexact breakdowns are far more likely to occur. In such cases, the underlying block Arnoldi process used to build the block Krylov space should not be continued as usual. A natural way to continue the process is deflation. However, as shown by Langou [J. Langou, Iterative Methods for Solving Linear Systems with Multiple Right-Hand Sides, Ph.D. dissertation TH/PA/03/24, CERFACS, France, 2003], deflation in block GMRES may lead to a loss of information that slows down convergence. In this paper, instead of deflating the directions associated with almost converged solutions, these directions are kept and reintroduced in subsequent iterations if necessary. Two criteria for detecting inexact breakdowns are presented: one based on the numerical rank of the generated block Krylov basis, the other on the numerical rank of the residual associated with the approximate solutions. These criteria are analyzed and compared, implementation details are discussed, and numerical results are reported.
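The rank-based detection idea can be pictured with a small NumPy sketch; the tolerance and the example matrices below are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def numerical_rank(M, tol=1e-8):
    """Numerical rank of M: number of singular values above tol * sigma_max.
    A rank test of this kind underlies inexact-breakdown detection
    (the threshold here is an illustrative assumption)."""
    s = np.linalg.svd(M, compute_uv=False)
    if s.size == 0 or s[0] == 0.0:
        return 0
    return int(np.sum(s > tol * s[0]))

# A block Krylov matrix [B, A@B] whose second block is nearly dependent
# on the first signals an inexact breakdown.
rng = np.random.default_rng(0)
A = np.eye(4) + 1e-10 * rng.standard_normal((4, 4))   # A ~ I, so A@B ~ B
B = rng.standard_normal((4, 2))
K = np.hstack([B, A @ B])
r = numerical_rank(K)          # numerically 2 instead of the full 4
```

When the numerical rank falls short of the column count, the block Arnoldi process must deviate from its usual course, which is exactly the situation the paper's two criteria are designed to detect.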

    Computing the distance to continuous-time instability of quadratic matrix polynomials

    A bisection method is used to compute lower and upper bounds on the distance from a quadratic matrix polynomial to the set of quadratic matrix polynomials having an eigenvalue on the imaginary axis. Each bisection step requires checking whether an even quadratic matrix polynomial has a purely imaginary eigenvalue. First, an upper bound is obtained using Frobenius-type linearizations; it takes rounding errors into account but does not exploit the even structure. Then, lower and upper bounds are obtained by reducing the quadratic matrix polynomial to a linear palindromic pencil; the bounds obtained this way also take rounding errors into account. Numerical illustrations are presented.
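For the analogous problem for a plain stable matrix, a Byers-style bisection illustrates the structure of such an algorithm: each step checks whether an associated Hamiltonian matrix has a purely imaginary eigenvalue. This is only a sketch of the standard-matrix case; the paper's algorithm for quadratic polynomials, its linearizations, and its rounding-error analysis are more involved.

```python
import numpy as np

def has_imag_eig(H, tol=1e-6):
    """True if H has an eigenvalue within tol of the imaginary axis."""
    return bool(np.any(np.abs(np.linalg.eigvals(H).real) < tol))

def distance_to_instability(A, maxit=60):
    """Byers-style bisection for a stable matrix A (matrix-case sketch).
    Some perturbation of norm eps moves an eigenvalue of A onto the
    imaginary axis iff the Hamiltonian matrix
    H(eps) = [[A, -eps I], [eps I, -A^*]] has a purely imaginary eigenvalue."""
    n = A.shape[0]
    I = np.eye(n)
    # sigma_min(A) is an upper bound on the distance (take omega = 0).
    lo, hi = 0.0, np.linalg.svd(A, compute_uv=False)[-1]
    for _ in range(maxit):
        eps = 0.5 * (lo + hi)
        H = np.block([[A, -eps * I], [eps * I, -A.conj().T]])
        if has_imag_eig(H):
            hi = eps
        else:
            lo = eps
    return 0.5 * (lo + hi)

A = np.diag([-1.0, -2.0])          # stable; the true distance is 1
d = distance_to_instability(A)
```

Each bisection step halves the bracket [lo, hi], so lower and upper bounds of any prescribed width are obtained, mirroring the paper's bounding strategy.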

    Block-Arnoldi and Davidson methods for unsymmetric large eigenvalue problems

    We present two methods for computing the leading eigenpairs of large sparse unsymmetric matrices: the block-Arnoldi method and an adaptation of the Davidson method to unsymmetric matrices. We give some theoretical results concerning the convergence of these two methods when restarting is used and discuss aspects of their implementation on an Alliant FX/80. Finally, we report numerical tests on a variety of matrices, including matrices from the Harwell-Boeing test collection, in which the two methods are compared.
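The single-vector Arnoldi process underlying the block variant can be sketched in a few lines of NumPy; the test matrix and the dimension choices are illustrative, and the paper's block and restarted versions add further machinery:

```python
import numpy as np

def arnoldi_ritz(A, m, rng=None):
    """m steps of the (single-vector) Arnoldi process; returns the Ritz
    values, whose outermost members approximate the leading eigenvalues."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                 # invariant subspace found
            return np.linalg.eigvals(H[: j + 1, : j + 1])
        Q[:, j + 1] = w / H[j + 1, j]
    return np.linalg.eigvals(H[:m, :m])

# Unsymmetric test matrix with dominant eigenvalue 5.
A = np.diag([5.0, 2.0, 1.0, 0.5]) + np.triu(0.1 * np.ones((4, 4)), 1)
ritz = arnoldi_ritz(A, m=4)
lead = ritz[np.argmax(np.abs(ritz))].real
```

In the block version, the single start vector is replaced by a block of vectors, which is what exposes the matrix-matrix operations that pay off on machines like the Alliant FX/80.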

    On parabolic and elliptic spectral dichotomy

    We discuss two spectral dichotomy techniques: one for computing an invariant subspace of a nonsymmetric matrix associated with the eigenvalues inside and outside a given parabola, and another for computing a right deflating subspace of a regular matrix pencil associated with the eigenvalues inside and outside a given ellipse. The techniques use matrices of twice the order of the original matrices, to which spectral dichotomy by the unit circle and by the imaginary axis applies efficiently. We prove the equivalence between the condition number of the original problems and that of the transformed ones.
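The basic object in dichotomy by the unit circle is the spectral projector onto the eigenvalues inside the circle, computable as a contour integral. The NumPy sketch below illustrates that building block only (via the trapezoidal rule; the point count and test matrix are our assumptions), not the parabola and ellipse reductions of the paper:

```python
import numpy as np

def unit_circle_projector(A, npts=200):
    """Spectral projector onto the invariant subspace of A for the
    eigenvalues inside the unit circle, via the trapezoidal rule applied to
    P = (1 / (2*pi*i)) * integral over |z|=1 of (zI - A)^{-1} dz."""
    n = A.shape[0]
    P = np.zeros((n, n), dtype=complex)
    for k in range(npts):
        z = np.exp(2j * np.pi * k / npts)
        # dz = i*z*dtheta; the trapezoidal rule on a circle converges
        # geometrically for this analytic integrand.
        P += z * np.linalg.inv(z * np.eye(n) - A)
    return P / npts

A = np.diag([0.5, 2.0])            # one eigenvalue inside, one outside
P = unit_circle_projector(A)
# P should be diag(1, 0); its trace counts the eigenvalues inside.
inside = int(round(P.trace().real))
```

The parabola and ellipse techniques of the paper reduce their regions to this unit-circle (or imaginary-axis) setting on matrices of doubled order.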

    Adapting the Davidson method to the solution of linear systems: implementation of a block version on a multiprocessor

    The Davidson method is usually employed for symmetric eigenvalue problems. In this article, we adapt it to the solution of large sparse linear systems. Theoretical and practical aspects of the method are studied; in particular, we show how the method can be accelerated by preconditioning. Numerical experiments are presented, for which we used a block version of the method that solves several systems with the same matrix simultaneously and exposes matrix-matrix multiplications; we report results on a Cray-2 that confirm the good efficiency of our implementation on a supercomputer.
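The payoff of grouping several right-hand sides with the same matrix can be seen even with a direct solver: one factorization is shared by all systems, and the solves become matrix-matrix operations. This is a plain NumPy sketch of that effect, not of the Davidson iteration itself; the sizes and matrices are illustrative.

```python
import numpy as np

# One factorization of A serves every right-hand side; stacking the
# right-hand sides into a block B replaces repeated matrix-vector work
# with matrix-matrix operations, the effect the block version exploits.
rng = np.random.default_rng(0)
n, s = 100, 4
A = 5.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well conditioned
B = rng.standard_normal((n, s))

# np.linalg.solve factors A once and solves for all s columns of B.
X = np.linalg.solve(A, B)
resid = np.linalg.norm(A @ X - B)
```

In the iterative setting of the article, the same principle makes the dominant kernels level-3 BLAS operations, which is what vectorizes well on a machine like the Cray-2.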

    A smallest singular value method for nonlinear eigenvalue problems

    A Newton-type method for the eigenvalue problem of analytic matrix functions is described and analysed. The method finds the eigenvalue and the eigenvector, respectively, as a point in the level set of the smallest singular value function and the corresponding right singular vector. The algorithmic aspects are discussed and illustrated by numerical examples.
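A minimal NumPy sketch of this idea, under the standard first-order perturbation formula for a simple smallest singular value (the function names and the test problem are ours; the paper's method is more elaborate):

```python
import numpy as np

def svd_newton(T, dT, lam0, maxit=20, tol=1e-12):
    """Newton iteration on the smallest singular value of an analytic
    matrix function T(lam). With smallest singular triplet (sigma, u, v),
    d sigma / d lam = Re(u^H T'(lam) v), giving the update
    lam <- lam - sigma / Re(u^H T'(lam) v).  Sketch for real lam."""
    lam = lam0
    for _ in range(maxit):
        U, s, Vh = np.linalg.svd(T(lam))
        sigma, u, v = s[-1], U[:, -1], Vh[-1, :].conj()
        if sigma < tol:            # lam is (numerically) an eigenvalue
            break
        lam = lam - sigma / (u.conj() @ dT(lam) @ v).real
    return lam

# Find an eigenvalue of a symmetric matrix as a zero of sigma_min(A - lam*I).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam = svd_newton(lambda t: A - t * np.eye(2),
                 lambda t: -np.eye(2), lam0=3.5)
```

At convergence the right singular vector v is the sought eigenvector, since T(lam) v = sigma u with sigma ~ 0.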

    On the stability of delayed linear discrete-time systems with periodic coefficients

    Stability estimates are obtained for delayed linear periodic discrete-time systems. Bounds on the decay of the solution are derived via a suitable Lyapunov–Krasovskii-type functional and the solvability of some periodic discrete-time Lyapunov equations.
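The role of a discrete-time Lyapunov equation in such decay bounds can be sketched for the simplest case: no delay and constant coefficients. The solver below sums the convergent series defining the solution (an illustrative approach for small examples; the paper's periodic, delayed setting requires the Lyapunov–Krasovskii machinery):

```python
import numpy as np

def discrete_lyapunov(A, Q, iters=200):
    """Solve P = A^T P A + Q for a Schur-stable A by fixed-point iteration,
    i.e. by summing the convergent series P = sum_k (A^T)^k Q A^k."""
    P = Q.copy()
    for _ in range(iters):
        P = Q + A.T @ P @ A
    return P

A = np.array([[0.5, 0.3], [0.0, 0.4]])   # spectral radius < 1
Q = np.eye(2)
P = discrete_lyapunov(A, Q)

# V(x) = x^T P x is a Lyapunov function: along x_{k+1} = A x_k it decreases
# by x_k^T Q x_k, which is what yields explicit decay bounds on ||x_k||.
x = np.array([1.0, -2.0])
V = [x @ P @ x]
for _ in range(10):
    x = A @ x
    V.append(x @ P @ x)
decays = all(V[k + 1] < V[k] for k in range(10))
```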

    A note on the computation of invariant pairs of quadratic matrix polynomials

    An algorithm is proposed for computing invariant pairs of quadratic matrix polynomials associated with the eigenvalues inside a given circle in the complex plane. The invariant pair of interest is derived from an SVD of the spectral projector of a linearization of the quadratic matrix polynomial, which is computed from two matrix integrals by a spectral dichotomy-type approach. The performance of the method is illustrated by numerical experiments.
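The projector-plus-SVD step can be sketched for a plain matrix (the paper works on a linearization of the quadratic polynomial, so the matrix case below only illustrates the final extraction; the contour discretization is our assumption):

```python
import numpy as np

def invariant_pair_inside_circle(A, radius=1.0, npts=200):
    """Invariant pair (X, S) with A X = X S for the eigenvalues of A inside
    the circle of the given radius: spectral projector by a contour
    integral, then an orthonormal range basis from its SVD."""
    n = A.shape[0]
    P = np.zeros((n, n), dtype=complex)
    for k in range(npts):
        z = radius * np.exp(2j * np.pi * k / npts)
        P += z * np.linalg.inv(z * np.eye(n) - A)
    P /= npts
    U, s, _ = np.linalg.svd(P)
    # A projector's singular values cluster near 1 and 0; count the 1s.
    m = int(np.sum(s > 0.5))
    X = U[:, :m]                      # orthonormal basis of range(P)
    S = X.conj().T @ A @ X            # A X = X S since range(P) is invariant
    return X, S

A = np.diag([0.5, 2.0, 3.0])          # one eigenvalue inside the unit circle
X, S = invariant_pair_inside_circle(A)
resid = np.linalg.norm(A @ X - X @ S)
```

The pair (X, S) carries the eigenvalues inside the circle as the spectrum of the small matrix S, which is the object of interest in the paper's quadratic setting as well.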

    On alternating maximization algorithm for computing the hump of matrix powers

    Alternating maximization algorithms for computing the maximal growth of the norm of matrix powers are discussed. Their convergence properties are established under the natural assumption that the matrix is discrete-stable. The implementation addresses both small and large problem sizes; for the latter, a variant of the Lanczos method is especially devised. Numerical tests confirm that the main advantages of the alternating maximization technique are its accuracy and speed of convergence.
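A small-scale alternating maximization of ||A^k v|| over the power k and the unit vector v can be sketched with dense SVDs (the iteration counts and the test matrix are illustrative assumptions; the paper's large-scale variant replaces the dense SVD with a Lanczos process):

```python
import numpy as np

def hump_alternating(A, kmax=50, iters=20):
    """Alternate between the two variables of max_{k, ||v||=1} ||A^k v||:
    fix v and pick the best power k, then fix k and take the top right
    singular vector of A^k. The objective is nondecreasing at each step."""
    pows = [np.linalg.matrix_power(A, k) for k in range(kmax + 1)]
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    best = 0.0
    for _ in range(iters):
        k = int(np.argmax([np.linalg.norm(P @ v) for P in pows]))
        _, s, Vh = np.linalg.svd(pows[k])
        v, best = Vh[0], s[0]          # now ||A^k v|| = sigma_max(A^k)
    return best

# Discrete-stable but nonnormal: the powers grow before they decay,
# producing the "hump" in k -> ||A^k||.
A = np.array([[0.5, 10.0], [0.0, 0.5]])
hump = hump_alternating(A)
brute = max(np.linalg.svd(np.linalg.matrix_power(A, k),
                          compute_uv=False)[0] for k in range(51))
```

On this example the alternating scheme reaches the brute-force maximum over the first 50 powers; in general it delivers a monotonically improving lower bound on the hump.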