
    Perturbation splitting for more accurate eigenvalues

    Let $T$ be a symmetric tridiagonal matrix with entries and eigenvalues of different magnitudes. For some $T$, small entrywise relative perturbations induce small errors in the eigenvalues, independently of the size of the entries of the matrix; this is certainly true when the perturbed matrix can be written as $\widetilde{T}=X^{T}TX$ with small $\|X^{T}X-I\|$. Even if it is not possible to express the perturbations in every entry of $T$ in this way, much can be gained by doing so for as many as possible of the entries of larger magnitude. We propose a technique which consists of splitting multiplicative and additive perturbations to produce new error bounds which, for some matrices, are much sharper than the usual ones. Such bounds may be useful in the development of improved software for the tridiagonal eigenvalue problem, and we describe their role in the context of a mixed precision bisection-like procedure. Using the very same idea of splitting perturbations (multiplicative and additive), we show that when $T$ defines its eigenvalues well, the numerical values of the pivots in the usual decomposition $T-\lambda I=LDL^{T}$ may be used to compute approximations with high relative precision.
    Fundação para a Ciência e Tecnologia (FCT) - POCI 201
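    The pivot idea in the last sentence builds on the classical Sturm count: the number of negative pivots in $T-\lambda I=LDL^{T}$ equals the number of eigenvalues of $T$ below $\lambda$. A minimal Python sketch of that count (an illustration, not the authors' code; it assumes no pivot is exactly zero):

```python
def sturm_count(a, b, lam):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal a and off-diagonal b that are smaller than lam, obtained by
    counting negative pivots in the factorization T - lam*I = L D L^T."""
    count = 0
    d = a[0] - lam                      # first pivot
    if d < 0:
        count += 1
    for i in range(1, len(a)):
        d = a[i] - lam - b[i - 1] ** 2 / d   # next pivot of the LDL^T recurrence
        if d < 0:
            count += 1
    return count

# 3x3 example: diag [2, 2, 2], off-diag [1, 1] has eigenvalues
# 2 - sqrt(2), 2 and 2 + sqrt(2)
print(sturm_count([2.0, 2.0, 2.0], [1.0, 1.0], 2.5))  # -> 2
```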

    On simple bounds for eigenvalues of symmetric tridiagonal matrices

    How much can be said about the location of the eigenvalues of a symmetric tridiagonal matrix just by looking at its diagonal entries? We use classical results on the eigenvalues of symmetric matrices to show that the diagonal entries are bounds for some of the eigenvalues, regardless of the size of the off-diagonal entries. Numerical examples are given to illustrate that our arithmetic-free technique delivers useful information on the location of the eigenvalues.
    FEDER Funds through “Programa Operacional Factores de Competitividade - COMPETE
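    A small illustration of the flavor of such arithmetic-free bounds (my example, not the paper's): taking the Rayleigh quotient with a standard basis vector shows that, for any symmetric matrix, the smallest eigenvalue is at most every diagonal entry and the largest eigenvalue is at least every diagonal entry, however large the off-diagonal entries are.

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([1.0, 5.0, -3.0, 2.0])           # diagonal entries
b = 1e6 * rng.standard_normal(3)              # huge off-diagonal entries
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

eig = np.linalg.eigvalsh(T)                   # sorted eigenvalues
# The extreme eigenvalues still bracket every diagonal entry.
print(eig[0] <= a.min(), eig[-1] >= a.max())  # -> True True
```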

    The geometric mean algorithm

    Bisection (of a real interval) is a well known algorithm to compute eigenvalues of symmetric matrices. Given an initial interval [a,b], convergence to an eigenvalue which is much smaller in size than a or b may be made considerably faster if one replaces the usual arithmetic mean (of the end points of the current interval) with the geometric mean. Exploring this idea, we have implemented geometric bisection in a Matlab code. We illustrate the effectiveness of our algorithm in the context of the computation of the eigenvalues of a symmetric tridiagonal matrix which has a very large condition number.
    Fundação para a Ciência e a Tecnologia (FCT
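    The effect can be reproduced on a toy scalar analogue (a hypothetical stand-in for the eigenvalue problem, not the paper's MATLAB code): to locate a zero near $10^{-12}$ inside $[10^{-16},1]$ to a relative tolerance, the geometric mean halves the number of decades in the interval at each step, so it needs far fewer iterations than the arithmetic mean.

```python
import math

def bisect(f, a, b, mean, tol=1e-10):
    """Generic bisection on [a, b] with f(a)*f(b) < 0 and 0 < a < b;
    `mean` chooses the splitting point of the current interval."""
    n = 0
    while (b - a) > tol * b:          # stop at small *relative* width
        m = mean(a, b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
        n += 1
    return 0.5 * (a + b), n

f = lambda x: x - 1e-12               # a "tiny eigenvalue" at 1e-12
root_a, n_arith = bisect(f, 1e-16, 1.0, lambda a, b: 0.5 * (a + b))
root_g, n_geom = bisect(f, 1e-16, 1.0, lambda a, b: math.sqrt(a * b))
print(n_arith, n_geom)                # geometric needs far fewer steps
```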

    Reliable eigenvalues of symmetric tridiagonals

    For the eigenvalues of a symmetric tridiagonal matrix T, the most accurate algorithms deliver approximations which are the exact eigenvalues of a matrix whose entries differ from the corresponding entries of T by small relative perturbations. However, for matrices with eigenvalues of different magnitudes, the number of correct digits in the computed approximations for eigenvalues of size smaller than ‖T‖₂ depends on how well such eigenvalues are defined by the data. Some classes of matrices are known to define their eigenvalues to high relative accuracy but, in general, there is no simple way to estimate well the number of correct digits in the approximations. To remedy this, we propose a method that provides sharp bounds for the eigenvalues of T. We present some numerical examples to illustrate the usefulness of our method.
    FEDER (Programa Operacional Factores de Competitividade)
    FCT (Projecto PEst-C/MAT/UI0013/201

    Aventuras numéricas no cálculo do e

    We analyse the rounding errors incurred when computing approximations of Euler's number with the expression (1+1/n)^n.
    Fundação para a Ciência e a Tecnologi
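    The phenomenon is easy to reproduce in double precision (a sketch in the spirit of the paper, not its experiments): the truncation error of $(1+1/n)^n$ decays like $e/(2n)$, but for large $n$ the rounding error committed when forming $1+1/n$ dominates, and once $1/n$ falls below half the machine epsilon the sum rounds to exactly 1.

```python
import math

# Naive evaluation of (1 + 1/n)^n for growing n: the error first shrinks
# with n, then grows again, and for n = 1e16 the computed base is exactly 1.
for n in [10.0 ** k for k in (4, 8, 12, 16)]:
    approx = (1.0 + 1.0 / n) ** n
    print(f"n = 1e{int(math.log10(n)):2d}  |error| = {abs(approx - math.e):.2e}")
```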

    Computing the square roots of matrices with central symmetry

    For computing square roots of a nonsingular matrix A, which are functions of A, two well known fast and stable algorithms, based on the Schur decomposition of A, were proposed by Björck and Hammarling [3], for square roots of general complex matrices, and by Higham [10], for real square roots of real matrices. In this paper we further consider the computation of the square roots of matrices with central symmetry. We first investigate the structure of the square roots of these matrices and then develop several algorithms for computing the square roots. We show that our algorithms ensure significant savings in computational cost compared to the use of standard algorithms for arbitrary matrices.
    Fundação para a Ciência e a Tecnologia (FCT

    On inverse eigenvalue problems for block Toeplitz matrices with Toeplitz blocks

    We propose an algorithm for solving the inverse eigenvalue problem for real symmetric block Toeplitz matrices with symmetric Toeplitz blocks. It is based upon an algorithm which has been used before by others to solve the inverse eigenvalue problem for general real symmetric matrices and also for Toeplitz matrices. First we expose the structure of the eigenvectors of the so-called generalized centrosymmetric matrices. Then we explore the properties of the eigenvectors to derive an efficient algorithm that is able to deliver a matrix with the required structure and spectrum. We have implemented our ideas in a Matlab code. Numerical results produced with this code are included.
    Fundação para a Ciência e a Tecnologia (FCT

    Minimization problems for certain structured matrices

    For given $Z,B\in \mathbb{C}^{n\times k}$, the problem of finding $A\in \mathbb{C}^{n\times n}$, in some prescribed class $\mathcal{W}$, that minimizes $\|AZ-B\|$ (Frobenius norm) has been considered by different authors for distinct classes $\mathcal{W}$. Here, we study this minimization problem for two other classes which include the symmetric Hamiltonian, symmetric skew-Hamiltonian, real orthogonal symplectic and unitary conjugate symplectic matrices. We also consider (as others have done for other classes $\mathcal{W}$) the problem of minimizing $\|A-\widetilde{A}\|$ where $\widetilde{A}$ is given and $A$ is a solution of the previous problem. The key idea of our contribution is the reduction of each of the above minimization problems to two independent subproblems in orthogonal subspaces of $\mathbb{C}^{n\times n}$. This is possible due to the special structures under consideration. We have developed MATLAB codes and present the numerical results of some tests.
    National Natural Science Foundation of China, no. 11371075

    Blocked Schur algorithms for computing the matrix square root

    Applied Parallel and Scientific Computing: 11th International Conference, PARA 2012, Helsinki, Finland, June 10-13, 2012, Revised Selected Papers.
    The Schur method for computing a matrix square root reduces the matrix to Schur triangular form and then computes a square root of the triangular matrix. We show that by using either standard blocking or recursive blocking the computation of the square root of the triangular matrix can be made rich in matrix multiplication. Numerical experiments making appropriate use of level 3 BLAS show significant speedups over the point algorithm, both in the square root phase and in the algorithm as a whole. In parallel implementations, recursive blocking is found to provide better performance than standard blocking when parallelism comes only from threaded BLAS, but the reverse is true when parallelism is explicitly expressed using OpenMP. The excellent numerical stability of the point algorithm is shown to be preserved by blocking. These results are extended to the real Schur method. Blocking is also shown to be effective for multiplying triangular matrices.
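    For reference, the point recurrence that the blocked variants accelerate can be sketched as follows (an illustration of the Björck–Hammarling substitution for an upper triangular matrix, not the paper's blocked code; it assumes the denominators $u_{ii}+u_{jj}$ never vanish):

```python
import numpy as np

def sqrtm_triu(T):
    """Square root of an upper triangular matrix T by the point recurrence:
    u_ii = sqrt(t_ii), then u_ij = (t_ij - sum_k u_ik u_kj) / (u_ii + u_jj),
    swept one superdiagonal at a time. Blocked variants turn the inner
    sums into matrix-matrix (level 3 BLAS) products."""
    n = T.shape[0]
    U = np.zeros_like(T, dtype=complex)
    for i in range(n):
        U[i, i] = np.sqrt(complex(T[i, i]))
    for d in range(1, n):                     # d-th superdiagonal
        for i in range(n - d):
            j = i + d
            s = U[i, i + 1:j] @ U[i + 1:j, j]  # empty slice gives 0.0
            U[i, j] = (T[i, j] - s) / (U[i, i] + U[j, j])
    return U

T = np.array([[4.0, 1.0, 2.0],
              [0.0, 9.0, 3.0],
              [0.0, 0.0, 16.0]])
U = sqrtm_triu(T)
print(np.allclose(U @ U, T))                  # -> True
```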

    Structure-preserving Schur methods for computing square roots of real skew-Hamiltonian matrices

    Our contribution is twofold. First, starting from the known fact that every real skew-Hamiltonian matrix has a real Hamiltonian square root, we give a complete characterization of the square roots of a real skew-Hamiltonian matrix W. Second, we propose a structure-exploiting method for computing square roots of W. Compared to the standard real Schur method, which ignores the structure, our method requires significantly less arithmetic.
    Comment: 27 pages; Conference "Directions in Matrix Theory 2011", July 2011, University of Coimbra, Portuga