
    Blocked Schur algorithms for computing the matrix square root

    Applied Parallel and Scientific Computing: 11th International Conference, PARA 2012, Helsinki, Finland, June 10-13, 2012, Revised Selected Papers.
    The Schur method for computing a matrix square root reduces the matrix to Schur triangular form and then computes a square root of the triangular matrix. We show that by using either standard blocking or recursive blocking the computation of the square root of the triangular matrix can be made rich in matrix multiplication. Numerical experiments making appropriate use of level 3 BLAS show significant speedups over the point algorithm, both in the square root phase and in the algorithm as a whole. In parallel implementations, recursive blocking is found to provide better performance than standard blocking when parallelism comes only from threaded BLAS, but the reverse is true when parallelism is explicitly expressed using OpenMP. The excellent numerical stability of the point algorithm is shown to be preserved by blocking. These results are extended to the real Schur method. Blocking is also shown to be effective for multiplying triangular matrices.
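
    A minimal sketch of the recursive blocking idea, assuming NumPy and SciPy: split the triangular factor T into a 2x2 block form, take square roots of the diagonal blocks recursively, and obtain the off-diagonal block from a Sylvester equation, which is where the matrix-multiplication-rich level 3 work comes from. The function name, base-case size, and test sizes below are illustrative, not taken from the paper.

        import numpy as np
        from scipy.linalg import schur, solve_sylvester

        def sqrtm_triangular(T, nmin=8):
            """Square root of upper triangular T via recursive blocking (sketch)."""
            T = T.astype(complex)
            n = T.shape[0]
            if n <= nmin:
                # Base case: the point algorithm's column-by-column substitution.
                R = np.zeros_like(T)
                for i in range(n):
                    R[i, i] = np.sqrt(T[i, i])
                for j in range(1, n):
                    for i in range(j - 1, -1, -1):
                        s = R[i, i + 1:j] @ R[i + 1:j, j]
                        R[i, j] = (T[i, j] - s) / (R[i, i] + R[j, j])
                return R
            m = n // 2
            R11 = sqrtm_triangular(T[:m, :m], nmin)
            R22 = sqrtm_triangular(T[m:, m:], nmin)
            # Off-diagonal block: solve R11 X + X R22 = T12.
            R12 = solve_sylvester(R11, R22, T[:m, m:])
            return np.block([[R11, R12], [np.zeros((n - m, m)), R22]])

        A = np.random.default_rng(0).standard_normal((50, 50))
        T, Z = schur(A, output='complex')
        R = sqrtm_triangular(T)
        X = Z @ R @ Z.conj().T           # X @ X should approximate A
        print(np.linalg.norm(X @ X - A) / np.linalg.norm(A))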

    Taylor's theorem for matrix functions with applications to condition number estimation

    We derive an explicit formula for the remainder term of a Taylor polynomial of a matrix function. This formula generalizes a known result for the remainder of the Taylor polynomial for an analytic function of a complex scalar. We investigate some consequences of this result, which culminate in new upper bounds for the level-1 and level-2 condition numbers of a matrix function in terms of the pseudospectrum of the matrix. Numerical experiments show that, although the bounds can be pessimistic, they can be computed much faster than the standard methods. This makes the upper bounds ideal for a quick estimate of the condition number, whilst a more accurate (and more expensive) method can be used if further accuracy is required. They are also easily applicable to more complicated matrix functions for which no specialized condition number estimators are currently available.
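
    For concreteness, a small numerical illustration (mine, not the paper's) of the object the remainder formula describes, taking f = exp and using SciPy's expm as the reference value; the degree and scaling below are arbitrary.

        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(0)
        A = rng.standard_normal((6, 6)) / 4

        # Degree-k Taylor polynomial of exp, evaluated at the matrix A.
        k = 8
        P = np.zeros_like(A)
        term = np.eye(6)
        for j in range(k + 1):
            P += term
            term = term @ A / (j + 1)

        # R is the remainder whose explicit formula the paper derives.
        R = expm(A) - P
        print(np.linalg.norm(R))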

    Estimating the Condition Number of f(A)b

    New algorithms are developed for estimating the condition number of f(A)b, where A is a matrix and b is a vector. The condition number estimation algorithms for f(A) already available in the literature require the explicit computation of matrix functions and their Fréchet derivatives and are therefore unsuitable for the large, sparse A typically encountered in f(A)b problems. The algorithms we propose here use only matrix-vector multiplications. They are based on a modified version of the power iteration for estimating the norm of the Fréchet derivative of a matrix function, and work in conjunction with any existing algorithm for computing f(A)b. The number of matrix-vector multiplications required to estimate the condition number is proportional to the square of the number of matrix-vector multiplications required by the underlying f(A)b algorithm. We develop a specific version of our algorithm for estimating the condition number of e^A b, based on the algorithm of Al-Mohy and Higham [SIAM J. Matrix Anal. Appl., 30(4):1639--1657, 2009]. Numerical experiments demonstrate that our condition estimates are reliable and of reasonable cost.
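
    A dense, finite-difference sketch of the core building block, the power iteration for the norm of a Fréchet derivative, here for f = exp. The paper's algorithms replace both ingredients with exact, matrix-vector-only computations (an e^A b routine such as SciPy's scipy.sparse.linalg.expm_multiply is the kind of underlying algorithm meant); the step size h and iteration count below are illustrative.

        import numpy as np
        from scipy.linalg import expm

        def frechet_fd(A, E, h=1e-7):
            # First-order finite-difference approximation to L_exp(A, E).
            return (expm(A + h * E) - expm(A)) / h

        def frechet_norm_est(A, iters=5, seed=0):
            # Power iteration on E -> L*(L(E)); for real A and an f with a
            # real Taylor series, the adjoint of L_f(A, .) is L_f(A^T, .).
            rng = np.random.default_rng(seed)
            Z = rng.standard_normal(A.shape)
            for _ in range(iters):
                Z /= np.linalg.norm(Z)
                W = frechet_fd(A, Z)       # apply L
                Z = frechet_fd(A.T, W)     # apply L*
                gamma = np.linalg.norm(Z) / np.linalg.norm(W)
            return gamma  # estimate of the largest singular value of L_exp(A)

        A = np.random.default_rng(1).standard_normal((8, 8)) / 2
        print(frechet_norm_est(A))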

    A Catalogue of Software for Matrix Functions. Version 2.0

    A catalogue of software for computing matrix functions and their Fréchet derivatives is presented. For a wide variety of languages, and for software ranging from commercial products to open source packages, we describe what matrix function codes are available and which algorithms they implement.
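
    By way of illustration, a few open-source codes of the kind such a catalogue records, here from SciPy; the calls below are standard SciPy API, not taken from the catalogue itself.

        import numpy as np
        from scipy.linalg import expm, logm, sqrtm, funm, expm_frechet

        A = np.array([[1.0, 2.0], [0.0, 3.0]])
        E = np.eye(2)

        X = expm(A)          # matrix exponential
        L = logm(X)          # matrix logarithm; recovers A here
        S = sqrtm(A)         # principal matrix square root
        C = funm(A, np.cos)  # general f(A) from a scalar function
        F = expm_frechet(A, E, compute_expm=False)  # Frechet derivative of expm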

    A Catalogue of Software for Matrix Functions. Version 1.0

    A catalogue of software for computing matrix functions and their Fréchet derivatives is presented. For a wide variety of languages, and for software ranging from commercial products to open source packages, we describe what matrix function codes are available and which algorithms they implement.

    Testing matrix function algorithms using identities

    Algorithms for computing matrix functions are typically tested by comparing the forward error with the product of the condition number and the unit roundoff. The forward error is computed with the aid of a reference solution, typically computed at high precision. An alternative approach is to use functional identities such as the "round trip tests" e^{log A} = A and (A^{1/p})^p = A, as are currently employed in a SciPy test module. We show how a linearized perturbation analysis for a functional identity allows the determination of a maximum residual consistent with backward stability of the constituent matrix function evaluations. Comparison of this maximum residual with a computed residual provides a necessary test for backward stability. We also show how the actual linearized backward error for these relations can be computed. Our approach makes use of Fréchet derivatives and estimates of their norms. Numerical experiments show that the proposed approaches are able both to detect instability and to confirm stability.
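
    A minimal version of such a round trip test, assuming SciPy and the identities above with p = 2; the tolerance handling is deliberately simplified, since the paper's point is precisely that the maximum acceptable residual should come from a linearized perturbation analysis rather than an ad hoc constant.

        import numpy as np
        from scipy.linalg import expm, logm, sqrtm

        rng = np.random.default_rng(0)
        A = np.eye(5) + 0.5 * rng.standard_normal((5, 5))  # keep spectrum off R^-

        u = np.finfo(float).eps / 2   # unit roundoff

        # Round-trip residuals: e^{log A} = A and (A^{1/2})^2 = A.
        res_log = np.linalg.norm(expm(logm(A)) - A) / np.linalg.norm(A)
        X = sqrtm(A)
        res_sqrt = np.linalg.norm(X @ X - A) / np.linalg.norm(A)
        print(res_log / u, res_sqrt / u)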

    A Recursive Blocked Schur Algorithm for Computing the Matrix Square Root

    The Schur method for computing a matrix square root reduces the matrix to the Schur triangular form and then computes a square root of the triangular matrix. We show that by using a recursive blocking technique the computation of the square root of the triangular matrix can be made rich in matrix multiplication. Numerical experiments making appropriate use of level 3 BLAS show significant speedups over the point algorithm, both in the square root phase and in the algorithm as a whole. The excellent numerical stability of the point algorithm is shown to be preserved by recursive blocking. These results are extended to the real Schur method. Recursive blocking is also shown to be effective for multiplying triangular matrices.
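
    As far as I know, this blocked Schur approach is what scipy.linalg.sqrtm implements, with its blocksize parameter exposing the blocking applied to the triangular factor; a quick residual check:

        import numpy as np
        from scipy.linalg import sqrtm

        A = np.random.default_rng(0).standard_normal((200, 200))
        X = sqrtm(A, blocksize=64)   # blocked Schur method; 64 is the default
        print(np.linalg.norm(X @ X - A) / np.linalg.norm(A))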