MIMS EPrints
    2151 research outputs found

    Mixed-Precision Paterson–Stockmeyer Method for Evaluating Polynomials of Matrices

    The Paterson–Stockmeyer method is an evaluation scheme for matrix polynomials with scalar coefficients that arise in many state-of-the-art algorithms based on polynomial or rational approximation, for example, those for computing transcendental matrix functions. We derive a mixed-precision version of the Paterson–Stockmeyer method that is particularly useful for evaluating matrix polynomials with scalar coefficients of decaying magnitude. The key idea is to perform computations on data of small magnitude in low precision, and rounding error analysis is provided for the use of lower-than-working precisions. We focus on the evaluation of the Taylor approximants of the matrix exponential and show the applicability of our method to the existing scaling and squaring algorithms, particularly when the norm of the input matrix (which in practical algorithms is often scaled towards the origin) is sufficiently small. We also demonstrate through experiments the general applicability of our method to the computation of the polynomials from the Padé approximant of the matrix exponential and the Taylor approximant of the matrix cosine. Numerical experiments show that our mixed-precision Paterson–Stockmeyer algorithms can be more efficient than their fixed-precision counterparts while delivering the same level of accuracy.
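
The underlying fixed-precision Paterson–Stockmeyer scheme can be sketched in a few lines of NumPy (a minimal sketch only: the paper's mixed-precision refinement additionally evaluates the small-magnitude tail coefficients in low precision; the function name and interface here are illustrative):

```python
import numpy as np

def paterson_stockmeyer(c, A, s=None):
    """Evaluate p(A) = sum_i c[i] * A^i by the Paterson-Stockmeyer scheme.

    Groups the coefficients into blocks of s and applies Horner's rule in
    A^s, so only about s + deg/s matrix multiplications are needed.
    """
    n = A.shape[0]
    d = len(c) - 1                       # degree of the polynomial
    if s is None:
        s = max(1, int(np.ceil(np.sqrt(d + 1))))
    # Precompute A^0, A^1, ..., A^s.
    pows = [np.eye(n)]
    for _ in range(s):
        pows.append(pows[-1] @ A)
    As = pows[s]                         # the "outer" variable A^s
    r = d // s
    # Horner's rule in A^s on the block coefficient polynomials.
    P = np.zeros_like(A, dtype=float)
    for j in range(r, -1, -1):
        block = np.zeros_like(A, dtype=float)
        for i in range(s):
            k = s * j + i
            if k <= d:
                block += c[k] * pows[i]
        P = P @ As + block
    return P
```

For a degree-d polynomial the scheme uses roughly 2√d matrix multiplications instead of d, which is why it underlies the scaling-and-squaring algorithms mentioned above.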

    The Power of Bidiagonal Matrices

    Bidiagonal matrices are widespread in numerical linear algebra, not least because of their use in the standard algorithm for computing the singular value decomposition and their appearance as LU factors of tridiagonal matrices. We show that bidiagonal matrices have a number of interesting properties that make them powerful tools in a variety of problems, especially when they are multiplied together. We show that the inverse of a product of bidiagonal matrices is insensitive to small componentwise relative perturbations in the factors if the factors or their inverses are nonnegative. We derive componentwise rounding error bounds for the solution of a linear system Ax = b, where A or A⁻¹ is a product B₁B₂⋯Bₖ of bidiagonal matrices, showing that strong results are obtained when the Bᵢ are nonnegative or have a checkerboard sign pattern. We show that given the factorization of an n × n totally nonnegative matrix A into the product of bidiagonal matrices, ‖A⁻¹‖ can be computed in O(n²) flops and that in floating-point arithmetic the computed result has small relative error, no matter how large ‖A⁻¹‖ is. We also show how factorizations involving bidiagonal matrices of some special matrices, such as the Frank matrix and the Kac–Murdock–Szegő matrix, yield simple proofs of the total nonnegativity and other properties of these matrices.
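
To illustrate why products of bidiagonal matrices are cheap to work with, a system (B₁B₂⋯Bₖ)x = b can be solved by k successive O(n) substitution sweeps without ever forming the product (a sketch assuming lower bidiagonal factors given as (diagonal, subdiagonal) pairs; the function name and representation are illustrative, not from the paper):

```python
import numpy as np

def solve_bidiag_product(factors, b):
    """Solve (B_1 B_2 ... B_k) x = b where each B_i is lower bidiagonal,
    given as a (diagonal, subdiagonal) pair of 1-D arrays.

    Each factor is peeled off by one forward substitution, so the total
    cost is O(k n) flops and the product is never formed explicitly.
    """
    x = np.asarray(b, dtype=float).copy()
    for d, sub in factors:              # solve B_i y = x, left to right
        y = np.empty_like(x)
        y[0] = x[0] / d[0]
        for i in range(1, len(d)):
            y[i] = (x[i] - sub[i - 1] * y[i - 1]) / d[i]
        x = y
    return x
```

When the factors are nonnegative, as in the totally nonnegative case above, each substitution involves no subtractive cancellation, which is the source of the strong componentwise error bounds.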

    On the Cross-Shaped Matrices

    A cross-shaped n × n complex matrix X has nonzero elements located on the main diagonal and the anti-diagonal, so that the sparsity pattern has the shape of a cross. It is shown that X can be factorized into products of identity-plus-rank-two matrices and can be symmetrically permuted to block diagonal form with 2 × 2 diagonal blocks and, if n is odd, a 1 × 1 diagonal block. Exploiting these properties we derive explicit formulae for its determinant, inverse, and characteristic polynomial.
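
The block-diagonalization property translates directly into an O(n) determinant: pairing index i with n−1−i reduces X to 2 × 2 blocks. A sketch (for odd n the centre entry lies on both diagonals; here the main-diagonal value is taken):

```python
import numpy as np

def cross_matrix(d, a):
    """n x n cross-shaped matrix with main diagonal d and anti-diagonal a.
    For odd n the centre entry lies on both; the value d[n//2] is used."""
    n = len(d)
    X = np.diag(np.asarray(d, dtype=float))
    for i in range(n):
        X[i, n - 1 - i] = a[i]
    if n % 2 == 1:
        X[n // 2, n // 2] = d[n // 2]
    return X

def cross_det(d, a):
    """Determinant via the 2 x 2 block-diagonal form: the symmetric
    permutation pairing i with n-1-i yields blocks
    [[d[i], a[i]], [a[n-1-i], d[n-1-i]]], plus d[n//2] when n is odd."""
    n = len(d)
    det = 1.0
    for i in range(n // 2):
        j = n - 1 - i
        det *= d[i] * d[j] - a[i] * a[j]
    if n % 2 == 1:
        det *= d[n // 2]
    return det
```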

    ML-SGFEM User Guide

    No full text
    This is a User Guide for the MATLAB toolbox ML-SGFEM. The software can be used to investigate computational issues associated with multilevel stochastic Galerkin finite element approximation for elliptic PDEs with parameter-dependent coefficients. The distinctive feature of the software is the hierarchical a posteriori error estimation strategy it uses to drive the adaptive enrichment of the approximation space at each step. This document contains installation instructions, a brief mathematical description of the methodology, a sample session, and a description of the directory structure.

    Matrix Multiplication in Multiword Arithmetic: Error Analysis and Application to GPU Tensor Cores

    In multiword arithmetic, a matrix is represented as the unevaluated sum of two or more lower-precision matrices, and a matrix product is formed by multiplying the constituents in low precision. We investigate the use of multiword arithmetic for improving the performance-accuracy tradeoff of matrix multiplication with mixed precision block fused multiply-add (FMA) hardware, focusing especially on the tensor cores available on NVIDIA GPUs. Building on a general block FMA framework, we develop a comprehensive error analysis of multiword matrix multiplication. After confirming the theoretical error bounds experimentally by simulating low precision in software, we use the cuBLAS and CUTLASS libraries to implement a number of matrix multiplication algorithms using double-fp16 (double-binary16) arithmetic. When running the algorithms on NVIDIA V100 and A100 GPUs, we find that double-fp16 is not as accurate as fp32 (binary32) arithmetic despite satisfying the same worst-case error bound. Using probabilistic error analysis, we explain why this issue is likely to be caused by the rounding mode used by the NVIDIA tensor cores, and propose a parameterized blocked summation algorithm that alleviates the problem and significantly improves the performance-accuracy tradeoff.
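
The double-fp16 representation and product can be simulated in software along the following lines (a sketch: the float32 products of fp16 constituents stand in for a block FMA with fp32 accumulation, and do not reproduce the tensor-core rounding mode that the paper investigates):

```python
import numpy as np

def split_fp16(A):
    """Represent a float32 matrix as the unevaluated sum A ~ A_hi + A_lo
    of two float16 matrices (the double-fp16 multiword format)."""
    A_hi = A.astype(np.float16)
    A_lo = (A - A_hi.astype(np.float32)).astype(np.float16)
    return A_hi, A_lo

def matmul_double_fp16(A, B):
    """Double-fp16 product with float32 accumulation.

    Three cross products of the fp16 constituents are accumulated in
    float32; the A_lo @ B_lo term is dropped, being of second order in
    the fp16 unit roundoff.
    """
    A_hi, A_lo = split_fp16(A)
    B_hi, B_lo = split_fp16(B)
    f32 = lambda M: M.astype(np.float32)
    return f32(A_hi) @ f32(B_hi) + f32(A_hi) @ f32(B_lo) + f32(A_lo) @ f32(B_hi)
```

Compared with rounding the factors to a single fp16 word, the extra low-order words recover most of the lost accuracy at the cost of three low-precision products instead of one.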

    Probabilistic Rounding Error Analysis of Householder QR Factorization

    The standard worst-case normwise backward error bound for Householder QR factorization of an m × n matrix is proportional to mnu, where u is the unit roundoff. We prove that the bound can be replaced by one proportional to √(mn) u that holds with high probability if the rounding errors are mean independent and of mean zero and if the normwise backward errors in applying a sequence of m × m Householder matrices to a vector satisfy bounds proportional to √m u with probability 1. The proof makes use of a matrix concentration inequality. The same square rooting of the error constant applies to two-sided transformations by Householder matrices and hence to standard QR-type algorithms for computing eigenvalues and singular values. It also applies to Givens QR factorization. These results complement recent probabilistic rounding error analysis results for inner-product based algorithms and show that the square rooting effect is widespread in numerical linear algebra. Our numerical experiments, which make use of a new backward error formula for QR factorization, show that the probabilistic bounds give a much better indicator of the actual backward errors and their rate of growth than the worst-case bounds.
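
The square-rooting effect is easy to observe in a quick experiment (an illustration only, not the paper's experiment or its backward error formula; numpy.linalg.qr is a Householder QR via LAPACK):

```python
import numpy as np

def qr_backward_error(A):
    """Normwise backward error ||A - QR||_F / ||A||_F of the computed
    QR factorization (numpy.linalg.qr is Householder-based)."""
    Q, R = np.linalg.qr(A)
    return np.linalg.norm(A - Q @ R) / np.linalg.norm(A)

u = np.finfo(np.float64).eps / 2        # unit roundoff, ~1.1e-16
rng = np.random.default_rng(0)
for n in (100, 400):
    err = qr_backward_error(rng.standard_normal((n, n)))
    # The measured error tracks the probabilistic sqrt(mn)u = n*u scale,
    # far below the worst-case mnu = n^2*u scale.
    print(f"n={n}: err={err:.2e}, err/(n*u)={err / (n * u):.2f}, "
          f"err/(n^2*u)={err / (n * n * u):.4f}")
```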

    Computing the square root of a low-rank perturbation of the scaled identity matrix

    We consider the problem of computing the square root of a perturbation of the scaled identity matrix, A = α Iₙ + UVᴴ, where U and V are n × k matrices with k ≤ n. This problem arises in various applications, including computer vision and optimization methods for machine learning. We derive a new formula for the pth root of A that involves a weighted sum of powers of the pth root of the k × k matrix α Iₖ + VᴴU. This formula is particularly attractive for the square root, since the sum has just one term when p = 2. We also derive a new class of Newton iterations for computing the square root that exploit the low-rank structure. We test these new methods on random matrices and on positive definite matrices arising in applications. Numerical experiments show that the new approaches can yield a much smaller residual than existing alternatives and can be significantly faster when the perturbation UVᴴ has low rank.
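
One algebraically checkable instance of such a formula for p = 2 is the following (a sketch, assuming B = VᴴU is nonsingular and diagonalizable; the paper's formula and its branch handling may differ in detail): with S a square root of α Iₖ + B, the matrix X = √α Iₙ + U(S − √α Iₖ)B⁻¹Vᴴ satisfies X² = A, so all the root extraction happens at k × k scale.

```python
import numpy as np

def sqrt_low_rank_update(alpha, U, V):
    """Square root of A = alpha*I_n + U @ V^H using only k x k arithmetic.

    Returns X = sqrt(alpha)*I_n + U @ G @ V^H with
    G = (S - sqrt(alpha)*I_k) @ inv(B), B = V^H U, S = sqrt(alpha*I_k + B),
    so that X @ X = A. Assumes B is nonsingular and diagonalizable
    (illustrative sketch only).
    """
    n, k = U.shape
    B = V.conj().T @ U
    # Square root of alpha*I_k + B via an eigendecomposition (sketch).
    w, Z = np.linalg.eig(alpha * np.eye(k) + B)
    S = (Z * np.sqrt(w.astype(complex))) @ np.linalg.inv(Z)
    M = S - np.sqrt(alpha) * np.eye(k)
    G = np.linalg.solve(B.T, M.T).T      # G = M @ inv(B)
    return np.sqrt(alpha) * np.eye(n) + U @ G @ V.conj().T
```

The identity X² = A follows because G, S, and B are all functions of B and hence commute: 2√α G + GBG = (S − √α I)(S + √α I)B⁻¹ = (S² − αI)B⁻¹ = I.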

    Probabilistic Rounding Error Analysis of Householder QR Factorization

    When an m × n matrix is premultiplied by a product of n Householder matrices the worst-case normwise rounding error bound is proportional to mnu, where u is the unit roundoff. We prove that this bound can be replaced by one proportional to √(mn) u that holds with high probability if the rounding errors are mean independent and of mean zero, under the assumption that a certain bound holds with probability 1. The proof makes use of a matrix concentration inequality. In particular, this result applies to Householder QR factorization. The same square rooting of the error constant applies to two-sided transformations by Householder matrices and hence to standard QR-type algorithms for computing eigenvalues and singular values. It also applies to Givens QR factorization. These results complement recent probabilistic rounding error analysis results for inner-product based algorithms and show that the square rooting effect is widespread in numerical linear algebra. Our numerical experiments, which make use of a new backward error formula for QR factorization, show that the probabilistic bounds give a much better indicator of the actual backward errors and their rate of growth than the worst-case bounds.

    Hamilton's Discovery of the Quaternions and Creativity in Mathematics

    Creativity can be defined as the process of forming new patterns from pre-existing component parts. As an illustration, we explain how Hamilton's discovery of the quaternions, and Cayley and Graves's subsequent discoveries of the octonions, could have resulted from considering the properties of complex numbers and asking for each one, "how might this be different?". We give some general suggestions on differences to look for and explain why creativity can be much richer when people work in small groups rather than individually.

996 full texts · 2,151 metadata records · updated in the last 30 days.
MIMS EPrints is based in the United Kingdom.