
    Fast linear algebra is stable

    In an earlier paper, we showed that a large class of fast recursive matrix multiplication algorithms is stable in a normwise sense, and that in fact if multiplication of $n$-by-$n$ matrices can be done by any algorithm in $O(n^{\omega + \eta})$ operations for any $\eta > 0$, then it can be done stably in $O(n^{\omega + \eta})$ operations for any $\eta > 0$. Here we extend this result to show that essentially all standard linear algebra operations, including LU decomposition, QR decomposition, linear equation solving, matrix inversion, solving least squares problems, (generalized) eigenvalue problems and the singular value decomposition, can also be done stably (in a normwise sense) in $O(n^{\omega + \eta})$ operations. Comment: 26 pages; final version; to appear in Numerische Mathematik.
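
    To make "fast recursive matrix multiplication" concrete, the sketch below implements Strassen's algorithm (the classical example with exponent about 2.81) in numpy for sizes that are powers of two; it illustrates the algorithm class only, not the stabilized variants analyzed in the paper, and the cutoff value is an arbitrary choice.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's recursive multiplication of n-by-n matrices, n a power of two.
    Falls back to ordinary multiplication below the cutoff size."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    # Seven recursive products instead of eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:m, :m] = M1 + M4 - M5 + M7
    C[:m, m:] = M3 + M5
    C[m:, :m] = M2 + M4
    C[m:, m:] = M1 - M2 + M3 + M6
    return C

A = np.random.randn(256, 256)
B = np.random.randn(256, 256)
print(np.linalg.norm(strassen(A, B) - A @ B) / np.linalg.norm(A @ B))  # relative error of the Strassen product
```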

    Homotopy Method for the Large, Sparse, Real Nonsymmetric Eigenvalue Problem

    A homotopy method to compute the eigenpairs, i.e., the eigenvectors and eigenvalues, of a given real matrix $A_1$ is presented. From the eigenpairs of some real matrix $A_0$, the eigenpairs of $A(t) \equiv (1 - t)A_0 + tA_1$ are followed at successive "times" from $t = 0$ to $t = 1$ using continuation. At $t = 1$, the eigenpairs of the desired matrix $A_1$ are found. The following phenomena are present when following the eigenpairs of a general nonsymmetric matrix:
    • bifurcation,
    • ill conditioning due to nonorthogonal eigenvectors,
    • jumping of eigenpaths.
    These can present considerable computational difficulties. Since each eigenpair can be followed independently, this algorithm is ideal for concurrent computers. The homotopy method has the potential to compete with other algorithms for computing a few eigenvalues of large, sparse matrices. It may be a useful tool for determining the stability of a solution of a PDE. Some numerical results will be presented.
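
    A minimal sketch of the continuation idea for a single eigenpair, assuming dense numpy matrices and a plain Newton corrector applied to the bordered system [A(t)x - λx; xᵀx - 1] = 0 at each "time" step; the sparse setting and the path-following difficulties listed above (bifurcation, path jumping, ill conditioning) are not handled.

```python
import numpy as np

def follow_eigenpair(A0, A1, x0, lam0, steps=20, newton_iters=10, tol=1e-12):
    """Follow one eigenpair of A(t) = (1-t)*A0 + t*A1 from t = 0 to t = 1.
    (x0, lam0) must be an eigenpair of A0; returns an eigenpair of A1."""
    n = A0.shape[0]
    x, lam = x0 / np.linalg.norm(x0), lam0
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        At = (1.0 - t) * A0 + t * A1
        for _ in range(newton_iters):
            # Residual of the bordered system F(x, lam) = [At x - lam x; x.x - 1].
            r = np.concatenate([At @ x - lam * x, [x @ x - 1.0]])
            if np.linalg.norm(r) < tol:
                break
            # Jacobian of F with respect to (x, lam).
            J = np.zeros((n + 1, n + 1))
            J[:n, :n] = At - lam * np.eye(n)
            J[:n, n] = -x
            J[n, :n] = 2.0 * x
            d = np.linalg.solve(J, -r)
            x, lam = x + d[:n], lam + d[n]
    return x, lam

# Start from an easy A0 (here the diagonal of A1), whose eigenpairs are known.
A1 = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
A0 = np.diag(np.diag(A1))
x, lam = follow_eigenpair(A0, A1, x0=np.array([1.0, 0.0, 0.0]), lam0=4.0)
print(lam, np.linalg.norm(A1 @ x - lam * x))  # an eigenvalue of A1 and its residual
```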

    A new scaling for Newton's iteration for the polar decomposition and its backward stability

    This is the published version, also available here: http://dx.doi.org/10.1137/070699895. We propose a scaling scheme for Newton's iteration for calculating the polar decomposition. The scaling factors are generated by a simple scalar iteration in which the initial value depends only on estimates of the extreme singular values of the original matrix, which can, for example, be the Frobenius norms of the matrix and its inverse. In exact arithmetic, for matrices with condition number no greater than $10^{16}$, with this scaling scheme no more than 9 iterations are needed for convergence to the unitary polar factor with a convergence tolerance roughly equal to $10^{-16}$. It is proved that if matrix inverses computed in finite precision arithmetic satisfy a backward-forward error model, then the numerical method is backward stable. It is also proved that Newton's method with Higham's scaling or with Frobenius norm scaling is backward stable.
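
    A minimal numpy sketch of scaled Newton iteration for the unitary polar factor, using the Frobenius norm scaling mentioned in the abstract rather than the paper's scalar-iteration scaling; the tolerance and iteration cap are illustrative choices.

```python
import numpy as np

def polar_newton(A, tol=1e-15, max_iter=50):
    """Polar decomposition A = U H of a nonsingular square A via scaled Newton:
        X <- (zeta*X + X^{-*}/zeta) / 2,  zeta = sqrt(||X^{-1}||_F / ||X||_F)."""
    X = A.astype(complex)
    for _ in range(max_iter):
        Xinv = np.linalg.inv(X)
        zeta = np.sqrt(np.linalg.norm(Xinv, 'fro') / np.linalg.norm(X, 'fro'))
        X_new = 0.5 * (zeta * X + Xinv.conj().T / zeta)
        done = np.linalg.norm(X_new - X, 'fro') <= tol * np.linalg.norm(X_new, 'fro')
        X = X_new
        if done:
            break
    U = X                          # unitary polar factor
    H = U.conj().T @ A             # Hermitian factor
    H = 0.5 * (H + H.conj().T)     # symmetrize away rounding errors
    return U, H

A = np.random.randn(5, 5) + 1j * np.random.randn(5, 5)
U, H = polar_newton(A)
print(np.linalg.norm(U.conj().T @ U - np.eye(5)), np.linalg.norm(U @ H - A))
```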

    A Structure-Preserving Divide-and-Conquer Method for Pseudosymmetric Matrices

    We devise a spectral divide-and-conquer scheme for matrices that are self-adjoint with respect to a given indefinite scalar product (i.e., pseudosymmetric matrices). The pseudosymmetric structure of the matrix is preserved in the spectral division, so that the method can be applied recursively to achieve full diagonalization. The method is well suited for structured matrices that arise in computational quantum physics and chemistry. In this application context, additional definiteness properties guarantee convergence of the matrix sign function iteration within two steps when Zolotarev functions are used. The steps are easily parallelizable. Furthermore, it is shown that the matrix decouples into symmetric definite eigenvalue problems after just one step of spectral division.
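
    For orientation, the sketch below shows one generic (unstructured) step of sign-function-based spectral division for a real symmetric matrix: eigenvalues on either side of a shift are separated into two smaller, independent eigenproblems. The structure-preserving, Zolotarev-accelerated iteration of the paper is not reproduced; a plain Newton sign iteration stands in for it.

```python
import numpy as np

def matrix_sign(A, max_iter=100, tol=1e-12):
    """Matrix sign function via Newton's iteration S <- (S + S^{-1}) / 2."""
    S = A.copy()
    for _ in range(max_iter):
        S_new = 0.5 * (S + np.linalg.inv(S))
        if np.linalg.norm(S_new - S, 1) <= tol * np.linalg.norm(S_new, 1):
            return S_new
        S = S_new
    return S

def spectral_divide(A, sigma):
    """One divide step for real symmetric A: returns an orthogonal Q and the two
    diagonal blocks of Q.T @ A @ Q holding the eigenvalues above / below sigma."""
    n = A.shape[0]
    S = matrix_sign(A - sigma * np.eye(n))
    P = 0.5 * (np.eye(n) + S)        # orthogonal projector onto eigenvalues > sigma
    k = int(round(np.trace(P)))      # dimension of that invariant subspace
    Q = np.linalg.svd(P)[0]          # first k columns span range(P), the rest its complement
    B = Q.T @ A @ Q                  # block diagonal up to rounding
    return Q, B[:k, :k], B[k:, k:]

A = np.random.randn(6, 6); A = 0.5 * (A + A.T)
sigma = np.median(np.linalg.eigvalsh(A))   # generic shift strictly between two eigenvalues
Q, A_hi, A_lo = spectral_divide(A, sigma)
print(np.sort(np.concatenate([np.linalg.eigvalsh(A_hi), np.linalg.eigvalsh(A_lo)])))
print(np.linalg.eigvalsh(A))               # same spectrum, recovered from the two halves
```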

    Solving the ME/ME/1 queue with state-space methods and the matrix sign function

    Matrix exponential (ME) distributions not only include the well-known class of phase-type distributions but also can be used to approximate more general distributions (e.g., deterministic, heavy-tailed, etc.). In this paper, a novel mathematical framework and a numerical algorithm are proposed to calculate the matrix exponential representation for the steady-state waiting time in an ME/ME/1 queue. Using state-space algebra, the waiting time calculation problem is shown to reduce to finding the solution of an ordinary differential equation in state-space form, with order being the sum of the dimensionalities of the inter-arrival and service time distribution representations. A numerically efficient algorithm with quadratic convergence rates, based on the matrix sign function iterations, is proposed to find the boundary conditions of the differential equation. The overall algorithm does not involve any transform domain calculations such as root finding or polynomial factorization, which are known to have potential numerical stability problems. Numerical examples are provided to demonstrate the effectiveness of the proposed approach.
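
    As a generic illustration of how the sign function delivers boundary conditions for a state-space ODE: the initial states of x'(t) = Hx(t) that remain bounded on [0, ∞) are exactly those in the stable invariant subspace of H, which the sign iteration exposes. The actual ME/ME/1 construction, built from the inter-arrival and service-time representations, is not reproduced here.

```python
import numpy as np

def matrix_sign(A, max_iter=100, tol=1e-12):
    """Matrix sign function via the quadratically convergent Newton iteration."""
    S = A.copy()
    for _ in range(max_iter):
        S_new = 0.5 * (S + np.linalg.inv(S))
        if np.linalg.norm(S_new - S, 1) <= tol * np.linalg.norm(S_new, 1):
            return S_new
        S = S_new
    return S

# Toy ODE x'(t) = H x(t) on [0, inf); the eigenvalues of H are -2, -1 and 3.
H = np.array([[-2.0, 1.0, 0.0],
              [0.0, -1.0, 1.0],
              [0.0, 0.0, 3.0]])
P = 0.5 * (np.eye(3) - matrix_sign(H))   # spectral projector onto the stable modes
k = int(round(np.trace(P)))              # number of stable modes (here 2)
V = np.linalg.svd(P)[0][:, :k]           # orthonormal basis of range(P): admissible x(0)
print(np.linalg.eigvals(V.T @ H @ V))    # H restricted to that subspace: eigenvalues -2, -1
```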

    Inverting a complex matrix

    We analyze a complex matrix inversion algorithm proposed by Frobenius, which we call the Frobenius inversion. We show that the Frobenius inversion uses the least number of real matrix multiplications and inversions among all complex matrix inversion algorithms. We also analyze numerical properties of the Frobenius inversion. We prove that the Frobenius inversion runs faster than the widely used method based on LU decomposition if and only if the ratio of the running time of real matrix inversion to that of real matrix multiplication is greater than $5/4$. We corroborate this theoretical result by numerical experiments. Moreover, we apply the Frobenius inversion to the matrix sign function, the Sylvester equation, and the polar decomposition. In each of these examples, the Frobenius inversion is more efficient than inversion via LU decomposition.
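
    A small numpy sketch of the Frobenius formula as it is commonly stated: for M = A + iB with A and C = A + B A^{-1} B nonsingular, M^{-1} = C^{-1} - i A^{-1} B C^{-1}, i.e., two real inversions plus a few real multiplications (the code uses a linear solve for A^{-1}B). Handling of a singular real part A is omitted.

```python
import numpy as np

def frobenius_inverse(A, B):
    """Invert M = A + iB using only real matrix operations (Frobenius inversion).
    Assumes A and C = A + B A^{-1} B are both nonsingular."""
    AinvB = np.linalg.solve(A, B)        # A^{-1} B
    C = A + B @ AinvB                    # C = A + B A^{-1} B
    Cinv = np.linalg.inv(C)              # second real inversion
    return Cinv - 1j * (AinvB @ Cinv)    # real part C^{-1}, imaginary part -A^{-1} B C^{-1}

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
M = A + 1j * B
print(np.linalg.norm(frobenius_inverse(A, B) @ M - np.eye(4)))  # ~ machine precision
```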

    Efficient Numerical Algorithms for Balanced Stochastic Truncation

    We propose an efficient numerical algorithm for relative error model reduction based on balanced stochastic truncation. The method uses full-rank factors of the Gramians to be balanced versus each other and exploits the fact that for large-scale systems these Gramians are often of low numerical rank. We use the easy-to-parallelize sign function method as the major computational tool in determining these full-rank factors and demonstrate the numerical performance of the suggested implementation of balanced stochastic truncation model reduction.
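
    As a hedged illustration of the sign function method used as the main computational tool: the Gramians of a stable system solve Lyapunov equations, and the sign function iteration solves such equations using only matrix inverses and additions. The sketch below is the dense, unfactored version; the paper works directly with low-rank factors, which is not shown here.

```python
import numpy as np

def lyap_sign(A, Q, max_iter=100, tol=1e-12):
    """Solve A^T X + X A + Q = 0 (A Hurwitz) by the sign function iteration
        A <- (A + A^{-1}) / 2,   Q <- (Q + A^{-T} Q A^{-1}) / 2,
    which drives A -> -I and Q -> 2X."""
    Ak, Qk = A.copy(), Q.copy()
    for _ in range(max_iter):
        Ainv = np.linalg.inv(Ak)
        Ak_new = 0.5 * (Ak + Ainv)
        Qk = 0.5 * (Qk + Ainv.T @ Qk @ Ainv)
        done = np.linalg.norm(Ak_new - Ak, 1) <= tol * np.linalg.norm(Ak_new, 1)
        Ak = Ak_new
        if done:
            break
    return 0.5 * Qk

# Controllability Gramian of a stable system x' = A x + B u:  A P + P A^T + B B^T = 0.
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
P = lyap_sign(A.T, B @ B.T)
print(np.linalg.norm(A @ P + P @ A.T + B @ B.T))  # residual ~ machine precision
```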

    Stable and Efficient Computation of Generalized Polar Decompositions


    An Arithmetic for Matrix Pencils: Theory and New Algorithms

    This paper introduces arithmetic-like operations on matrix pencils. The pencil-arithmetic operations extend elementary formulas for sums and products of rational numbers and include the algebra of linear transformations as a special case. These operations give an unusual perspective on a variety of pencil-related computations. We derive generalizations of monodromy matrices and the matrix exponential. A new algorithm for computing a pencil-arithmetic generalization of the matrix sign function avoids matrix inverses and is empirically forward numerically stable when used to extract deflating subspaces.