
    A local construction of the Smith normal form of a matrix polynomial

    We present an algorithm for computing a Smith form with multipliers of a regular matrix polynomial over a field. This algorithm differs from previous ones in that it computes a local Smith form for each irreducible factor of the determinant separately and then combines them into a global Smith form, whereas other algorithms apply a sequence of unimodular row and column operations to the original matrix. The performance of the algorithm in exact arithmetic is reported for several test cases.
    Comment: 26 pages, 6 figures; introduction expanded, 10 references added, two additional tests performed.
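    For contrast with the paper's local approach, the sketch below computes a Smith form the classical global way, via determinantal divisors: the k-th invariant factor is d_k/d_{k-1}, where d_k is the monic gcd of all k x k minors. This is not the authors' algorithm; the Python/SymPy code and the example matrix are illustrative.

        from itertools import combinations
        from sympy import Matrix, factor, gcd_list, monic, symbols

        x = symbols('x')
        # illustrative regular matrix polynomial (not taken from the paper)
        A = Matrix([[x - 1, 1],
                    [0, (x - 1)*(x - 2)]])

        def det_divisor(A, k):
            # d_k: monic gcd of all k x k minors of A
            minors = [A[list(r), list(c)].det()
                      for r in combinations(range(A.rows), k)
                      for c in combinations(range(A.cols), k)]
            return monic(gcd_list(minors), x)

        d = [1] + [det_divisor(A, k) for k in range(1, A.rows + 1)]
        # invariant factors s_k = d_k / d_{k-1}; the Smith form is diag(s_1, ..., s_n)
        s = [factor(d[k] / d[k - 1]) for k in range(1, len(d))]
        print(s)   # [1, (x - 2)*(x - 1)**2]

    This minor-gcd definition is what any Smith-form algorithm, local or global, must reproduce, which makes it a convenient correctness check on small examples.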

    New Structured Matrix Methods for Real and Complex Polynomial Root-finding

    We combine the known methods for univariate polynomial root-finding and for computations in the Frobenius matrix algebra with our novel techniques to advance the numerical solution of a univariate polynomial equation, and in particular the numerical approximation of the real roots of a polynomial. Our analysis and experiments show the efficiency of the resulting algorithms.
    Comment: 18 pages.
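    For background, the sketch below shows the classical companion-matrix (Frobenius) connection the paper builds on: the roots of a univariate polynomial are the eigenvalues of its companion matrix. This is only the textbook baseline, not the paper's structured methods; the code and the test polynomial are illustrative.

        import numpy as np

        def companion_roots(coeffs):
            # roots of c[0]*x^n + ... + c[n] as eigenvalues of the companion matrix
            c = np.asarray(coeffs, dtype=float)
            c = c / c[0]                   # make the polynomial monic
            n = len(c) - 1
            C = np.zeros((n, n))
            C[1:, :-1] = np.eye(n - 1)     # subdiagonal of ones
            C[:, -1] = -c[:0:-1]           # last column: -c[n], ..., -c[1]
            return np.linalg.eigvals(C)

        # x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3
        print(np.sort(companion_roots([1, -6, 11, -6])))

    numpy.roots uses this same reduction internally; the structured methods of the paper aim to exploit the special form of such matrices rather than treating them as dense.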

    Faster Inversion and Other Black Box Matrix Computations Using Efficient Block Projections

    Block projections have been used, in [Eberly et al. 2006], to obtain an efficient algorithm for finding solutions of sparse systems of linear equations. A bound of softO(n^(2.5)) machine operations is obtained, assuming that the input matrix can be multiplied by a vector with constant-sized entries in softO(n) machine operations. Unfortunately, the correctness of this algorithm depends on the existence of efficient block projections, which had only been conjectured. In this paper we establish the correctness of the algorithm from [Eberly et al. 2006] by proving the existence of efficient block projections over sufficiently large fields. We demonstrate the usefulness of these projections by deriving improved bounds for the cost of several matrix problems, considering, in particular, "sparse" matrices that can be multiplied by a vector using softO(n) field operations. We show how to compute the inverse of a sparse matrix over a field F using an expected number of softO(n^(2.27)) operations in F. A basis for the null space of a sparse matrix, and a certification of its rank, are obtained at the same cost. An application to Kaltofen and Villard's baby-steps/giant-steps algorithms for the determinant and Smith form of an integer matrix yields algorithms requiring softO(n^(2.66)) machine operations. The derived algorithms are all probabilistic of the Las Vegas type.
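    The sketch below illustrates the black-box model assumed here: the matrix is accessed only through matrix-vector products, each costing softO(n) when the matrix is sparse. It simply counts products during an off-the-shelf iterative solve with SciPy's GMRES; it is not the block-projection algorithm of the paper, and the test matrix is made up.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import LinearOperator, gmres

        n = 1000
        # sparse, diagonally dominant test matrix: ~5 nonzeros per row plus the diagonal
        A = 10 * sp.eye(n) + sp.random(n, n, density=5 / n, format='csr')

        count = [0]
        def matvec(v):
            count[0] += 1        # each product costs softO(n) since A is sparse
            return A @ v

        blackbox = LinearOperator((n, n), matvec=matvec)
        b = np.ones(n)
        x, info = gmres(blackbox, b)
        print(info, count[0], np.linalg.norm(A @ x - b))

    The complexity bounds in the abstract count exactly such products (plus the surrounding field operations), which is why the cost of one black-box application dominates the analysis.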

    Preconditioning For Matrix Computation

    Preconditioning is a classical subject in the numerical solution of linear systems of equations. The goal is to turn a linear system into another one that is easier to solve. The two central subjects of numerical matrix computation are LIN-SOLVE, that is, the solution of linear systems of equations, and EIGEN-SOLVE, that is, the approximation of the eigenvalues and eigenvectors of a matrix. We focus on LIN-SOLVE and show an application to EIGEN-SOLVE. We achieve our goal by applying randomized additive and multiplicative preconditioning, which decreases the condition number of the coefficient matrix and thereby enables reliable numerical solution. After the introduction in Chapter 1, we recall definitions and auxiliary results in Chapter 2. In Chapter 3 we precondition the linear systems of equations solved at every iteration of the Inverse Power Method applied to EIGEN-SOLVE. These systems are ill conditioned, that is, have large condition numbers, and we decrease those numbers by applying randomized additive preconditioning. This is our first subject. Our second subject is randomized multiplicative preconditioning for LIN-SOLVE, which supports the application of GENP, that is, Gaussian elimination with no pivoting, and of block Gaussian elimination. We prove that the proposed preconditioning methods are efficient when Gaussian random matrices are applied as preconditioners, and we confirm these results with extensive numerical tests. The tests also show that the same methods work as efficiently on average when random structured, in particular circulant, preconditioners are used instead, but we show both formally and experimentally that circulant preconditioners fail in the case of LIN-SOLVE for the unitary matrix of the discrete Fourier transform, for which Gaussian preconditioners work efficiently.
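    The sketch below illustrates randomized additive preconditioning in the spirit of the thesis: an ill-conditioned matrix A is replaced by C = A + UV^T with Gaussian U and V of small rank, which with high probability has a much smaller condition number. The matrix sizes, the rank r, and the scaling are illustrative choices, not taken from the text.

        import numpy as np

        rng = np.random.default_rng(0)
        n, r = 200, 2

        # build an ill-conditioned A: r trailing singular values are tiny
        Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
        Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
        sigma = np.ones(n)
        sigma[-r:] = 1e-14
        A = Q1 @ np.diag(sigma) @ Q2.T

        # Gaussian additive preconditioner of rank r, scaled so ||UV^T|| = ||A|| = 1
        U = rng.standard_normal((n, r))
        V = rng.standard_normal((n, r))
        E = U @ V.T
        C = A + E / np.linalg.norm(E, 2)

        print(np.linalg.cond(A))   # about 1e14
        print(np.linalg.cond(C))   # much smaller, with high probability

    The rank of the additive term matters: it should match the number of tiny singular values of A, and the perturbation should be scaled to roughly the norm of A, as in the setup above.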