
    Computing the Rank and a Small Nullspace Basis of a Polynomial Matrix

    We reduce the problem of computing the rank and a nullspace basis of a univariate polynomial matrix to polynomial matrix multiplication. For an input n x n matrix of degree d over a field K we give a rank and nullspace algorithm using about the same number of operations as for multiplying two matrices of dimension n and degree d. If the latter multiplication is done in MM(n,d)=softO(n^omega d) operations, with omega the exponent of matrix multiplication over K, then the algorithm uses softO(MM(n,d)) operations in K. The softO notation indicates some missing logarithmic factors. The method is randomized with Las Vegas certification. We achieve our results in part through a combination of matrix Hensel high-order lifting and matrix minimal fraction reconstruction, and through the computation of minimal or small degree vectors in the nullspace seen as a K[x]-module. Comment: Research Report LIP RR2005-03, January 200
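None of the fast machinery above is needed to see the randomized flavor of such algorithms. As a minimal stdlib-only sketch (my own illustration, not the paper's Las Vegas method), the rank of a polynomial matrix can be computed Monte Carlo style: evaluate the matrix at random points modulo a large prime and take the largest rank seen, since evaluation can only lose rank.

```python
import random

def poly_matrix_rank(A, p=2**31 - 1, trials=3, rng=random.Random(1)):
    """Monte Carlo rank of a polynomial matrix (illustrative sketch).
    Entries are coefficient lists, low degree first: [1, 0, 2] = 1 + 2x^2.
    Evaluating at a random point modulo the prime p can only drop the rank,
    so the maximum over a few trials is correct with high probability."""
    def rank_mod_p(M):
        # plain Gaussian elimination over GF(p)
        M = [row[:] for row in M]
        rows, cols, r = len(M), len(M[0]), 0
        for c in range(cols):
            piv = next((i for i in range(r, rows) if M[i][c]), None)
            if piv is None:
                continue
            M[r], M[piv] = M[piv], M[r]
            inv = pow(M[r][c], -1, p)          # modular inverse (Python 3.8+)
            for i in range(r + 1, rows):
                f = M[i][c] * inv % p
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
            r += 1
        return r

    best = 0
    for _ in range(trials):
        x0 = rng.randrange(p)
        # evaluate every polynomial entry at x0 modulo p
        B = [[sum(c * pow(x0, k, p) for k, c in enumerate(e)) % p for e in row]
             for row in A]
        best = max(best, rank_mod_p(B))
    return best
```

The paper's algorithm goes much further: it certifies the result (Las Vegas rather than Monte Carlo) and also produces a small-degree nullspace basis, which a scalar evaluation cannot give.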

    On the complexity of inverting integer and polynomial matrices

    Abstract. An algorithm is presented that probabilistically computes the exact inverse of a nonsingular n × n integer matrix A using (n^3 (log ||A|| + log κ(A)))^{1+o(1)} bit operations. Here, ||A|| = max_{ij} |A_{ij}| denotes the largest entry in absolute value, κ(A) := n ||A^{-1}|| ||A|| is the condition number of the input matrix, and the "+o(1)" in the exponent indicates a missing factor c_1 (log n)^{c_2} (log log ||A||)^{c_3} for positive real constants c_1, c_2, c_3. A variation of the algorithm is presented for polynomial matrices that computes the inverse of a nonsingular n × n matrix whose entries are polynomials of degree d over a field using (n^3 d)^{1+o(1)} field operations. Both algorithms are randomized of the Las Vegas type: failure may be reported with probability at most 1/2, and if failure is not reported then the output is certified to be correct in the same running time bound.
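For contrast with these bounds, here is the naive baseline such algorithms improve on: exact inversion over Q by Gauss-Jordan elimination on fractions. It uses O(n^3) field operations, but the bit-size of intermediate fractions grows, which is precisely the cost the lifting-based method controls. A sketch for illustration only, not the paper's algorithm.

```python
from fractions import Fraction

def exact_inverse(A):
    """Exact inverse of a nonsingular integer matrix by Gauss-Jordan over Q.
    Augment with the identity, reduce to the identity on the left, and read
    the inverse off the right half."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c])  # partial pivoting
        M[c], M[piv] = M[piv], M[c]
        M[c] = [v / M[c][c] for v in M[c]]             # scale pivot row to 1
        for i in range(n):
            if i != c and M[i][c]:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]
```

The entries of the result are exact rationals; the abstract's point is that the same output can be certified in nearly the time it takes to write it down.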

    Faster Inversion and Other Black Box Matrix Computations Using Efficient Block Projections

    Block projections have been used, in [Eberly et al. 2006], to obtain an efficient algorithm to find solutions for sparse systems of linear equations. A bound of softO(n^(2.5)) machine operations is obtained assuming that the input matrix can be multiplied by a vector with constant-sized entries in softO(n) machine operations. Unfortunately, the correctness of this algorithm depends on the existence of efficient block projections, and this has been conjectured. In this paper we establish the correctness of the algorithm from [Eberly et al. 2006] by proving the existence of efficient block projections over sufficiently large fields. We demonstrate the usefulness of these projections by deriving improved bounds for the cost of several matrix problems, considering, in particular, ``sparse'' matrices that can be multiplied by a vector using softO(n) field operations. We show how to compute the inverse of a sparse matrix over a field F using an expected number of softO(n^(2.27)) operations in F. A basis for the null space of a sparse matrix, and a certification of its rank, are obtained at the same cost. An application to Kaltofen and Villard's Baby-Steps/Giant-Steps algorithms for the determinant and Smith Form of an integer matrix yields algorithms requiring softO(n^(2.66)) machine operations. The derived algorithms are all probabilistic of the Las Vegas type.
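The "black box" model underlying this line of work is easy to demonstrate in miniature: the matrix is touched only through matrix-vector products, and spectral information is recovered from a projected Krylov sequence. The following is a Wiedemann-style toy (scalar projections, not the block projections of the paper; function names are my own): Berlekamp–Massey applied to the sequence u^T A^k v recovers the minimal polynomial of A over GF(p) with high probability.

```python
import random

def berlekamp_massey(s, p):
    """Minimal connection polynomial C (C[0] = 1) of sequence s over GF(p),
    i.e. sum_i C[i] * s[k - i] == 0 (mod p) for all valid k. Returns (C, L)."""
    C, B = [1], [1]
    L, m, b = 0, 1, 1
    for k in range(len(s)):
        d = sum(C[i] * s[k - i] for i in range(L + 1)) % p  # discrepancy
        if d == 0:
            m += 1
            continue
        T = C[:]
        coef = d * pow(b, -1, p) % p
        if len(B) + m > len(C):
            C += [0] * (len(B) + m - len(C))
        for i in range(len(B)):
            C[i + m] = (C[i + m] - coef * B[i]) % p
        if 2 * L <= k:
            L, B, b, m = k + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

def blackbox_minpoly(apply_A, n, p, u=None, v=None):
    """Monte Carlo: minimal polynomial data of an n x n matrix over GF(p)
    that is accessible only through the matvec `apply_A`."""
    rng = random.Random(0)
    u = u or [rng.randrange(p) for _ in range(n)]
    v = v or [rng.randrange(p) for _ in range(n)]
    seq, w = [], v[:]
    for _ in range(2 * n):                      # 2n terms determine degree <= n
        seq.append(sum(ui * wi for ui, wi in zip(u, w)) % p)
        w = apply_A(w)
    return berlekamp_massey(seq, p)
```

If each matvec costs softO(n) (the sparse case of the abstract), the whole sequence costs softO(n^2); the paper's block projections are what push inversion and related problems below that barrier toward softO(n^2.27).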

    Solving Sparse Integer Linear Systems

    We propose a new algorithm to solve sparse linear systems of equations over the integers. This algorithm is based on a p-adic lifting technique combined with the use of block matrices with structured blocks. It achieves a sub-cubic complexity in terms of machine operations, subject to a conjecture on the effectiveness of certain sparse projections. A LinBox-based implementation of this algorithm is demonstrated, emphasizing the practical benefits of this new method over the previous state of the art.
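The p-adic lifting core (Dixon's scheme, which this abstract builds on) fits in a short sketch: invert A once modulo a prime p, peel off one p-adic digit of the solution per iteration, then recover the rational answer by rational reconstruction. This is the dense, unblocked version for illustration, without the structured-block machinery of the paper.

```python
import math
from fractions import Fraction

def inv_mod_p(A, p):
    """Gauss-Jordan inverse of a square integer matrix modulo a prime p."""
    n = len(A)
    M = [[A[i][j] % p for j in range(n)] + [int(i == j) for j in range(n)]
         for i in range(n)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c])
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], -1, p)
        M[c] = [v * inv % p for v in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[c])]
    return [row[n:] for row in M]

def matvec(A, v, mod=None):
    out = [sum(a * b for a, b in zip(row, v)) for row in A]
    return [x % mod for x in out] if mod else out

def rational_reconstruct(a, m):
    """Recover n/d with n ≡ a*d (mod m) and |n|, d <= sqrt(m/2), via half-EEA."""
    bound = math.isqrt(m // 2)
    r0, r1, t0, t1 = m, a % m, 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    return Fraction(r1, t1)

def dixon_solve(A, b, p=1000003, iters=40):
    """Solve A x = b exactly over Q by p-adic (Dixon) lifting."""
    n = len(A)
    C = inv_mod_p(A, p)                  # one inverse mod p, reused every step
    digits, r = [], b[:]
    for _ in range(iters):
        d = matvec(C, r, mod=p)          # next p-adic digit of the solution
        digits.append(d)
        Ad = matvec(A, d)
        r = [(ri - adi) // p for ri, adi in zip(r, Ad)]  # exact: r ≡ A d (mod p)
    m = p ** iters
    x = [sum(dig[i] * p**k for k, dig in enumerate(digits)) % m for i in range(n)]
    return [rational_reconstruct(xi, m) for xi in x]
```

In the sparse setting of the abstract, the dense `inv_mod_p` is the step one cannot afford; replacing it with black-box solving mod p is where the structured block projections come in.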

    Algorithms for Simultaneous Padé Approximations

    We describe how to solve simultaneous Padé approximations over a power series ring K[[x]] for a field K using O~(n^{ω-1} d) operations in K, where d is the sought precision and n is the number of power series to approximate. We develop two algorithms using different approaches. Both algorithms return a reduced sub-basis that generates the complete set of solutions to the input approximation problem that satisfy the given degree constraints. Our results are made possible by recent breakthroughs in fast computation of minimal approximant bases and Hermite Padé approximations. Comment: ISSAC 201
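The simplest instance of this problem family is a single scalar [m/n] Padé approximant: find p/q with deg p <= m, deg q <= n, q(0) = 1 and q·f ≡ p (mod x^{m+n+1}). The sketch below sets up and solves that coefficient system by brute-force Gaussian elimination over Q; it is a baseline illustration, not the reduced-approximant-basis algorithms of the paper, and the function names are mine.

```python
from fractions import Fraction

def solve(M, rhs):
    """Solve a square linear system over Q by Gauss-Jordan elimination."""
    n = len(M)
    A = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(M, rhs)]
    for c in range(n):
        piv = next(i for i in range(c, n) if A[i][c])
        A[c], A[piv] = A[piv], A[c]
        A[c] = [v / A[c][c] for v in A[c]]
        for i in range(n):
            if i != c and A[i][c]:
                f = A[i][c]
                A[i] = [x - f * y for x, y in zip(A[i], A[c])]
    return [row[n] for row in A]

def pade(a, m, n):
    """[m/n] Pade approximant (p, q as coefficient lists, q[0] = 1) of the
    series a, matching m + n + 1 terms. Coefficient k of q*f - p must vanish:
    for k <= m:  p_k - sum_{j>=1} a_{k-j} q_j = a_k
    for k > m:      - sum_{j>=1} a_{k-j} q_j = a_k."""
    N = m + n + 1
    M = [[Fraction(0)] * N for _ in range(N)]
    rhs = [Fraction(a[k]) for k in range(N)]
    for k in range(N):
        if k <= m:
            M[k][k] = Fraction(1)                # unknown p_k
        for j in range(1, min(k, n) + 1):
            M[k][m + j] = Fraction(-a[k - j])    # unknown q_j
    u = solve(M, rhs)
    return u[:m + 1], [Fraction(1)] + u[m + 1:]
```

Simultaneous (vector Hermite–Padé) approximation replaces the single series by several and asks for a whole basis of small-degree relations; the naive solve above costs a cubic number of operations in the system size, against the paper's O~(n^{ω-1} d).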

    A cubic algorithm for computing the Hermite normal form of a nonsingular integer matrix

    A Las Vegas randomized algorithm is given to compute the Hermite normal form of a nonsingular integer matrix A of dimension n. The algorithm uses quadratic integer multiplication and cubic matrix multiplication and has running time bounded by O(n^3 (\log n + \log ||A||)^2 (\log n)^2) bit operations, where ||A|| = \max_{ij} |A_{ij}| denotes the largest entry of A in absolute value. A variant of the algorithm that uses pseudo-linear integer multiplication is given that has running time (n^3 \log ||A||)^{1+o(1)} bit operations, where the exponent "+o(1)" captures additional factors c_1 (\log n)^{c_2} (\log \log ||A||)^{c_3} for positive real constants c_1, c_2, c_3. Comment: 35 pages
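The Hermite normal form itself is computed by Euclidean row operations; the hard part, which the abstract's bound addresses, is that intermediate entries explode if done naively. The following textbook sketch (my own, not the paper's algorithm) makes the object concrete: an upper-triangular form with positive pivots and entries above each pivot reduced modulo it.

```python
def hermite_normal_form(A):
    """Row-style Hermite normal form of a nonsingular integer matrix:
    upper triangular, positive diagonal, 0 <= H[i][j] < H[j][j] for i < j.
    Plain Euclidean elimination; intermediate entries may grow large,
    which is exactly the growth the fast algorithms control."""
    H = [row[:] for row in A]
    n = len(H)
    for j in range(n):
        # Euclid down column j: leave a gcd of rows j..n-1 at the pivot
        for i in range(j + 1, n):
            while H[i][j]:
                if H[j][j] == 0 or abs(H[i][j]) < abs(H[j][j]):
                    H[j], H[i] = H[i], H[j]
                q = H[i][j] // H[j][j]
                H[i] = [a - q * b for a, b in zip(H[i], H[j])]
        if H[j][j] < 0:
            H[j] = [-a for a in H[j]]
        for i in range(j):                 # reduce entries above the pivot
            q = H[i][j] // H[j][j]
            H[i] = [a - q * b for a, b in zip(H[i], H[j])]
    return H
```

The product of the diagonal entries equals |det A|, and the row lattice of H equals that of A, which is what makes the form canonical.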

    Rank-profile revealing Gaussian elimination and the CUP matrix decomposition

    Transforming a matrix over a field to echelon form, or decomposing the matrix as a product of structured matrices that reveal the rank profile, is a fundamental building block of computational exact linear algebra. This paper surveys the well known variations of such decompositions and transformations that have been proposed in the literature. We present an algorithm to compute the CUP decomposition of a matrix, adapted from the LSP algorithm of Ibarra, Moran and Hui (1982), and show reductions from the other most common Gaussian elimination based matrix transformations and decompositions to the CUP decomposition. We discuss the advantages of the CUP algorithm over other existing algorithms by studying time and space complexities: the asymptotic time complexity is rank sensitive, and comparing the constants of the leading terms, the algorithms for computing matrix invariants based on the CUP decomposition are always at least as good except in one case. We also show that the CUP algorithm, as well as the computation of other invariants such as transformation to reduced column echelon form using the CUP algorithm, all work in place, allowing, for example, the inverse of a matrix to be computed in the same storage as the input matrix. Comment: 35 pages
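What "rank-profile revealing" means can be shown with ordinary Gaussian elimination: the columns where pivots occur are the lexicographically first independent columns. A minimal sketch over Q (not the recursive, matrix-multiplication-based CUP algorithm of the paper):

```python
from fractions import Fraction

def rank_profile(A):
    """Gaussian elimination over Q that reveals the column rank profile:
    returns (rank, pivot column indices). The pivot columns are the
    lexicographically first linearly independent columns, as in the
    CUP/LSP family of decompositions."""
    M = [[Fraction(v) for v in row] for row in A]
    rows, cols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c]), None)
        if piv is None:
            continue                       # column dependent on earlier pivots
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return r, pivots
```

Note the elimination overwrites M row by row, a small-scale version of the in-place property the paper emphasizes.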

    On the complexity of inverting integer and polynomial matrices

    Abstract. An algorithm is presented that probabilistically computes the exact inverse of a nonsingular n × n integer matrix A using O~(n^3 (log ||A|| + log κ(A))) bit operations. Here, ||A|| = max_{ij} |A_{ij}| denotes the largest entry in absolute value, κ(A) := ||A^{-1}|| ||A|| is the condition number of the input matrix, and the soft-O notation O~ indicates some missing log n and log log ||A|| factors. A variation of the algorithm is presented for polynomial matrices. The inverse of any nonsingular n × n matrix whose entries are polynomials of degree d over a field can be computed using an expected number of O~(n^3 d) field operations. Both algorithms are randomized of the Las Vegas type: fail may be returned with probability at most 1/2, and if fail is not returned the output is certified to be correct in the same running time bound.

    Near Optimal Algorithms for Computing Smith Normal Forms of Integer Matrices

    We present new algorithms for computing Smith normal forms of matrices over the integers and over the integers modulo d. For the case of matrices over Z_d, we present an algorithm that computes the Smith form S of an A ∈ Z_d^{n×m} in only O(n^{θ-1} m) operations from Z_d. Here, θ is the exponent for matrix multiplication over rings: two n × n matrices over a ring R can be multiplied in O(n^θ) operations from R. We apply our algorithm for matrices over Z_d to get an algorithm for computing the Smith form S of an A ∈ Z^{n×m} in O~(n^{θ-1} m · M(n log ||A||)) bit operations (where ||A|| = max |A_{i,j}| and M(t) bounds the cost of multiplying two ⌈t⌉-bit integers). These complexity results improve significantly on the complexity of previously best known Smith form algorithms (both deterministic and probabilistic) which guarantee correctness.
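The Smith form the abstract refers to is the diagonal matrix diag(d_1, ..., d_r) with each d_i dividing d_{i+1}, reachable from A by unimodular row and column operations. As a point of comparison for the fast algorithms, here is the textbook elimination sketch over Z (cubic in arithmetic operations, with uncontrolled coefficient growth; my own illustration, not the paper's method):

```python
def smith_normal_form(A):
    """Diagonal Smith form of an integer matrix by row/column Euclidean
    operations: pivot on the smallest-magnitude entry, clear its row and
    column, then enforce the divisibility condition d_t | d_{t+1}."""
    M = [row[:] for row in A]
    m, n = len(M), len(M[0])
    for t in range(min(m, n)):
        while True:
            nonzero = [(abs(M[i][j]), i, j) for i in range(t, m)
                       for j in range(t, n) if M[i][j]]
            if not nonzero:
                return M                       # trailing block is all zero
            _, i, j = min(nonzero)             # smallest-magnitude pivot
            M[t], M[i] = M[i], M[t]
            for row in M:
                row[t], row[j] = row[j], row[t]
            p, done = M[t][t], True
            for i in range(t + 1, m):          # clear column t (row ops)
                if M[i][t]:
                    q = M[i][t] // p
                    M[i] = [a - q * b for a, b in zip(M[i], M[t])]
                    done = done and M[i][t] == 0
            for j in range(t + 1, n):          # clear row t (column ops)
                if M[t][j]:
                    q = M[t][j] // p
                    for r in range(m):
                        M[r][j] -= q * M[r][t]
                    done = done and M[t][j] == 0
            if done:
                # pivot must divide every trailing entry; if not, fold the
                # offending row into row t and keep reducing
                bad = next((i for i in range(t + 1, m)
                            for j in range(t + 1, n) if M[i][j] % p), None)
                if bad is None:
                    if p < 0:
                        M[t] = [-a for a in M[t]]
                    break
                M[t] = [a + b for a, b in zip(M[t], M[bad])]
    return M
```

Each pass strictly shrinks the pivot magnitude toward a gcd, so the loop terminates; the invariant factors d_i are canonical even though the elimination path is not.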