
    Asymptotically fast polynomial matrix algorithms for multivariable systems

    We present the asymptotically fastest known algorithms for some basic problems on univariate polynomial matrices: rank, nullspace, determinant, generic inverse, and reduced form. We show that they can essentially be reduced to two computer algebra techniques, minimal basis computation and matrix fraction expansion/reconstruction, together with polynomial matrix multiplication. Such reductions imply that all these problems can be solved in about the same amount of time as polynomial matrix multiplication.
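
    To make the multiplication primitive that everything is reduced to concrete, here is a minimal evaluation-interpolation sketch of polynomial matrix multiplication. It is an illustration under our own naming and conventions, not the paper's implementation: a real MM(n,d) routine would evaluate at FFT-friendly points over the coefficient field rather than use floating-point Vandermonde solves.

```python
import numpy as np

def polymatmul_eval(A, B):
    """Multiply polynomial matrices by evaluation-interpolation.
    A, B: coefficient arrays of shape (n, n, dA) and (n, n, dB), where
    entry (i, j) holds a polynomial's coefficients, low degree first."""
    n, _, dA = A.shape
    dB = B.shape[2]
    dC = dA + dB - 1                       # coefficient count of the product
    pts = np.arange(dC, dtype=float)       # dC distinct evaluation points
    # evaluate both matrices at every point: shapes (dC, n, n)
    Aev = np.einsum('ijk,pk->pij', A, pts[:, None] ** np.arange(dA))
    Bev = np.einsum('ijk,pk->pij', B, pts[:, None] ** np.arange(dB))
    Cev = Aev @ Bev                        # dC pointwise n x n products
    # interpolate every entry back to its coefficient list
    V = np.vander(pts, dC, increasing=True)
    coeffs = np.linalg.solve(V, Cev.reshape(dC, -1))
    return coeffs.reshape(dC, n, n).transpose(1, 2, 0)

# (1 + x) * (1 + x) = 1 + 2x + x^2, checked as a 1x1 "matrix"
C = polymatmul_eval(np.array([[[1., 1.]]]), np.array([[[1., 1.]]]))
print(np.round(C, 6))                      # [[[1. 2. 1.]]]
```

    The point of the paper's reductions is that rank, determinant, inversion, and reduced forms all cost softO of this one routine.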

    Computing the Rank and a Small Nullspace Basis of a Polynomial Matrix

    We reduce the problem of computing the rank and a nullspace basis of a univariate polynomial matrix to polynomial matrix multiplication. For an input n x n matrix of degree d over a field K we give a rank and nullspace algorithm using about the same number of operations as for multiplying two matrices of dimension n and degree d. If the latter multiplication is done in MM(n,d) = softO(n^omega d) operations, with omega the exponent of matrix multiplication over K, then the algorithm uses softO(MM(n,d)) operations in K. The softO notation indicates some missing logarithmic factors. The method is randomized with Las Vegas certification. We achieve our results in part through a combination of matrix Hensel high-order lifting and matrix minimal fraction reconstruction, and through the computation of minimal or small degree vectors in the nullspace seen as a K[x]-module.
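
    For a feel of the objects involved, the snippet below computes the rank and a polynomial nullspace vector directly with SymPy, working naively over the fraction field K(x). This is a baseline sketch with an example matrix of our own choosing, not the paper's softO(MM(n,d)) algorithm.

```python
import sympy as sp
from functools import reduce

x = sp.symbols('x')
# 3x3 matrix over Q[x] of rank 2: the third row is x times the first
M = sp.Matrix([[1,     x,    x**2],
               [x + 1, 2,    0   ],
               [x,     x**2, x**3]])
print(M.rank())                      # 2, computed over the fractions K(x)
v = M.nullspace()[0]                 # right nullspace basis vector over K(x)
# clear denominators to view the nullspace as a K[x]-module, as in the paper
v = (v * reduce(sp.lcm, [sp.denom(e) for e in v])).applyfunc(sp.simplify)
print(v.T, (M * v).applyfunc(sp.simplify).T)   # polynomial vector, maps to 0
```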

    Computing the Kalman form

    We present two algorithms for the computation of the Kalman form of a linear control system. The first is based on the technique developed by Keller-Gehrig for the computation of the characteristic polynomial; its cost is a logarithmic number of matrix multiplications. To our knowledge, this improves the best previously known algebraic complexity by an order of magnitude. We then also present a cubic algorithm proven to be more efficient in practice.
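
    Below is a minimal NumPy sketch of the classical cubic construction (build the Krylov/controllability matrix, orthonormalize, change basis), not the Keller-Gehrig-based algorithm of the paper; the function name and tolerance are our assumptions.

```python
import numpy as np

def kalman_controllability_form(A, B, tol=1e-10):
    """Change of basis splitting the state space along the controllable
    subspace: returns T^T A T in block upper-triangular form
    [[A11, A12], [0, A22]], T^T B = [[B1], [0]], and the dimension r."""
    n, m = B.shape
    K = B.copy()
    for _ in range(n - 1):                 # K = [B, AB, ..., A^(n-1)B]
        K = np.hstack([K, A @ K[:, -m:]])
    U, s, _ = np.linalg.svd(K)
    r = int((s > tol).sum())               # dim of the controllable subspace
    T = U                                  # first r columns span range(K)
    return T.T @ A @ T, T.T @ B, r

A = np.array([[1., 1.], [0., 2.]])
B = np.array([[1.], [0.]])
Ak, Bk, r = kalman_controllability_form(A, B)
print(r, np.round(Ak, 6), np.round(Bk, 6))  # r = 1; zero (2,1) block in Ak
```

    The block structure follows because the range of the controllability matrix is A-invariant (by Cayley-Hamilton), so the uncontrollable part cannot feed back into the controllable coordinates.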

    Recent progress in linear algebra and lattice basis reduction (invited)

    A general goal concerning fundamental linear algebra problems is to reduce the complexity estimates to essentially that of multiplying two matrices (plus, possibly, a cost related to the input and output sizes). Among the bottlenecks one usually finds the questions of designing a recursive approach and of controlling the sizes of the intermediate data. In this talk we are interested in two special cases of lattice basis reduction. We consider bases given by square matrices over K[x] or Z, with, respectively, the notion of reduced form and LLL reduction. Our purpose is to introduce basic tools for understanding how to generalize the Lehmer and Knuth-Schönhage gcd algorithms to basis reduction. Over K[x] this generalization is a key ingredient for a basis reduction algorithm whose complexity estimate is essentially that of multiplying two polynomial matrices. No analogous relation between integer basis reduction and integer matrix multiplication is known. The topic receives a lot of attention, and recent results show that there might be room for progress on the question.
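
    To fix ideas on the gcd algorithms being generalized, here is the textbook Euclidean scheme over K[x] in SymPy, which costs quadratically in the degree; the Lehmer/Knuth-Schönhage idea replaces this chain of remainders by a divide-and-conquer on 2x2 quotient matrices, and the abstract's point is that the same shape of recursion drives fast basis reduction. The code is a plain baseline of our own, not the fast variant.

```python
import sympy as sp

x = sp.symbols('x')

def poly_gcd_euclid(a, b):
    """Plain Euclidean gcd in Q[x]: one division per remainder step,
    quadratic in the degree. The Knuth-Schönhage half-gcd reaches
    softly-linear cost by recursively computing the 2x2 polynomial
    matrix encoding the first half of the quotient sequence."""
    a, b = sp.Poly(a, x, domain='QQ'), sp.Poly(b, x, domain='QQ')
    while not b.is_zero:
        a, b = b, a.rem(b)
    return a.monic()

g = poly_gcd_euclid((x**2 - 1) * (x + 2), (x**2 - 1) * (x - 3))
print(g)   # Poly(x**2 - 1, x, domain='QQ')
```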

    Fast Computation of Minimal Interpolation Bases in Popov Form for Arbitrary Shifts

    We compute minimal bases of solutions for a general interpolation problem, which encompasses Hermite-Padé approximation and constrained multivariate interpolation, and has applications in coding theory and security. This problem asks to find univariate polynomial relations between m vectors of size σ; these relations should have small degree with respect to an input degree shift. For an arbitrary shift, we propose an algorithm for the computation of an interpolation basis in shifted Popov normal form with a cost of softO(m^(ω-1) σ) field operations, where ω is the exponent of matrix multiplication and the softO notation indicates that logarithmic terms are omitted. Earlier works, in the case of Hermite-Padé approximation and in the general interpolation case, compute non-normalized bases. Since for arbitrary shifts such bases may have size Θ(m^2 σ), the cost bound softO(m^(ω-1) σ) was feasible only with restrictive assumptions on the shift that ensure small output sizes. The question of handling arbitrary shifts with the same complexity bound was left open. To obtain the target cost for any shift, we strengthen the properties of the output bases, and of those obtained during the course of the algorithm: all the bases are computed in shifted Popov form, whose size is always O(m σ). Then, we design a divide-and-conquer scheme: we recursively reduce the initial interpolation problem to sub-problems with more convenient shifts by first computing information on the degrees of the intermediate bases.
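
    As a small worked instance of the interpolation problem in its simplest Hermite-Padé form (m = 2), the sketch below finds a relation q·f - p ≡ 0 mod x^σ by one dense nullspace computation. This is the naive O(σ^3) route, shown only to make the problem concrete; it is not the paper's softO(m^(ω-1) σ) algorithm, and all names are ours.

```python
import sympy as sp

def hermite_pade(f, dp, dq):
    """Find (p, q) with q*f = p mod x^sigma, deg p <= dp, deg q <= dq,
    sigma = dp + dq + 1.  f: the first sigma Taylor coefficients."""
    sigma = dp + dq + 1
    rows = []
    for k in range(sigma):                 # coefficient of x^k in q*f - p
        row = [0] * (dp + dq + 2)
        if k <= dp:
            row[k] = -1                    # contribution of -p_k
        for j in range(dq + 1):
            if k - j >= 0:
                row[dp + 1 + j] = f[k - j] # contribution of q_j * f_{k-j}
        rows.append(row)
    v = sp.Matrix(rows).nullspace()[0]     # one kernel vector = one relation
    return ([v[i] for i in range(dp + 1)],
            [v[dp + 1 + j] for j in range(dq + 1)])

# (1,1) Pade approximant of exp(x) from its series 1 + x + x^2/2:
# prints a scalar multiple of p = 1 + x/2, q = 1 - x/2
p, q = hermite_pade([1, 1, sp.Rational(1, 2)], 1, 1)
print(p, q)
```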

    Solving Sparse Integer Linear Systems

    We propose a new algorithm to solve sparse linear systems of equations over the integers. This algorithm is based on a p-adic lifting technique combined with the use of block matrices with structured blocks. It achieves a sub-cubic complexity in terms of machine operations, subject to a conjecture on the effectiveness of certain sparse projections. A LinBox-based implementation of this algorithm is demonstrated, and emphasizes the practical benefits of this new method over the previous state of the art.
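
    Here is a dense, scalar sketch of the underlying p-adic (Dixon) lifting, under the simplifying assumption that the solution is an integer vector; real solvers end with a rational reconstruction step, and the paper replaces the dense mod-p solves below with structured block projections to exploit sparsity. Names and parameters are illustrative.

```python
from sympy import Matrix

def dixon_solve(A, b, p=10007, steps=8):
    """Solve A x = b over Z by p-adic lifting, assuming x is integral.
    One mod-p matrix inverse is computed once; each lifting step then
    costs only matrix-vector products."""
    n = len(b)
    Ainv_p = Matrix(A).inv_mod(p)          # A^(-1) mod p, reused every step
    x, r, pk = [0] * n, list(b), 1
    for _ in range(steps):
        d = [int(e) % p for e in Ainv_p * Matrix(r)]      # next p-adic digit
        x = [xi + pk * di for xi, di in zip(x, d)]
        Ad = Matrix(A) * Matrix(d)
        r = [(ri - int(e)) // p for ri, e in zip(r, Ad)]  # exact division
        pk *= p
    # symmetric residues recover moderately sized integer solutions
    return [xi - pk if xi > pk // 2 else xi for xi in x]

print(dixon_solve([[2, 1], [1, 3]], [5, 5]))   # [2, 1]
```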

    Faster Inversion and Other Black Box Matrix Computations Using Efficient Block Projections

    Block projections have been used, in [Eberly et al. 2006], to obtain an efficient algorithm to find solutions of sparse systems of linear equations. A bound of softO(n^(2.5)) machine operations is obtained, assuming that the input matrix can be multiplied by a vector with constant-sized entries in softO(n) machine operations. Unfortunately, the correctness of this algorithm depends on the existence of efficient block projections, which had only been conjectured. In this paper we establish the correctness of the algorithm from [Eberly et al. 2006] by proving the existence of efficient block projections over sufficiently large fields. We demonstrate the usefulness of these projections by deriving improved bounds for the cost of several matrix problems, considering, in particular, "sparse" matrices that can be multiplied by a vector using softO(n) field operations. We show how to compute the inverse of a sparse matrix over a field F using an expected number of softO(n^(2.27)) operations in F. A basis for the null space of a sparse matrix, and a certification of its rank, are obtained at the same cost. An application to Kaltofen and Villard's baby-steps/giant-steps algorithms for the determinant and Smith form of an integer matrix yields algorithms requiring softO(n^(2.66)) machine operations. The derived algorithms are all probabilistic of the Las Vegas type.
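
    For a flavor of the black-box model, here is a scalar Wiedemann sketch over GF(p): the matrix is touched only through matrix-vector products, the projected Krylov sequence u^T A^i v is fed to Berlekamp-Massey, and the resulting recurrence divides the minimal polynomial. The paper's block projections replace the random vectors u, v with blocks; this plain scalar version and all its names are our assumptions.

```python
import random

def berlekamp_massey(s, p):
    """Shortest linear recurrence C (C[0] = 1) for sequence s over GF(p)."""
    C, B, L, m, b = [1], [1], 0, 1, 1
    for n in range(len(s)):
        d = sum(C[i] * s[n - i] for i in range(L + 1)) % p
        if d:
            T, coef = C[:], d * pow(b, -1, p) % p
            C = C + [0] * (m + len(B) - len(C))
            for i, bi in enumerate(B):
                C[i + m] = (C[i + m] - coef * bi) % p
            if 2 * L <= n:
                L, B, b, m = n + 1 - L, T, d, 1
                continue
        m += 1
    return C, L

def wiedemann_minpoly(matvec, n, p, seed=0):
    """Recurrence for u^T A^i v; it divides the minimal polynomial of A."""
    rng = random.Random(seed)
    u = [rng.randrange(p) for _ in range(n)]
    w = [rng.randrange(p) for _ in range(n)]
    seq = []
    for _ in range(2 * n):                 # 2n black-box products suffice
        seq.append(sum(ui * wi for ui, wi in zip(u, w)) % p)
        w = matvec(w)
    return berlekamp_massey(seq, p)

# sparse 3x3 example, accessed only through its action on vectors
p = 10007
A = {(0, 0): 2, (1, 2): 1, (2, 1): 3}      # dict of nonzeros as "black box"
mv = lambda w: [sum(c * w[j] for (i2, j), c in A.items() if i2 == i) % p
                for i in range(3)]
print(wiedemann_minpoly(mv, 3, p))
```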

    Numerical analysis and lattice reduction

    Euclidean lattice algorithms are a frequently used tool in computer science and mathematics. They rely essentially on LLL reduction, which it is therefore important to make as efficient as possible. An approach initiated by Schnorr consists in performing approximate computations to estimate the underlying Gram-Schmidt orthogonalizations. Without approximation, these computations dominate the cost of the reduction. Recently, classical tools from numerical analysis have been revisited and improved in order to exploit Schnorr's idea more systematically and reduce the costs. We describe these developments, in particular how floating-point arithmetic can be introduced at several levels of the reduction.
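
    To see why the Gram-Schmidt computations dominate, here is a textbook LLL in exact rational arithmetic, the regime the floating-point techniques are designed to escape. It is a minimal sketch with our own naming that recomputes the orthogonalization at each step for simplicity (assuming linearly independent input rows), which is precisely the cost that approximate orthogonalization attacks.

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(B):
    """Exact GSO: returns B* and the mu coefficients.  This is the part
    whose cost floating-point LLL variants cut by approximating."""
    n, Bs, mu = len(B), [], [[Fraction(0)] * len(B) for _ in B]
    for i in range(n):
        v = [Fraction(c) for c in B[i]]
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            v = [a - mu[i][j] * b for a, b in zip(v, Bs[j])]
        Bs.append(v)
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    B = [[Fraction(c) for c in row] for row in B]
    k = 1
    while k < len(B):
        Bs, mu = gram_schmidt(B)
        for j in range(k - 1, -1, -1):     # size-reduce b_k against b_j
            q = round(mu[k][j])
            B[k] = [a - q * b for a, b in zip(B[k], B[j])]
        Bs, mu = gram_schmidt(B)
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                          # Lovász condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]
            k = max(k - 1, 1)
    return B

print(lll([[201, 37], [1648, 297]]))       # a reduced basis of the same lattice
```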