
    Asymptotically fast polynomial matrix algorithms for multivariable systems

    We present the asymptotically fastest known algorithms for some basic problems on univariate polynomial matrices: rank, nullspace, determinant, generic inverse, reduced form. We show that they can essentially be reduced to two computer algebra techniques, minimal basis computations and matrix fraction expansion/reconstruction, and to polynomial matrix multiplication. Such reductions eventually imply that all these problems can be solved in about the same amount of time as polynomial matrix multiplication.
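
    As a rough illustration of the primitive these reductions target, here is a minimal sketch (my own, not taken from the paper) of schoolbook polynomial matrix multiplication in plain Python, with each entry stored as a list of coefficients. The fast algorithms discussed above would instead combine fast polynomial arithmetic with fast matrix multiplication.

```python
# Minimal sketch (not the paper's algorithm): schoolbook multiplication of
# matrices of univariate polynomials, each polynomial stored as a list of
# coefficients [c0, c1, ...] in increasing degree.

def poly_add(p, q):
    """Coefficient-wise sum of two coefficient lists."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def poly_mul(p, q):
    """Schoolbook product of two coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_matrix_mul(A, B):
    """Product of an m x k and a k x n matrix of coefficient lists."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[[0] for _ in range(n)] for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc = [0]
            for t in range(k):
                acc = poly_add(acc, poly_mul(A[i][t], B[t][j]))
            C[i][j] = acc
    return C

# [[x, 1], [0, x]] * [[1, x], [x, 1]] = [[2x, x^2 + 1], [x^2, x]]
A = [[[0, 1], [1]], [[0], [0, 1]]]
B = [[[1], [0, 1]], [[0, 1], [1]]]
print(poly_matrix_mul(A, B))
```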

    Fast Computation of Minimal Interpolation Bases in Popov Form for Arbitrary Shifts

    We compute minimal bases of solutions for a general interpolation problem, which encompasses Hermite-Padé approximation and constrained multivariate interpolation, and has applications in coding theory and security. This problem asks to find univariate polynomial relations between $m$ vectors of size $\sigma$; these relations should have small degree with respect to an input degree shift. For an arbitrary shift, we propose an algorithm for the computation of an interpolation basis in shifted Popov normal form with a cost of $\widetilde{\mathcal{O}}(m^{\omega-1} \sigma)$ field operations, where $\omega$ is the exponent of matrix multiplication and the notation $\widetilde{\mathcal{O}}(\cdot)$ indicates that logarithmic terms are omitted. Earlier works, in the case of Hermite-Padé approximation and in the general interpolation case, compute non-normalized bases. Since for arbitrary shifts such bases may have size $\Theta(m^2 \sigma)$, the cost bound $\widetilde{\mathcal{O}}(m^{\omega-1} \sigma)$ was feasible only with restrictive assumptions on the shift that ensure small output sizes. The question of handling arbitrary shifts with the same complexity bound was left open. To obtain the target cost for any shift, we strengthen the properties of the output bases, and of those obtained during the course of the algorithm: all the bases are computed in shifted Popov form, whose size is always $\mathcal{O}(m \sigma)$. Then, we design a divide-and-conquer scheme. We recursively reduce the initial interpolation problem to sub-problems with more convenient shifts by first computing information on the degrees of the intermediate bases. Comment: 8 pages, sig-alternate class, 4 figures (problems and algorithms)
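
    To make the interpolation problem concrete, the sketch below (my own, not the divide-and-conquer algorithm of the paper) computes a single Hermite-Padé relation by brute-force linear algebra on the coefficients, using SymPy; the series, the order and the degree bounds are arbitrary choices for illustration. The paper instead returns a whole basis of such relations in shifted Popov form within the stated cost.

```python
# Brute-force Hermite-Pade approximation with SymPy: find p1, p2 of degree
# at most 2 such that p1*f1 + p2*f2 = O(x^sigma), by computing a nullspace
# of the coefficient matrix. (Illustration only; fast algorithms compute a
# full basis of relations in shifted Popov form.)
from sympy import Rational, Matrix, factorial, symbols, Poly

x = symbols('x')
sigma = 5                    # approximation order
d1, d2 = 2, 2                # degree bounds on p1 and p2
f1 = [Rational(1, factorial(k)) for k in range(sigma)]   # exp(x) mod x^sigma
f2 = [Rational(-1)] + [Rational(0)] * (sigma - 1)        # the constant -1

def block(series, deg_bound):
    # Column block whose (i, j) entry is the coefficient of x^i in x^j*series.
    return [[series[i - j] if 0 <= i - j < len(series) else 0
             for j in range(deg_bound + 1)] for i in range(sigma)]

# Unknowns: the coefficients of p1, then those of p2 (6 unknowns, 5 equations).
A = Matrix([r1 + r2 for r1, r2 in zip(block(f1, d1), block(f2, d2))])
sol = A.nullspace()[0]
p1 = Poly(list(reversed(sol[:d1 + 1])), x)
p2 = Poly(list(reversed(sol[d1 + 1:])), x)
print(p1, p2)   # p1*exp(x) - p2 = O(x^5); p2/p1 is the [2/2] Pade approximant
```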

    Fast, deterministic computation of the Hermite normal form and determinant of a polynomial matrix

    Given a nonsingular $n \times n$ matrix of univariate polynomials over a field $\mathbb{K}$, we give fast and deterministic algorithms to compute its determinant and its Hermite normal form. Our algorithms use $\widetilde{\mathcal{O}}(n^\omega \lceil s \rceil)$ operations in $\mathbb{K}$, where $s$ is bounded from above by both the average of the degrees of the rows and that of the columns of the matrix and $\omega$ is the exponent of matrix multiplication. The soft-$O$ notation indicates that logarithmic factors in the big-$O$ are omitted while the ceiling function indicates that the cost is $\widetilde{\mathcal{O}}(n^\omega)$ when $s = o(1)$. Our algorithms are based on a fast and deterministic triangularization method for computing the diagonal entries of the Hermite form of a nonsingular matrix. Comment: 34 pages, 3 algorithms
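
    To show what the Hermite normal form looks like, the following sketch (mine, not the paper's fast triangularization) reduces a small 2x2 polynomial matrix using unimodular row operations built from SymPy's extended Euclidean algorithm, assuming the usual row-wise convention: upper triangular, monic diagonal, and off-diagonal entries of degree less than the diagonal entry in their column.

```python
# Naive computation of the Hermite normal form of a 2 x 2 polynomial matrix
# via unimodular row operations (illustration only, not the paper's method).
from sympy import symbols, Matrix, gcdex, quo, rem, LC, expand

x = symbols('x')
M = Matrix([[x,    x**2 + 1],
            [x**2, x**3 + x**2 + x]])     # nonsingular over Q[x]

# Step 1: triangularize the first column with a unimodular transformation.
a, c = M[0, 0], M[1, 0]
s, t, g = gcdex(a, c, x)                  # s*a + t*c = g = gcd(a, c)
U = Matrix([[s, t],
            [-quo(c, g, x), quo(a, g, x)]])   # det(U) = 1
H = (U * M).applyfunc(expand)             # first column becomes (g, 0)^T

# Step 2: make the diagonal monic, then reduce the entry above the second
# pivot modulo that pivot (a row operation, since H[1, 0] = 0).
H[0, :] = H[0, :] / LC(H[0, 0], x)
H[1, :] = H[1, :] / LC(H[1, 1], x)
H[0, 1] = rem(H[0, 1], H[1, 1], x)
print(H)   # Matrix([[x, 1], [0, x**2]]), the Hermite form of M
```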

    Algorithms for Simultaneous Padé Approximations

    We describe how to solve simultaneous Padé approximations over a power series ring $K[[x]]$ for a field $K$ using $\widetilde{\mathcal{O}}(n^{\omega - 1} d)$ operations in $K$, where $d$ is the sought precision and $n$ is the number of power series to approximate. We develop two algorithms using different approaches. Both algorithms return a reduced sub-basis that generates the complete set of solutions to the input approximation problem that satisfy the given degree constraints. Our results are made possible by recent breakthroughs in fast computation of minimal approximant bases and Hermite-Padé approximations. Comment: ISSAC 201
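
    As a naive rendering of the problem statement (not of either algorithm in the paper), the sketch below finds one simultaneous Padé approximant, a common denominator q and numerators p1, p2 with q*fi - pi vanishing to the required order, by solving the corresponding linear system with SymPy; the two series and the degree bounds are arbitrary illustrative choices sized so that a nontrivial solution must exist.

```python
# Brute-force simultaneous Pade approximation with SymPy: one denominator q
# and numerators p1, p2 with q*fi - pi = O(x^d). (Illustration only.)
from sympy import Rational, Matrix, factorial, symbols, Poly

x = symbols('x')
d = 4                         # required approximation order
dq, dp = 2, 2                 # degree bounds on q and on each pi
series = [
    [Rational(1, factorial(k)) for k in range(d)],   # exp(x) mod x^d
    [Rational(1)] * d,                               # 1/(1 - x) mod x^d
]

def coeff(f, i):
    return f[i] if 0 <= i < len(f) else 0

rows = []
for idx, f in enumerate(series):
    for i in range(d):        # coefficient of x^i in q*f - p_idx must vanish
        row = [coeff(f, i - j) for j in range(dq + 1)]           # q block
        for k in range(len(series)):                             # -p_k blocks
            row += [-1 if (k == idx and j == i) else 0
                    for j in range(dp + 1)]
        rows.append(row)

# 9 unknowns, 8 equations, so a nontrivial solution is guaranteed.
sol = Matrix(rows).nullspace()[0]
q = Poly(list(reversed(sol[:dq + 1])), x)
p1 = Poly(list(reversed(sol[dq + 1:dq + dp + 2])), x)
p2 = Poly(list(reversed(sol[dq + dp + 2:])), x)
print(q, p1, p2)              # q*f1 - p1 and q*f2 - p2 vanish to order x^4
```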

    Fast Computation of Shifted Popov Forms of Polynomial Matrices via Systems of Modular Polynomial Equations

    We give a Las Vegas algorithm which computes the shifted Popov form of an $m \times m$ nonsingular polynomial matrix of degree $d$ in expected $\widetilde{\mathcal{O}}(m^\omega d)$ field operations, where $\omega$ is the exponent of matrix multiplication and $\widetilde{\mathcal{O}}(\cdot)$ indicates that logarithmic factors are omitted. This is the first algorithm in $\widetilde{\mathcal{O}}(m^\omega d)$ for shifted row reduction with arbitrary shifts. Using partial linearization, we reduce the problem to the case $d \le \lceil \sigma/m \rceil$ where $\sigma$ is the generic determinant bound, with $\sigma/m$ bounded from above by both the average row degree and the average column degree of the matrix. The cost above becomes $\widetilde{\mathcal{O}}(m^\omega \lceil \sigma/m \rceil)$, improving upon the cost of the fastest previously known algorithm for row reduction, which is deterministic. Our algorithm first builds a system of modular equations whose solution set is the row space of the input matrix, and then finds the basis in shifted Popov form of this set. We give a deterministic algorithm for this second step supporting arbitrary moduli in $\widetilde{\mathcal{O}}(m^{\omega-1} \sigma)$ field operations, where $m$ is the number of unknowns and $\sigma$ is the sum of the degrees of the moduli. This extends previous results with the same cost bound in the specific cases of order basis computation and M-Padé approximation, in which the moduli are products of known linear factors. Comment: 8 pages, sig-alternate class, 5 figures (problems and algorithms)
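
    To spell out the normal form being computed, here is a small checker (my own sketch, based on one common definition of the shifted Popov form rather than anything stated in this abstract): for a square nonsingular matrix, each row's s-pivot, the rightmost entry maximizing degree plus shift, must lie on the diagonal, be monic, and have degree strictly larger than the other entries of its column.

```python
# Checker for the shifted Popov property, under one common definition
# (illustration only, not the paper's algorithm): for a nonsingular square
# matrix P and shift s, every row's s-pivot must sit on the diagonal, be
# monic, and strictly dominate the degrees of the other entries in its column.
from sympy import symbols, Matrix, degree, LC

x = symbols('x')

def s_pivot(row, shift):
    """Index of the rightmost nonzero entry maximizing deg + shift."""
    best, best_val = None, None
    for j, entry in enumerate(row):
        if entry == 0:
            continue
        val = degree(entry, x) + shift[j]
        if best_val is None or val >= best_val:
            best, best_val = j, val
    return best

def is_shifted_popov(P, shift):
    for i in range(P.rows):
        if s_pivot(list(P.row(i)), shift) != i or LC(P[i, i], x) != 1:
            return False
        if any(k != i and P[k, i] != 0
               and degree(P[k, i], x) >= degree(P[i, i], x)
               for k in range(P.rows)):
            return False
    return True

P = Matrix([[x**2 + 1, x],
            [x,        x**2]])
print(is_shifted_popov(P, [0, 0]))   # True under the uniform shift
print(is_shifted_popov(P, [0, 3]))   # False: row 0's pivot moves to column 1
```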