
    Fast Computation of Shifted Popov Forms of Polynomial Matrices via Systems of Modular Polynomial Equations

    We give a Las Vegas algorithm which computes the shifted Popov form of an $m \times m$ nonsingular polynomial matrix of degree $d$ in expected $\widetilde{\mathcal{O}}(m^\omega d)$ field operations, where $\omega$ is the exponent of matrix multiplication and $\widetilde{\mathcal{O}}(\cdot)$ indicates that logarithmic factors are omitted. This is the first algorithm in $\widetilde{\mathcal{O}}(m^\omega d)$ for shifted row reduction with arbitrary shifts. Using partial linearization, we reduce the problem to the case $d \le \lceil \sigma/m \rceil$, where $\sigma$ is the generic determinant bound, with $\sigma/m$ bounded from above by both the average row degree and the average column degree of the matrix. The cost above becomes $\widetilde{\mathcal{O}}(m^\omega \lceil \sigma/m \rceil)$, improving upon the cost of the fastest previously known algorithm for row reduction, which is deterministic. Our algorithm first builds a system of modular equations whose solution set is the row space of the input matrix, and then finds the basis in shifted Popov form of this set. We give a deterministic algorithm for this second step supporting arbitrary moduli in $\widetilde{\mathcal{O}}(m^{\omega-1} \sigma)$ field operations, where $m$ is the number of unknowns and $\sigma$ is the sum of the degrees of the moduli. This extends previous results with the same cost bound in the specific cases of order basis computation and M-Pad\'e approximation, in which the moduli are products of known linear factors. Comment: 8 pages, sig-alternate class, 5 figures (problems and algorithms
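    As a point of reference, the sketch below (not the paper's algorithm; the helper names and the toy matrix are mine) illustrates the underlying notions: the shift-weighted row degrees and the shifted leading matrix of a polynomial matrix given entry-wise by coefficient lists. A square matrix is $s$-reduced exactly when this leading matrix is invertible, and the shifted Popov form is the canonical $s$-reduced basis of its row space.

# Minimal sketch: shifted row degrees and shifted leading matrix of a
# polynomial matrix, with each entry given as a coefficient list [c0, c1, ...].

def poly_deg(c):
    """Degree of a coefficient list, with deg(0) = -inf."""
    for i in range(len(c) - 1, -1, -1):
        if c[i] != 0:
            return i
    return float("-inf")

def shifted_row_degrees(A, s):
    """rdeg_s(row i) = max_j (deg(A[i][j]) + s[j])."""
    return [max(poly_deg(row[j]) + s[j] for j in range(len(row))) for row in A]

def shifted_leading_matrix(A, s):
    """Entry (i, j) is the coefficient of degree rdeg_s(i) - s[j] in A[i][j]."""
    rdeg = shifted_row_degrees(A, s)
    L = []
    for i, row in enumerate(A):
        L.append([])
        for j, c in enumerate(row):
            d = rdeg[i] - s[j]
            L[i].append(c[d] if 0 <= d < len(c) else 0)
    return L

# Example over Q: A = [[x^2 + 1, x], [x, 1]] with shift s = (0, 2).
A = [[[1, 0, 1], [0, 1]],
     [[0, 1],    [1]]]
print(shifted_row_degrees(A, [0, 2]))     # [3, 2]
print(shifted_leading_matrix(A, [0, 2]))  # [[0, 1], [0, 1]] is singular: A is not s-reduced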

    Fast, deterministic computation of the Hermite normal form and determinant of a polynomial matrix

    Given a nonsingular $n \times n$ matrix of univariate polynomials over a field $\mathbb{K}$, we give fast and deterministic algorithms to compute its determinant and its Hermite normal form. Our algorithms use $\widetilde{\mathcal{O}}(n^\omega \lceil s \rceil)$ operations in $\mathbb{K}$, where $s$ is bounded from above by both the average of the degrees of the rows and that of the columns of the matrix, and $\omega$ is the exponent of matrix multiplication. The soft-$O$ notation indicates that logarithmic factors in the big-$O$ are omitted, while the ceiling function indicates that the cost is $\widetilde{\mathcal{O}}(n^\omega)$ when $s = o(1)$. Our algorithms are based on a fast and deterministic triangularization method for computing the diagonal entries of the Hermite form of a nonsingular matrix. Comment: 34 pages, 3 algorithm
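    For concreteness, the naive SymPy baseline below (not the paper's fast algorithm; the matrix entries are arbitrary toy values) computes the determinant of a small polynomial matrix directly, together with the quantity $s$ from the cost statement, i.e. the smaller of the average row degree and the average column degree.

# Naive baseline: determinant of a univariate polynomial matrix and the
# degree quantity s (average row degree vs. average column degree).
from sympy import symbols, Matrix, Poly, expand

x = symbols('x')
A = Matrix([[x**2 + 1, x,        1    ],
            [x,        x**3 + x, 2    ],
            [1,        3,        x + 2]])

print(expand(A.det()))   # deg(det A) is at most the sum of the row (or column) degrees

row_deg = [max(Poly(e, x).degree() for e in A.row(i)) for i in range(A.rows)]
col_deg = [max(Poly(e, x).degree() for e in A.col(j)) for j in range(A.cols)]
print(row_deg, col_deg, min(sum(row_deg) / A.rows, sum(col_deg) / A.cols))
# The Hermite form of A has determinant equal to det(A) up to a nonzero constant,
# so the degrees of its diagonal entries sum to deg(det A).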

    Algorithms for Simultaneous Pad\'e Approximations

    We describe how to solve simultaneous Pad\'e approximations over a power series ring $K[[x]]$ for a field $K$ using $\widetilde{\mathcal{O}}(n^{\omega-1} d)$ operations in $K$, where $d$ is the sought precision and $n$ is the number of power series to approximate. We develop two algorithms using different approaches. Both algorithms return a reduced sub-basis that generates the complete set of solutions to the input approximation problem that satisfy the given degree constraints. Our results are made possible by recent breakthroughs in fast computation of minimal approximant bases and Hermite-Pad\'e approximations. Comment: ISSAC 201
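    The sketch below solves a toy simultaneous Pad\'e instance over the rationals by plain dense linear algebra (cubic cost, so not one of the paper's algorithms; the series, precision $d$ and degree bounds $N$, $M$ are illustrative choices): a common denominator $q$ is a nullspace vector of the system stating that the coefficients of $x^{M+1}, \dots, x^{d-1}$ in $q f_i$ vanish.

# Naive simultaneous Pade approximation: find q, p1, p2 with q*fi = pi mod x^d,
# deg q <= N and deg pi <= M, by solving for the coefficients of q.
from sympy import Rational, Matrix, Poly, factorial, symbols

x = symbols('x')
d, N, M = 6, 4, 3                                      # toy precision and degree bounds

f1 = [Rational(1)] * d                                 # 1/(1-x) mod x^d
f2 = [Rational(1, factorial(k)) for k in range(d)]     # exp(x)  mod x^d

# Constraint row for (f, k): coefficient of x^k in q*f is sum_t q_t * f[k-t].
rows = [[f[k - t] if 0 <= k - t < d else 0 for t in range(N + 1)]
        for f in (f1, f2) for k in range(M + 1, d)]
q = list(Matrix(rows).nullspace()[0])   # nontrivial solution: 4 equations, 5 unknowns

print('q =', Poly(list(reversed(q)), x).as_expr())
for f in (f1, f2):
    p = [sum(q[t] * f[k - t] for t in range(N + 1) if 0 <= k - t < d)
         for k in range(M + 1)]
    print('p =', Poly(list(reversed(p)), x).as_expr())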

    Computing Matrix Canonical Forms of Ore Polynomials

    We present algorithms to compute canonical forms of matrices of Ore polynomials while controlling intermediate expression swell. Given a square nonsingular input matrix of Ore polynomials, we give an extension of the algorithm of Labhalla et al. (1992) to compute the Hermite form. We also give a new fraction-free algorithm to compute the Popov form, accompanied by an implementation and experimental results that compare it to the best known algorithms in the literature. Our algorithm is output-sensitive, with a cost that depends on the orthogonality defect of the input matrix: the sum of the row degrees in the input matrix minus the sum of the row degrees in the Popov form. We also use recent advances in polynomial matrix computations, including fast inversion and rank profile computation, to describe an algorithm that computes the transformation matrix corresponding to the Popov form.
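    As background only (not the paper's matrix algorithms; the coefficient-list representation and the example operators are mine), the toy routine below multiplies two Ore polynomials in the differential case $D\,a(t) = a(t)\,D + a'(t)$ over $\mathbb{Q}(t)$. Products like this are where intermediate coefficient growth originates, which the fraction-free approach is designed to control.

# Toy Ore arithmetic: differential operators over Q(t), written as coefficient
# lists [c0, c1, ...] meaning c0 + c1*D + c2*D^2 + ...
from sympy import symbols, diff, binomial, expand, S

t = symbols('t')

def ore_mul(a, b):
    """Product of differential operators, using the Leibniz rule
    D^i * b(t) = sum_k C(i,k) * b^(k)(t) * D^(i-k)."""
    c = [S(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            for k in range(i + 1):
                dk = S(bj) if k == 0 else diff(bj, t, k)
                c[i - k + j] += ai * binomial(i, k) * dk
    return [expand(ck) for ck in c]

# (D + t) * (D - t) = D^2 - t^2 - 1: the extra -1 comes from the commutator.
print(ore_mul([t, S(1)], [-t, S(1)]))   # [-t**2 - 1, 0, 1]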

    Fast Decoding of Codes in the Rank, Subspace, and Sum-Rank Metric

    We speed up existing decoding algorithms for three code classes in different metrics: interleaved Gabidulin codes in the rank metric, lifted interleaved Gabidulin codes in the subspace metric, and linearized Reed-Solomon codes in the sum-rank metric. The speed-ups are achieved by reducing the core of the underlying computational problems of the decoders to one common tool: computing left and right approximant bases of matrices over skew polynomial rings. To accomplish this, we describe a skew analogue of the existing PM-Basis algorithm for matrices over usual polynomials. This captures the bulk of the work in multiplication of skew polynomials, and the complexity benefit comes from existing algorithms performing this faster than in classical quadratic complexity. The new faster algorithms for the various decoding-related computational problems are interesting in their own right and have further applications, in particular in parts of decoders of several other codes and in foundational problems related to the remainder-evaluation of skew polynomials.
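    To make the skew-polynomial setting concrete, the toy sketch below (my own illustration, not the decoders or the skew PM-Basis analogue) multiplies two skew polynomials over $\mathbb{Q}(t)$ twisted by the shift automorphism $\sigma: f(t) \mapsto f(t+1)$ with zero derivation; the decoders above work over finite fields with a Frobenius twist instead. Products of exactly this kind are what the fast approximant-basis machinery must perform quickly.

# Toy skew polynomial multiplication: base field Q(t), twist sigma(f)(t) = f(t+1),
# zero derivation, so x * a(t) = a(t+1) * x.
from sympy import symbols, expand, S

t = symbols('t')

def sigma(f, i=1):
    """Apply the automorphism f(t) -> f(t+1) i times."""
    return f.subs(t, t + i)

def skew_mul(a, b):
    """Product of skew polynomials given as coefficient lists in x:
    c_k = sum_{i+j=k} a_i * sigma^i(b_j)."""
    c = [S(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * sigma(S(bj), i)
    return [expand(ck) for ck in c]

# (x + t) * (x - t): the twist makes this differ from x^2 - t^2.
print(skew_mul([t, S(1)], [-t, S(1)]))   # [-t**2, -1, 1], i.e. x^2 - x - t^2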

    Computing minimal interpolation bases

    We consider the problem of computing univariate polynomial matrices over a field that represent minimal solution bases for a general interpolation problem, some forms of which are the vector M-Pad\'e approximation problem in [Van Barel and Bultheel, Numerical Algorithms 3, 1992] and the rational interpolation problem in [Beckermann and Labahn, SIAM J. Matrix Anal. Appl. 22, 2000]. Particular instances of this problem include the bivariate interpolation steps of Guruswami-Sudan hard-decision and K\"otter-Vardy soft-decision decodings of Reed-Solomon codes, the multivariate interpolation step of list-decoding of folded Reed-Solomon codes, and Hermite-Pad\'e approximation. In the mentioned references, the problem is solved using iterative algorithms based on recurrence relations. Here, we discuss a fast, divide-and-conquer version of this recurrence, taking advantage of fast matrix computations over the scalars and over the polynomials. This new algorithm is deterministic, and for computing shifted minimal bases of relations between $m$ vectors of size $\sigma$ it uses $\widetilde{\mathcal{O}}(m^{\omega-1}(\sigma + |s|))$ field operations, where $\omega$ is the exponent of matrix multiplication and $|s|$ is the sum of the entries of the input shift $s$, with $\min(s) = 0$. This complexity bound improves in particular on earlier algorithms in the case of bivariate interpolation for soft decoding, while matching the fastest existing algorithms for simultaneous Hermite-Pad\'e approximation.
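    For context, one well-known special case of this general interpolation problem is Hermite-Pad\'e approximation (the paper's framework is broader): given $f_1,\dots,f_m \in \mathbb{K}[[x]]$, an order $\sigma$, and a shift $s \in \mathbb{Z}^m$, compute a basis, minimal with respect to $s$, of the $\mathbb{K}[x]$-module

    $\{ (p_1,\dots,p_m) \in \mathbb{K}[x]^m \;:\; p_1 f_1 + \cdots + p_m f_m \equiv 0 \bmod x^{\sigma} \}.$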

    Sub-quadratic time for Riemann-Roch spaces. The case of smooth divisors over nodal plane projective curves

    We revisit the seminal Brill-Noether algorithm in the rather generic situation of smooth divisors over a nodal plane projective curve. Our approach takes advantage of fast algorithms for polynomials and structured matrices. We reach sub-quadratic time for computing a basis of a Riemann-Roch space. This improves upon previously known complexity bounds.
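    As standard background (not spelled out in the abstract): for a divisor $D$ on the curve $\mathcal{C}$ over $\mathbb{K}$, the Riemann-Roch space whose basis is computed is

    $L(D) = \{ f \in \mathbb{K}(\mathcal{C})^{\times} \;:\; \mathrm{div}(f) + D \geq 0 \} \cup \{0\},$

    a finite-dimensional $\mathbb{K}$-vector space.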

    Computing syzygies in finite dimension using fast linear algebra

    We consider the computation of syzygies of multivariate polynomials in a finite-dimensional setting: for a $\mathbb{K}[X_1,\dots,X_r]$-module $\mathcal{M}$ of finite dimension $D$ as a $\mathbb{K}$-vector space, and given elements $f_1,\dots,f_m$ in $\mathcal{M}$, the problem is to compute syzygies between the $f_i$'s, that is, polynomials $(p_1,\dots,p_m)$ in $\mathbb{K}[X_1,\dots,X_r]^m$ such that $p_1 f_1 + \dots + p_m f_m = 0$ in $\mathcal{M}$. Assuming that the multiplication matrices of the $r$ variables with respect to some basis of $\mathcal{M}$ are known, we give an algorithm which computes the reduced Gr\"obner basis of the module of these syzygies, for any monomial order, using $O(m D^{\omega-1} + r D^\omega \log(D))$ operations in the base field $\mathbb{K}$, where $\omega$ is the exponent of matrix multiplication. Furthermore, assuming that $\mathcal{M}$ is itself given as $\mathcal{M} = \mathbb{K}[X_1,\dots,X_r]^n/\mathcal{N}$, under some assumptions on $\mathcal{N}$ we show that these multiplication matrices can be computed from a Gr\"obner basis of $\mathcal{N}$ within the same complexity bound. In particular, taking $n=1$, $m=1$ and $f_1=1$ in $\mathcal{M}$, this yields a change of monomial order algorithm along the lines of the FGLM algorithm with a complexity bound which is sub-cubic in $D$.
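    The sketch below (my own naive illustration, not the paper's algorithm) treats the simplest special case $r = 1$, $m = 1$ with the multiplication matrix of the single variable given: the syzygies of one element $f$ are generated by the monic polynomial read off from the first linear dependency in the Krylov sequence $f, M_x f, M_x^2 f, \dots$. Done with repeated dense nullspace computations as here, this is far from the sub-cubic bound of the paper.

# Univariate syzygy of a single element f, given the multiplication matrix Mx
# of x on a D-dimensional K-vector space: the monic generator of
# {p in K[x] : p(Mx) * f = 0}, found from the Krylov sequence of f.
from sympy import Matrix, Poly, symbols

x = symbols('x')

def univariate_syzygy(Mx, f):
    D = Mx.rows
    krylov = [f]
    for _ in range(D):
        krylov.append(Mx * krylov[-1])
        ker = Matrix.hstack(*krylov).nullspace()
        if ker:
            v = ker[0] / ker[0][-1]          # normalize the top coefficient to 1
            return Poly(list(reversed(list(v))), x)
    raise RuntimeError("no dependency found (cannot happen for D+1 vectors)")

# Example: M = K[x]/(x^3 - 2x - 5), Mx = companion matrix, f = class of 1.
Mx = Matrix([[0, 0, 5],
             [1, 0, 2],
             [0, 1, 0]])
f = Matrix([1, 0, 0])
print(univariate_syzygy(Mx, f))   # the syzygy generator x**3 - 2*x - 5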