
    Fast, deterministic computation of the Hermite normal form and determinant of a polynomial matrix

    Given a nonsingular $n \times n$ matrix of univariate polynomials over a field $\mathbb{K}$, we give fast and deterministic algorithms to compute its determinant and its Hermite normal form. Our algorithms use $\widetilde{\mathcal{O}}(n^\omega \lceil s \rceil)$ operations in $\mathbb{K}$, where $s$ is bounded from above by both the average of the degrees of the rows and that of the columns of the matrix, and $\omega$ is the exponent of matrix multiplication. The soft-$\mathcal{O}$ notation indicates that logarithmic factors in the big-$\mathcal{O}$ are omitted, while the ceiling function indicates that the cost is $\widetilde{\mathcal{O}}(n^\omega)$ when $s = o(1)$. Our algorithms are based on a fast and deterministic triangularization method for computing the diagonal entries of the Hermite form of a nonsingular matrix. Comment: 34 pages, 3 algorithms.
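    For context, a minimal sketch of what a Hermite normal form looks like under one common row-space-preserving convention (the paper's exact orientation and normalization may differ): for a nonsingular $A \in \mathbb{K}[x]^{n \times n}$ there is a unimodular $U$ with $H = UA$ of the shape below, and the determinant of $A$ is recovered from the diagonal of $H$ up to a nonzero constant.

```latex
% Sketch (one common convention): H = UA with U unimodular, H upper triangular,
% monic diagonal entries, and column-wise degree dominance.
\[
  H \;=\;
  \begin{pmatrix}
    h_{11} & h_{12} & \cdots & h_{1n}\\
    0      & h_{22} & \cdots & h_{2n}\\
    \vdots &        & \ddots & \vdots\\
    0      & 0      & \cdots & h_{nn}
  \end{pmatrix},
  \qquad
  h_{jj}\ \text{monic},
  \qquad
  \deg h_{ij} < \deg h_{jj}\ \text{ for } i < j,
\]
% and, since U is unimodular, \det A = c\, h_{11} h_{22} \cdots h_{nn}
% for some nonzero constant c in K, which links the two tasks of the paper.
```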

    Computing syzygies in finite dimension using fast linear algebra

    We consider the computation of syzygies of multivariate polynomials in a finite-dimensional setting: for a $\mathbb{K}[X_1,\dots,X_r]$-module $\mathcal{M}$ of finite dimension $D$ as a $\mathbb{K}$-vector space, and given elements $f_1,\dots,f_m$ in $\mathcal{M}$, the problem is to compute syzygies between the $f_i$'s, that is, polynomials $(p_1,\dots,p_m)$ in $\mathbb{K}[X_1,\dots,X_r]^m$ such that $p_1 f_1 + \dots + p_m f_m = 0$ in $\mathcal{M}$. Assuming that the multiplication matrices of the $r$ variables with respect to some basis of $\mathcal{M}$ are known, we give an algorithm which computes the reduced Gröbner basis of the module of these syzygies, for any monomial order, using $O(m D^{\omega-1} + r D^\omega \log(D))$ operations in the base field $\mathbb{K}$, where $\omega$ is the exponent of matrix multiplication. Furthermore, assuming that $\mathcal{M}$ is itself given as $\mathcal{M} = \mathbb{K}[X_1,\dots,X_r]^n/\mathcal{N}$, under some assumptions on $\mathcal{N}$ we show that these multiplication matrices can be computed from a Gröbner basis of $\mathcal{N}$ within the same complexity bound. In particular, taking $n=1$, $m=1$ and $f_1=1$ in $\mathcal{M}$, this yields a change of monomial order algorithm along the lines of the FGLM algorithm with a complexity bound which is sub-cubic in $D$.
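    As a rough, hedged illustration of the problem statement (this brute-force approach is not the paper's fast algorithm): syzygies up to a chosen degree bound can be read off the kernel of a single matrix over $\mathbb{K}$ built from the multiplication matrices. The base field, degree cutoff, and tiny example module below are arbitrary illustrative choices.

```python
# Naive illustration: given the multiplication matrices M_1, ..., M_r of the
# variables on a K-vector space of dimension D, and elements f_1, ..., f_m,
# find syzygies (p_1, ..., p_m) with p_1 f_1 + ... + p_m f_m = 0, truncated at
# an arbitrary total degree dmax.  Here K = Q.
from itertools import product
from sympy import Matrix

def monomials(r, dmax):
    """All exponent tuples (a_1, ..., a_r) of total degree at most dmax."""
    return [a for a in product(range(dmax + 1), repeat=r) if sum(a) <= dmax]

def apply_monomial(mult_mats, exponents, vec):
    """Compute X^a . vec by applying M_i a_i times (the M_i commute)."""
    for M, a in zip(mult_mats, exponents):
        for _ in range(a):
            vec = M * vec
    return vec

def naive_syzygies(mult_mats, fs, dmax):
    """Return the monomial list and a basis of syzygies of degree <= dmax.

    Column (i, a) of the stacked matrix is X^a . f_i; a kernel vector
    (c_{i,a}) encodes the syzygy with p_i = sum_a c_{i,a} X^a."""
    mons = monomials(len(mult_mats), dmax)
    columns = [apply_monomial(mult_mats, a, f) for f in fs for a in mons]
    return mons, Matrix.hstack(*columns).nullspace()

# Tiny example: M = Q[X]/(X^2) with basis (1, X), one variable, f_1 = X.
MX = Matrix([[0, 0], [1, 0]])       # multiplication by X: 1 -> X, X -> 0
f1 = Matrix([0, 1])                 # the element X
mons, syz = naive_syzygies([MX], [f1], dmax=2)
print(mons)   # [(0,), (1,), (2,)]
print(syz)    # kernel vectors, e.g. the syzygy p_1 = X (since X*X = 0 in M)
```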

    Fast Computation of Minimal Interpolation Bases in Popov Form for Arbitrary Shifts

    We compute minimal bases of solutions for a general interpolation problem, which encompasses Hermite-Padé approximation and constrained multivariate interpolation, and has applications in coding theory and security. This problem asks to find univariate polynomial relations between $m$ vectors of size $\sigma$; these relations should have small degree with respect to an input degree shift. For an arbitrary shift, we propose an algorithm for the computation of an interpolation basis in shifted Popov normal form with a cost of $\widetilde{\mathcal{O}}(m^{\omega-1} \sigma)$ field operations, where $\omega$ is the exponent of matrix multiplication and the notation $\widetilde{\mathcal{O}}(\cdot)$ indicates that logarithmic terms are omitted. Earlier works, in the case of Hermite-Padé approximation and in the general interpolation case, compute non-normalized bases. Since for arbitrary shifts such bases may have size $\Theta(m^2 \sigma)$, the cost bound $\widetilde{\mathcal{O}}(m^{\omega-1} \sigma)$ was feasible only with restrictive assumptions on the shift that ensure small output sizes. The question of handling arbitrary shifts with the same complexity bound was left open. To obtain the target cost for any shift, we strengthen the properties of the output bases, and of those obtained during the course of the algorithm: all the bases are computed in shifted Popov form, whose size is always $\mathcal{O}(m \sigma)$. Then, we design a divide-and-conquer scheme. We recursively reduce the initial interpolation problem to sub-problems with more convenient shifts by first computing information on the degrees of the intermediate bases. Comment: 8 pages, sig-alternate class, 4 figures (problems and algorithms).
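    As a hedged, naive illustration of one instance of this interpolation problem, Hermite-Padé approximation (this brute-force linear-algebra approach is not the paper's algorithm and does not produce a shifted Popov basis): the series, degree bounds, and approximation order below are arbitrary choices.

```python
# Find polynomials (p_1, ..., p_m) with deg p_i <= d_i such that
# p_1 f_1 + ... + p_m f_m = 0 mod x^sigma, by computing the kernel of the
# linear system on the unknown coefficients of the p_i.
from sympy import Matrix

def hermite_pade(series, degree_bounds, sigma):
    """series: list of coefficient lists (length >= sigma).  Returns kernel
    vectors; each one lists the coefficients of p_1, ..., p_m in order."""
    rows = []
    for k in range(sigma):                 # coefficient of x^k in sum p_i f_i
        row = []
        for f, d in zip(series, degree_bounds):
            for j in range(d + 1):         # contribution of the x^j term of p_i
                row.append(f[k - j] if 0 <= k - j < len(f) else 0)
        rows.append(row)
    return Matrix(rows).nullspace()

# Example: f1 = 1/(1-x) = 1 + x + x^2 + ..., f2 = 1; look for p1*f1 + p2*f2 = 0
# mod x^4 with deg p1 <= 1, deg p2 <= 1; one expects p1 = 1 - x, p2 = -1.
f1 = [1, 1, 1, 1]
f2 = [1, 0, 0, 0]
for v in hermite_pade([f1, f2], [1, 1], sigma=4):
    print(v.T)
```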

    Two-Point Codes for the Generalized GK curve

    We improve previously known lower bounds for the minimum distance of certain two-point AG codes constructed using a Generalized Giulietti-Korchmaros curve (GGK). Castellanos and Tizziotti recently described such bounds for two-point codes coming from the Giulietti-Korchmaros curve (GK). Our results completely cover and in many cases improve on their results, using different techniques, while also supporting any GGK curve. Our method builds on the order bound for AG codes: to enable this, we study certain Weierstrass semigroups. This allows an efficient algorithm for computing our improved bounds. We find several new improvements upon the MinT minimum distance tables. Comment: 13 pages.
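    As a generic, hedged illustration of the kind of combinatorial object involved (the actual Weierstrass semigroups of the GGK curves studied in the paper are not reproduced here), the sketch below computes the elements and gaps of a numerical semigroup from a hypothetical set of generators.

```python
# Numerical semigroup <generators>: elements up to a bound, and its gaps.
def semigroup_elements(generators, bound):
    """All elements of the numerical semigroup <generators> that are <= bound."""
    reachable = {0}
    for n in range(1, bound + 1):
        # n is in the semigroup iff n - g is, for some generator g
        if any(n - g in reachable for g in generators if n - g >= 0):
            reachable.add(n)
    return sorted(reachable)

generators = [4, 5]                       # hypothetical example semigroup <4, 5>
bound = 20
elements = set(semigroup_elements(generators, bound))
gaps = [n for n in range(1, bound + 1) if n not in elements]
print("elements:", sorted(elements))      # 0, 4, 5, 8, 9, 10, 12, 13, ...
print("gaps:", gaps)                      # [1, 2, 3, 6, 7, 11] -> genus 6
```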

    Computing minimal interpolation bases

    We consider the problem of computing univariate polynomial matrices over a field that represent minimal solution bases for a general interpolation problem, some forms of which are the vector M-Padé approximation problem in [Van Barel and Bultheel, Numerical Algorithms 3, 1992] and the rational interpolation problem in [Beckermann and Labahn, SIAM J. Matrix Anal. Appl. 22, 2000]. Particular instances of this problem include the bivariate interpolation steps of Guruswami-Sudan hard-decision and Kötter-Vardy soft-decision decodings of Reed-Solomon codes, the multivariate interpolation step of list-decoding of folded Reed-Solomon codes, and Hermite-Padé approximation. In the mentioned references, the problem is solved using iterative algorithms based on recurrence relations. Here, we discuss a fast, divide-and-conquer version of this recurrence, taking advantage of fast matrix computations over the scalars and over the polynomials. This new algorithm is deterministic, and for computing shifted minimal bases of relations between $m$ vectors of size $\sigma$ it uses $\widetilde{\mathcal{O}}(m^{\omega-1}(\sigma + |s|))$ field operations, where $\omega$ is the exponent of matrix multiplication, and $|s|$ is the sum of the entries of the input shift $s$, with $\min(s) = 0$. This complexity bound improves in particular on earlier algorithms in the case of bivariate interpolation for soft decoding, while matching fastest existing algorithms for simultaneous Hermite-Padé approximation.
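    For concreteness, here is a hedged sketch of the classical iterative recurrence mentioned above (in the spirit of the cited references), of which the paper gives a fast divide-and-conquer version; this plain-Python version is quadratic in the order $\sigma$, works over the rationals, and its example data are arbitrary.

```python
# Iterative computation of a shift-minimal approximant basis P with
# P * F = 0 mod x^sigma, for a column F of m power series, one order at a time.
from fractions import Fraction

def poly_shift(p):                 # multiply a polynomial by x
    return [Fraction(0)] + p

def poly_axpy(p, c, q):            # p - c*q (coefficients in increasing degree)
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    return [p[i] - c * (q[i] if i < len(q) else Fraction(0)) for i in range(n)]

def approximant_basis(F, sigma, shift):
    """F: list of m coefficient lists (length >= sigma); shift: list of m ints.
    Returns (P, rdeg) with P an m x m basis of {p : p*F = 0 mod x^sigma}."""
    m = len(F)
    P = [[[Fraction(1)] if i == j else [Fraction(0)] for j in range(m)]
         for i in range(m)]                      # start from the identity basis
    rdeg = list(shift)                           # shifted row degrees
    for k in range(sigma):
        # residual: coefficient of x^k in (P*F)_i for each row i
        res = [sum(P[i][j][t] * F[j][k - t]
                   for j in range(m)
                   for t in range(len(P[i][j])) if k - t >= 0)
               for i in range(m)]
        nonzero = [i for i in range(m) if res[i] != 0]
        if not nonzero:
            continue
        piv = min(nonzero, key=lambda i: rdeg[i])          # minimal shifted degree
        for i in nonzero:
            if i != piv:                                   # eliminate the residual
                c = res[i] / res[piv]
                P[i] = [poly_axpy(P[i][j], c, P[piv][j]) for j in range(m)]
        P[piv] = [poly_shift(P[piv][j]) for j in range(m)] # multiply pivot row by x
        rdeg[piv] += 1
    return P, rdeg

# Example: F = (f, -1) with f = 1 + x + x^2 + x^3; each row (p1, p2) of P
# satisfies p1*f - p2 = 0 mod x^4 by construction.
F = [[Fraction(c) for c in (1, 1, 1, 1)], [Fraction(c) for c in (-1, 0, 0, 0)]]
P, rdeg = approximant_basis(F, sigma=4, shift=[0, 0])
print(rdeg)
for row in P:
    print(row)
```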

    Beating binary powering for polynomial matrices

    The $N$th power of a polynomial matrix of fixed size and degree can be computed by binary powering as fast as multiplying two polynomials of linear degree in $N$. When the Fast Fourier Transform (FFT) is available, the resulting arithmetic complexity is \emph{softly linear} in $N$, i.e. linear in $N$ with extra logarithmic factors. We show that it is possible to beat binary powering, by an algorithm whose complexity is \emph{purely linear} in $N$, even in the absence of FFT. The key result making this improvement possible is that the entries of the $N$th power of a polynomial matrix satisfy linear differential equations with polynomial coefficients whose orders and degrees are independent of $N$. Similar algorithms are proposed for two related problems: computing the $N$th term of a C-recursive sequence of polynomials, and modular exponentiation to the power $N$ for bivariate polynomials.
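    For reference, a minimal sketch of the binary-powering baseline that the paper improves on, written with sympy; the example matrix and exponent are arbitrary.

```python
# Binary powering of a polynomial matrix: O(log N) polynomial matrix products,
# with the cost dominated by the last squarings, whose entries already have
# degree proportional to N.
import sympy as sp

x = sp.symbols('x')

def mat_mul(A, B):
    """Multiply two polynomial matrices and expand the entries."""
    return (A * B).applyfunc(sp.expand)

def binary_power(A, N):
    """Compute A**N by repeated squaring."""
    result = sp.eye(A.shape[0])
    base = A
    while N > 0:
        if N & 1:
            result = mat_mul(result, base)
        N >>= 1
        if N:
            base = mat_mul(base, base)
    return result

A = sp.Matrix([[x, 1], [1, x + 1]])
print(binary_power(A, 5))            # agrees with (A**5).applyfunc(sp.expand)
```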