
    Hermite form computation of matrices of differential polynomials

    Given a matrix A in F(t)[D;\delta]^{n\times n} over the ring of differential polynomials, we first prove the existence of the Hermite form H of A over this ring. Then we determine degree bounds on U and H such that UA = H. Finally, based on these degree bounds, we compute the Hermite form H of A by reducing the problem to solving a linear system of equations over F(t). The algorithm requires a polynomial number of operations in F in terms of the input sizes n, deg_{D} A, and deg_{t} A. When F = Q, it additionally requires time polynomial in the bit-length of the rational coefficients.
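
    As a rough illustration of the objects involved (standard conventions, not taken verbatim from the paper), the Hermite form is triangular with monic diagonal entries whose degrees dominate their columns, and the degree bounds turn UA = H into a finite linear system over F(t):

        % Sketch under standard conventions: for nonsingular A there is a
        % unimodular U with UA = H, H upper triangular, diagonal entries
        % monic in D, and off-diagonal entries of lower degree.
        \[
        U A \;=\; H \;=\;
        \begin{pmatrix}
        h_{11} & h_{12} & \cdots & h_{1n}\\
               & h_{22} & \cdots & h_{2n}\\
               &        & \ddots & \vdots\\
               &        &        & h_{nn}
        \end{pmatrix},
        \qquad
        \deg_{D} h_{ij} < \deg_{D} h_{jj} \quad (i < j).
        \]
        % With a bound \deg_{D} U \le \beta in hand, writing
        % U = \sum_{k=0}^{\beta} U_k D^k with unknown matrices U_k over F(t)
        % makes the equation U A = H linear in the entries of the U_k,
        % i.e. a linear system over F(t) of polynomially bounded size.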

    Computing Matrix Canonical Forms of Ore Polynomials

    We present algorithms to compute canonical forms of matrices of Ore polynomials while controlling intermediate expression swell. Given a square non-singular input matrix of Ore polynomials, we give an extension of the algorithm of Labhalla et al. (1992) to compute the Hermite form. We also give a new fraction-free algorithm to compute the Popov form, accompanied by an implementation and experimental results that compare it to the best known algorithms in the literature. Our algorithm is output-sensitive, with a cost that depends on the orthogonality defect of the input matrix: the sum of the row degrees of the input matrix minus the sum of the row degrees of its Popov form. We also use recent advances in polynomial matrix computations, including fast inversion and rank profile computation, to describe an algorithm that computes the transformation matrix corresponding to the Popov form.
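
    For concreteness, a minimal sketch of the orthogonality defect in the commutative case over K[x] (an illustrative analogue, not code from the paper), using the fact that the row degrees of the Popov form of a nonsingular square matrix sum to the degree of the determinant; the matrix A below is purely illustrative:

        # Minimal sketch (commutative analogue over K[x]): for a nonsingular
        # square matrix, the row degrees of its Popov form sum to deg(det),
        # so the orthogonality defect is  sum(row degrees) - deg(det(A)).
        import sympy as sp

        x = sp.symbols('x')
        A = sp.Matrix([[x**2,     x**2 + 1],
                       [x**2 + x, x**2]])          # illustrative input

        row_degs = [max(sp.degree(e, x) for e in A.row(i)) for i in range(A.rows)]
        defect = sum(row_degs) - sp.degree(sp.expand(A.det()), x)
        print("row degrees:", row_degs, "defect:", defect)   # [2, 2] and 1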

    Fast Decoding of Codes in the Rank, Subspace, and Sum-Rank Metric

    We speed up existing decoding algorithms for three code classes in different metrics: interleaved Gabidulin codes in the rank metric, lifted interleaved Gabidulin codes in the subspace metric, and linearized Reed-Solomon codes in the sum-rank metric. The speed-ups are achieved by reducing the core of the underlying computational problems of the decoders to one common tool: computing left and right approximant bases of matrices over skew polynomial rings. To accomplish this, we describe a skew analogue of the existing PM-Basis algorithm for matrices over ordinary polynomials. This reduces the bulk of the work to multiplication of skew polynomials, and the complexity benefit comes from existing algorithms that perform this multiplication faster than the classical quadratic cost. The new, faster algorithms for the various decoding-related computational problems are interesting in their own right and have further applications, in particular in parts of decoders of several other codes and in foundational problems related to the remainder-evaluation of skew polynomials.
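
    For orientation, the commutative PM-Basis recursion that the skew analogue mirrors has the following shape (a sketch assuming the order d is even; the skew-specific details are those of the paper and are not reproduced here):

        % An order-d (left) approximant basis of F over K[x] is a nonsingular
        % P whose rows generate all row vectors p with  p F = 0 mod x^d.
        % PM-Basis computes it by divide and conquer:
        \[
        P_1 = \mathrm{Basis}\bigl(F \bmod x^{d/2},\, d/2\bigr), \qquad
        G   = x^{-d/2}\bigl(P_1 F \bmod x^{d}\bigr),
        \]
        \[
        P_2 = \mathrm{Basis}\bigl(G \bmod x^{d/2},\, d/2\bigr), \qquad
        P   = P_2\, P_1 ,
        \]
        % so the dominant cost is (skew) polynomial matrix multiplication,
        % which is exactly where sub-quadratic skew multiplication enters.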

    Fraction-free algorithm for the computation of diagonal forms of matrices over Ore domains using Gröbner bases

    This paper is a sequel to "Computing diagonal form and Jacobson normal form of a matrix using Groebner bases", J. of Symb. Computation, 46 (5), 2011. We present a new fraction-free algorithm for the computation of a diagonal form of a matrix over a certain non-commutative Euclidean domain over a computable field, with the help of Gröbner bases. This algorithm is formulated in a general constructive framework of non-commutative Ore localizations of G-algebras (OLGAs). We split the computation of a normal form of a matrix into a diagonalization and a normalization process, both of which can be made fraction-free. For a matrix M over an OLGA we provide a diagonalization algorithm that computes U, V and D with fraction-free entries such that UMV = D holds and D is diagonal. The fraction-free approach gives us more information on the system of linear functional equations and its solutions than the classical setup of an operator algebra with rational function coefficients. In particular, one can handle distributional solutions together with, say, meromorphic ones. We investigate Ore localizations of common operator algebras over K[x] and use them in the unimodularity analysis of the transformation matrices U, V. In turn, this allows us to lift the isomorphism of modules over an OLGA Euclidean domain to a polynomial subring of it. We discuss the relation of this lifting with the solutions of the original system of equations. Moreover, we prove some new results concerning normal forms of matrices over non-simple domains. Our implementation in the computer algebra system Singular:Plural follows the fraction-free strategy and shows impressive performance compared with methods which directly use fractions. Since we experience moderate swell of coefficients and obtain simple transformation matrices, the method we propose is well suited for solving nontrivial practical problems. Comment: 25 pages, to appear in Journal of Symbolic Computation.
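
    For reference, the commutative prototype of the fraction-free strategy is Bareiss-style elimination (an illustrative sketch only; the paper's algorithm works over non-commutative OLGAs with Gröbner bases and also returns the transformation matrices U and V):

        # Bareiss-style fraction-free elimination over K[x]: cross-multiply and
        # divide by the previous pivot; the division is exact, so all
        # intermediate entries stay polynomial instead of becoming fractions.
        import sympy as sp

        x = sp.symbols('x')
        M = sp.Matrix([[x + 1, x**2,  1],
                       [x,     x,     x + 2],
                       [1,     x**3,  x]])        # illustrative input
        n = M.rows
        prev = sp.Integer(1)
        for k in range(n - 1):                    # assumes nonzero pivots
            for i in range(k + 1, n):
                for j in range(k + 1, n):
                    M[i, j] = sp.cancel((M[k, k]*M[i, j] - M[i, k]*M[k, j]) / prev)
                M[i, k] = sp.Integer(0)
            prev = M[k, k]
        print(M)                                  # triangular, polynomial entries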

    Computing Popov Forms of Polynomial Matrices

    This thesis gives a deterministic algorithm to transform a row reduced matrix to canonical Popov form. Given as input a row reduced matrix R over K[x], K a field, our algorithm computes the Popov form in about the same time as required to multiply together over K[x] two matrices of the same dimension and degree as R. Randomization can be used to extend the algorithm to rectangular input matrices of full row rank. Thus we give a Las Vegas algorithm that computes the Popov decomposition of matrices of full row rank. We also show that the problem of transforming a row reduced matrix to Popov form is at least as hard as polynomial matrix multiplication.
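
    A minimal 2 x 2 illustration of the normalization involved (an example constructed here, not taken from the thesis): the left-hand matrix below is row reduced but not in Popov form, since both rows have their pivot, the rightmost entry of maximal degree in the row, in the second column; one unimodular row operation fixes this:

        \[
        R = \begin{pmatrix} x & x+1 \\ 1 & x \end{pmatrix},
        \qquad
        \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} R
        \;=\;
        \begin{pmatrix} x-1 & 1 \\ 1 & x \end{pmatrix},
        \]
        % where the pivots x-1 and x now sit in distinct columns, are monic,
        % and have degree strictly greater than the other entries in their
        % columns; together with an ordering convention on the rows, this is
        % the usual Popov normalization.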

    Bit Complexity of Jordan Normal Form and Spectral Factorization

    We study the bit complexity of two related fundamental computational problems in linear algebra and control theory. Our results are: (1) An \tilde{O}(n^{\omega+3}a + n^4a^2 + n^\omega\log(1/\epsilon)) time algorithm for finding an \epsilon-approximation to the Jordan normal form of an integer matrix with a-bit entries, where \omega is the exponent of matrix multiplication. (2) An \tilde{O}(n^6d^6a + n^4d^4a^2 + n^3d^3\log(1/\epsilon)) time algorithm for \epsilon-approximately computing the spectral factorization P(x) = Q^*(x)Q(x) of a given monic n\times n rational matrix polynomial of degree 2d with rational a-bit coefficients having a-bit common denominators, which satisfies P(x) \succeq 0 for all real x. The first algorithm is used as a subroutine in the second one. Despite its central importance, polynomial complexity bounds were not previously known for spectral factorization, and for the Jordan form the best previously known running time was an unspecified polynomial in n of degree at least twelve [Cai 1994]. Our algorithms are simple and judiciously combine techniques from numerical and symbolic computation, yielding significant advantages over either approach by itself. Comment: 19 pages.
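
    For intuition, a minimal scalar instance (n = 1, d = 1, constructed here for illustration) of the factorization that the second algorithm approximates:

        % p(x) = x^2 + 1 is nonnegative for every real x, and it factors as
        \[
        p(x) \;=\; x^{2} + 1 \;=\; (x - i)(x + i) \;=\; q^{*}(x)\, q(x),
        \qquad q(x) = x + i,
        \]
        % where q^{*} conjugates the coefficients of q. In the matrix case,
        % Q(x) is an n x n matrix polynomial of degree d and Q^{*}(x) its
        % coefficient-wise conjugate transpose, so that P(x) = Q^{*}(x) Q(x)
        % is Hermitian positive semidefinite for every real x.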
    • 
