128 research outputs found
Hermite form computation of matrices of differential polynomials
Given a matrix A in F(t)[D;\delta]^{n\times n} over the ring of differential polynomials, we first prove the existence of the Hermite form H of A over this ring. Then we determine degree bounds on U and H such that UA=H. Finally, based on the degree bounds on U and H, we compute the Hermite form H of A by reducing the problem to solving a linear system of equations over F(t). The algorithm requires a polynomial number of operations in F in terms of the input sizes: n, deg_{D} A, and deg_{t} A. When F=Q it requires time polynomial in the bit-length of the rational coefficients as well.
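The differential case is beyond a short sketch, but the shape of a Hermite form (triangular, normalized by unimodular row operations) is the same as in the familiar commutative integer case. A minimal row-operation sketch over the integers, as an analogy only, not the paper's linear-system method; `hermite_form` is an illustrative helper name:

```python
def hermite_form(A):
    """Row-style Hermite normal form of an integer matrix, computed by
    unimodular row operations (swaps, gcd-style combinations, negation)."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    r = 0  # index of the next pivot row
    for c in range(n):
        # find a nonzero entry in column c at or below row r
        piv = next((i for i in range(r, m) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        # Euclidean-style elimination below the pivot
        for i in range(r + 1, m):
            while A[i][c]:
                q = A[r][c] // A[i][c]
                A[r], A[i] = A[i], [x - q * y for x, y in zip(A[r], A[i])]
        if A[r][c] < 0:
            A[r] = [-x for x in A[r]]
        # reduce the entries above the pivot
        for i in range(r):
            q = A[i][c] // A[r][c]
            A[i] = [x - q * y for x, y in zip(A[i], A[r])]
        r += 1
    return A
```

The same triangular target shape is what the paper computes over F(t)[D;\delta], where the extra difficulty is the non-commutative multiplication and controlling degrees in t.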
Computing Matrix Canonical Forms of Ore Polynomials
We present algorithms to compute canonical forms of matrices of Ore polynomials while controlling intermediate expression swell. Given a square non-singular input matrix of Ore polynomials, we give an extension of the algorithm of Labhalla et al. (1992) to compute the Hermite form. We also give a new fraction-free algorithm to compute the Popov form, accompanied by an implementation and experimental results that compare it to the best known algorithms in the literature. Our algorithm is output-sensitive, with a cost that depends on the orthogonality defect of the input matrix: the sum of the row degrees of the input matrix minus the sum of the row degrees of its Popov form. We also use recent advances in polynomial matrix computations, including fast inversion and rank profile computation, to describe an algorithm that computes the transformation matrix corresponding to the Popov form.
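For intuition, the orthogonality defect can be computed directly in the ordinary commutative case, since for a nonsingular polynomial matrix the sum of the row degrees of its Popov form equals the degree of the determinant. A sketch with sympy (function names are illustrative):

```python
import sympy as sp

x = sp.symbols('x')

def row_degrees(M):
    # degree of each row = max degree over its entries
    return [max(sp.degree(p, x) for p in M.row(i)) for i in range(M.rows)]

def orthogonality_defect(M):
    # sum of row degrees minus deg(det M); zero iff M is already row reduced
    return sum(row_degrees(M)) - sp.degree(sp.det(M), x)
```

A defect of zero means the output-sensitive cost term vanishes; a large defect signals that many row reductions are needed to reach the Popov form.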
Fast Decoding of Codes in the Rank, Subspace, and Sum-Rank Metric
We speed up existing decoding algorithms for three code classes in different
metrics: interleaved Gabidulin codes in the rank metric, lifted interleaved
Gabidulin codes in the subspace metric, and linearized Reed-Solomon codes in
the sum-rank metric. The speed-ups are achieved by reducing the core of the
underlying computational problems of the decoders to one common tool: computing
left and right approximant bases of matrices over skew polynomial rings. To
accomplish this, we describe a skew-analogue of the existing PM-Basis algorithm
for matrices over usual polynomials. This captures the bulk of the work in
multiplication of skew polynomials, and the complexity benefit comes from
existing algorithms performing this faster than in classical quadratic
complexity. The new faster algorithms for the various decoding-related
computational problems are interesting in their own right and have further
applications, in particular in parts of decoders of several other codes and in
foundational problems related to the remainder-evaluation of skew polynomials.
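The common primitive underneath is arithmetic in a skew polynomial ring F[X; θ], where multiplication twists coefficients past X: X·a = θ(a)·X. A toy model over GF(4) with θ the Frobenius map, using schoolbook multiplication only (the paper's speed-ups come from fast skew multiplication, which this sketch does not implement):

```python
# GF(4) arithmetic: elements 0..3 as bit vectors over GF(2), modulus w^2+w+1
def gf4_mul(a, b):
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    if r & 4:          # reduce the degree-2 term modulo w^2 + w + 1
        r ^= 0b111
    return r

def frob(a):
    """Frobenius automorphism a -> a^2 on GF(4)."""
    return gf4_mul(a, a)

def skew_mul(f, g):
    """Multiply skew polynomials in GF(4)[X; Frobenius].
    f, g are coefficient lists with f[i] the coefficient of X^i,
    and the twist rule is X * a = frob(a) * X."""
    res = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            tb = b
            for _ in range(i):   # apply theta^i to b before multiplying
                tb = frob(tb)
            res[i + j] ^= gf4_mul(a, tb)
    return res
```

With w denoting the element 2, the twist is visible immediately: X·w = (w+1)·X, while w·X stays w·X, so the ring is genuinely non-commutative.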
Fraction-free algorithm for the computation of diagonal forms of matrices over Ore domains using Gröbner bases
This paper is a sequel to "Computing diagonal form and Jacobson normal form
of a matrix using Groebner bases", J. of Symb. Computation, 46 (5), 2011. We
present a new fraction-free algorithm for the computation of a diagonal form of
a matrix over a certain non-commutative Euclidean domain over a computable
field with the help of Gröbner bases. This algorithm is formulated in a
general constructive framework of non-commutative Ore localizations of
G-algebras (OLGAs). We split the computation of a normal form of a matrix
into the diagonalization and the normalization processes. Both of them can be
made fraction-free. For a matrix M over an OLGA we provide a diagonalization
algorithm to compute U and V with fraction-free entries such that UMV = D
holds and D is diagonal. The fraction-free approach gives us more information
on the system of linear functional equations and its solutions than the
classical setup of an operator algebra with rational function coefficients. In
particular, one can handle distributional solutions together with, say,
meromorphic ones. We investigate Ore localizations of common operator algebras
and use them in the unimodularity analysis of the transformation matrices U
and V. In turn, this allows us to lift the isomorphism of modules over an
OLGA Euclidean domain to a polynomial subring of it. We discuss the relation of
this lifting with the solutions of the original system of equations. Moreover,
we prove some new results concerning normal forms of matrices over non-simple
domains. Our implementation in the computer algebra system {\sc
Singular:Plural} follows the fraction-free strategy and shows impressive
performance, compared with methods which directly use fractions. Since we
experience moderate swell of coefficients and obtain simple transformation
matrices, the method we propose is well suited for solving nontrivial practical
problems. Comment: 25 pages, to appear in Journal of Symbolic Computation.
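The fraction-free idea is classical in the commutative case: Bareiss elimination performs only exact divisions, so intermediate entries stay in the base domain instead of growing into fractions. A minimal integer sketch, illustrative of the strategy rather than of the non-commutative algorithm:

```python
def bareiss_det(M):
    """Determinant by fraction-free (Bareiss) elimination over the integers.
    Every division below is exact, so no fractions ever appear.
    Sketch only: assumes the leading principal minors are nonzero."""
    A = [row[:] for row in M]
    n = len(A)
    prev = 1  # previous pivot; divides every 2x2 minor formed below
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
        prev = A[k][k]
    return A[n - 1][n - 1]
```

The moderate coefficient swell the authors report is the non-commutative counterpart of this behavior: entries grow polynomially instead of the exponential blow-up typical of naive elimination with fractions.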
Computing Popov Forms of Polynomial Matrices
This thesis gives a deterministic algorithm to transform a row reduced matrix to canonical
Popov form. Given as input a row reduced matrix R over K[x], K a field, our algorithm
computes the Popov form in about the same time as required to multiply together over
K[x] two matrices of the same dimension and degree as R. Randomization can be used to
extend the algorithm for rectangular input matrices of full row rank. Thus we give a Las
Vegas algorithm that computes the Popov decomposition of matrices of full row rank. We also show that the problem of transforming a row reduced matrix to Popov form is at least
as hard as polynomial matrix multiplication.
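Row reducedness, the precondition above, can be checked directly: a polynomial matrix is row reduced iff its leading row coefficient matrix is nonsingular. A sympy sketch (helper name is illustrative):

```python
import sympy as sp

x = sp.symbols('x')

def is_row_reduced(M):
    """True iff the leading coefficient matrix of M over K[x] is
    nonsingular: entry (i, j) is the coefficient of x^(d_i) in M[i, j],
    where d_i is the degree of row i."""
    d = [max(sp.degree(p, x) for p in M.row(i)) for i in range(M.rows)]
    lead = sp.Matrix(M.rows, M.cols,
                     lambda i, j: M[i, j].as_poly(x).coeff_monomial(x**d[i]))
    return sp.det(lead) != 0
```

Matrices failing this test are exactly the ones the randomized extension must first row reduce before the main algorithm applies.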
Bit Complexity of Jordan Normal Form and Spectral Factorization
We study the bit complexity of two related fundamental computational problems
in linear algebra and control theory. Our results are: (1) a polynomial-time
algorithm for finding an approximation to the Jordan normal form of an integer
matrix with bounded bit entries, with a running time stated in terms of the
exponent of matrix multiplication; (2) a polynomial-time algorithm for
approximately computing the spectral factorization of a given monic rational
matrix polynomial with rational coefficients over a common denominator, under
the assumption that the polynomial is positive semidefinite for all real
arguments. The first algorithm is used as a subroutine in the second one.
Despite the central importance of these problems, polynomial complexity bounds
were not previously known for spectral factorization, and for the Jordan form
the best previous running time was an unspecified polynomial of degree at
least twelve \cite{cai1994computing}. Our algorithms are simple and judiciously
combine techniques from numerical and symbolic computation, yielding
significant advantages over either approach by itself. Comment: 19 pages.
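For contrast with the approximate, bit-complexity setting studied here, exact symbolic Jordan form computation is available in computer algebra systems, at the price of working with potentially irrational eigenvalues and large exact-arithmetic coefficients. A small sympy illustration:

```python
import sympy as sp

# A defective 2x2 integer matrix: one eigenvalue 2, a single Jordan block
M = sp.Matrix([[3, 1], [-1, 1]])

# similarity transform with M = P * J * P**-1, J the Jordan normal form
P, J = M.jordan_form()
```

The bit-complexity question in the abstract is precisely how the cost of producing such a (here, approximate) decomposition scales with the dimension and the entry bit lengths.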