Fast Order Basis and Kernel Basis Computation and Related Problems
In this thesis, we present efficient deterministic algorithms
for polynomial matrix computation problems, including the computation
of order basis, minimal kernel basis, matrix inverse, column basis,
unimodular completion, determinant, Hermite normal form, rank and
rank profile for matrices of univariate polynomials over a field.
The algorithm for kernel basis computation also immediately provides
an efficient deterministic algorithm for solving linear systems. The
algorithm for column basis also gives efficient deterministic algorithms
for computing matrix GCDs, column reduced forms, and Popov normal
forms for matrices of any dimension and any rank.
We reduce all these problems to polynomial matrix multiplications.
The computational costs of our algorithms are then similar to the
costs of multiplying matrices, whose dimensions match the input matrix
dimensions in the original problems, and whose degrees equal the average
column degrees of the original input matrices in most cases. The use
of average column degrees instead of the commonly used matrix degree,
or equivalently the maximum column degree, makes our cost bounds
tighter and more precise. In addition, the shifted minimal bases
computed by our algorithms are more general than the standard minimal
bases.
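As a small illustration of the kernel (nullspace) bases discussed above, the following Python sketch uses SymPy to compute a kernel basis of a toy polynomial matrix; the matrix M is a made-up example, and SymPy's general-purpose nullspace routine is used, not the fast deterministic algorithm of the thesis:

```python
import sympy as sp

x = sp.symbols('x')

# A made-up 2x3 matrix over Q[x]; its kernel has dimension 1.
M = sp.Matrix([[1, x, x**2],
               [0, 1, x]])

# SymPy returns a nullspace basis over the fraction field Q(x);
# clearing denominators would give a polynomial kernel basis.
kernel = M.nullspace()
v = kernel[0]                    # here v = (0, -x, 1)^T

assert M * v == sp.zeros(2, 1)   # v is indeed in the kernel
```

Such a kernel basis immediately solves the associated homogeneous linear system over K[x], which is the connection to linear-system solving mentioned above.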
Computing the Rank and a Small Nullspace Basis of a Polynomial Matrix
We reduce the problem of computing the rank and a nullspace basis of a
univariate polynomial matrix to polynomial matrix multiplication. For an input
n x n matrix of degree d over a field K we give a rank and nullspace algorithm
using about the same number of operations as for multiplying two matrices of
dimension n and degree d. If the latter multiplication is done in
MM(n,d)=softO(n^omega d) operations, with omega the exponent of matrix
multiplication over K, then the algorithm uses softO(MM(n,d)) operations in K.
The softO notation indicates some missing logarithmic factors. The method is
randomized with Las Vegas certification. We achieve our results in part through
a combination of matrix Hensel high-order lifting and matrix minimal fraction
reconstruction, and through the computation of minimal or small degree vectors
in the nullspace seen as a K[x]-module.
Comment: Research Report LIP RR2005-03, January 200
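The randomized flavour of such rank computations can be illustrated with a standard evaluation trick (not the lifting-based method of the paper): for a polynomial matrix M, rank(M(a)) <= rank(M) at every point a, with equality for all but finitely many a, so evaluating at a random point yields the rank with high probability. A toy SymPy sketch with a made-up matrix:

```python
import random
import sympy as sp

x = sp.symbols('x')

# Made-up 2x2 polynomial matrix; det = x*x**2/x... rows are
# proportional (row 1 = x * row 2), so the rank over Q(x) is 1.
M = sp.Matrix([[x, x**2],
               [1, x]])

# rank(M(a)) <= rank(M) always; equality fails at only finitely many a,
# so a random evaluation point is correct with high probability.
a = random.randint(1, 10**6)
r = M.subs(x, a).rank()

assert r == M.rank() == 1
```

A Las Vegas algorithm, as in the paper, additionally certifies the answer so that the randomness can only affect the running time, never correctness.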
Fast, deterministic computation of the Hermite normal form and determinant of a polynomial matrix
Given a nonsingular n x n matrix of univariate polynomials over a
field K, we give fast and deterministic algorithms to compute its
determinant and its Hermite normal form. Our algorithms use
softO(n^omega ceil(s)) operations in K,
where s is bounded from above by both the average of the degrees of the rows
and that of the columns of the matrix and omega is the exponent of matrix
multiplication. The softO notation indicates that logarithmic factors in the
big-O are omitted while the ceiling function indicates that the cost is
softO(n^omega) when s is in o(1). Our algorithms are based
on a fast and deterministic triangularization method for computing the diagonal
entries of the Hermite form of a nonsingular matrix.
Comment: 34 pages, 3 algorithm
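The degree parameter s in the cost bound can be made concrete: for a hypothetical matrix, the snippet below computes the average row degree and average column degree with SymPy and takes their minimum as an upper bound on s (this only illustrates the size measure, not the Hermite form algorithm itself):

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical nonsingular 2x2 matrix over Q[x].
M = sp.Matrix([[x**3, 1],
               [x,    x**2]])

def avg_degree(rows):
    """Average, over the rows, of the maximal entry degree per row."""
    return sum(max(sp.degree(e, x) for e in row) for row in rows) / len(rows)

row_avg = avg_degree(M.tolist())       # (3 + 2) / 2 = 5/2
col_avg = avg_degree(M.T.tolist())     # (3 + 2) / 2 = 5/2
s = min(row_avg, col_avg)              # the bound used in the cost estimate

assert sp.ceiling(s) == 3
```

Using this average-degree measure instead of the maximal matrix degree is what makes the stated cost bound tighter than the classical one.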
Asymptotically fast polynomial matrix algorithms for multivariable systems
We present the asymptotically fastest known algorithms for some basic
problems on univariate polynomial matrices: rank, nullspace, determinant,
generic inverse, reduced form. We show that they essentially can be reduced to
two computer algebra techniques, minimal basis computations and matrix fraction
expansion/reconstruction, and to polynomial matrix multiplication. Such
reductions eventually imply that all these problems can be solved in about the
same amount of time as polynomial matrix multiplication.
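The reduction target, polynomial matrix multiplication in about MM(n,d) operations, can itself be sketched by the classical evaluation-interpolation scheme: evaluate both matrices at enough points via the FFT, take pointwise n x n matrix products, and interpolate the product's coefficients. A NumPy sketch over the reals (the coefficient layout and the function name are made up for illustration):

```python
import numpy as np

def polymatmul(A, B):
    """Multiply polynomial matrices stored as coefficient arrays of
    shape (n, n, dA + 1) and (n, n, dB + 1): FFT-evaluate along the
    coefficient axis, multiply the n x n values pointwise, interpolate."""
    dA, dB = A.shape[2] - 1, B.shape[2] - 1
    m = dA + dB + 1                          # points needed for degree dA + dB
    Ah = np.fft.rfft(A, m, axis=2)           # evaluations at roots of unity
    Bh = np.fft.rfft(B, m, axis=2)
    Ch = np.einsum('ikp,kjp->ijp', Ah, Bh)   # pointwise matrix products
    return np.fft.irfft(Ch, m, axis=2)       # interpolate the coefficients

# Check against naive entrywise convolution on a random instance.
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 3, 4)).astype(float)   # degree 3
B = rng.integers(-3, 4, size=(3, 3, 3)).astype(float)   # degree 2
C = polymatmul(A, B)

naive = np.zeros((3, 3, 6))
for i in range(3):
    for j in range(3):
        for k in range(3):
            naive[i, j] += np.convolve(A[i, k], B[k, j])
assert np.allclose(C, naive)
```

Over a general field one would use sufficiently many interpolation points (or roots of unity when available) instead of the real FFT, but the shape of the reduction is the same.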
Computing Multidimensional Persistence
The theory of multidimensional persistence captures the topology of a
multifiltration -- a multiparameter family of increasing spaces.
Multifiltrations arise naturally in the topological analysis of scientific
data. In this paper, we give a polynomial time algorithm for computing
multidimensional persistence. We recast this computation as a problem within
computational algebraic geometry and utilize algorithms from this area to solve
it. While the resulting problem is Expspace-complete and the standard
algorithms take doubly-exponential time, we exploit the structure inherent
within multifiltrations to yield practical algorithms. We implement all
algorithms in the paper and provide statistical experiments to demonstrate
their feasibility.
Comment: This paper has been withdrawn by the authors. Journal of
Computational Geometry, 1(1) 2010, pages 72-100.
http://jocg.org/index.php/jocg/article/view/1
An Improvement over the GVW Algorithm for Inhomogeneous Polynomial Systems
The GVW algorithm is a signature-based algorithm for computing Gr\"obner
bases. If the input system is not homogeneous, some J-pairs with higher
signatures but lower degrees are rejected by GVW's Syzygy Criterion;
instead, GVW has to compute J-pairs with lower signatures but higher
degrees. Consequently, the degrees of the polynomials appearing during the
computation may grow unnecessarily, making the computation more expensive. In this
paper, a variant of the GVW algorithm, called M-GVW, is proposed and mutant
pairs are introduced to overcome inconveniences brought by inhomogeneous input
polynomials. Some techniques from linear algebra are used to improve the
efficiency. Both GVW and M-GVW have been implemented in C++ and tested by many
examples from boolean polynomial rings. The timings show M-GVW usually performs
much better than the original GVW algorithm when mutant pairs are found.
Besides, M-GVW is also compared with intrinsic Gr\"obner bases functions on
Maple, Singular and Magma. Thanks to the efficient routines of the M4RI
library, the experimental results show that M-GVW is very efficient.
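For readers unfamiliar with the objects involved, the following SymPy snippet computes a Gröbner basis of a small inhomogeneous system (SymPy uses a Buchberger-style algorithm, not the signature-based GVW/M-GVW of the paper; the system itself is a made-up example):

```python
import sympy as sp

x, y = sp.symbols('x y')

# A made-up inhomogeneous system in Q[x, y]: the generators have
# different total degrees, which is the situation M-GVW targets.
F = [x**2 + y - 1, x*y - 1]

G = sp.groebner(F, x, y, order='lex')

# Every input polynomial reduces to zero modulo the computed basis,
# the defining property of a Groebner basis of the ideal <F>.
for f in F:
    _, remainder = G.reduce(f)
    assert remainder == 0
```

Signature-based algorithms such as GVW obtain the same basis while using signatures to discard useless reductions early; M-GVW's mutant pairs address the degree growth this abstract describes for inhomogeneous inputs.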