Efficient Computation of the Characteristic Polynomial
This article deals with the computation of the characteristic polynomial of
dense matrices over small finite fields and over the integers. We first present
two algorithms for the finite field case: the first is based on Krylov iterates and
Gaussian elimination, and we compare it to an improvement of the second algorithm
of Keller-Gehrig. Then we show that a generalization of Keller-Gehrig's third
algorithm could improve both complexity and computational time. We use these
results as a basis for the computation of the characteristic polynomial of
integer matrices. We first use early termination and Chinese remaindering for
dense matrices. Then a probabilistic approach, based on the integer minimal
polynomial and Hensel factorization, is particularly well suited to sparse
and/or structured matrices.
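To make the modular building block concrete, here is a minimal sketch (an illustration, not the article's implementation) of computing the characteristic polynomial of an integer matrix modulo a prime, using the classical Faddeev-LeVerrier recursion rather than the Krylov-based algorithms studied above; such mod-p images are what Chinese remaindering then combines:

```python
def charpoly_mod(A, p):
    """Characteristic polynomial of A mod prime p via Faddeev-LeVerrier.

    Returns coefficients [c_0, ..., c_{n-1}, 1] of
    det(xI - A) = x^n + c_{n-1} x^{n-1} + ... + c_0  (mod p).
    Requires p > n so the divisions by k are invertible mod p.
    """
    n = len(A)
    M = [[0] * n for _ in range(n)]   # auxiliary matrix, starts at zero
    coeffs = [0] * n + [1]            # leading coefficient is 1
    c = 1                             # c_n = 1
    for k in range(1, n + 1):
        for i in range(n):            # M <- M + c*I
            M[i][i] = (M[i][i] + c) % p
        # M <- A @ M  (mod p)
        M = [[sum(A[i][l] * M[l][j] for l in range(n)) % p
              for j in range(n)] for i in range(n)]
        trace = sum(M[i][i] for i in range(n)) % p
        c = (-trace * pow(k, -1, p)) % p   # c_{n-k} = -tr(M_k)/k
        coeffs[n - k] = c
    return coeffs
```

For A = [[2, 1], [1, 2]], whose characteristic polynomial is x^2 - 4x + 3, this returns [3, 97, 1] mod 101 (97 being -4 mod 101).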
Computing the Kalman form
We present two algorithms for the computation of the Kalman form of a linear
control system. The first one is based on the technique developed by
Keller-Gehrig for the computation of the characteristic polynomial. The cost is
a logarithmic number of matrix multiplications. To our knowledge, this improves
the best previously known algebraic complexity by an order of magnitude. Then
we also present a cubic algorithm proven to be more efficient in practice. Comment: 10 pages
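As an illustration of what the Kalman form computes, the following sketch (exact rational arithmetic; a naive Krylov-vector construction, not the fast algorithms of the paper, and the function names are chosen here) builds the controllability decomposition of a pair (A, B):

```python
from fractions import Fraction

def mat_inv(M):
    """Exact Gauss-Jordan inverse over the rationals."""
    n = len(M)
    A = [[Fraction(x) for x in row] +
         [Fraction(1 if i == j else 0) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        inv = 1 / A[col][col]
        A[col] = [x * inv for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kalman_form(A, B):
    """Kalman controllability decomposition of the pair (A, B):
    build a basis of the controllable subspace from the Krylov
    vectors B, AB, A^2 B, ..., extend it to a full basis T, and
    return (T^-1 A T, T^-1 B, r) with r the controllable dimension.
    The transformed pair has block shape [[A11, A12], [0, A22]],
    [[B1], [0]]."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    basis, pivots = [], []

    def try_add(v):
        w = list(v)
        for b, p in zip(basis, pivots):    # incremental elimination
            if w[p] != 0:
                f = w[p] / b[p]
                w = [x - f * y for x, y in zip(w, b)]
        piv = next((i for i, x in enumerate(w) if x != 0), None)
        if piv is not None:
            basis.append(w)
            pivots.append(piv)
        return piv is not None

    queue = [[Fraction(B[i][j]) for i in range(n)]
             for j in range(len(B[0]))]
    while queue:
        v = queue.pop(0)
        if try_add(v):                     # independent: grow Krylov space
            queue.append([sum(A[i][k] * v[k] for k in range(n))
                          for i in range(n)])
    r = len(basis)
    for e in range(n):                     # extend to a full basis
        try_add([Fraction(1 if i == e else 0) for i in range(n)])
    T = [[basis[j][i] for j in range(n)] for i in range(n)]
    Tinv = mat_inv(T)
    return mat_mul(mat_mul(Tinv, A), T), mat_mul(Tinv, B), r
```

For A = [[1, 1], [1, 1]], B = [[1], [1]] the controllable part is one-dimensional and the transformed matrices exhibit the block-triangular shape above.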
An introspective algorithm for the integer determinant
We present an algorithm computing the determinant of an integer matrix A. The
algorithm is introspective in the sense that it uses several distinct
algorithms that run in a concurrent manner. During the course of the algorithm
partial results coming from distinct methods can be combined. Then, depending
on the current running time of each method, the algorithm can emphasize a
particular variant. With the use of very fast modular routines for linear
algebra, our implementation is an order of magnitude faster than other existing
implementations. Moreover, we prove that the expected complexity of our
algorithm is only O(n^3 log^{2.5}(n ||A||)) bit operations in the dense case
and O(Omega n^{1.5} log^2(n ||A||) + n^{2.5}log^3(n||A||)) in the sparse case,
where ||A|| is the largest entry in absolute value of the matrix and Omega is
the cost of matrix-vector multiplication in the case of a sparse matrix. Comment: Published in Transgressive Computing 2006, Granada, Spain (2006).
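The Chinese-remaindering skeleton underlying such determinant algorithms can be sketched as follows (a simplified stand-in: the stabilization test below is a heuristic, not the certified early termination analyzed in the paper):

```python
def det_mod(A, p):
    """Determinant of A mod prime p by Gaussian elimination over GF(p)."""
    n = len(A)
    A = [[x % p for x in row] for row in A]
    det = 1
    for col in range(n):
        piv = next((r for r in range(col, n) if A[r][col]), None)
        if piv is None:
            return 0
        if piv != col:
            A[col], A[piv] = A[piv], A[col]
            det = -det % p
        det = det * A[col][col] % p
        inv = pow(A[col][col], -1, p)
        for r in range(col + 1, n):
            f = A[r][col] * inv % p
            A[r] = [(a - f * b) % p for a, b in zip(A[r], A[col])]
    return det

def det_crt(A, primes):
    """Combine modular determinants by Chinese remaindering, stopping
    early once the symmetric residue is unchanged by a new prime
    (a heuristic version of early termination)."""
    m, d, prev = 1, 0, None
    for p in primes:
        dp = det_mod(A, p)
        # lift: x = d (mod m), x = dp (mod p)  ->  x = d + m*h (mod m*p)
        h = (dp - d) * pow(m % p, -1, p) % p
        d, m = d + m * h, m * p
        sym = d if d <= m // 2 else d - m   # symmetric representative
        if sym == prev:
            return sym
        prev = sym
    return prev
```

For [[2, 3], [5, 7]] the residue -1 already stabilizes after two primes; a certified variant would instead run until the prime product exceeds a Hadamard-style bound on |det A|.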
Exact Algorithms for Computing Generalized Eigenspaces of Matrices via Annihilating Polynomials
An effective exact method is proposed for computing generalized eigenspaces
of a matrix of integers or rational numbers. The keys to our approach are the use
of minimal annihilating polynomials and the concept of the Jordan-Krylov
basis. A new method, called Jordan-Krylov elimination, is introduced to design
an algorithm for computing a Jordan-Krylov basis. The resulting algorithm outputs
generalized eigenspaces in the form of Jordan chains. Notably, in the output, the
components of the generalized eigenvectors are expressed as polynomials in the
associated eigenvalue, treated as a variable.
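For intuition, the object being computed is ker((A - lam*I)^k). The textbook sketch below computes it by exact rational Gaussian elimination for a numeric eigenvalue, whereas the paper's Jordan-Krylov elimination handles the eigenvalue symbolically through its minimal annihilating polynomial:

```python
from fractions import Fraction

def nullspace(M):
    """Basis of the exact rational null space of M via Gauss-Jordan."""
    rows = [[Fraction(x) for x in row] for row in M]
    n = len(rows[0])
    pivots, r = [], 0
    for col in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = 1 / rows[r][col]
        rows[r] = [x * inv for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(col)
        r += 1
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[free] = Fraction(1)
        for i, pc in enumerate(pivots):
            v[pc] = -rows[i][free]
        basis.append(v)
    return basis

def generalized_eigenspace(A, lam, k):
    """Basis of ker((A - lam*I)^k): the generalized eigenspace of lam,
    captured up to Jordan chains of length k."""
    n = len(A)
    M = [[Fraction(A[i][j]) - (lam if i == j else 0) for j in range(n)]
         for i in range(n)]
    P = [[Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for _ in range(k):
        P = [[sum(P[i][l] * M[l][j] for l in range(n)) for j in range(n)]
             for i in range(n)]
    return nullspace(P)
```

For the Jordan block A = [[2, 1], [0, 2]] with lam = 2, k = 1 recovers the ordinary eigenspace (dimension 1), while k = 2 recovers the full generalized eigenspace (dimension 2).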
Forest Generating Functions of Directed Graphs
A spanning forest polynomial is a multivariate generating function whose variables are indexed over both the vertex and edge sets of a given directed graph. In this thesis, we establish a general framework to study spanning forest polynomials, associating them with a generalized Laplacian matrix and studying its properties. We introduce a novel proof of the famous matrix-tree theorem and show how this extends to a parametric generalization of the all-minors matrix-forest theorem. As an application, we derive explicit formulas for the recently introduced class of directed threshold graphs.
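The determinantal statement specializes, at the single-tree level, to Tutte's directed matrix-tree theorem, which can be checked directly on small digraphs (a brute-force enumeration against the Laplacian-minor determinant; function names are chosen here for illustration):

```python
from fractions import Fraction
from itertools import combinations

def count_arborescences(n, edges, root):
    """Brute force: spanning in-trees oriented toward `root`: each
    non-root vertex has exactly one outgoing edge in the subset, and
    following those edges always reaches the root (no cycles)."""
    count = 0
    for sub in combinations(edges, n - 1):
        out, ok = {}, True
        for u, v in sub:
            if u == root or u in out:
                ok = False
                break
            out[u] = v
        if not ok:
            continue
        for v in range(n):
            seen = set()
            while v != root and v not in seen:
                seen.add(v)
                v = out[v]
            if v != root:          # trapped in a cycle
                ok = False
                break
        count += ok
    return count

def matrix_tree_count(n, edges, root):
    """Tutte's theorem: the same count is the determinant of the
    out-degree Laplacian with the root's row and column deleted."""
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[u][v] -= 1
    idx = [i for i in range(n) if i != root]
    M = [[L[i][j] for j in idx] for i in idx]
    det = Fraction(1)
    for c in range(len(M)):        # fraction Gaussian elimination
        piv = next((r for r in range(c, len(M)) if M[r][c]), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c + 1, len(M)):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return int(det)
```

On the bidirected triangle both counts give 3, matching the three spanning trees of the undirected K3, each of which orients uniquely toward the chosen root.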
We prove that multivariate forest polynomials are, in general, irreducible and we define a number of specializations that may be compactly expressed in terms of various factors. A specialization in this context is an identification of some of the variables of the polynomial, for example evaluating f(x,y,z) as f(x,x,z). This allows us to derive results that generalize and extend many known properties of the traditional Laplacian matrix in algebraic graph theory.
We analyze the connection between the matrix algebra generated by the traditional Laplacian matrix and certain matrices of forest polynomials. Using this analysis, we derive explicit formulas for these matrices in the cases of Cartesian products of complete graphs and de Bruijn graphs. More generally, we derive an explicit formula relating spanning forest polynomials of a graph to the numbers of D-lazy walks in the graph. These are walks that may choose to remain at a given vertex if that vertex is not of maximum degree D.
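One natural formalization of D-lazy walks, assumed here (the exact multiplicity of the "stay" option is this sketch's assumption, not a statement from the thesis), pads each vertex with D - outdeg(v) self-loops so that every vertex has D choices per step; walk counts are then entries of matrix powers:

```python
def lazy_walk_counts(adj, D, k):
    """Counts of D-lazy walks of length k: add D - outdeg(v) self-loops
    at each vertex v so every row of the matrix sums to D, then take
    the k-th power of the padded adjacency matrix."""
    n = len(adj)
    M = [row[:] for row in adj]
    for v in range(n):
        M[v][v] += D - sum(adj[v])        # lazy self-loops
    P = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(k):
        P = [[sum(P[i][l] * M[l][j] for l in range(n)) for j in range(n)]
             for i in range(n)]
    return P
```

On the directed 3-cycle with D = 2, every row of the k-th power sums to D^k, reflecting that each step offers exactly D options.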
This leads us to the study of externally equitable partitions (EEPs), which are objects of recent interest in the control theory literature. We prove that for graphs with EEPs satisfying an additional criterion, the specialized forest polynomials may be factored into a product of forest polynomials of related quotient graphs. We apply this theorem to complete multipartite graphs, hypercube graphs, directed line graphs, and others.