7 research outputs found
Block Tridiagonal Reduction of Perturbed Normal and Rank Structured Matrices
It is well known that if a matrix A solves the matrix equation f(A, A^H) = 0, where f(x, y) is a linear bivariate polynomial, then A is normal; A and A^H can be simultaneously reduced in a finite number of operations to tridiagonal form by a unitary congruence and, moreover, the spectrum of A is located on a straight line in the complex plane. In this paper we present some generalizations of these properties for almost normal matrices which satisfy certain quadratic matrix equations arising in the study of structured eigenvalue problems for perturbed Hermitian and unitary matrices.
Comment: 13 pages, 3 figures
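The classical fact the abstract starts from can be checked numerically. A minimal sketch, assuming the simplest instance of a linear relation: A = βH + γI with H Hermitian satisfies A^H = (conj(β)/β)(A − γI) + conj(γ)I, a linear bivariate relation between A and A^H, so A is normal and its eigenvalues lie on the line {βt + γ : t real}.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Random Hermitian H
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (M + M.conj().T) / 2

# A = beta*H + gamma*I satisfies the linear relation
# A^H = (conj(beta)/beta) * (A - gamma*I) + conj(gamma)*I
beta, gamma = 1 + 2j, 3 - 1j
A = beta * H + gamma * np.eye(n)

# A is normal: A A^H = A^H A
assert np.allclose(A @ A.conj().T, A.conj().T @ A)

# The spectrum lies on the straight line {beta*t + gamma : t real}
lam = np.linalg.eigvals(A)
t = (lam - gamma) / beta        # pull each eigenvalue back to the line parameter
assert np.allclose(t.imag, 0, atol=1e-10)
```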
Multilevel quasiseparable matrices in PDE-constrained optimization
Optimization problems with constraints in the form of a partial differential
equation arise frequently in the process of engineering design. The
discretization of PDE-constrained optimization problems results in large-scale
linear systems of saddle-point type. In this paper we propose and develop a
novel approach to solving such systems by exploiting so-called quasiseparable
matrices. One may think of a usual quasiseparable matrix as a discrete
analog of the Green's function of a one-dimensional differential operator. A
nice feature of such matrices is that almost every algorithm that employs them
has linear complexity. We extend the application of quasiseparable matrices to
problems in higher dimensions. Namely, we construct a class of preconditioners
which can be computed and applied at a linear computational cost. Their use
with appropriate Krylov methods leads to algorithms of nearly linear
complexity.
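The linear-complexity claim can be illustrated on the simplest case. A minimal sketch (not the paper's preconditioner): an order-1 quasiseparable matrix stored by generators p, q, d, g, h, with a matrix-vector product computed in O(n) via running prefix and suffix sums, checked against the dense matrix built from the same generators.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
# Generators of an order-1 quasiseparable matrix:
# A[i, j] = p[i]*q[j] (i > j),  d[i] (i == j),  g[i]*h[j] (i < j)
p, q, d, g, h = (rng.standard_normal(n) for _ in range(5))

def qs_matvec(p, q, d, g, h, x):
    """O(n) matrix-vector product using running prefix/suffix sums."""
    n = len(x)
    y = d * x
    s = 0.0                      # s accumulates sum_{j < i} q[j]*x[j]
    for i in range(n):
        y[i] += p[i] * s
        s += q[i] * x[i]
    s = 0.0                      # s accumulates sum_{j > i} h[j]*x[j]
    for i in reversed(range(n)):
        y[i] += g[i] * s
        s += h[i] * x[i]
    return y

# Dense reference matrix built from the same generators
A = np.diag(d)
for i in range(n):
    for j in range(n):
        if i > j:
            A[i, j] = p[i] * q[j]
        elif i < j:
            A[i, j] = g[i] * h[j]

x = rng.standard_normal(n)
assert np.allclose(qs_matvec(p, q, d, g, h, x), A @ x)
```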
Differential qd algorithm with shifts for rank-structured matrices
Although QR iterations dominate in eigenvalue computations, there are several
important cases where alternative LR-type algorithms may be preferable. In
particular, in the symmetric tridiagonal case the differential qd algorithm
with shifts (dqds), proposed by Fernando and Parlett, often enjoys faster
convergence while preserving high relative accuracy (which is not guaranteed by
the QR algorithm). In eigenvalue computations for rank-structured matrices the
QR algorithm is also a popular choice since, in the symmetric case, the rank
structure is preserved. In the unsymmetric case, however, the QR algorithm
destroys the rank structure and, hence, LR-type algorithms come into play once
again. In the current paper we derive several variants of qd algorithms for
quasiseparable matrices. Remarkably, one of them, when applied to Hessenberg
matrices, becomes a direct generalization of the dqds algorithm for tridiagonal
matrices. Therefore, it can be applied to such important matrices as companion
and confederate matrices, and provides an alternative algorithm for finding the
roots of a polynomial represented in a basis of orthogonal polynomials. Results
of preliminary numerical experiments are presented.
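For orientation, a minimal sketch of the classical tridiagonal building block the paper generalizes: one unshifted differential qd (dqd) sweep, which maps the qd arrays of T = LU to those of the similar matrix UL (dqds additionally subtracts a shift from d in the loop). The test matrix tridiag(-1, 2, -1) and the iteration count are assumptions for illustration; iterating the sweep drives the q array to the eigenvalues.

```python
import numpy as np

def dqd_sweep(q, e):
    """One unshifted differential qd sweep: qd arrays of T = L*U are mapped
    to those of the similar matrix U*L (dqds would subtract a shift from d)."""
    n = len(q)
    qh, eh = np.empty(n), np.empty(n - 1)
    d = q[0]
    for i in range(n - 1):
        qh[i] = d + e[i]
        eh[i] = e[i] * (q[i + 1] / qh[i])
        d = d * (q[i + 1] / qh[i])
    qh[-1] = d
    return qh, eh

# qd arrays of the LU factorization of the positive definite
# tridiagonal matrix T = tridiag(-1, 2, -1)
n = 4
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
q = np.empty(n)
e = np.empty(n - 1)
q[0] = T[0, 0]
for i in range(n - 1):
    e[i] = T[i + 1, i] * T[i, i + 1] / q[i]   # e_i = l_i * b_i
    q[i + 1] = T[i + 1, i + 1] - e[i]

# Each sweep is a similarity transform; repeating it drives q to the
# eigenvalues of T (in decreasing order) as the e array tends to zero.
for _ in range(300):
    q, e = dqd_sweep(q, e)

assert np.allclose(np.sort(q), np.linalg.eigvalsh(T), atol=1e-8)
```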
Computing with quasiseparable matrices
The class of quasiseparable matrices is defined by a pair of bounds, called the quasiseparable orders, on the ranks of the maximal sub-matrices entirely located in their strictly lower and upper triangular parts. These arise naturally in applications, e.g. as inverses of band matrices, and are widely used because they admit structured representations that allow computing with them in time linear in the dimension and quadratic in the quasiseparable order. We show, in this paper, the connection between the notion of quasiseparability and the rank profile matrix invariant, presented in [Dumas & al. ISSAC'15]. This allows us to propose an algorithm computing the quasiseparable orders (rL, rU) in time O(n^2 s^(ω−2)), where s = max(rL, rU) and ω is the exponent of matrix multiplication. We then present two new structured representations, a binary tree of PLUQ decompositions and the Bruhat generator, using respectively O(ns log(n/s)) and O(ns) field elements, instead of O(ns^2) for the previously known generators. We present algorithms computing these representations in time O(n^2 s^(ω−2)). These representations allow a matrix-vector product in time linear in the size of the representation. Lastly, we show how to multiply two such structured matrices in time O(n^2 s^(ω−2)).
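The quasiseparable orders can be computed straight from their definition, albeit far more slowly than with the paper's O(n^2 s^(ω−2)) algorithm. A minimal reference sketch, verified on a classical example: the inverse of an invertible tridiagonal matrix has quasiseparable orders (1, 1).

```python
import numpy as np

def quasiseparable_orders(A):
    """Naive computation from the definition: rL (resp. rU) is the maximum
    rank over all maximal submatrices entirely contained in the strictly
    lower (resp. upper) triangular part.  Reference only: each rank costs a
    full SVD, far more than the paper's O(n^2 s^(w-2)) algorithm."""
    n = A.shape[0]
    rL = max(np.linalg.matrix_rank(A[k:, :k]) for k in range(1, n))
    rU = max(np.linalg.matrix_rank(A[:k, k:]) for k in range(1, n))
    return rL, rU

# The inverse of a band (here tridiagonal) matrix is a classical example
# of a quasiseparable matrix of low order.
n = 7
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
rL, rU = quasiseparable_orders(np.linalg.inv(T))
assert (rL, rU) == (1, 1)
```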