A Fast Algorithm for the Inversion of Quasiseparable Vandermonde-like Matrices
Vandermonde-like matrices were introduced as a generalization
of polynomial Vandermonde matrices, and the displacement structure of these
matrices was used to derive an inversion formula. In this paper we first
present a fast Gaussian elimination algorithm for polynomial
Vandermonde-like matrices. We then use this algorithm to derive fast
inversion algorithms of O(n^2) complexity for quasiseparable, semiseparable and well-free
Vandermonde-like matrices. To do so we
identify the structure of the displacement operators in terms of generators and the
recurrence relations (2-term and 3-term) between the columns of the basis
transformation matrices for quasiseparable, semiseparable and well-free
polynomials. Finally, we present an algorithm to compute the
inverse of quasiseparable Vandermonde-like matrices.
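The displacement structure underlying these algorithms can be illustrated on the plain monomial case. The following minimal sketch (not the paper's algorithm, just the classical observation it builds on) checks that an ordinary Vandermonde matrix has displacement rank 1 with respect to a diagonal/shift operator; the nodes are made-up example data.

```python
# V[i][j] = x_i^j; D = diag(x_0, ..., x_{n-1}); S = lower shift matrix
# (ones on the subdiagonal).  The displacement R = D V - V S has only its
# last column nonzero, so rank(R) <= 1.
n = 5
x = [2.0, 3.0, 5.0, 7.0, 11.0]          # distinct nodes (arbitrary example)

V = [[xi ** j for j in range(n)] for xi in x]

# (D V)[i][j] = x_i^{j+1};  (V S)[i][j] = V[i][j+1] for j < n-1, else 0
R = [[x[i] * V[i][j] - (V[i][j + 1] if j + 1 < n else 0.0)
      for j in range(n)] for i in range(n)]

# every column except the last cancels exactly
for i in range(n):
    for j in range(n - 1):
        assert R[i][j] == 0.0
    assert R[i][n - 1] == x[i] ** n
```

For polynomial Vandermonde-like matrices the same idea applies with the shift replaced by the recurrence matrix of the polynomial basis.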
Quasiseparable Hessenberg reduction of real diagonal plus low rank matrices and applications
We present a novel algorithm to perform the Hessenberg reduction of an
n x n matrix of the form A = D + U V^*, where D is diagonal with
real entries and U and V are n x k matrices with k <= n. The
algorithm has a cost of O(n^2 k) arithmetic operations and is based on
quasiseparable matrix technology. Applications are shown to solving polynomial
eigenvalue problems, and some numerical experiments are reported in order to
analyze the stability of the approach.
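The structured reduction itself is not spelled out in the abstract; as a point of reference, here is a dense O(n^3) Householder Hessenberg reduction applied to a diagonal-plus-rank-one matrix (the test data is invented for illustration). The structured algorithm of the paper achieves the same result in O(n^2 k).

```python
import math

def hessenberg(A):
    """Reduce A to upper Hessenberg form by Householder similarity
    transformations (generic dense baseline, not the paper's scheme)."""
    n = len(A)
    H = [row[:] for row in A]
    for k in range(n - 2):
        x = [H[i][k] for i in range(k + 1, n)]
        alpha = math.sqrt(sum(t * t for t in x))
        if alpha == 0.0:
            continue
        v = x[:]
        v[0] += math.copysign(alpha, x[0])      # avoid cancellation
        vnorm = math.sqrt(sum(t * t for t in v))
        v = [t / vnorm for t in v]
        m = len(v)
        # left update: rows k+1..n-1 get (I - 2 v v^T) H
        for j in range(n):
            s = sum(v[i] * H[k + 1 + i][j] for i in range(m))
            for i in range(m):
                H[k + 1 + i][j] -= 2.0 * v[i] * s
        # right update: columns k+1..n-1 get H (I - 2 v v^T)
        for i in range(n):
            s = sum(H[i][k + 1 + j] * v[j] for j in range(m))
            for j in range(m):
                H[i][k + 1 + j] -= 2.0 * s * v[j]
    return H

# example matrix A = D + u v^T (hypothetical data, k = 1)
n = 5
d = [1.0, 2.0, 3.0, 4.0, 5.0]
u = [1.0, -1.0, 2.0, 0.5, 1.5]
v = [0.5, 1.0, -0.5, 2.0, 1.0]
A = [[(d[i] if i == j else 0.0) + u[i] * v[j] for j in range(n)] for i in range(n)]

H = hessenberg(A)
# H is upper Hessenberg and orthogonally similar to A (trace preserved)
assert all(abs(H[i][j]) < 1e-12 for i in range(n) for j in range(n) if i > j + 1)
assert abs(sum(H[i][i] for i in range(n)) - sum(A[i][i] for i in range(n))) < 1e-10
```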
Differential qd algorithm with shifts for rank-structured matrices
Although QR iterations dominate in eigenvalue computations, there are several
important cases when alternative LR-type algorithms may be preferable. In
particular, in the symmetric tridiagonal case, the differential qd algorithm
with shifts (dqds) proposed by Fernando and Parlett often enjoys faster
convergence while preserving high relative accuracy (which is not guaranteed in
the QR algorithm). In eigenvalue computations for rank-structured matrices, the QR
algorithm is also a popular choice since, in the symmetric case, the rank
structure is preserved. In the unsymmetric case, however, the QR algorithm destroys
the rank structure and, hence, LR-type algorithms come into play once again. In
the current paper we derive several variants of qd algorithms for
quasiseparable matrices. Remarkably, one of them, when applied to Hessenberg
matrices, becomes a direct generalization of the dqds algorithm for tridiagonal
matrices. Therefore, it can be applied to such important matrices as companion
and confederate matrices, and provides an alternative algorithm for finding the roots of a
polynomial represented in a basis of orthogonal polynomials. Results of
preliminary numerical experiments are presented.
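For readers unfamiliar with dqds, the following minimal sketch implements one differential qd step with zero shift (dqds with s = 0, i.e. the dqd step) on the classical (q, e) arrays, and iterates it on a 2 x 2 example whose eigenvalues are known in closed form. The example data is invented; shift strategies and deflation are omitted.

```python
import math

def dqd_step(q, e):
    """One differential qd step with zero shift (dqds with s = 0).
    (q, e) encode a positive-definite tridiagonal matrix through its
    LDL^T-like factorization; the step is a similarity in factored form."""
    n = len(q)
    qn, en = [0.0] * n, [0.0] * (n - 1)
    d = q[0]
    for k in range(n - 1):
        qn[k] = d + e[k]
        en[k] = e[k] * (q[k + 1] / qn[k])
        d = q[k + 1] * (d / qn[k])
    qn[n - 1] = d
    return qn, en

# bidiagonal B = [[2, 1], [0, 1]]: q holds the squared diagonal entries,
# e the squared superdiagonal entry; B^T B has eigenvalues 3 +/- sqrt(5)
q, e = [4.0, 1.0], [1.0]
for _ in range(200):
    q, e = dqd_step(q, e)

assert abs(q[0] - (3 + math.sqrt(5))) < 1e-9   # largest eigenvalue
assert abs(q[1] - (3 - math.sqrt(5))) < 1e-9   # smallest eigenvalue
assert e[0] < 1e-12                            # off-diagonal has converged
```

Note that the step never forms the tridiagonal matrix explicitly, which is the source of the high relative accuracy in the factored representation.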
Multilevel quasiseparable matrices in PDE-constrained optimization
Optimization problems with constraints in the form of a partial differential
equation arise frequently in the process of engineering design. The
discretization of PDE-constrained optimization problems results in large-scale
linear systems of saddle-point type. In this paper we propose and develop a
novel approach to solving such systems by exploiting so-called quasiseparable
matrices. One may think of a quasiseparable matrix as a discrete
analog of the Green's function of a one-dimensional differential operator. A nice
feature of such matrices is that almost every algorithm which employs them has
linear complexity. We extend the application of quasiseparable matrices to
problems in higher dimensions. Namely, we construct a class of preconditioners
which can be computed and applied at a linear computational cost. Their use
with appropriate Krylov methods leads to algorithms of nearly linear
complexity.
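The Green's-function analogy can be made concrete: the inverse of the 1D discrete Dirichlet Laplacian is known in closed form and its off-diagonal blocks all have rank 1, i.e. it is semiseparable and hence quasiseparable of order 1. A small self-contained check (the dimension is arbitrary):

```python
# T = tridiag(-1, 2, -1); its inverse is the discrete Green's function
# (T^{-1})_{ij} = min(i, j) * (n + 1 - max(i, j)) / (n + 1)  (1-based indices).
n = 6
T = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
      for j in range(n)] for i in range(n)]
G = [[min(i + 1, j + 1) * (n + 1 - max(i + 1, j + 1)) / (n + 1)
      for j in range(n)] for i in range(n)]

# check that G really is the inverse: T G = I
for i in range(n):
    for j in range(n):
        s = sum(T[i][k] * G[k][j] for k in range(n))
        assert abs(s - (1.0 if i == j else 0.0)) < 1e-12

# every 2x2 minor lying strictly above the diagonal vanishes, so each
# off-diagonal block of G has rank 1
for i1 in range(n):
    for i2 in range(i1 + 1, n):
        for j1 in range(i2 + 1, n):
            for j2 in range(j1 + 1, n):
                m = G[i1][j1] * G[i2][j2] - G[i1][j2] * G[i2][j1]
                assert abs(m) < 1e-12
```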
Solving rank structured Sylvester and Lyapunov equations
We consider the problem of efficiently solving Sylvester and Lyapunov
equations of medium and large scale in the case of rank-structured data, i.e.,
when the coefficient matrices and the right-hand side have low-rank
off-diagonal blocks. This comprises problems with banded data, recently studied
by Haber and Verhaegen in "Sparse solution of the Lyapunov equation for
large-scale interconnected systems", Automatica, 2016, and by Palitta and
Simoncini in "Numerical methods for large-scale Lyapunov equations with
symmetric banded data", SISC, 2018, which often arise in the discretization of
elliptic PDEs.
We show that, under suitable assumptions, the quasiseparable structure is
guaranteed to be numerically present in the solution, and explicit novel
estimates of the numerical rank of the off-diagonal blocks are provided.
Efficient solution schemes that rely on the technology of hierarchical
matrices are described, and several numerical experiments confirm the
applicability and efficiency of the approaches. We develop a MATLAB toolbox
that allows easy replication of the experiments and a ready-to-use interface
for the solvers. The performance of the different approaches is compared, and
we show that the new methods described are efficient on several classes of
relevant problems.
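To fix ideas, here is the textbook dense way of solving a Sylvester equation A X + X B = C by Kronecker vectorization, which costs O((nm)^3) and is exactly what the structured methods above are designed to avoid. The coefficient data is a made-up small example.

```python
def kron(A, B):
    """Kronecker product of dense matrices given as lists of lists."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def solve(M, b):
    """Gaussian elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

# A X + X B = C, vectorized row-wise: (kron(A, I_m) + kron(I_n, B^T)) x = vec(C)
n, m = 3, 2
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]]
B = [[2.0, 1.0], [0.0, 3.0]]
C = [[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]]

I_n = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
I_m = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
Bt = [[B[j][i] for j in range(m)] for i in range(m)]

K1 = kron(A, I_m)
K2 = kron(I_n, Bt)
M = [[K1[i][j] + K2[i][j] for j in range(n * m)] for i in range(n * m)]
x = solve(M, [C[i][j] for i in range(n) for j in range(m)])
X = [[x[i * m + j] for j in range(m)] for i in range(n)]

# residual check: A X + X B - C should vanish
for i in range(n):
    for j in range(m):
        r = (sum(A[i][k] * X[k][j] for k in range(n))
             + sum(X[i][k] * B[k][j] for k in range(m)) - C[i][j])
        assert abs(r) < 1e-10
```

The structured solvers keep X in compressed (quasiseparable or hierarchical) form throughout, which is what makes the large-scale case tractable.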
Computing with quasiseparable matrices
The class of quasiseparable matrices is defined by a pair of bounds, called the quasiseparable orders, on the ranks of the maximal sub-matrices entirely located in their strictly lower and upper triangular parts. These arise naturally in applications, e.g. as inverses of band matrices, and are widely used because they admit structured representations allowing one to compute with them in time linear in the dimension and quadratic in the quasiseparable order. We show, in this paper, the connection between the notion of quasiseparability and the rank profile matrix invariant presented in [Dumas & al. ISSAC'15]. This allows us to propose an algorithm computing the quasiseparable orders (rL, rU) in time O(n^2 s^(ω−2)), where s = max(rL, rU) and ω is the exponent of matrix multiplication. We then present two new structured representations, a binary tree of PLUQ decompositions and the Bruhat generator, using respectively O(ns log(n/s)) and O(ns) field elements instead of O(ns^2) for the previously known generators. We present algorithms computing these representations in time O(n^2 s^(ω−2)). These representations allow a matrix-vector product in time linear in the size of the representation. Lastly, we show how to multiply two such structured matrices in time O(n^2 s^(ω−2)).
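The "previously known" O(ns^2) generator representation referred to above is, for order s = 1, a set of scalar sequences from which the whole matrix and a linear-time matrix-vector product are recovered. A minimal sketch with random generators (the generator naming p, q, a / g, h, b follows the common convention, not this paper's notation):

```python
import random

n = 7
random.seed(1)
rnd = lambda: random.uniform(-1, 1)
d = [rnd() for _ in range(n)]                                   # diagonal
p, q, a = ([rnd() for _ in range(n)] for _ in range(3))         # lower part
g, h, b = ([rnd() for _ in range(n)] for _ in range(3))         # upper part

def prod(seq):
    r = 1.0
    for t in seq:
        r *= t
    return r

# dense order-1 quasiseparable matrix built from the generators:
# A[i][j] = p_i a_{i-1} ... a_{j+1} q_j (i > j),  d_i (i = j),
#           g_i b_{i+1} ... b_{j-1} h_j (i < j)
A = [[d[i] if i == j
      else p[i] * prod(a[j + 1:i]) * q[j] if i > j
      else g[i] * prod(b[i + 1:j]) * h[j]
      for j in range(n)] for i in range(n)]

def qs_matvec(x):
    """O(n) matrix-vector product using the generator recurrences."""
    y = [d[i] * x[i] for i in range(n)]
    z = q[0] * x[0]                        # running lower-triangular state
    y[1] += p[1] * z
    for i in range(2, n):
        z = a[i - 1] * z + q[i - 1] * x[i - 1]
        y[i] += p[i] * z
    w = h[n - 1] * x[n - 1]                # running upper-triangular state
    y[n - 2] += g[n - 2] * w
    for i in range(n - 3, -1, -1):
        w = b[i + 1] * w + h[i + 1] * x[i + 1]
        y[i] += g[i] * w
    return y

x = [rnd() for _ in range(n)]
dense = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
fast = qs_matvec(x)
assert all(abs(dense[i] - fast[i]) < 1e-12 for i in range(n))
```

The tree-of-PLUQ and Bruhat representations of the paper compress the same information further, to O(ns log(n/s)) and O(ns) field elements respectively.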
Fast Hessenberg reduction of some rank structured matrices
We develop two fast algorithms for Hessenberg reduction of a structured
matrix A = D + U V^*, where D is a real or unitary diagonal
matrix and U and V are n x k matrices. The proposed algorithm for the
real case exploits a two-stage approach by first reducing the matrix to a
generalized Hessenberg form and then completing the reduction by annihilation
of the unwanted sub-diagonals. It is shown that the novel method requires
O(n^2 k) arithmetic operations and is significantly faster than other
reduction algorithms for rank-structured matrices. The method is then extended
to the unitary plus low rank case by using a block analogue of the CMV form of
unitary matrices. It is shown that a block Lanczos-type procedure for the block
tridiagonalization of D induces a structured reduction of A into a block
staircase CMV-type shape. Then we present a numerically stable method for
performing this reduction using unitary transformations, and we show how to
generalize the sub-diagonal elimination to this shape while still being able
to provide a condensed representation for the reduced matrix. In this way the
complexity still remains linear in n and, moreover, the resulting algorithm
can be adapted to deal efficiently with block companion matrices.
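The companion matrix is the standard example of the unitary-plus-low-rank structure targeted here: it is a cyclic shift (a unitary matrix) plus a rank-one correction. A small check on the polynomial p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3):

```python
n = 3
# row-companion form of p: identity shift structure with the negated
# coefficients [-a_0, -a_1, -a_2] in the last row
C = [[0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [6.0, -11.0, 6.0]]

# cyclic (unitary) shift Z; C = Z + e_3 w^T is unitary plus rank one
Z = [[0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [1.0, 0.0, 0.0]]
w = [C[2][j] - Z[2][j] for j in range(n)]
for i in range(n):
    for j in range(n):
        assert C[i][j] == Z[i][j] + (w[j] if i == n - 1 else 0.0)

# the eigenvalues of C are the roots of p: C v = r v for v = (1, r, r^2)
for r in (1.0, 2.0, 3.0):
    v = [1.0, r, r * r]
    Cv = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
    assert all(abs(Cv[i] - r * v[i]) < 1e-12 for i in range(n))
```

For a block companion matrix of a matrix polynomial the same decomposition holds with a block shift and a rank-k correction, which is why the algorithm's linear-in-n complexity carries over.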