53 research outputs found
Krylov subspace methods for solving large Lyapunov equations
Factorized solution of generalized stable Sylvester equations using many-core GPU accelerators
We investigate the factorized solution of generalized stable Sylvester equations such as those arising in model reduction, image restoration, and observer design. Our algorithms, based on the matrix sign function, take advantage of the current trend to integrate high-performance graphics accelerators (GPUs) in computer systems. As a result, our realisations provide a valuable tool to solve large-scale problems on a variety of platforms. We acknowledge support of the ANII - MPG Independent Research Group "Efficient Heterogeneous Computing" at UdelaR, a partner group of the Max Planck Institute in Magdeburg. Benner, P.; Dufrechou, E.; Ezzatti, P.; Gallardo, R.; Quintana-Ortí, E. S. (2021). Factorized solution of generalized stable Sylvester equations using many-core GPU accelerators. The Journal of Supercomputing (Online). 77(9):10152-19164. https://doi.org/10.1007/s11227-021-03658-y
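The sign-function approach mentioned in this abstract can be sketched serially (without the paper's GPU offloading) for the standard stable Sylvester equation AX + XB = C: if A and B are c-stable, then sign([[A, -C], [0, -B]]) = [[-I, 2X], [0, I]], and the sign can be computed by the plain Newton iteration. The function name and test problem below are illustrative, not the authors' implementation:

```python
import numpy as np

def sylvester_sign(A, B, C, tol=1e-12, max_iter=100):
    """Solve AX + XB = C for c-stable A and B (all eigenvalues in the
    open left half-plane) via the matrix sign function: for
    Z = [[A, -C], [0, -B]] one has sign(Z) = [[-I, 2X], [0, I]]."""
    n, m = A.shape[0], B.shape[0]
    Z = np.block([[A, -C], [np.zeros((m, n)), -B]])
    for _ in range(max_iter):
        Z_new = 0.5 * (Z + np.linalg.inv(Z))    # Newton iteration for sign(Z)
        if np.linalg.norm(Z_new - Z, 1) <= tol * np.linalg.norm(Z_new, 1):
            Z = Z_new
            break
        Z = Z_new
    return 0.5 * Z[:n, n:]                      # top-right block of sign(Z) is 2X

rng = np.random.default_rng(0)
n, m = 5, 4
A = rng.standard_normal((n, n)) - 3 * n * np.eye(n)   # shifted left: c-stable
B = rng.standard_normal((m, m)) - 3 * m * np.eye(m)
C = rng.standard_normal((n, m))
X = sylvester_sign(A, B, C)
print(np.linalg.norm(A @ X + X @ B - C))  # small residual
```

The iteration is rich in matrix inversions and multiplications, which is exactly why it maps well onto GPUs in the paper's setting.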
A parallel Schur method for solving continuous-time algebraic Riccati equations
Numerical algorithms for solving the continuous-time algebraic Riccati matrix equation on a distributed memory parallel computer are considered. In particular, it is shown that the Schur method, based on computing the stable invariant subspace of a Hamiltonian matrix, can be parallelized in an efficient and scalable way. Our implementation employs the state-of-the-art library ScaLAPACK as well as recently developed parallel methods for reordering the eigenvalues in a real Schur form. Some experimental results are presented, confirming the scalability of our implementation and comparing it with an existing implementation of the matrix sign iteration from the PLiCOC library.
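The Schur method described here can be sketched in serial form with SciPy's ordered real Schur decomposition (the parallel ScaLAPACK machinery is the paper's contribution and is not reproduced); the function name and the random, generically stabilizable test problem are illustrative:

```python
import numpy as np
from scipy.linalg import schur

def care_schur(A, B, Q, R):
    """Solve the CARE  A'X + XA - X B R^{-1} B' X + Q = 0  by computing
    the stable invariant subspace of the associated Hamiltonian matrix."""
    n = A.shape[0]
    G = B @ np.linalg.solve(R, B.T)
    H = np.block([[A, -G], [-Q, -A.T]])
    T, Z, sdim = schur(H, sort='lhp')   # stable eigenvalues ordered first
    U1, U2 = Z[:n, :n], Z[n:, :n]       # basis [U1; U2] of the stable subspace
    return U2 @ np.linalg.inv(U1)       # X = U2 U1^{-1}

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
Q, R = np.eye(n), np.eye(1)
X = care_schur(A, B, Q, R)
res = A.T @ X + X @ A - X @ B @ np.linalg.solve(R, B.T) @ X + Q
print(np.linalg.norm(res))  # small residual
```

The eigenvalue-reordering step (`sort='lhp'`) is precisely the kernel whose parallel variant the abstract refers to.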
Krylov methods for large-scale modern problems in numerical linear algebra
Large-scale problems have attracted much attention in recent decades, since
they arise from applications in many different fields. Moreover, the matrices
involved in these problems are often sparse, that is, the majority of their entries
are zero. Around 40 years ago, the most common problems related to large-scale
sparse matrices consisted of solving linear systems, finding eigenvalues and/or eigenvectors,
solving least-squares problems, or computing singular value decompositions.
In recent years, however, large-scale sparse problems of different natures have
appeared, motivating and challenging numerical linear algebra to develop effective
and efficient algorithms to solve them.
Common difficulties in developing algorithms for modern large-scale problems
are related to computational cost, storage, and CPU time: the large size of the
matrices means that direct methods cannot be used. This suggests that projection
methods based on Krylov subspaces are a good option for developing procedures
to solve large-scale sparse modern problems.
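The projection methods referred to here all build on a Krylov-subspace basis; as background, a minimal textbook Arnoldi iteration (not code from the thesis) that constructs such a basis might look as follows:

```python
import numpy as np

def arnoldi(A, b, k):
    """Build an orthonormal basis V of the Krylov subspace
    K_{k+1}(A, b) = span{b, Ab, ..., A^k b} together with the
    (k+1) x k Hessenberg matrix H satisfying A V[:, :k] = V @ H."""
    n = b.size
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]   # assumes no breakdown (H[j+1,j] != 0)
    return V, H

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)
V, H = arnoldi(A, b, 10)
```

Projecting the original problem onto the small subspace spanned by V is what makes these methods affordable for large sparse matrices.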
In this PhD thesis we develop novel algorithms for solving two
large-scale modern problems in numerical linear algebra: first, we introduce the
R-CORK method for solving rational eigenvalue problems and, second, we present
projection methods to compute the solution of T-Sylvester matrix equations, both
based on Krylov subspaces.
The R-CORK method is an extension of the compact rational Krylov method
(CORK) [104], which was introduced to solve a family of nonlinear eigenvalue problems
that can be expressed and linearized in certain particular ways, a family that includes
arbitrary polynomial eigenvalue problems but not arbitrary rational eigenvalue problems.
The R-CORK method exploits the structure of the linearized problem by representing
the Krylov vectors in a compact form in order to reduce the storage cost,
resulting in a method with two levels of orthogonalization: the first level
works with vectors of the same size as the original problem, and the
second level works with vectors much smaller than the original problem. Since
vectors of the size of the linearization are never stored or orthogonalized, R-CORK
is more efficient in terms of memory and orthogonalization costs than
the classical rational Krylov method applied to the linearization. Moreover, since
the R-CORK method is based on a classical rational Krylov method, implicit
restarting is possible, and we present an efficient way to implement it that
preserves the compact representation of the Krylov vectors.
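For reference, the classical rational Krylov iteration that R-CORK builds on can be sketched for a standard eigenvalue problem; this is a textbook shift-and-invert sketch, and the shifts and diagonal test matrix are illustrative only:

```python
import numpy as np

def rational_krylov(A, b, shifts):
    """One-vector rational Krylov: at step j, expand the search space with
    (A - s_j I)^{-1} v_j and orthogonalize against the previous basis.
    Eigenvalues of A near the shifts are then recovered by Rayleigh-Ritz."""
    n = b.size
    V = np.zeros((n, len(shifts) + 1))
    V[:, 0] = b / np.linalg.norm(b)
    for j, s in enumerate(shifts):
        w = np.linalg.solve(A - s * np.eye(n), V[:, j])  # shift-and-invert step
        for i in range(j + 1):                           # Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return np.linalg.eigvals(V.T @ A @ V)                # Ritz values

A = np.diag(np.arange(1.0, 11.0))        # spectrum 1, 2, ..., 10
b = np.ones(10)
ritz = rational_krylov(A, b, shifts=[0.9, 1.1, 0.99])
# the Ritz values include an accurate approximation to the eigenvalue 1
```

R-CORK applies this iteration to a linearized rational eigenproblem while storing the basis V in the compact two-level form described above.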
We also introduce in this dissertation projection methods for solving the T-Sylvester
equation, which has recently attracted considerable attention as a consequence
of its close relation to palindromic eigenvalue problems and other applications.
The theory of T-Sylvester equations is rather well understood and, before the
work in this thesis, there were stable and efficient numerical algorithms
to solve these matrix equations for small- to medium-sized matrices. However,
developing numerical algorithms for solving large-scale T-Sylvester equations was a
completely open problem. In this thesis, we introduce several projection methods
based on block Krylov subspaces and extended block Krylov subspaces for solving
the T-Sylvester equation when the right-hand side is a low-rank matrix. We also offer
an intuition on the expected convergence of the algorithm based on block Krylov
subspaces and clear guidance on which algorithm is the most convenient in
each situation.
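For small matrices, the T-Sylvester equation AX + XᵀB = C can be solved directly by Kronecker vectorization, which is useful as a reference solution when testing large-scale methods. This dense O(n⁶) sketch is illustrative and is not one of the thesis algorithms; unique solvability of the random instance is generic, not guaranteed:

```python
import numpy as np

def solve_tsylvester(A, B, C):
    """Solve the T-Sylvester equation A X + X^T B = C by Kronecker
    vectorization: with column-major vec, vec(AX) = kron(I, A) vec(X) and
    vec(X^T B) = kron(B^T, I) K vec(X), where K is the commutation matrix."""
    n = A.shape[0]
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[j + n * i, i + n * j] = 1.0   # K vec(X) = vec(X^T)
    M = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n)) @ K
    x = np.linalg.solve(M, C.flatten(order='F'))
    return x.reshape((n, n), order='F')

rng = np.random.default_rng(3)
n = 6
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
X = solve_tsylvester(A, B, C)
print(np.linalg.norm(A @ X + X.T @ B - C))  # small residual
```

The n²-by-n² system makes the cost explosion obvious, which is exactly why the large-scale case required the projection methods developed in the thesis.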
All the algorithms presented in this thesis have been extensively tested, and the
reported numerical results show that they perform satisfactorily in practice.
Partial support was also received from the research projects “Structured Numerical Linear Algebra: Matrix
Polynomials, Special Matrices, and Conditioning” (Ministerio de Economía y Competitividad de España, project
MTM2012-32542) and “Structured Numerical Linear Algebra for Constant, Polynomial and Rational Matrices” (Ministerio de Economía y Competitividad de España,
project MTM2015-65798-P); the principal investigator of both projects was Froilán Martínez
Dopico. Doctoral programme: Programa Oficial de Doctorado en Ingeniería Matemática. Thesis committee: chair, José Mas Marí; secretary, Fernando de Terán Vergara; member, José Enrique Román Molt
The Dynamical Functional Particle Method for Multi-Term Linear Matrix Equations
Recent years have seen a renewal of interest in multi-term linear matrix equations, as these have come to play a role in a number of important applications. Here, we consider the solution of such equations by means of the dynamical functional particle method, an iterative technique that relies on the numerical integration of a damped second-order dynamical system. We develop a new algorithm for the solution of a large class of these equations, a class that includes, among others, all linear matrix equations with Hermitian positive definite or negative definite coefficients. In numerical experiments, our MATLAB implementation outperforms existing methods for the solution of multi-term Sylvester equations. For the Sylvester equation AX + XB = C, in particular, it can be faster and more accurate than the built-in implementation of the Bartels–Stewart algorithm, when A and B are well conditioned and have very different sizes.
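The damped second-order dynamics behind the method can be sketched for the Sylvester equation AX + XB = C with symmetric positive definite A and B; the conservative step size and damping below are one simple stable choice (not necessarily the paper's tuned parameters), and the function name is illustrative:

```python
import numpy as np

def dfpm_sylvester(A, B, C, n_iter=500):
    """Solve AX + XB = C for symmetric positive definite A, B with the
    dynamical-functional-particle idea: integrate the damped system
    X'' + eta*X' = C - A@X - X@B with symplectic Euler; the dynamics
    settle at the stationary point, where AX + XB = C."""
    lo = np.linalg.eigvalsh(A)[0] + np.linalg.eigvalsh(B)[0]
    hi = np.linalg.eigvalsh(A)[-1] + np.linalg.eigvalsh(B)[-1]
    dt = 1.0 / np.sqrt(hi)       # conservative step size (stability)
    eta = np.sqrt(lo)            # damping tied to the slowest mode
    X = np.zeros_like(C)
    V = np.zeros_like(C)         # "velocity" X'
    for _ in range(n_iter):
        F = C - A @ X - X @ B    # the residual acts as the driving force
        V = (1.0 - dt * eta) * V + dt * F
        X = X + dt * V
    return X

rng = np.random.default_rng(4)
n = 8
M = rng.standard_normal((n, n)); A = M @ M.T / n + np.eye(n)
N = rng.standard_normal((n, n)); B = N @ N.T / n + np.eye(n)
C = rng.standard_normal((n, n))
X = dfpm_sylvester(A, B, C)
print(np.linalg.norm(A @ X + X @ B - C))  # small residual
```

Each step costs only matrix products with the coefficients, which is what lets the approach scale to multi-term equations where direct solvers do not apply.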
The automatic solution of partial differential equations using a global spectral method
A spectral method for solving linear partial differential equations (PDEs)
with variable coefficients and general boundary conditions defined on
rectangular domains is described, based on separable representations of partial
differential operators and the one-dimensional ultraspherical spectral method.
If a partial differential operator is of splitting rank 2, such as the
operator associated with Poisson or Helmholtz, the corresponding PDE is solved
via a generalized Sylvester matrix equation, and a bivariate polynomial
approximation of the solution of degree (n_x, n_y) is computed in
O((n_x n_y)^{3/2}) operations. Partial differential operators of
splitting rank >= 3 are solved via a linear system involving a block-banded
matrix in O(min(n_x^3 n_y, n_x n_y^3)) operations. Numerical
examples demonstrate the applicability of our 2D spectral method to a broad
class of PDEs, which includes elliptic and dispersive time-evolution equations.
The resulting PDE solver is written in MATLAB and is publicly available as part
of CHEBFUN. It can resolve solutions requiring over a million degrees of
freedom in under one minute. An experimental implementation in the Julia
language can currently perform the same solve in a few seconds.
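The "PDE as a Sylvester matrix equation" idea can be illustrated with a much cruder discretization than the ultraspherical method: a second-order finite-difference Poisson solve on the unit square, where the discrete problem D₂U + UD₂ᵀ = F is handed to SciPy's Bartels–Stewart solver (the grid size and test solution are illustrative):

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Second-order finite differences for u_xx + u_yy = f on (0,1)^2 with
# homogeneous Dirichlet conditions; n interior points per direction.
n = 40
h = 1.0 / (n + 1)
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2          # 1D discrete Laplacian

x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing='ij')
U_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
F = -2.0 * np.pi**2 * U_exact                        # f = Laplacian of u_exact

# The discrete Poisson problem is exactly a Sylvester matrix equation:
U = solve_sylvester(D2, D2.T, F)                     # D2 U + U D2^T = F
print(np.abs(U - U_exact).max())                     # O(h^2) error
```

The ultraspherical method of the abstract replaces D₂ by well-conditioned banded spectral operators, but the Sylvester structure of the 2D problem is the same.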
Fast linear algebra is stable
In an earlier paper, we showed that a large class of fast recursive matrix
multiplication algorithms is stable in a normwise sense, and that in fact if
multiplication of n-by-n matrices can be done by any algorithm in O(n^ω)
operations for any ω > 2, then it can be done
stably in O(n^{ω+η}) operations for any η > 0. Here we extend
this result to show that essentially all standard linear algebra operations,
including LU decomposition, QR decomposition, linear equation solving, matrix
inversion, solving least squares problems, (generalized) eigenvalue problems
and the singular value decomposition can also be done stably (in a normwise
sense) in O(n^{ω+η}) operations. (26 pages; final version; to appear in Numerische Mathematik.)
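Strassen's algorithm is the classic example of the fast recursive matrix multiplication this abstract refers to (exponent ≈ 2.807 rather than 3); a minimal sketch for power-of-two sizes, with an illustrative recursion cutoff:

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's fast recursive matrix multiplication for n-by-n matrices
    with n a power of two; falls back to ordinary multiplication below the
    cutoff. Uses 7 (not 8) half-size products per level."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(5)
A = rng.standard_normal((128, 128))
B = rng.standard_normal((128, 128))
C = strassen(A, B)
```

The paper's point is that the small normwise error such recursions incur does not spoil the stability of the higher-level factorizations built on top of them.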