Numerical methods for large-scale Lyapunov equations with symmetric banded data
The numerical solution of large-scale Lyapunov matrix equations with
symmetric banded data has so far received little attention in the rich
literature on Lyapunov equations. We aim to contribute to this open problem by
introducing two efficient solution methods, which respectively address the
cases of well-conditioned and ill-conditioned coefficient matrices. The
proposed approaches conveniently exploit the possibly hidden structure of the
solution matrix so as to deliver memory- and computation-saving approximate
solutions. Numerical experiments are reported to illustrate the potential of
the described methods.
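For reference, the equation class addressed above is the continuous Lyapunov equation A X + X A^T = Q with A symmetric and banded. A minimal dense sketch using SciPy's general-purpose solver (this is not the paper's large-scale method; the tridiagonal A, the rank-one right-hand side, and all sizes are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 200
# Symmetric, banded (tridiagonal), negative-definite coefficient matrix:
A = -(2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
b = np.random.default_rng(0).standard_normal((n, 1))
Q = -b @ b.T  # rank-one right-hand side, typical of large-scale settings

# Dense O(n^3) reference solver for A X + X A^T = Q:
X = solve_continuous_lyapunov(A, Q)
res = np.linalg.norm(A @ X + X @ A.T - Q) / np.linalg.norm(Q)

# The solution's singular values decay rapidly -- the kind of hidden
# structure that memory-saving approximate solutions can exploit:
s = np.linalg.svd(X, compute_uv=False)
```

With a low-rank right-hand side, `s` falls off sharply, so X admits an accurate low-rank factorization even though X itself is dense.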
Residual, restarting and Richardson iteration for the matrix exponential
A well-known problem in computing some matrix functions iteratively is the lack of a clear, commonly accepted notion of a residual. An important matrix function for which this is the case is the matrix exponential. Assume that the matrix exponential of a given matrix times a given vector has to be computed. We interpret the sought-after vector as the value of a vector function satisfying a linear system of ordinary differential equations (ODEs) whose coefficient matrix is the given matrix. The residual is then defined with respect to the initial-value problem for this ODE system. The residual introduced in this way can be seen as a backward error. We show how the residual can be computed efficiently within several iterative methods for the matrix exponential. This completely resolves the question of reliable stopping criteria for these methods. Furthermore, we show that the residual concept can be used to construct new residual-based iterative methods. In particular, a variant of the Richardson method for the new residual appears to provide an efficient way to restart Krylov subspace methods for evaluating the matrix exponential.
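Under this interpretation, y'(t) = A y(t) with y(0) = v, and the residual of an approximation y_m(t) to exp(tA)v is r_m(t) = A y_m(t) - y_m'(t). For an Arnoldi (Krylov) approximation, the Arnoldi relation makes the residual norm available essentially for free from the small projected problem. A minimal sketch (the test matrix, sizes, and t are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from scipy.linalg import expm

def arnoldi(A, v, m):
    """Arnoldi relation: A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
n, m, t = 500, 15, 1.0
A = -np.diag(np.linspace(1.0, 1000.0, n))  # stable test matrix (assumed)
v = rng.standard_normal(n)
beta = np.linalg.norm(v)

V, H = arnoldi(A, v, m)
E = expm(t * H[:m, :m])
y_m = beta * V[:, :m] @ E[:, 0]  # Krylov approximation of exp(tA) v

# Residual r_m(t) = A y_m(t) - y_m'(t); by the Arnoldi relation its norm
# is beta * h_{m+1,m} * |e_m^T exp(tH_m) e_1| -- no extra products with A:
res_norm = beta * H[m, m - 1] * abs(E[m - 1, 0])

# Direct evaluation for comparison (y_m'(t) = beta V_m H_m exp(tH_m) e_1):
direct = np.linalg.norm(A @ y_m - beta * V[:, :m] @ (H[:m, :m] @ E[:, 0]))
```

Monitoring `res_norm` as m grows gives the kind of reliable stopping criterion described above: iterate until the residual norm drops below a tolerance.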
A posteriori error bounds for the block-Lanczos method for matrix function approximation
We extend the error bounds from [SIMAX, Vol. 43, Iss. 2, pp. 787-811 (2022)]
for the Lanczos method for matrix function approximation to the block
algorithm. Numerical experiments suggest that our bounds are fairly robust to
changing block size and have the potential for use as a practical stopping
criterion. Further experiments work toward a better understanding of how
certain hyperparameters should be chosen in order to maximize the quality of
the error bounds, even in the previously studied block-size-one case.
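For context, block Lanczos runs the Lanczos recurrence on a block of starting vectors at once, producing a block-tridiagonal projection from which f(A)B is approximated. A generic sketch for f = exp with block size p (the paper's a posteriori bounds are not reproduced here; the test matrix, sizes, and full reorthogonalization are assumptions):

```python
import numpy as np
from scipy.linalg import expm

def block_lanczos(A, B, k):
    """Block Lanczos for symmetric A:
    A Q_j = Q_{j-1} B_{j-1}^T + Q_j A_j + Q_{j+1} B_j.
    Returns V = [Q_1 ... Q_k], block-tridiagonal T, and R0 with B = Q_1 R0."""
    n, p = B.shape
    Q1, R0 = np.linalg.qr(B)
    Vs = [Q1]
    T = np.zeros((k * p, k * p))
    Qprev, Bprev = np.zeros((n, p)), np.zeros((p, p))
    for j in range(k):
        W = A @ Vs[j] - Qprev @ Bprev.T
        Aj = Vs[j].T @ W
        Aj = (Aj + Aj.T) / 2           # enforce symmetry of diagonal blocks
        W -= Vs[j] @ Aj
        Vall = np.hstack(Vs)
        W -= Vall @ (Vall.T @ W)       # full reorthogonalization (assumed)
        Qn, Bj = np.linalg.qr(W)
        T[j*p:(j+1)*p, j*p:(j+1)*p] = Aj
        if j < k - 1:
            T[(j+1)*p:(j+2)*p, j*p:(j+1)*p] = Bj
            T[j*p:(j+1)*p, (j+1)*p:(j+2)*p] = Bj.T
        Qprev, Bprev = Vs[j], Bj
        Vs.append(Qn)
    return np.hstack(Vs[:k]), T, R0

rng = np.random.default_rng(2)
n, p, k = 300, 3, 25
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(-np.linspace(0.1, 10.0, n)) @ U.T  # symmetric test matrix
B = rng.standard_normal((n, p))

V, T, R0 = block_lanczos(A, B, k)
E1 = np.zeros((k * p, p)); E1[:p] = np.eye(p)
F = V @ expm(T) @ E1 @ R0   # block-Lanczos approximation of exp(A) B
err = np.linalg.norm(F - expm(A) @ B) / np.linalg.norm(expm(A) @ B)
```

In practice the true error `err` is unavailable; a posteriori bounds of the kind studied above aim to provide a computable surrogate that can serve as a stopping criterion.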