
    Memory-efficient Krylov subspace techniques for solving large-scale Lyapunov equations

    This paper considers the solution of large-scale Lyapunov matrix equations of the form AX + XA^T = -bb^T. The Arnoldi method is a simple but sometimes ineffective approach to such equations. One of its major drawbacks is excessive memory consumption caused by slow convergence. To overcome this disadvantage, we propose two-pass Krylov subspace methods, which compute only the solution of the compressed equation in the first pass. The second pass computes the product of the Krylov subspace basis with a low-rank approximation of this solution. For symmetric A, we employ the Lanczos method; for nonsymmetric A, we extend a recently developed restarted Arnoldi method for the approximation of matrix functions. Preliminary numerical experiments reveal that the resulting algorithms require significantly less memory at the expense of extra matrix-vector products.
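
    To make the two-pass idea concrete for the symmetric case, the sketch below (Python/NumPy; a minimal illustration, not the authors' implementation: the function name two_pass_lanczos_lyapunov, the fixed subspace dimension m, and the truncation tolerance are hypothetical, and reorthogonalization and breakdown handling are omitted) stores only the tridiagonal matrix in the first Lanczos pass, solves and truncates the compressed Lyapunov equation, and then regenerates the basis vectors one at a time in the second pass to accumulate the low-rank solution factor.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eigh

def two_pass_lanczos_lyapunov(A, b, m, tol=1e-10):
    """Two-pass Lanczos sketch for A X + X A^T = -b b^T with A symmetric.

    Assumes A is symmetric negative definite (stable), so the projected
    solution is positive semidefinite.  Pass 1 keeps only the tridiagonal
    matrix T_m; the compressed equation T_m Y + Y T_m^T = -beta0^2 e1 e1^T
    is solved and truncated to Y ~= Z Z^T.  Pass 2 regenerates the Lanczos
    vectors and accumulates L = V_m Z, so X ~= L L^T without storing V_m.
    """
    n = b.size
    beta0 = np.linalg.norm(b)

    def lanczos_step(v, v_old, beta):
        # one step of the three-term recurrence (no reorthogonalization)
        w = A @ v - beta * v_old
        alpha = v @ w
        w = w - alpha * v
        return alpha, np.linalg.norm(w), w

    # ---- pass 1: build only the tridiagonal matrix ----
    alphas, betas = [], []
    v, v_old, beta = b / beta0, np.zeros(n), 0.0
    for _ in range(m):
        alpha, beta_new, w = lanczos_step(v, v_old, beta)
        alphas.append(alpha)
        betas.append(beta_new)
        v_old, v, beta = v, w / beta_new, beta_new
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)

    # compressed Lyapunov equation T Y + Y T^T = -beta0^2 e1 e1^T
    rhs = np.zeros((m, m))
    rhs[0, 0] = -beta0 ** 2
    Y = solve_continuous_lyapunov(T, rhs)

    # low-rank truncation Y ~= Z Z^T (keep dominant eigenpairs)
    evals, evecs = eigh(Y)
    keep = evals > tol * evals.max()
    Z = evecs[:, keep] * np.sqrt(evals[keep])

    # ---- pass 2: regenerate the basis and accumulate L = V_m Z ----
    L = np.zeros((n, Z.shape[1]))
    v, v_old, beta = b / beta0, np.zeros(n), 0.0
    for j in range(m):
        L += np.outer(v, Z[j])
        if j + 1 < m:
            alpha, beta_new, w = lanczos_step(v, v_old, beta)
            v_old, v, beta = v, w / beta_new, beta_new
    return L  # X ~= L @ L.T
```

    With this organization only a few length-n vectors and the final n-by-rank factor are kept in memory at any time, which is the trade of memory for extra matrix-vector products described above.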

    Compress-and-Restart Block Krylov Subspace Methods for Sylvester Matrix Equations

    Block Krylov subspace methods (KSMs) are building blocks of many state-of-the-art solvers for large-scale matrix equations as they arise, for example, from the discretization of partial differential equations. While extended and rational block Krylov subspace methods provide a major reduction in iteration counts over polynomial block KSMs, they also require reliable solvers for the coefficient matrices, and these solvers are often iterative methods themselves. It is not hard to devise scenarios in which the available memory, and consequently the dimension of the Krylov subspace, is limited. In such scenarios for linear systems and eigenvalue problems, restarting is a well-explored technique for mitigating memory constraints. In this work, such restarting techniques are applied to polynomial KSMs for matrix equations with a compression step to control the growing rank of the residual. An error analysis is also performed, leading to heuristics for dynamically adjusting the basis size in each restart cycle. A panel of numerical experiments demonstrates the effectiveness of the new method with respect to extended block KSMs.
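
    The compression step that keeps the residual rank bounded between restart cycles can be illustrated generically by a QR-plus-SVD truncation of a factored residual R ≈ U V^T (a sketch under that assumption only; the paper's actual compression strategy, tolerances, and block bookkeeping may differ):

```python
import numpy as np

def compress_low_rank(U, V, tol=1e-8):
    """Compress R = U @ V.T to a lower-rank factorization.

    Thin QR factorizations of both factors followed by an SVD of the small
    core matrix discard singular values below tol * sigma_max, so the storage
    for the residual stays bounded from one restart cycle to the next.
    """
    Qu, Ru = np.linalg.qr(U)                 # U = Qu @ Ru, orthonormal Qu
    Qv, Rv = np.linalg.qr(V)                 # V = Qv @ Rv, orthonormal Qv
    Us, s, Vts = np.linalg.svd(Ru @ Rv.T, full_matrices=False)
    keep = s > tol * s[0]                    # drop negligible singular values
    U_new = Qu @ (Us[:, keep] * s[keep])     # absorb singular values on the left
    V_new = Qv @ Vts[keep].T
    return U_new, V_new                      # R ~= U_new @ V_new.T
```

    A restart cycle would then rebuild the block Krylov space from the compressed factors, so the basis dimension stays within the available memory.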

    Métodos numéricos para resolução de equações de Lyapunov (Numerical methods for solving Lyapunov equations)

    The aim of this dissertation is to describe, analyze, and apply some numerical methods for solving the classical Lyapunov equation. We study conditions that guarantee the solvability of the equations and establish relations between the continuous form AX + XA* + Q = 0 and the discrete form AXA* - X + Q = 0. The Kronecker product is used to obtain representations of the matrix equations and to develop some of the numerical methods. We analyze several matrix decompositions that are then used to develop direct numerical methods, namely Bartels-Stewart and Hessenberg-Schur. Finally, Krylov subspaces and some orthogonalization processes allow the development of the iterative Arnoldi and GMRES methods and the direct methods of Ward and Kirrinnis.
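
    The Kronecker-product representation mentioned above turns the matrix equation into an ordinary linear system of dimension n². A small self-contained sketch for real A (so A* = A^T), using a naive dense solve that is feasible only for modest n and is not one of the dissertation's structured direct methods:

```python
import numpy as np

# Continuous Lyapunov equation A X + X A^T + Q = 0 via vectorization:
# with column-major vec(.), vec(A X) = (I kron A) vec(X) and
# vec(X A^T) = (A kron I) vec(X), so (I kron A + A kron I) vec(X) = -vec(Q).
rng = np.random.default_rng(0)
n = 4
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # a stable test matrix
Q = np.eye(n)

I = np.eye(n)
K = np.kron(I, A) + np.kron(A, I)                    # n^2 x n^2 coefficient matrix
x = np.linalg.solve(K, -Q.flatten(order="F"))        # column-major vec(Q)
X = x.reshape((n, n), order="F")

print(np.linalg.norm(A @ X + X @ A.T + Q))           # residual near machine precision
```

    Direct methods such as Bartels-Stewart avoid forming this n²-by-n² system by first reducing A to Schur (or, in the Hessenberg-Schur variant, Hessenberg) form and then solving for X column by column.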