
    Solving rank structured Sylvester and Lyapunov equations

    We consider the problem of efficiently solving Sylvester and Lyapunov equations of medium and large scale in the case of rank-structured data, i.e., when the coefficient matrices and the right-hand side have low-rank off-diagonal blocks. This comprises problems with banded data, recently studied by Haber and Verhaegen in "Sparse solution of the Lyapunov equation for large-scale interconnected systems", Automatica, 2016, and by Palitta and Simoncini in "Numerical methods for large-scale Lyapunov equations with symmetric banded data", SISC, 2018, which often arise in the discretization of elliptic PDEs. We show that, under suitable assumptions, the quasiseparable structure is guaranteed to be numerically present in the solution, and we provide explicit novel estimates of the numerical rank of the off-diagonal blocks. Efficient solution schemes that rely on the technology of hierarchical matrices are described, and several numerical experiments confirm the applicability and efficiency of the approaches. We develop a MATLAB toolbox that allows easy replication of the experiments and provides a ready-to-use interface for the solvers. The performance of the different approaches is compared, and we show that the new methods are efficient on several classes of relevant problems.
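
    As a rough illustration of the structure the paper exploits (not the authors' MATLAB toolbox), the sketch below solves a small banded Lyapunov equation with a dense SciPy routine and inspects the numerical rank of an off-diagonal block of the solution; the test matrix, sizes, and tolerance are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n = 400
# Banded (tridiagonal) stable coefficient, as in a 1D elliptic discretization.
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
B = rng.standard_normal((n, 2))          # low-rank right-hand-side factor

# Dense reference solve of A X + X A^T = -B B^T; the paper replaces this
# O(n^3) step with hierarchical-matrix solvers.
X = solve_continuous_lyapunov(A, -B @ B.T)

# Numerical rank of an off-diagonal block of the solution.
off = X[: n // 2, n // 2:]
s = np.linalg.svd(off, compute_uv=False)
print("numerical rank of off-diagonal block (tol 1e-8):",
      int(np.sum(s > 1e-8 * s[0])))
```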

    On the ADI method for the Sylvester Equation and the optimal-$\mathcal{H}_2$ points

    The ADI iteration is closely related to the rational Krylov projection methods for constructing low-rank approximations to the solution of the Sylvester equation. In this paper we show that the ADI and rational Krylov approximations are in fact equivalent when a special choice of shifts is employed in both methods. We call these shifts pseudo $\mathcal{H}_2$-optimal shifts. These shifts are also optimal in the sense that, for the Lyapunov equation, they yield a residual which is orthogonal to the rational Krylov projection subspace. Via several examples, we show that the pseudo $\mathcal{H}_2$-optimal shifts consistently yield nearly optimal low-rank approximations to the solutions of the Lyapunov equations.
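
    The following sketch implements a standard factored (low-rank) ADI iteration for A X + X A^T + B B^T = 0 with real negative shifts, in the spirit of the methods discussed above; it is not the authors' code, and the shift values are placeholders rather than the pseudo $\mathcal{H}_2$-optimal shifts.

```python
import numpy as np

def lr_adi(A, B, shifts):
    """Factored ADI with real negative shifts: returns Z with X ~= Z @ Z.T."""
    n = A.shape[0]
    I = np.eye(n)
    p = shifts[0]
    V = np.sqrt(-2.0 * p) * np.linalg.solve(A + p * I, B)
    blocks = [V]
    for j in range(1, len(shifts)):
        p_prev, p = p, shifts[j]
        V = np.sqrt(p / p_prev) * (V - (p + p_prev) * np.linalg.solve(A + p * I, V))
        blocks.append(V)
    return np.hstack(blocks)

rng = np.random.default_rng(1)
n = 200
A = -2.5 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # stable, symmetric
B = rng.standard_normal((n, 1))
Z = lr_adi(A, B, shifts=[-0.5, -0.9, -1.6, -2.8, -4.5])   # assumed shift values
R = A @ Z @ Z.T + Z @ Z.T @ A.T + B @ B.T                 # Lyapunov residual
print("relative residual:", np.linalg.norm(R) / np.linalg.norm(B @ B.T))
```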

    Low-rank updates and a divide-and-conquer method for linear matrix equations

    Linear matrix equations, such as the Sylvester and Lyapunov equations, play an important role in various applications, including the stability analysis and dimensionality reduction of linear dynamical control systems and the solution of partial differential equations. In this work, we present and analyze a new algorithm, based on tensorized Krylov subspaces, for quickly updating the solution of such a matrix equation when its coefficients undergo low-rank changes. We demonstrate how our algorithm can be utilized to accelerate the Newton method for solving continuous-time algebraic Riccati equations. Our algorithm also forms the basis of a new divide-and-conquer approach for linear matrix equations with coefficients that feature hierarchical low-rank structure, such as HODLR, HSS, and banded matrices. Numerical experiments demonstrate the advantages of divide-and-conquer over existing approaches, in terms of computational time and memory consumption.
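
    A minimal sketch of the update idea, under the assumption of a stable coefficient: if X0 solves the Lyapunov equation for A0 and the coefficient changes to A = A0 + U V^T with U, V thin, then the correction dX solves a Lyapunov equation whose right-hand side U V^T X0 + X0 V U^T is itself low rank. Dense reference solves stand in here for the paper's tensorized Krylov approximation of the correction.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
n, r = 150, 2
A0 = -2.5 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
B = rng.standard_normal((n, 1))
U = rng.standard_normal((n, r)) / n      # small low-rank perturbation factors
V = rng.standard_normal((n, r)) / n

X0 = solve_continuous_lyapunov(A0, -B @ B.T)     # solution before the update
A = A0 + U @ V.T                                 # low-rank change of the coefficient

# Correction equation: A dX + dX A^T + (U V^T X0 + X0 V U^T) = 0,
# a Lyapunov equation with a low-rank right-hand side.
dX = solve_continuous_lyapunov(A, -(U @ V.T @ X0 + X0 @ V @ U.T))

X = solve_continuous_lyapunov(A, -B @ B.T)       # direct solve for comparison
print("update error:", np.linalg.norm(X0 + dX - X) / np.linalg.norm(X))
```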

    On an integrated Krylov-ADI solver for large-scale Lyapunov equations

    One of the most computationally expensive steps of the low-rank ADI method for large-scale Lyapunov equations is the solution of a shifted linear system at each iteration. We propose the use of the extended Krylov subspace method for this task. In particular, we illustrate how a single approximation space can be constructed to solve all the shifted linear systems needed to achieve a prescribed accuracy in terms of the Lyapunov residual norm. Moreover, we show how to fully merge the two iterative procedures in order to obtain a novel, efficient implementation of the low-rank ADI method for an important class of equations. Many state-of-the-art algorithms for the shift computation can be easily incorporated into our new scheme as well. Several numerical results illustrate the potential of our novel procedure when compared to an implementation of the low-rank ADI method based on sparse direct solvers for the shifted linear systems.
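
    The sketch below illustrates only the core observation that a single Krylov basis can serve all shifted systems (A + p I) x = b, because Krylov subspaces are invariant under shifts of the matrix; a plain Arnoldi space is used instead of the extended Krylov space of the paper, and the shift values are assumptions.

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi process: returns V (n x (m+1)) and H ((m+1) x m)."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(3)
n, m = 300, 60
A = -2.5 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
b = rng.standard_normal(n)

V, H = arnoldi(A, b, m)                 # one basis, built once
beta = np.linalg.norm(b)
for p in [-0.5, -1.5, -3.0]:            # assumed ADI-type shifts
    # Galerkin solution of (A + p I) x = b extracted from the shared basis.
    y = np.linalg.solve(H[:m, :m] + p * np.eye(m), beta * np.eye(m)[:, 0])
    x = V[:, :m] @ y
    res = np.linalg.norm((A + p * np.eye(n)) @ x - b)
    print(f"shift {p}: residual {res:.2e}")
```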

    An inverse-free ADI algorithm for computing Lagrangian invariant subspaces

    The numerical computation of Lagrangian invariant subspaces of large-scale Hamiltonian matrices is discussed in the context of the solution of Lyapunov equations. A new version of the low-rank alternating direction implicit method is introduced which, in order to avoid numerical difficulties with solutions that are of very large norm, uses an inverse-free representation of the subspace and avoids inverses of ill-conditioned matrices. It is shown that this prevents large growth of the elements of the solution that may destroy a low-rank approximation of the solution. A partial error analysis is presented, and the behavior of the method is demonstrated via several numerical examples.

    Convergent Snapshot Algorithms for Infinite-Dimensional Lyapunov Equations

    We consider two algorithms to approximate the solution Z of a class of stable operator Lyapunov equations of the form AZ + ZA* + BB* = 0. The algorithms utilize time snapshots of solutions of certain linear infinite-dimensional differential equations to construct the approximations. Matrix approximations of the operators A and B are not required, and the algorithms are applicable as long as the rank of B is relatively small. The first algorithm produces an optimal low-rank approximate solution using proper orthogonal decomposition. The second algorithm approximates the product of the solution with a few vectors and can be implemented with a minimal amount of storage. Both algorithms are known for the matrix case; however, their extension to infinite dimensions appears to be new. We establish easily verifiable convergence theory and a priori error bounds for both algorithms and present numerical results for two model problems.
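
    A hedged sketch of the snapshot idea in the matrix case (not the paper's infinite-dimensional setting): since the Gramian solving A Z + Z A^T + b b^T = 0 equals the integral of e^{At} b b^T e^{A^T t} over [0, ∞) for stable A, quadrature on trajectory snapshots x(t_j) = e^{A t_j} b yields a factored approximation; the test matrix, time grid, and truncated horizon are assumptions, and a truncated SVD of the snapshot factor would give the optimal low-rank (POD) form.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

rng = np.random.default_rng(4)
n = 120
A = -2.5 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # stable coefficient
b = rng.standard_normal((n, 1))

dt, m = 0.05, 800                        # assumed time step and horizon (T = m*dt)
E = expm(A * dt)                         # one-step propagator e^{A dt}
snaps, x = [], b.copy()
for _ in range(m + 1):                   # snapshots x(t_j) of dx/dt = A x, x(0) = b
    snaps.append(x.copy())
    x = E @ x
snaps = np.hstack(snaps)

w = np.full(m + 1, dt); w[[0, -1]] /= 2  # trapezoidal quadrature weights
Zfac = snaps * np.sqrt(w)                # Z ~= Zfac @ Zfac.T (SVD of Zfac gives the POD form)

Zref = solve_continuous_lyapunov(A, -b @ b.T)
err = np.linalg.norm(Zfac @ Zfac.T - Zref) / np.linalg.norm(Zref)
print("relative error of snapshot approximation:", err)
```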