
    Krylov subspace techniques for model reduction and the solution of linear matrix equations

This thesis focuses on the model reduction of linear systems and the solution of large-scale linear matrix equations using computationally efficient Krylov subspace techniques. Most approaches to model reduction involve the computation and factorization of large matrices. Krylov subspace techniques, however, involve only matrix-vector multiplications in the large dimension, which makes them a better choice for the model reduction of large-scale systems. The standard Arnoldi/Lanczos algorithms are widely used Krylov techniques that compute orthogonal bases of Krylov subspaces and, via a projection onto the Krylov subspace, produce a reduced-order model that interpolates the actual system and its derivatives at infinity. An extension, the rational Arnoldi/Lanczos algorithm, computes orthogonal bases of a union of Krylov subspaces and yields a reduced-order model that interpolates the actual system and its derivatives at a predefined set of interpolation points (a minimal sketch of this projection is given below). This thesis concentrates on the rational Krylov method for model reduction.

In the rational Krylov method an important issue is the selection of interpolation points, for which various techniques with different selection criteria are available in the literature. One of these techniques selects the interpolation points such that the approximation satisfies the necessary conditions for H2 optimal approximation. However, more than one approximation may satisfy the necessary optimality conditions. In this thesis, conditions on the interpolation points are derived that enable the computation of all approximations satisfying the necessary optimality conditions, and hence the identification of the global minimizer of the H2 optimal model reduction problem. It is shown that for an H2 optimal approximation that interpolates at m interpolation points, the interpolation points are the simultaneous solution of m multivariate polynomial equations in m unknowns. For a first-order approximation, this condition reduces to the computation of the zeros of a linear system; for a second-order approximation, it requires the simultaneous solution of two bivariate polynomial equations. These two cases are analyzed in detail and it is shown that a global minimizer of the H2 optimal model reduction problem can be identified. Furthermore, a computationally efficient iterative algorithm is proposed for the H2 optimal model reduction problem that converges to a local minimizer.

Besides affecting the accuracy of the rational interpolating approximation, an arbitrary choice of interpolation points may result in a reduced-order model that loses useful properties of the actual system, such as stability, passivity, minimum-phase and bounded-real character, as well as its structure. It has recently been shown in the literature that rational interpolating approximations can be parameterized in terms of a free low-dimensional parameter in order to preserve the stability of the actual system in the reduced-order approximation. This idea is extended in this thesis to preserve other properties and combinations of them. The concept of parameterization is also applied to the minimal residual method, the two-sided rational Arnoldi method and the H2 optimal approximation in order to improve the accuracy of the interpolating approximation.
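The abstract describes the projection step only at a high level. The following is a minimal sketch, in Python/NumPy, of a one-sided rational Krylov projection for a SISO system x' = Ax + bu, y = cᵀx; all names (rational_krylov_basis, reduce_model, the shift values) are illustrative, and the thesis' own algorithms may well differ in detail.

```python
# Minimal sketch of one-sided rational Krylov model reduction for a SISO
# system  x' = A x + b u,  y = c^T x.  Illustrative only.
import numpy as np

def rational_krylov_basis(A, b, shifts):
    """Orthonormal basis V of span{(s_i I - A)^{-1} b : s_i in shifts}.

    Each shift s_i must not be an eigenvalue of A.  In the large, sparse
    setting the dense solve below would be replaced by a sparse solve;
    only solves with A - s_i I are needed in the large dimension.
    """
    n = A.shape[0]
    V = np.zeros((n, len(shifts)))
    for j, s in enumerate(shifts):
        w = np.linalg.solve(s * np.eye(n) - A, b)
        for k in range(j):                      # modified Gram-Schmidt
            w -= (V[:, k] @ w) * V[:, k]
        V[:, j] = w / np.linalg.norm(w)
    return V

def reduce_model(A, b, c, shifts):
    """Project (A, b, c) onto the rational Krylov subspace.

    The reduced transfer function c_r^T (s I - A_r)^{-1} b_r matches the
    full transfer function at every interpolation point in `shifts`.
    """
    V = rational_krylov_basis(A, b, shifts)
    return V.T @ A @ V, V.T @ b, V.T @ c
```

Interpolation can be checked directly: for each shift s, cᵀ(sI − A)⁻¹b and c_rᵀ(sI − A_r)⁻¹b_r agree to working precision.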
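The thesis' own iterative algorithm is not spelled out in the abstract and is not reproduced here. For reference, the classical fixed-point iteration on the H2 necessary conditions (in the style of IRKA) updates the interpolation points to the mirror images of the reduced model's poles; the sketch below, which assumes real poles and reuses reduce_model from above, illustrates that idea.

```python
# Sketch of the classical fixed-point iteration on the H2 necessary
# conditions (IRKA style) -- NOT the thesis' own algorithm.  The
# optimality conditions place the interpolation points at the mirror
# images of the reduced poles, so the shifts are updated until they
# stop moving.
import numpy as np

def h2_shift_iteration(A, b, c, shifts, tol=1e-8, maxit=100):
    shifts = np.sort(np.asarray(shifts, dtype=float))
    for _ in range(maxit):
        Ar, _, _ = reduce_model(A, b, c, shifts)   # sketch above
        # Assumes real reduced poles; complex poles would have to be
        # kept in conjugate pairs.  A full H2 method would also use a
        # two-sided projection rather than the one-sided one above.
        new_shifts = np.sort(-np.linalg.eigvals(Ar).real)
        if np.max(np.abs(new_shifts - shifts)) < tol:
            return new_shifts                      # fixed point reached
        shifts = new_shifts
    return shifts
```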
The rational Krylov method has also been used in the literature to compute low-rank approximate solutions of the Sylvester and Lyapunov equations, which are useful for model reduction. The approach involves the computation of two sets of basis vectors, in which each vector is orthogonalized against all previous vectors. This orthogonalization becomes computationally expensive and requires large storage as the number of basis vectors grows. In this thesis, a restart scheme is proposed that does not require the new vectors to be orthogonal to the previous ones; instead, a set of two new orthogonal basis vectors is computed at each restart. This reduces both the computational cost of orthogonalization and the storage requirement. It is shown that, in the case of Lyapunov equations, the approximate solution obtained through the restart scheme converges monotonically to the actual solution.
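The restart scheme itself is not described in enough detail in the abstract to reproduce. The sketch below shows only the basic project-solve-lift structure that Krylov methods for the Lyapunov equation A P + P Aᵀ + b bᵀ = 0 build on, reusing rational_krylov_basis from the first sketch; the function name is illustrative.

```python
# Minimal sketch of a low-rank approximate Lyapunov solution by Krylov
# projection (the thesis' restart scheme is not reproduced here).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def low_rank_lyapunov(A, b, shifts):
    """Approximate P satisfying A P + P A^T + b b^T = 0 as P ~ V Y V^T."""
    V = rational_krylov_basis(A, b, shifts)     # first sketch
    Ar, br = V.T @ A @ V, V.T @ b
    # Small dense Lyapunov equation in the reduced dimension:
    #   Ar Y + Y Ar^T = -br br^T
    Y = solve_continuous_lyapunov(Ar, -np.outer(br, br))
    return V, Y
```

The factored form V Y Vᵀ never assembles the full n-by-n solution, which is what makes the approach attractive at large scale.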