102 research outputs found

    Approximate solutions to large nonsymmetric differential Riccati problems with applications to transport theory

    In the present paper, we consider large-scale nonsymmetric differential matrix Riccati equations with low-rank right-hand sides. These matrix equations appear in many applications such as control theory, transport theory, applied probability and others. We show how to apply Krylov-type methods such as the extended block Arnoldi algorithm to obtain low-rank approximate solutions. The initial problem is projected onto small subspaces to obtain low-dimensional nonsymmetric differential equations that are solved using the exponential approximation or via other integration schemes such as the Backward Differentiation Formula (BDF) or the Rosenbrock method. We also show how these techniques can easily be used to solve some problems arising from the well-known transport equation. Some numerical experiments are given to illustrate the application of the proposed methods to large-scale problems.
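    The projection step described above can be sketched in a few lines of numpy. The snippet below builds an orthonormal basis of an extended block Krylov subspace span{B, A⁻¹B, AB, A⁻²B, ...} and forms the small projected coefficient matrix; the function name, the dense inverse, and the one-shot QR are illustrative simplifications of the paper's extended block Arnoldi algorithm, not its implementation.

```python
import numpy as np

def extended_block_krylov(A, B, m):
    """Orthonormal basis V of span{B, AB, ..., A^{m-1}B, A^{-1}B, ..., A^{-m}B}.
    Simplified sketch: collect the blocks, then one QR; the extended block
    Arnoldi algorithm of the paper orthogonalizes incrementally instead."""
    Ainv = np.linalg.inv(A)            # sketch only; factorize A in practice
    pos, neg = [B], [Ainv @ B]
    for _ in range(m - 1):
        pos.append(A @ pos[-1])
        neg.append(Ainv @ neg[-1])
    V, _ = np.linalg.qr(np.hstack(pos + neg))
    return V

# Projecting the large coefficient matrix onto the small subspace.
rng = np.random.default_rng(0)
n, s = 60, 2
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, s))
V = extended_block_krylov(A, B, 3)
Am = V.T @ A @ V                       # low-dimensional projected matrix
```

The small projected differential Riccati equation with coefficient Am would then be solved by any standard integrator (exponential approximation, BDF, Rosenbrock), and the solution lifted back via V.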

    Tensorized block rational Krylov methods for tensor Sylvester equations

    We introduce the definition of tensorized block rational Krylov subspaces and their relation with multivariate rational functions, extending the formulation of tensorized Krylov subspaces introduced in [Kressner D., Tobler C., Krylov subspace methods for linear systems with tensor product structure, SIMAX, 2010]. Moreover, we develop methods for the solution of tensor Sylvester equations with low multilinear or Tensor Train rank, based on projection onto a tensorized block rational Krylov subspace. We provide a convergence analysis, some strategies for pole selection, and techniques to efficiently compute the residual. (Comment: 22 pages, 6 figures, 3 tables)
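    To fix the notation, the tensor Sylvester operator is the map X ↦ X ×₁ A₁ + X ×₂ A₂ + X ×₃ A₃ (mode-k products), whose matricized form is a Kronecker sum. The numpy check below assumes nothing beyond this standard identity; row-major vec is used so that it matches numpy's default ravel order.

```python
import numpy as np

# Tensor Sylvester operator on a 3D tensor X and its Kronecker-sum form
# A1 (x) I (x) I + I (x) A2 (x) I + I (x) I (x) A3 acting on vec(X).
rng = np.random.default_rng(0)
n = 4
A1, A2, A3 = (rng.standard_normal((n, n)) for _ in range(3))
X = rng.standard_normal((n, n, n))

T = (np.einsum('ia,ajk->ijk', A1, X)      # mode-1 product
     + np.einsum('ja,iak->ijk', A2, X)    # mode-2 product
     + np.einsum('ka,ija->ijk', A3, X))   # mode-3 product

I = np.eye(n)
K = (np.kron(np.kron(A1, I), I)
     + np.kron(np.kron(I, A2), I)
     + np.kron(np.kron(I, I), A3))
assert np.allclose(K @ X.ravel(), T.ravel())
```

The Kronecker-sum matrix K is n³ × n³, which is why the paper works with projections onto tensorized subspaces instead of forming it.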

    Krylov methods for large-scale modern problems in numerical linear algebra

    Large-scale problems have attracted much attention in recent decades since they arise from different applications in several fields. Moreover, the matrices that are involved in those problems are often sparse, that is, most of their entries are zero. Around 40 years ago, the most common problems related to large-scale and sparse matrices consisted of solving linear systems, finding eigenvalues and/or eigenvectors, solving least-squares problems or computing singular value decompositions. However, in recent years, large-scale and sparse problems of different natures have appeared, motivating and challenging numerical linear algebra to develop effective and efficient algorithms to solve them. Common difficulties that appear during the development of algorithms for solving modern large-scale problems are related to computational costs, storage issues and CPU time, given the large size of the matrices, which indicates that direct methods cannot be used. This suggests that projection methods based on Krylov subspaces are a good option for developing procedures to solve large-scale and sparse modern problems. In this PhD Thesis we develop novel and original algorithms for solving two large-scale modern problems in numerical linear algebra: first, we introduce the R-CORK method for solving rational eigenvalue problems and, second, we present projection methods to compute the solution of T-Sylvester matrix equations, both based on Krylov subspaces. The R-CORK method is an extension of the compact rational Krylov method (CORK) [104] introduced to solve a family of nonlinear eigenvalue problems that can be expressed and linearized in certain particular ways and which include arbitrary polynomial eigenvalue problems, but not arbitrary rational eigenvalue problems. 
The R-CORK method exploits the structure of the linearized problem by representing the Krylov vectors in a compact form in order to reduce the cost of storage, resulting in a method with two levels of orthogonalization. The first level of orthogonalization works with vectors of the same size as the original problem, and the second level works with vectors of size much smaller than the original problem. Since vectors of the size of the linearization are never stored or orthogonalized, R-CORK is more efficient from the point of view of memory and orthogonalization costs than the classical rational Krylov method applied to the linearization. Moreover, since the R-CORK method is based on a classical rational Krylov method, the implementation of implicit restarting is possible, and we present an efficient way to do so that preserves the compact representation of the Krylov vectors. We also introduce in this dissertation projection methods for solving the T-Sylvester equation, which has recently attracted considerable attention as a consequence of its close relation to palindromic eigenvalue problems and other applications. The theory concerning T-Sylvester equations is rather well understood, and before the work in this thesis, there were stable and efficient numerical algorithms to solve these matrix equations for small- to medium-sized matrices. However, developing numerical algorithms for solving large-scale T-Sylvester equations was a completely open problem. In this thesis, we introduce several projection methods based on block Krylov subspaces and extended block Krylov subspaces for solving the T-Sylvester equation when the right-hand side is a low-rank matrix. We also offer intuition about the expected convergence of the algorithm based on block Krylov subspaces and clear guidance on which algorithm is the most convenient to use in each situation. 
All the algorithms presented in this thesis have been extensively tested, and the reported numerical results show that they perform satisfactorily in practice. Partial support was also received from the research projects "Structured Numerical Linear Algebra: Matrix Polynomials, Special Matrices, and Conditioning" (Ministerio de Economía y Competitividad de España, project number MTM2012-32542) and "Structured Numerical Linear Algebra for Constant, Polynomial and Rational Matrices" (Ministerio de Economía y Competitividad de España, project number MTM2015-65798-P); the principal investigator of both projects was Froilán Martínez Dopico. Official Doctoral Programme in Mathematical Engineering. President: José Mas Marí. Secretary: Fernando de Terán Vergara. Committee member: José Enrique Román Molt
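    For small matrices, the T-Sylvester equation AX + XᵀB = C mentioned in the abstract can be solved directly by vectorization with the commutation matrix. The dense O(n⁶) baseline below (function name ours) is only a teaching sketch of the equation itself, not one of the large-scale projection methods developed in the thesis.

```python
import numpy as np

def solve_t_sylvester_dense(A, B, C):
    """Solve A X + X^T B = C by vectorization (small dense baseline).
    Uses vec(X^T) = K vec(X), with K the commutation matrix and
    vec the column-stacking operator."""
    n = A.shape[0]
    vec = lambda M: M.ravel(order='F')        # column-stacking vec
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[j + i * n, i + j * n] = 1.0     # K vec(X) = vec(X^T)
    M = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n)) @ K
    return np.linalg.solve(M, vec(C)).reshape((n, n), order='F')

rng = np.random.default_rng(0)
n = 6
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
X = solve_t_sylvester_dense(A, B, C)
assert np.allclose(A @ X + X.T @ B, C)
```

The n² × n² linear system makes the cost prohibitive for large n, which is exactly what motivates the low-rank projection methods of the thesis.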

    Rational Krylov for Stieltjes matrix functions: convergence and pole selection

    Evaluating the action of a matrix function on a vector, that is x = f(M)v, is a ubiquitous task in applications. When M is large, one usually relies on Krylov projection methods. In this paper, we provide effective choices for the poles of the rational Krylov method for approximating x when f(z) is either Cauchy-Stieltjes or Laplace-Stieltjes (or, equivalently, completely monotonic) and M is a positive definite matrix. Relying on the same tools used to analyze the generic situation, we then focus on the case M = I ⊗ A − B^T ⊗ I, with v obtained by vectorizing a low-rank matrix; this finds application, for instance, in solving fractional diffusion equations on two-dimensional tensor grids. We show how to leverage tensorized Krylov subspaces to exploit the Kronecker structure, and we introduce an error analysis for the numerical approximation of x. Pole selection strategies with explicit convergence bounds are given also in this case.
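    The Kronecker structure mentioned above is the standard Sylvester correspondence M vec(X) = vec(AX − XB). The following check (plain numpy, column-stacking vec) is only meant to make that identity concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A, B, X = (rng.standard_normal((n, n)) for _ in range(3))
vec = lambda M: M.ravel(order='F')            # column-stacking vec

# M = I (x) A - B^T (x) I acts on vec(X) as the Sylvester operator.
M = np.kron(np.eye(n), A) - np.kron(B.T, np.eye(n))
assert np.allclose(M @ vec(X), vec(A @ X - X @ B))
```

Because M is n² × n², methods that work on the factors A and B via tensorized Krylov subspaces, as in the paper, avoid ever forming it.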

    Krylov subspace techniques for model reduction and the solution of linear matrix equations

    This thesis focuses on the model reduction of linear systems and the solution of large-scale linear matrix equations using computationally efficient Krylov subspace techniques. Most approaches for model reduction involve the computation and factorization of large matrices. However, Krylov subspace techniques have the advantage that they involve only matrix-vector multiplications in the large dimension, which makes them a better choice for model reduction of large-scale systems. The standard Arnoldi/Lanczos algorithms are widely used Krylov techniques that compute orthogonal bases of Krylov subspaces and, via a projection process onto the Krylov subspace, produce a reduced order model that interpolates the actual system and its derivatives at infinity. An extension is the rational Arnoldi/Lanczos algorithm, which computes orthogonal bases of a union of Krylov subspaces and results in a reduced order model that interpolates the actual system and its derivatives at a predefined set of interpolation points. This thesis concentrates on the rational Krylov method for model reduction. In the rational Krylov method, an important issue is the selection of interpolation points, for which various techniques with different selection criteria are available in the literature. One of these techniques selects the interpolation points such that the approximation satisfies the necessary conditions for H2 optimal approximation. However, it is possible to have more than one approximation for which the necessary optimality conditions are satisfied. In this thesis, some conditions on the interpolation points are derived that enable us to compute all approximations that satisfy the necessary optimality conditions and hence identify the global minimizer of the H2 optimal model reduction problem. 
It is shown that for an H2 optimal approximation that interpolates at m interpolation points, the interpolation points are the simultaneous solution of m multivariate polynomial equations in m unknowns. For a first order approximation, this condition reduces to the computation of zeros of a linear system. In the case of a second order approximation, the condition requires the simultaneous solution of two bivariate polynomial equations. These two cases are analyzed in detail and it is shown that a global minimizer of the H2 optimal model reduction problem can be identified. Furthermore, a computationally efficient iterative algorithm is also proposed for the H2 optimal model reduction problem that converges to a local minimizer. In addition to the effect of interpolation points on the accuracy of the rational interpolating approximation, an ordinary choice of interpolation points may result in a reduced order model that loses useful properties of the actual system, such as stability, passivity, minimum-phase and bounded real character, as well as its structure. It has recently been shown in the literature that the rational interpolating approximations can be parameterized in terms of a free low-dimensional parameter in order to preserve the stability of the actual system in the reduced order approximation. This idea is extended in this thesis to preserve other properties and combinations of them. Also, the concept of parameterization is applied to the minimal residual method, the two-sided rational Arnoldi method and H2 optimal approximation in order to improve the accuracy of the interpolating approximation. The rational Krylov method has also been used in the literature to compute low-rank approximate solutions of the Sylvester and Lyapunov equations, which are useful for model reduction. The approach involves the computation of two sets of basis vectors, in which each vector is orthogonalized against all previous vectors. 
This orthogonalization becomes computationally expensive and requires high storage capacity as the number of basis vectors increases. In this thesis, a restart scheme is proposed that does not require the new vectors to be orthogonal to the previous ones. Instead, a set of two new orthogonal basis vectors is computed. This reduces the computational burden of orthogonalization and the storage requirements. It is shown that, in the case of Lyapunov equations, the approximate solution obtained through the restart scheme converges monotonically to the actual solution.
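    The basic projection idea behind such low-rank Lyapunov solvers can be sketched as follows: build an orthonormal Krylov basis V, solve the small projected equation, and take X ≈ VYVᵀ. This is the standard (unrestarted) Galerkin scheme, assumed here as background, not the restart procedure proposed in the thesis.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def krylov_lyapunov(A, b, m):
    """Galerkin projection of A X + X A^T + b b^T = 0 onto K_m(A, b).
    Returns V and Y with X ~= V Y V^T (standard sketch, no restarting)."""
    n = b.size
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for k in range(1, m):
        w = A @ V[:, k - 1]
        w -= V[:, :k] @ (V[:, :k].T @ w)      # Gram-Schmidt step
        w -= V[:, :k] @ (V[:, :k].T @ w)      # reorthogonalization
        V[:, k] = w / np.linalg.norm(w)
    Am, bm = V.T @ A @ V, V.T @ b
    Y = solve_continuous_lyapunov(Am, -np.outer(bm, bm))
    return V, Y

rng = np.random.default_rng(0)
n = 40
A = -2 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
b = rng.standard_normal(n)
V, Y = krylov_lyapunov(A, b, 12)
```

As the abstract notes, the orthogonalization against all previous basis vectors is exactly the cost that grows with m, which the proposed restart scheme avoids.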

    Matrix Equation Techniques for Certain Evolutionary Partial Differential Equations

    We show that the discrete operator stemming from the time-space discretization of evolutionary partial differential equations can be represented in terms of a single Sylvester matrix equation. A novel solution strategy that combines projection techniques with the full exploitation of the entry-wise structure of the involved coefficient matrices is proposed. The resulting scheme is able to efficiently solve problems with a tremendous number of degrees of freedom while maintaining a low storage demand, as illustrated in several numerical examples.
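    A minimal instance of this reformulation: implicit Euler in time for the 1D heat equation, written with one column per time step, is exactly one Sylvester equation −L U + U(D/τ) = F. The toy discretization below is our own setup for illustration (the paper additionally exploits the entry-wise structure and uses projection instead of a dense Sylvester solver).

```python
import numpy as np
from scipy.linalg import solve_sylvester

# u_t = u_xx on (0,1), zero boundary values, implicit Euler in time.
nx, nt, tau = 30, 40, 0.01
h = 1.0 / (nx + 1)
L = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
     + np.diag(np.ones(nx - 1), -1)) / h**2      # 1D Laplacian
D = np.eye(nt) - np.diag(np.ones(nt - 1), 1)      # columnwise backward difference
x = np.linspace(h, 1 - h, nx)
u0 = np.sin(np.pi * x)                            # initial condition
F = np.zeros((nx, nt))
F[:, 0] += u0 / tau                               # u_0 enters the first step

# All time steps at once: -L U + U (D/tau) = F.
U = solve_sylvester(-L, D / tau, F)

# Cross-check against plain sequential implicit Euler.
Useq = np.zeros((nx, nt))
u = u0.copy()
for k in range(nt):
    u = np.linalg.solve(np.eye(nx) / tau - L, u / tau)
    Useq[:, k] = u
assert np.allclose(U, Useq)
```

Column k of the Sylvester equation reproduces the time-stepping relation (I/τ − L)u_{k+1} = u_k/τ, so the two computations agree to rounding error.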

    On an integrated Krylov-ADI solver for large-scale Lyapunov equations

    One of the most computationally expensive steps of the low-rank ADI method for large-scale Lyapunov equations is the solution of a shifted linear system at each iteration. We propose the use of the extended Krylov subspace method for this task. In particular, we illustrate how a single approximation space can be constructed to solve all the shifted linear systems needed to achieve a prescribed accuracy in terms of the Lyapunov residual norm. Moreover, we show how to fully merge the two iterative procedures in order to obtain a novel, efficient implementation of the low-rank ADI method for an important class of equations. Many state-of-the-art algorithms for the shift computation can easily be incorporated into our new scheme as well. Several numerical results illustrate the potential of our novel procedure when compared to an implementation of the low-rank ADI method based on sparse direct solvers for the shifted linear systems.
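    For reference, each iteration of the (real-shift) low-rank ADI method for AX + XAᵀ = −bbᵀ costs one shifted solve, the step whose repeated cost motivates reusing a single extended Krylov space as in the paper. A bare-bones sketch with dense solves and an ad hoc logarithmic shift choice (both illustrative assumptions, not the paper's scheme):

```python
import numpy as np

def lr_adi(A, b, shifts):
    """Low-rank ADI for A X + X A^T = -b b^T with real shifts p < 0.
    Each step performs one shifted solve (A + p I) V = W; the low-rank
    factor Z grows by one column per shift, with X ~= Z Z^T."""
    n = b.size
    W = b.astype(float).copy()
    Z = np.zeros((n, 0))
    for p in shifts:                              # p < 0 for stable A
        V = np.linalg.solve(A + p * np.eye(n), W)
        W = W - 2.0 * p * V                       # residual factor update
        Z = np.hstack([Z, np.sqrt(-2.0 * p) * V[:, None]])
    return Z

# Toy run: stable symmetric A (scaled 1D Laplacian), spread-out shifts.
n = 30
A = (-2 * np.eye(n) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2
b = np.ones(n)
shifts = -np.logspace(1.0, 3.6, 12)               # roughly spanning spec(A)
Z = lr_adi(A, b, shifts)
X = Z @ Z.T
```

With shifts spread over the spectrum, the Lyapunov residual drops quickly; the solve inside the loop is exactly the operation the integrated Krylov-ADI solver amortizes across all shifts.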