
    Order reduction methods for solving large-scale differential matrix Riccati equations

    We consider the numerical solution of large-scale symmetric differential matrix Riccati equations. Under certain hypotheses on the data, reduced-order methods have recently arisen as a promising class of solution strategies, forming low-rank approximations to the sought-after solution at selected timesteps. We show that great computational and memory savings are obtained by a reduction process onto rational Krylov subspaces, as opposed to current approaches. By specifically addressing the solution of the reduced differential equation and reliable stopping criteria, we are able to obtain accurate final approximations at low memory and computational requirements. This is achieved by employing a two-phase strategy that separately enhances the accuracy of the algebraic approximation and of the time integration. The new method allows us to numerically solve much larger problems than those treated in the current literature. Numerical experiments on benchmark problems illustrate the effectiveness of the procedure with respect to existing solvers.
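    For orientation, here is a sketch of the problem class in generic notation (the symbols A, B, C, X_0 and the basis V are standard placeholders, not taken from the paper). The symmetric differential Riccati equation reads

        \dot{X}(t) = A^T X(t) + X(t) A - X(t) B B^T X(t) + C^T C, \qquad X(0) = X_0,

    with A large and sparse and B, C^T of low column rank. Reduction methods of the kind described above seek an approximation X(t) \approx V Y(t) V^T, where the columns of V span a (rational) Krylov subspace built from A and the low-rank data, and Y(t) solves the small, dense differential Riccati equation obtained by projecting the problem onto that subspace.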

    A numerical comparison of solvers for large-scale, continuous-time algebraic Riccati equations and LQR problems

    In this paper, we discuss numerical methods for solving large-scale continuous-time algebraic Riccati equations. These methods have been the focus of intensive research in recent years, and significant progress has been made in both the theoretical understanding and the efficient implementation of various competing algorithms. This manuscript has several goals: first, to gather in one place an overview of the different approaches for solving large-scale Riccati equations and to point to the recent advances in each of them; second, to analyze and compare their main computational ingredients, so as to detect their strong points and potential bottlenecks; and finally, to compare effective implementations of all methods on a set of relevant benchmark examples, giving an indication of their relative performance.
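    For context, the two objects compared throughout such a study can be stated in a few lines, in generic notation with the control weighting normalized to the identity (the symbols are placeholders, not tied to any particular solver in the survey). The continuous-time algebraic Riccati equation is

        A^T X + X A - X B B^T X + C^T C = 0,

    and its stabilizing solution X defines the LQR feedback u(t) = -B^T X x(t), which minimizes \int_0^\infty ( \|C x(t)\|^2 + \|u(t)\|^2 ) \, dt for the system \dot{x} = A x + B u. Large-scale solvers typically exploit the fact that, when B and C^T have few columns, X is well approximated by a low-rank factorization X \approx Z Z^T.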

    From low-rank approximation to an efficient rational Krylov subspace method for the Lyapunov equation

    We propose a new method for the approximate solution of the Lyapunov equation with rank-1 right-hand side, which is based on extended rational Krylov subspace approximation with adaptively computed shifts. The shift selection is derived from the connection between the Lyapunov equation, the solution of systems of linear ODEs, and the alternating least squares method for low-rank approximation. Numerical experiments confirm the effectiveness of our approach.
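    A brief sketch of the setting, in generic notation (A, b and the shifts s_j are placeholders; the paper's adaptive shift rule is not reproduced here). The Lyapunov equation with rank-1 right-hand side reads

        A X + X A^T + b b^T = 0,

    and a rational Krylov method builds, for shifts s_1, ..., s_k, an orthonormal basis V_k of

        span\{ (A - s_1 I)^{-1} b, \; (A - s_2 I)^{-1} b, \; \dots, \; (A - s_k I)^{-1} b \},

    imposes X \approx V_k Y_k V_k^T, and solves the small projected Lyapunov equation for Y_k; extended variants additionally include vectors generated by powers of A, and the adaptive element is the rule used to pick the next shift from the current approximation.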

    Order reduction approaches for the algebraic Riccati equation and the LQR problem

    We explore order reduction techniques for solving the algebraic Riccati equation (ARE) and for investigating the numerical solution of the linear-quadratic regulator (LQR) problem. A classical approach is to build a surrogate low-dimensional model of the dynamical system, for instance by means of balanced truncation, and then solve the corresponding ARE. Alternatively, iterative methods can be used to directly solve the ARE and use its approximate solution to estimate quantities associated with the LQR. We propose a class of Petrov-Galerkin strategies that simultaneously reduce the dynamical system while approximately solving the ARE by projection. This methodology significantly generalizes a recently developed Galerkin method by using a pair of projection spaces, as is often done in model order reduction of dynamical systems. Numerical experiments illustrate the advantages of the new class of methods over classical approaches when dealing with large matrices.
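    To make the projection idea concrete, here is a minimal sketch under generic assumptions (V is an orthonormal basis of the approximation space; in the Petrov-Galerkin variant a second basis W is used to impose the residual condition, which is not spelled out here). With the ansatz X \approx V Y V^T, a Galerkin condition on the ARE residual yields the reduced equation

        (V^T A^T V) Y + Y (V^T A V) - Y (V^T B)(V^T B)^T Y + (C V)^T (C V) = 0,

    a small, dense ARE that can be handled by standard dense solvers; an approximate LQR feedback gain is then recovered as K \approx B^T V Y V^T.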

    Least Squares Ranking on Graphs

    Given a set of alternatives to be ranked, and some pairwise comparison data, ranking is a least squares computation on a graph. The vertices are the alternatives, and the edge values comprise the comparison data. The basic idea is very simple and old: come up with values on vertices such that their differences match the given edge data. Since an exact match will usually be impossible, one settles for matching in a least squares sense. This formulation was first described by Leake in 1976 for ranking football teams and appears as an example in Professor Gilbert Strang's classic linear algebra textbook. If one is willing to look into the residual a little further, then the problem really comes alive, as shown effectively by the remarkable recent paper of Jiang et al. With or without this twist, the humble least squares problem on graphs has far-reaching connections with many current areas of research. These connections are to theoretical computer science (spectral graph theory, and multilevel methods for graph Laplacian systems); numerical analysis (algebraic multigrid, and finite element exterior calculus); other mathematics (Hodge decomposition, and random clique complexes); and applications (arbitrage, and ranking of sports teams). Not all of these connections are explored in this paper, but many are. The underlying ideas are easy to explain, requiring only the four fundamental subspaces from elementary linear algebra. One of our aims is to explain these basic ideas and connections, to get researchers in many fields interested in this topic. Another aim is to use our numerical experiments for guidance on selecting methods and for exposing the need for further development.
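    The least-squares formulation itself fits in a few lines of code. The sketch below uses hypothetical comparison data and plain numpy (none of it comes from the paper): it assembles the edge-vertex incidence matrix of the comparison graph and solves for vertex scores in the least squares sense.

        import numpy as np

        # Pairwise comparisons: (i, j, y) means "alternative j beats alternative i
        # by margin y", i.e. we want score[j] - score[i] ~ y.  Hypothetical data
        # for four alternatives.
        comparisons = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 4.0), (2, 3, 0.5), (0, 3, 3.0)]
        n = 4

        # Edge-vertex incidence matrix B: one row per comparison, -1 at i, +1 at j.
        B = np.zeros((len(comparisons), n))
        y = np.zeros(len(comparisons))
        for row, (i, j, margin) in enumerate(comparisons):
            B[row, i], B[row, j] = -1.0, 1.0
            y[row] = margin

        # Least-squares scores (defined only up to an additive constant, since the
        # all-ones vector lies in the null space of B); lstsq returns the
        # minimum-norm solution for this rank-deficient system.
        scores, *_ = np.linalg.lstsq(B, y, rcond=None)
        scores -= scores.mean()          # fix the constant by centering
        print(np.argsort(-scores))       # ranking from best to worst

    Centering removes the additive gauge freedom; in the Hodge-theoretic picture mentioned above, the residual B @ scores - y is precisely the part of the comparison data that no assignment of vertex values can explain.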