
    Nonsymmetric algebraic Riccati equations associated with an M-matrix: recent advances and algorithms

    We survey theoretical properties and algorithms for the problem of solving a nonsymmetric algebraic Riccati equation, and we report on some known methods and new algorithmic advances. In particular, some results on the number of positive solutions are proved, and a careful convergence analysis of Newton's iteration is carried out in the cases of interest where singularity conditions are encountered. From this analysis we determine initial approximations which still guarantee quadratic convergence.
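    The Newton step for such equations reduces to a Sylvester equation. Below is a minimal sketch under one common sign convention, XCX - XD - AX + B = 0; the convention, the zero starting guess (standard for the minimal solution in the M-matrix setting, in the nonsingular case), and the name `nare_newton` are illustrative assumptions, and the paper's refined starting values for the singular cases are not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def nare_newton(A, B, C, D, X0=None, tol=1e-12, maxit=50):
    """Newton's iteration for the NARE  X C X - X D - A X + B = 0
    (a generic sketch, not the paper's exact setup)."""
    X = np.zeros((A.shape[0], D.shape[0])) if X0 is None else X0.copy()
    for _ in range(maxit):
        R = X @ C @ X - X @ D - A @ X + B          # residual R(X)
        if np.linalg.norm(R, 'fro') < tol:
            break
        # Frechet derivative: R'(X)[H] = -(A - XC)H - H(D - CX), so the
        # Newton correction H solves the Sylvester equation
        #   (A - XC) H + H (D - CX) = R(X).
        H = solve_sylvester(A - X @ C, D - C @ X, R)
        X = X + H
    return X
```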

    Symmetric Tensor Decomposition by an Iterative Eigendecomposition Algorithm

    We present an iterative algorithm, called the symmetric tensor eigen-rank-one iterative decomposition (STEROID), for decomposing a symmetric tensor into a real linear combination of symmetric rank-1 unit-norm outer factors using only eigendecompositions and least-squares fitting. Originally designed for symmetric tensors whose order is a power of two, STEROID is shown to be applicable to any order through an innovative tensor embedding technique. Numerical examples demonstrate the high efficiency and accuracy of the proposed scheme even for large-scale problems. Furthermore, we show how STEROID readily solves a problem in nonlinear block-structured system identification and nonlinear state-space identification.
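    The full STEROID recursion is not spelled out in the abstract, but its basic building block, expressing a symmetric array as a real combination of unit-norm symmetric rank-1 terms via an eigendecomposition, can be illustrated in the order-2 case. This toy sketch is a stand-in for that base case only, not the paper's algorithm.

```python
import numpy as np

# Order-2 base case: a symmetric matrix A equals a real linear combination
# of unit-norm symmetric rank-1 terms, A = sum_i lambda_i * v_i v_i^T.
# The full STEROID recursion (eigendecompositions plus a least-squares fit
# of the weights) builds on this; only the base case is shown here.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                        # symmetrize
lam, V = np.linalg.eigh(A)               # eigenvectors are unit-norm
A_rec = sum(l * np.outer(v, v) for l, v in zip(lam, V.T))
print(np.allclose(A, A_rec))             # True: exact rank-1 expansion
```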

    Iterative and doubling algorithms for Riccati-type matrix equations: a comparative introduction

    We review a family of algorithms for Lyapunov- and Riccati-type equations which are all related to each other by the idea of \emph{doubling}: they construct the iterate $Q_k = X_{2^k}$ of another naturally arising fixed-point iteration $(X_h)$ via a sort of repeated squaring. The equations we consider are Stein equations $X - A^*XA = Q$, Lyapunov equations $A^*X + XA + Q = 0$, discrete-time algebraic Riccati equations $X = Q + A^*X(I+GX)^{-1}A$, continuous-time algebraic Riccati equations $Q + A^*X + XA - XGX = 0$, palindromic quadratic matrix equations $A + QY + A^*Y^2 = 0$, and nonlinear matrix equations $X + A^*X^{-1}A = Q$. We draw comparisons among these algorithms, highlight the connections between them and to other algorithms such as subspace iteration, and discuss open issues in their theory. (Comment: review article for GAMM Mitteilungen.)
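    For the Stein equation the doubling idea fits in a few lines: the squared-Smith iteration accumulates the series $X = \sum_j (A^*)^j Q A^j$ so that each sweep doubles the number of summed terms, i.e. $k$ sweeps reach the fixed-point iterate with $2^k$ terms. The sketch below assumes $\rho(A) < 1$ and is a generic illustration of the repeated-squaring principle, not any specific algorithm from the survey.

```python
import numpy as np

def stein_doubling(A, Q, k=20):
    """Squared-Smith doubling for the Stein equation X - A^* X A = Q.
    Each sweep doubles the number of accumulated terms of the series
    X = sum_j (A^*)^j Q A^j, matching the iterate X_{2^k} of the naive
    fixed-point iteration (up to indexing)."""
    Ak, X = A.copy(), Q.copy()
    for _ in range(k):
        X = X + Ak.conj().T @ X @ Ak   # fold in the next block of terms
        Ak = Ak @ Ak                   # repeated squaring
    return X

# Quick check on a random contraction (rho(A) < 1 is assumed):
rng = np.random.default_rng(0)
A = 0.5 * rng.standard_normal((6, 6)) / 6**0.5
Q = np.eye(6)
X = stein_doubling(A, Q)
print(np.linalg.norm(X - A.conj().T @ X @ A - Q))  # ~ machine precision
```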

    A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes

    A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides Ax = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment, but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
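    One simple instance of subspace reuse, sketched generically rather than as the paper's actual strategy: retain an orthonormal basis V from an earlier solve and use a Galerkin projection onto span(V) to build a warm-start initial guess for GMRES on the next right-hand side. The toy matrices and the helper name `projected_guess` are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def projected_guess(A, V, b):
    # Galerkin condition V^T (b - A V y) = 0 on the retained subspace;
    # V is assumed to have orthonormal columns.
    y = np.linalg.solve(V.T @ (A @ V), V.T @ b)
    return V @ y

rng = np.random.default_rng(1)
n = 200
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)

x1, _ = gmres(A, b1)                            # first right-hand side
V, _ = np.linalg.qr(np.column_stack([x1, b1]))  # stand-in for a recycled basis
x0 = projected_guess(A, V, b2)                  # preprocessing step
x2, info = gmres(A, b2, x0=x0)                  # warm-started second solve
```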

    Numerical Solution of Linear and Nonlinear Eigenvalue Problems

    Given a real parameter-dependent matrix, we obtain an algorithm for computing the value of the parameter, and the corresponding eigenvalue, at which two eigenvalues of the matrix coalesce to form a 2-dimensional Jordan block. Our algorithms are based on extended versions of the implicit determinant method of Spence and Poulton [55]. We treat both real and complex eigenvalues, which leads to systems of nonlinear equations solved by Newton's or the Gauss-Newton method. Our algorithms rely on good initial guesses, but when these are available we obtain quadratic convergence.

    Next, we describe two quadratically convergent algorithms for computing a nearby defective matrix that are cheaper than previously known ones. The first approach extends the implicit determinant method of [55] to find parameter values for which a certain Hermitian matrix is singular subject to a constraint; this amounts to solving a real system of three nonlinear equations by Newton's method. The second approach simply writes down all the nonlinear equations and solves the resulting real over-determined system by the Gauss-Newton method. We only consider the case where the nearest defective matrix is real.

    Finally, we consider the computation of an algebraically simple complex eigenpair of a nonsymmetric matrix where the eigenvector is normalised using the natural 2-norm, which produces only a single real normalising equation. This yields an under-determined system of nonlinear equations, which we solve by the Gauss-Newton method. We show how to obtain an equivalent square linear system of equations for computing the desired eigenpairs; this square system is exactly what would have been obtained had we ignored the non-uniqueness and non-differentiability of the normalisation.
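    The flavour of the eigenpair computation in the last paragraph can be sketched in the real case, where the 2-norm normalisation makes the system square and plain Newton applies; the complex case treated in the thesis is under-determined and uses Gauss-Newton instead. The step below solves the linearised system with a least-squares solve, which covers both situations; all names and parameters are illustrative.

```python
import numpy as np

def eigpair_newton(A, v0, lam0, tol=1e-12, maxit=50):
    """Newton-type iteration for a simple real eigenpair with the 2-norm
    normalisation (v^T v - 1)/2 = 0.  A generic sketch, not the thesis's
    exact algorithm; good initial guesses (v0, lam0) are assumed."""
    n = len(v0)
    v, lam = v0 / np.linalg.norm(v0), lam0
    for _ in range(maxit):
        F = np.concatenate([A @ v - lam * v, [(v @ v - 1.0) / 2.0]])
        if np.linalg.norm(F) < tol:
            break
        # Jacobian of (Av - lam v, (v^T v - 1)/2) w.r.t. (v, lam):
        J = np.block([[A - lam * np.eye(n), -v[:, None]],
                      [v[None, :],          np.zeros((1, 1))]])
        d = np.linalg.lstsq(J, -F, rcond=None)[0]   # (Gauss-)Newton step
        v, lam = v + d[:n], lam + d[n]
    return v, lam
```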

    Inner-outer Iterative Methods for Eigenvalue Problems - Convergence and Preconditioning

    Many methods for computing eigenvalues of a large sparse matrix involve shift-invert transformations, which require the solution of a shifted linear system at each step. This thesis deals with shift-invert iterative techniques for solving eigenvalue problems where the arising linear systems are themselves solved inexactly by a second, inner iterative technique, leading to an inner-outer type algorithm. We provide convergence results for the outer iterative eigenvalue computation as well as techniques for efficient inner solves. In particular, eigenvalue computations using inexact inverse iteration, the Jacobi-Davidson method without subspace expansion, and the shift-invert Arnoldi method as a subspace method are investigated in detail.

    A general convergence result for inexact inverse iteration for the non-Hermitian generalised eigenvalue problem is given, using only minimal assumptions. This convergence result is obtained in two different ways: on the one hand, we use an equivalence result between inexact inverse iteration applied to the generalised eigenproblem and a modified Newton's method; on the other hand, a splitting method is used which generalises the idea of orthogonal decomposition. Both approaches also include a convergence analysis of a version of the inexact Jacobi-Davidson method, where equivalences between Newton's method, inverse iteration and the Jacobi-Davidson method are exploited.

    To improve the efficiency of the inner iterative solves we introduce a new tuning strategy which can be applied to any standard preconditioner. We give a detailed analysis of this new preconditioning idea and show how the number of iterations of the inner iterative method, and hence the total number of iterations, can be reduced significantly by this tuning strategy. The analysis of the tuned preconditioner is carried out for both Hermitian and non-Hermitian eigenproblems. We show how the preconditioner can be implemented efficiently and illustrate its performance on various numerical examples. An equivalence result between the preconditioned simplified Jacobi-Davidson method and inexact inverse iteration with the tuned preconditioner is given.

    Finally, we discuss the shift-invert Arnoldi method, both in its standard and restarted forms. First, existing relaxation strategies for the outer iterative solves are extended to the implicitly restarted Arnoldi method. Second, we apply the idea of tuning the preconditioner to the inner iterative solve; as for inexact inverse iteration, the tuned preconditioner for the inexact Arnoldi method is shown to provide significant savings in the number of inner solves. The theory in this thesis is supported by many numerical examples.
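    The core loop of inexact inverse iteration, the first of the methods analysed, is short enough to sketch. This generic version uses an unpreconditioned inner GMRES with a deliberately loose tolerance and omits the thesis's tuned preconditioner; the shift `sigma`, the tolerances, and the function name are illustrative assumptions (the `rtol` keyword is SciPy >= 1.12; older versions call it `tol`).

```python
import numpy as np
from scipy.sparse.linalg import gmres

def inexact_inverse_iteration(A, sigma, x0, inner_tol=1e-2, maxit=30):
    """Inexact inverse iteration: each shifted solve (A - sigma I) y = x
    is performed only approximately by an inner GMRES run."""
    n = A.shape[0]
    x = x0 / np.linalg.norm(x0)
    for _ in range(maxit):
        # Inner solve done inexactly: the defining feature of the method.
        y, _ = gmres(A - sigma * np.eye(n), x, rtol=inner_tol)
        x = y / np.linalg.norm(y)
        lam = x @ (A @ x)                           # Rayleigh quotient estimate
        if np.linalg.norm(A @ x - lam * x) < 1e-10: # outer residual test
            break
    return lam, x
```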

    70 years of Krylov subspace methods: The journey continues

    Using computed examples for the Conjugate Gradient method and GMRES, we recall important building blocks in the understanding of Krylov subspace methods over the last 70 years. Each example consists of a description of the setup and the numerical observations, followed by an explanation of the observed phenomena, keeping technical details to a minimum. Our goal is to show the mathematical beauty and hidden intricacies of the methods, and to point out some persistent misunderstandings as well as important open problems. We hope that this work initiates further investigations of Krylov subspace methods, which are efficient computational tools and exciting mathematical objects that are far from being fully understood. (Comment: 38 pages.)
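    In the spirit of the survey's computed examples, here is a minimal reproducible experiment of the kind described: CG on the 1D Poisson matrix, recording the residual history. The specific matrix and parameters are my choice for illustration, not necessarily one of the paper's examples.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

residuals = []                      # observed residual norms per iteration
x, info = cg(A, b, callback=lambda xk: residuals.append(
    np.linalg.norm(b - A @ xk)))
print(info, len(residuals))         # info == 0 on convergence
# In exact arithmetic CG would terminate within n steps; the recorded
# history shows the finite-precision behaviour such examples dissect.
```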

    Structured Eigenvalue Problems

    Most eigenvalue problems arising in practice are known to be structured. Structure is often introduced by discretization and linearization techniques but may also be a consequence of properties induced by the original problem. Preserving this structure can help preserve physically relevant symmetries in the eigenvalues of the matrix and may improve the accuracy and efficiency of an eigenvalue computation. The purpose of this brief survey is to highlight these facts for some common matrix structures. This includes a treatment of rather general concepts such as structured condition numbers and backward errors, as well as an overview of algorithms and applications for several matrix classes including symmetric, skew-symmetric, persymmetric, block cyclic, Hamiltonian, symplectic and orthogonal matrices.
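    As a concrete instance of a spectral symmetry that structured algorithms aim to preserve: a real Hamiltonian matrix H = [[A, G], [Q, -A^T]] with G and Q symmetric has eigenvalues symmetric with respect to the imaginary axis, so lambda and -conj(lambda) occur together. The small demo below merely exhibits the symmetry with a general-purpose solver; a structure-preserving method would keep the pairing exact in floating point. The random test matrix is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
G = rng.standard_normal((n, n)); G = (G + G.T) / 2   # symmetric block
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2   # symmetric block
H = np.block([[A, G], [Q, -A.T]])                    # Hamiltonian structure

ev = np.linalg.eigvals(H)
print(np.sort_complex(ev))
print(np.sort_complex(-ev.conj()))   # same multiset, up to rounding
```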