
    Fast computation of the matrix exponential for a Toeplitz matrix

    The computation of the matrix exponential is a ubiquitous operation in numerical mathematics, and for a general, unstructured $n \times n$ matrix it can be computed in $\mathcal{O}(n^3)$ operations. An interesting problem arises if the input matrix is a Toeplitz matrix, for example as the result of discretizing integral equations with a time-invariant kernel. In this case it is not obvious how to take advantage of the Toeplitz structure, as the exponential of a Toeplitz matrix is, in general, not a Toeplitz matrix itself. The main contribution of this work is a family of fast algorithms for the computation of the Toeplitz matrix exponential. The algorithms have provable quadratic complexity if the spectrum is real, or sectorial, or, more generally, if the imaginary parts of the rightmost eigenvalues do not vary too much. They may be efficient even outside these spectral constraints. They are based on the scaling and squaring framework, and their analysis connects classical results from rational approximation theory to matrices of low displacement rank. As an example, the developed methods are applied to Merton's jump-diffusion model for option pricing.
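    The following is a minimal sketch of the problem setup only, not the paper's fast algorithm: it builds a (hypothetical, exponentially decaying) Toeplitz matrix, applies SciPy's dense scaling-and-squaring `expm` as the $\mathcal{O}(n^3)$ baseline, and checks numerically that the exponential is no longer Toeplitz.

    ```python
    # Dense O(n^3) baseline for the Toeplitz matrix exponential.
    # This is NOT the fast algorithm of the paper; it only illustrates
    # the setup: expm(T) of a Toeplitz T is in general not Toeplitz.
    import numpy as np
    from scipy.linalg import toeplitz, expm

    n = 200
    c = np.exp(-np.arange(n))        # first column of T (assumed example)
    r = np.exp(-2.0 * np.arange(n))  # first row of T (r[0] == c[0])
    T = toeplitz(c, r)

    E = expm(T)  # scaling and squaring, cubic cost for unstructured input

    # A Toeplitz matrix has constant diagonals; measure how much the
    # entries of the first superdiagonal of expm(T) actually vary.
    d = np.diagonal(E, offset=1)
    print("variation along first superdiagonal:", d.max() - d.min())
    ```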

    Fast computation of spectral projectors of banded matrices

    We consider the approximate computation of spectral projectors for symmetric banded matrices. While this problem has received considerable attention, especially in the context of linear scaling electronic structure methods, the presence of small relative spectral gaps challenges existing methods based on approximate sparsity. In this work, we show how a data-sparse approximation based on hierarchical matrices can be used to overcome this problem. We prove a priori bounds on the approximation error and propose a fast algorithm based on the QDWH algorithm, following works by Nakatsukasa et al. Numerical experiments demonstrate that the performance of our algorithm is robust with respect to the spectral gap. A preliminary Matlab implementation becomes faster than eig already for matrix sizes of a few thousand.
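    As a point of reference, here is the dense eig-type baseline the abstract compares against, on a hypothetical random symmetric banded test matrix: the spectral projector onto the invariant subspace belonging to eigenvalues below a splitting point mu. The hierarchical QDWH-based algorithm of the paper avoids this cubic eigendecomposition.

    ```python
    # Dense reference computation of a spectral projector of a symmetric
    # banded matrix A: the orthogonal projector onto the invariant
    # subspace for eigenvalues below mu.  This is the O(n^3) baseline,
    # not the paper's hierarchical QDWH-based algorithm.
    import numpy as np

    rng = np.random.default_rng(0)
    n, bandwidth = 500, 2
    A = np.zeros((n, n))                 # random symmetric banded matrix
    for k in range(bandwidth + 1):
        v = rng.standard_normal(n - k)
        A += np.diag(v, k) + (np.diag(v, -k) if k else 0)

    mu = 0.0                             # splitting point inside the spectrum
    w, V = np.linalg.eigh(A)
    Vneg = V[:, w < mu]
    P = Vneg @ Vneg.T                    # spectral projector

    print("idempotency error:", np.linalg.norm(P @ P - P))
    ```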

    A block Newton method for nonlinear eigenvalue problems

    We consider matrix eigenvalue problems that are nonlinear in the eigenvalue parameter. One of the most fundamental differences from the linear case is that distinct eigenvalues may have linearly dependent eigenvectors or even share the same eigenvector. This has been a severe hindrance in the development of general numerical schemes for computing several eigenvalues of a nonlinear eigenvalue problem, either simultaneously or subsequently. The purpose of this work is to show that the concept of invariant pairs offers a way of representing eigenvalues and eigenvectors that is insensitive to this phenomenon. To demonstrate the use of this concept in the development of numerical methods, we have developed a novel block Newton method for computing such invariant pairs. Algorithmic aspects of this method are considered and a few academic examples demonstrate its viability.
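    For orientation, the sketch below shows the classical single-vector Newton iteration for one eigenpair of a nonlinear eigenvalue problem T(lam) x = 0, applied to a hypothetical quadratic example; the paper's block method generalizes this to invariant pairs, i.e., several eigenpairs at once.

    ```python
    # Minimal single-vector Newton sketch for a nonlinear eigenvalue
    # problem T(lam) x = 0, via the augmented system
    # F(x, lam) = [T(lam) x; c^T x - 1] = 0.  Convergence is only local,
    # and this is a simplified illustration, not the block method itself.
    import numpy as np

    n = 50
    rng = np.random.default_rng(1)
    A = rng.standard_normal((n, n)); B = rng.standard_normal((n, n))

    def T(lam):   # example NEP: a quadratic eigenvalue problem
        return lam**2 * np.eye(n) + lam * A + B

    def dT(lam):  # derivative of T with respect to lam
        return 2 * lam * np.eye(n) + A

    c = np.ones(n) / n                       # normalization vector
    x = rng.standard_normal(n); x /= c @ x   # normalized start vector
    lam = 0.1 + 0.1j                         # complex initial guess

    for it in range(30):
        r = T(lam) @ x
        if np.linalg.norm(r) < 1e-12:
            break
        # Jacobian of the augmented system F(x, lam)
        J = np.block([[T(lam), (dT(lam) @ x)[:, None]],
                      [c[None, :], np.zeros((1, 1))]])
        delta = np.linalg.solve(J, np.concatenate([-r, [0.0]]))
        x, lam = x + delta[:n], lam + delta[n]

    print(it, lam, np.linalg.norm(T(lam) @ x))
    ```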

    A literature survey of low-rank tensor approximation techniques

    In recent years, low-rank tensor approximation has been established as a new tool in scientific computing to address large-scale linear and multilinear algebra problems which would be intractable by classical techniques. This survey attempts to give a literature overview of current developments in this area, with an emphasis on function-related tensors.
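    To make the notion concrete, here is a small sketch of one such technique, the truncated higher-order SVD (Tucker approximation), applied to a function-related tensor; the function f(x,y,z) = 1/(x+y+z) is an assumed example, chosen because samples of smooth functions typically have rapidly decaying multilinear ranks.

    ```python
    # Truncated higher-order SVD (HOSVD): a Tucker approximation of a
    # third-order tensor of samples of f(x,y,z) = 1/(x+y+z).
    import numpy as np

    n, r = 40, 5
    grid = np.linspace(1.0, 2.0, n)
    X = 1.0 / (grid[:, None, None] + grid[None, :, None] + grid[None, None, :])

    # Dominant left singular vectors of each mode-k unfolding.
    factors = []
    for k in range(3):
        unfold = np.moveaxis(X, k, 0).reshape(n, -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :r])

    # Core tensor and low-multilinear-rank reconstruction.
    G = np.einsum('ijk,ia,jb,kc->abc', X, *factors)
    Xr = np.einsum('abc,ia,jb,kc->ijk', G, *factors)

    print("relative error:", np.linalg.norm(Xr - X) / np.linalg.norm(X))
    ```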

    Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure

    The numerical solution of partial differential equations on high-dimensional domains gives rise to computationally challenging linear systems. When using standard discretization techniques, the size of the linear system grows exponentially with the number of dimensions, making the use of classic iterative solvers infeasible. During the last few years, low-rank tensor approaches have been developed that make it possible to mitigate this curse of dimensionality by exploiting the underlying structure of the linear operator. In this work, we focus on tensors represented in the Tucker and tensor train formats. We propose two preconditioned gradient methods on the corresponding low-rank tensor manifolds: a Riemannian version of the preconditioned Richardson method as well as an approximate Newton scheme based on the Riemannian Hessian. For the latter, considerable attention is given to the efficient solution of the resulting Newton equation. In numerical experiments, we compare the efficiency of our Riemannian algorithms with other established tensor-based approaches such as a truncated preconditioned Richardson method and the alternating linear scheme. The results show that our approximate Riemannian Newton scheme is significantly faster in cases when the application of the linear operator is expensive.
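    A two-dimensional analogue conveys the truncated Richardson baseline mentioned above: solve a Lyapunov-type system A X + X A^T = C while repeatedly truncating the iterate back to low rank via the SVD. The shifted 1D Laplacian and the rank-1 right-hand side are assumed test data, the preconditioner is omitted for simplicity, and the paper's actual methods operate on Tucker/tensor-train manifolds in higher dimensions.

    ```python
    # Truncated Richardson iteration, 2D (matrix) sketch: keep the
    # iterate X of A X + X A^T = C in low-rank SVD format.
    import numpy as np

    n, rank = 200, 15
    # Shifted 1D Laplacian: symmetric positive definite, spectrum in (1, 5).
    A = 3 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    C = np.outer(np.ones(n), np.linspace(0, 1, n))  # rank-1 right-hand side

    def truncate(X, rank):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U[:, :rank] * s[:rank]) @ Vt[:rank]

    # Step size from spectral bounds of the Lyapunov operator
    # X -> A X + X A^T, whose eigenvalues lie in [2, 2*norm(A)].
    omega = 2.0 / (2.0 + 2.0 * np.linalg.norm(A, 2))

    X = np.zeros((n, n))
    for it in range(200):
        R = C - (A @ X + X @ A.T)          # residual
        X = truncate(X + omega * R, rank)  # Richardson step + truncation

    print("residual norm:", np.linalg.norm(C - (A @ X + X @ A.T)))
    ```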

    Low-rank updates and a divide-and-conquer method for linear matrix equations

    Linear matrix equations, such as the Sylvester and Lyapunov equations, play an important role in various applications, including the stability analysis and dimensionality reduction of linear dynamical control systems and the solution of partial differential equations. In this work, we present and analyze a new algorithm, based on tensorized Krylov subspaces, for quickly updating the solution of such a matrix equation when its coefficients undergo low-rank changes. We demonstrate how our algorithm can be utilized to accelerate the Newton method for solving continuous-time algebraic Riccati equations. Our algorithm also forms the basis of a new divide-and-conquer approach for linear matrix equations with coefficients that feature hierarchical low-rank structure, such as HODLR, HSS, and banded matrices. Numerical experiments demonstrate the advantages of divide-and-conquer over existing approaches, in terms of computational time and memory consumption.
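    The structural fact behind such low-rank updates can be verified in a few lines: if X0 solves A X + X B = C and A is perturbed by a rank-1 term u v^T, the correction D = X - X0 satisfies (A + u v^T) D + D B = -u (v^T X0), a Sylvester equation whose right-hand side has rank 1. The sketch below checks this identity with SciPy's dense solver on assumed random data; the paper instead solves such correction equations efficiently with tensorized Krylov subspaces.

    ```python
    # Verify the low-rank update identity for a Sylvester equation
    # A X + X B = C under a rank-1 change of A, using the dense solver.
    import numpy as np
    from scipy.linalg import solve_sylvester

    rng = np.random.default_rng(2)
    n = 100
    A = rng.standard_normal((n, n)) + 3 * n * np.eye(n)  # separate spectra
    B = rng.standard_normal((n, n))
    C = rng.standard_normal((n, n))
    u = rng.standard_normal((n, 1)); v = rng.standard_normal((n, 1))

    X0 = solve_sylvester(A, B, C)                          # original solution
    D = solve_sylvester(A + u @ v.T, B, -u @ (v.T @ X0))   # rank-1 RHS
    X = solve_sylvester(A + u @ v.T, B, C)                 # from scratch

    print("update error:", np.linalg.norm((X0 + D) - X) / np.linalg.norm(X))
    ```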