    Low-rank approximate inverse for preconditioning tensor-structured linear systems

    In this paper, we propose an algorithm for constructing low-rank approximations of the inverse of an operator given in low-rank tensor format. The construction relies on an updated greedy algorithm for the minimization of a suitable distance to the inverse operator. It provides a sequence of approximations that are defined as the projections of the inverse operator onto an increasing sequence of linear subspaces of operators. These subspaces are obtained by the tensorization of bases of operators that are constructed from successive rank-one corrections. In order to handle high-order tensors, approximate projections are computed in low-rank Hierarchical Tucker subsets of the successive subspaces of operators. Desired properties such as symmetry or sparsity can be imposed on the approximate inverse operator during the correction step, where an optimal rank-one correction is sought as the tensor product of operators with the desired properties. Numerical examples illustrate the ability of this algorithm to provide efficient preconditioners for linear systems in tensor format, improving both the convergence of iterative solvers and the quality of the resulting low-rank approximations of the solution.
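
    The paper's algorithm works greedily in Hierarchical Tucker format without ever forming the inverse. As a much smaller illustration of the underlying object, the sketch below (Python/NumPy; the Laplacian test case, the sizes, and the explicit inversion are assumptions made only for the demo) computes the best rank-one Kronecker approximation P ≈ P1 ⊗ P2 of an explicitly formed inverse via the Van Loan-Pitsianis rearrangement and compares the conditioning of the original and preconditioned operators.

        import numpy as np

        def rearrange(M, n1, n2):
            # Van Loan-Pitsianis rearrangement: row i*n1 + j holds vec of block (i, j),
            # so that ||M - kron(B, C)||_F = ||rearrange(M) - vec(B) vec(C)^T||_F.
            R = np.empty((n1 * n1, n2 * n2))
            for i in range(n1):
                for j in range(n1):
                    blk = M[i * n2:(i + 1) * n2, j * n2:(j + 1) * n2]
                    R[i * n1 + j, :] = blk.reshape(-1, order="F")
            return R

        n = 8                                                  # tiny, so the inverse fits in memory
        T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
        I = np.eye(n)
        A = np.kron(T, I) + np.kron(I, T)                      # 2D Laplacian as a Kronecker sum

        Ainv = np.linalg.inv(A)                                # for illustration only
        U, s, Vt = np.linalg.svd(rearrange(Ainv, n, n))
        P1 = U[:, 0].reshape(n, n)                             # row-major reshape matches row index i*n + j
        P2 = (s[0] * Vt[0, :]).reshape(n, n, order="F")
        P = np.kron(P1, P2)                                    # rank-one Kronecker approximate inverse

        print("cond(A)   =", np.linalg.cond(A))
        print("cond(P A) =", np.linalg.cond(P @ A))            # conditioning with the approximate inverse applied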

    Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure

    The numerical solution of partial differential equations on high-dimensional domains gives rise to computationally challenging linear systems. When standard discretization techniques are used, the size of the linear system grows exponentially with the number of dimensions, making classic iterative solvers infeasible. During the last few years, low-rank tensor approaches have been developed that mitigate this curse of dimensionality by exploiting the underlying structure of the linear operator. In this work, we focus on tensors represented in the Tucker and tensor train formats. We propose two preconditioned gradient methods on the corresponding low-rank tensor manifolds: a Riemannian version of the preconditioned Richardson method and an approximate Newton scheme based on the Riemannian Hessian. For the latter, considerable attention is given to the efficient solution of the resulting Newton equation. In numerical experiments, we compare the efficiency of our Riemannian algorithms with other established tensor-based approaches, such as a truncated preconditioned Richardson method and the alternating linear scheme. The results show that our approximate Riemannian Newton scheme is significantly faster in cases where the application of the linear operator is expensive.
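
    As a point of reference for the comparison mentioned above, the sketch below (Python/NumPy) implements the truncated Richardson baseline, not the Riemannian methods, for the d = 2 model problem A X + X A^T = B with a rank-one right-hand side: one gradient step followed by truncation back to rank r via the SVD. The shifted Laplacian, the rank, and the step size are illustrative assumptions; the preconditioner is omitted, so the operator is kept well conditioned by a shift instead.

        import numpy as np

        def truncate(X, r):
            # rank-r truncation via the SVD: the projection back to the low-rank set
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return (U[:, :r] * s[:r]) @ Vt[:r, :]

        n, r = 100, 10
        T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1D Laplacian
        A = 2 * np.eye(n) + T                                   # shifted so the plain iteration converges
        b = np.random.default_rng(0).standard_normal((n, 1))
        B = b @ b.T                                             # rank-one right-hand side

        L = lambda X: A @ X + X @ A.T                           # Lyapunov-type operator
        ev = np.linalg.eigvalsh(A)
        omega = 2.0 / (2 * ev.min() + 2 * ev.max())             # classical Richardson step for L

        X = np.zeros((n, n))
        for _ in range(100):
            X = truncate(X + omega * (B - L(X)), r)             # gradient step, then rank truncation
        print("relative residual:", np.linalg.norm(B - L(X)) / np.linalg.norm(B))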

    Low-rank updates and a divide-and-conquer method for linear matrix equations

    Linear matrix equations, such as the Sylvester and Lyapunov equations, play an important role in various applications, including the stability analysis and dimensionality reduction of linear dynamical control systems and the solution of partial differential equations. In this work, we present and analyze a new algorithm, based on tensorized Krylov subspaces, for quickly updating the solution of such a matrix equation when its coefficients undergo low-rank changes. We demonstrate how our algorithm can be utilized to accelerate the Newton method for solving continuous-time algebraic Riccati equations. Our algorithm also forms the basis of a new divide-and-conquer approach for linear matrix equations with coefficients that feature hierarchical low-rank structure, such as HODLR, HSS, and banded matrices. Numerical experiments demonstrate the advantages of divide-and-conquer over existing approaches in terms of computational time and memory consumption.
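
    A minimal sketch of the identity behind such low-rank updates is given below (Python/NumPy + SciPy; the sizes, the rank-2 perturbation, and the dense reference solves are assumptions): if X0 solves A X0 + X0 B = C and A is perturbed by U V^T, the correction solves a Sylvester equation whose right-hand side -U (V^T X0) is itself low rank, which is what makes tensorized Krylov solvers effective for the update.

        import numpy as np
        from scipy.linalg import solve_sylvester

        rng = np.random.default_rng(1)
        n, m, p = 60, 40, 2
        A = np.diag(np.arange(1.0, n + 1)) + 0.1 * rng.standard_normal((n, n))
        B = np.diag(np.arange(1.0, m + 1)) + 0.1 * rng.standard_normal((m, m))
        C = rng.standard_normal((n, m))
        U = 0.1 * rng.standard_normal((n, p))                   # low-rank change of the coefficient A
        V = 0.1 * rng.standard_normal((n, p))

        X0 = solve_sylvester(A, B, C)                           # solution before the update
        dX = solve_sylvester(A + U @ V.T, B, -U @ (V.T @ X0))   # correction: low-rank right-hand side
        X  = solve_sylvester(A + U @ V.T, B, C)                 # reference: updated equation solved directly

        print("update error:", np.linalg.norm(X0 + dX - X) / np.linalg.norm(X))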

    On the Convergence of Krylov Methods with Low-Rank Truncations

    Rational Krylov for Stieltjes matrix functions: convergence and pole selection

    Evaluating the action of a matrix function on a vector, that is, x = f(M)v, is a ubiquitous task in applications. When M is large, one usually relies on Krylov projection methods. In this paper, we provide effective choices for the poles of the rational Krylov method for approximating x when f(z) is either Cauchy-Stieltjes or Laplace-Stieltjes (or, equivalently, completely monotonic) and M is a positive definite matrix. Relying on the same tools used to analyze the generic situation, we then focus on the case M = I ⊗ A − B^T ⊗ I with v obtained by vectorizing a low-rank matrix; this finds application, for instance, in solving fractional diffusion equations on two-dimensional tensor grids. We show how to leverage tensorized Krylov subspaces to exploit the Kronecker structure, and we introduce an error analysis for the numerical approximation of x. Pole selection strategies with explicit convergence bounds are also given in this case.
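
    The Kronecker structure exploited above rests on the vec identity (I ⊗ A − B^T ⊗ I) vec(X) = vec(AX − XB) for column-major vec, so the operator can be applied at the matrix level without ever forming M. The short check below (Python/NumPy; sizes and random data are assumptions) verifies it numerically.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 7, 5
        A = rng.standard_normal((n, n))
        B = rng.standard_normal((m, m))
        X = rng.standard_normal((n, m))

        M = np.kron(np.eye(m), A) - np.kron(B.T, np.eye(n))     # explicit Kronecker form (small sizes only)
        lhs = M @ X.flatten(order="F")                          # apply M to vec(X)
        rhs = (A @ X - X @ B).flatten(order="F")                # matrix-level evaluation, no Kronecker products
        print("identity error:", np.linalg.norm(lhs - rhs))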

    Krylov subspace methods for linear systems with tensor product structure

    The numerical solution of linear systems with certain tensor product structures is considered. Such structures arise, for example, from the finite element discretization of a linear PDE on a d-dimensional hypercube. Linear systems with tensor product structure can be regarded as linear matrix equations for d = 2 and appear to be their most natural extension for d > 2. A standard Krylov subspace method applied to such a linear system suffers from the curse of dimensionality and has a computational cost that grows exponentially with d. The key to breaking the curse is to note that the solution can often be very well approximated by a vector of low tensor rank. We propose and analyze a new class of methods, so-called tensor Krylov subspace methods, which exploit this fact and attain a computational cost that grows linearly with d.
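
    The two observations in the abstract, exponential growth of the system size and low tensor rank of the solution, can be seen on a toy problem. The sketch below (Python/NumPy; d, n, the Laplacian coefficients, and the rank-one right-hand side are assumptions, and the dense solve is feasible only because the sizes are tiny) builds the Kronecker-sum operator for d = 3, solves it directly, and prints the singular values of an unfolding of the solution tensor, whose decay is what tensor Krylov subspace methods exploit.

        import numpy as np

        n, d = 10, 3
        T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)     # 1D Laplacian
        I = np.eye(n)

        def kron_all(mats):
            out = mats[0]
            for M in mats[1:]:
                out = np.kron(out, M)
            return out

        # A = sum over t of I (x) ... (x) T (x) ... (x) I, of size n^d = 1000 here
        A = sum(kron_all([T if t == k else I for t in range(d)]) for k in range(d))

        f = np.ones(n)
        b = kron_all([f.reshape(-1, 1)] * d).ravel()             # vectorized rank-one right-hand side
        x = np.linalg.solve(A, b)

        Xt = x.reshape([n] * d)                                  # view the solution as an n x n x n tensor
        sv = np.linalg.svd(Xt.reshape(n, n ** (d - 1)), compute_uv=False)
        print("normalized singular values of an unfolding:", sv / sv[0])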

    An optimality property of an approximated solution computed by the Hessenberg method

    We revisit the implementation of the Krylov subspace method based on the Hessenberg process for general linear operator equations. It is established that, at each step, the computed approximate solution is the minimizer of a certain norm of the corresponding residual. Numerical tests on tensor equations with a cosine transform product, arising from image restoration, compare the performance of Krylov subspace methods based on the Hessenberg and Arnoldi processes in conjunction with the Tikhonov regularization technique.
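
    A minimal dense sketch of a CMRH-type solver built on the pivoted Hessenberg process is given below (Python/NumPy; the tridiagonal test matrix and the number of steps are assumptions, and the tensor/cosine-transform setting of the paper is not reproduced). It makes the optimality property concrete: the iterate x_m = L_m y minimizes the quasi-residual norm ||beta e1 - H_m y||, i.e. a residual norm induced by the non-orthogonal Hessenberg basis.

        import numpy as np

        def cmrh(A, b, m):
            # Pivoted Hessenberg process: builds a unit "lower triangular" basis L and
            # an upper Hessenberg H with A L[:, :m] = L[:, :m+1] H, then minimizes the
            # quasi-residual, as in CMRH.
            n = len(b)
            L = np.zeros((n, m + 1))
            H = np.zeros((m + 1, m))
            piv = np.zeros(m + 1, dtype=int)
            piv[0] = np.argmax(np.abs(b))
            beta = b[piv[0]]
            L[:, 0] = b / beta
            for k in range(m):
                u = A @ L[:, k]
                for j in range(k + 1):
                    H[j, k] = u[piv[j]]                  # eliminate the pivot component with L[:, j]
                    u -= H[j, k] * L[:, j]
                piv[k + 1] = np.argmax(np.abs(u))
                H[k + 1, k] = u[piv[k + 1]]
                L[:, k + 1] = u / H[k + 1, k]            # breaks down only if u vanishes
            rhs = np.zeros(m + 1); rhs[0] = beta
            y, *_ = np.linalg.lstsq(H, rhs, rcond=None)  # minimize || beta*e1 - H y ||_2
            return L[:, :m] @ y

        n = 200
        A = 2.05 * np.eye(n) - 0.7 * np.eye(n, k=1) - 1.3 * np.eye(n, k=-1)   # nonsymmetric, diagonally dominant
        xtrue = np.random.default_rng(0).standard_normal(n)
        b = A @ xtrue
        x = cmrh(A, b, m=40)
        print("relative error:", np.linalg.norm(x - xtrue) / np.linalg.norm(xtrue))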

    A nested divide-and-conquer method for tensor Sylvester equations with positive definite hierarchically semiseparable coefficients

    Linear systems with a tensor product structure arise naturally when considering the discretization of Laplace-type differential equations or, more generally, multidimensional operators with separable coefficients. In this work, we focus on the numerical solution of linear systems of the form (I ⊗ ⋯ ⊗ I ⊗ A_1 + ⋯ + A_d ⊗ I ⊗ ⋯ ⊗ I) x = b, where the matrices A_t ∈ R^{n×n} are symmetric positive definite and belong to the class of hierarchically semiseparable matrices. We propose and analyze a nested divide-and-conquer scheme, based on the technology of low-rank updates, that attains the quasi-optimal computational cost O(n^d (log(n) + log(κ)^2 + log(κ) log(1/ϔ))), where κ is the condition number of the linear system and ϔ is the target accuracy. Our theoretical analysis highlights the role of inexactness in the nested calls of our algorithm and provides worst-case estimates for the amplification of the residual norm. The performance is validated on 2D and 3D case studies.
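
    The "divide" step of such schemes rests on the fact that a banded or HSS coefficient splits into a block-diagonal part plus a low-rank correction, so the tensor Sylvester operator decouples into independent subproblems up to a low-rank update handled by a correction equation. The toy check below (Python/NumPy; the tridiagonal matrix, a trivial HSS case, and its size are assumptions) exhibits this splitting explicitly.

        import numpy as np

        n = 8
        T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # tridiagonal coefficient A_t
        h = n // 2

        D = T.copy()
        D[:h, h:] = 0.0                                         # drop the off-diagonal coupling blocks:
        D[h:, :h] = 0.0                                         # D = blkdiag(T11, T22)

        U = np.zeros((n, 2)); V = np.zeros((n, 2))
        U[h - 1, 0] = 1.0; V[h, 0] = -1.0                       # restores the entry T[h-1, h] = -1
        U[h, 1] = 1.0;     V[h - 1, 1] = -1.0                   # restores the entry T[h, h-1] = -1

        print("splitting error:", np.linalg.norm(T - (D + U @ V.T)))   # exactly zero: T = blkdiag + rank-2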
    • 

    corecore