
    A semi-Lagrangian Vlasov solver in tensor train format

    In this article, we derive a semi-Lagrangian scheme for the solution of the Vlasov equation represented as a low-parametric tensor. Grid-based methods for the Vlasov equation have been shown to give accurate results, but their use has mostly been limited to simulations in two-dimensional phase space due to extensive memory requirements in higher dimensions. Compression of the solution via high-order singular value decomposition can help reduce the storage requirements, and the tensor train (TT) format provides efficient basic linear algebra routines for low-rank representations of tensors. In this paper, we develop interpolation formulas for a semi-Lagrangian solver in TT format. In order to implement the method efficiently, we propose a compression of the matrix representing the interpolation step and an efficient implementation of the Hadamard product. We show numerical simulations for standard test cases in two-, four-, and six-dimensional phase space. Depending on the test case, the memory requirements reduce by a factor of $10^2$–$10^3$ in four dimensions and a factor of $10^5$–$10^6$ in six dimensions compared to the full-grid method.
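    To make the compression mechanism concrete, the sketch below shows the standard TT-SVD algorithm underlying the TT format: a full tensor is factored into a chain of order-three cores by successive truncated SVDs. This is a generic Python/NumPy illustration, not the authors' solver; the function name, tolerance rule, and test function are our own choices.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Compress a full tensor into tensor train (TT) cores by successive
    truncated SVDs; singular values below eps * s[0] are discarded."""
    dims = tensor.shape
    cores, rank = [], 1
    unfolding = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(unfolding, full_matrices=False)
        r_new = max(1, int(np.sum(s > eps * s[0])))   # truncation rank
        cores.append(u[:, :r_new].reshape(rank, dims[k], r_new))
        unfolding = (s[:r_new, None] * vt[:r_new]).reshape(r_new * dims[k + 1], -1)
        rank = r_new
    cores.append(unfolding.reshape(rank, dims[-1], 1))
    return cores

# A separable function on a 32^4 grid compresses to TT ranks of 1:
x = np.linspace(0, 1, 32)
f = np.einsum('i,j,k,l->ijkl', np.sin(x), np.cos(x), np.exp(-x), x + 1)
cores = tt_svd(f)
print([core.shape for core in cores])   # storage: sum of core sizes
```

    With TT ranks $r_k$, storage drops from $n^d$ full-grid entries to roughly $\sum_k r_{k-1} n r_k$, which is the source of the memory savings reported above.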

    Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure

    The numerical solution of partial differential equations on high-dimensional domains gives rise to computationally challenging linear systems. When standard discretization techniques are used, the size of the linear system grows exponentially with the number of dimensions, making the use of classic iterative solvers infeasible. During the last few years, low-rank tensor approaches have been developed that mitigate this curse of dimensionality by exploiting the underlying structure of the linear operator. In this work, we focus on tensors represented in the Tucker and tensor train formats. We propose two preconditioned gradient methods on the corresponding low-rank tensor manifolds: a Riemannian version of the preconditioned Richardson method as well as an approximate Newton scheme based on the Riemannian Hessian. For the latter, considerable attention is given to the efficient solution of the resulting Newton equation. In numerical experiments, we compare the efficiency of our Riemannian algorithms with other established tensor-based approaches such as a truncated preconditioned Richardson method and the alternating linear scheme. The results show that our approximate Riemannian Newton scheme is significantly faster in cases where the application of the linear operator is expensive.
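    As a point of reference for the baselines mentioned above, here is a minimal matrix-case (two-dimensional) sketch of a truncated preconditioned Richardson iteration, where low-rank truncation is a plain SVD projection. The operator, Jacobi-style preconditioner, and parameter choices are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def truncate(X, r):
    """Project a matrix onto rank r via truncated SVD (the matrix analogue
    of low-rank tensor truncation)."""
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    return u[:, :r] * s[:r] @ vt[:r]

def truncated_richardson(apply_A, apply_Pinv, B, r, omega=1.0, iters=100):
    """Truncated preconditioned Richardson:
    X <- truncate(X + omega * P^{-1}(B - A(X)), r)."""
    X = np.zeros_like(B)
    for _ in range(iters):
        X = truncate(X + omega * apply_Pinv(B - apply_A(X)), r)
    return X

# Kronecker-structured example: A1 X + X A1^T = B with a shifted 1D Laplacian
# (diagonally dominant, so the Jacobi-preconditioned iteration converges)
n = 64
A1 = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = np.outer(np.ones(n), np.ones(n))              # rank-one right-hand side
apply_A = lambda X: A1 @ X + X @ A1.T
d = np.add.outer(np.diag(A1), np.diag(A1))        # diagonal of the full operator
apply_Pinv = lambda R: R / d                      # Jacobi preconditioner
X = truncated_richardson(apply_A, apply_Pinv, B, r=10)
print(np.linalg.norm(apply_A(X) - B) / np.linalg.norm(B))
```

    The Riemannian variants proposed in the paper instead work in the tangent space of a fixed-rank manifold and retract, rather than truncating after each full step.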

    Low-rank approximate inverse for preconditioning tensor-structured linear systems

    In this paper, we propose an algorithm for the construction of low-rank approximations of the inverse of an operator given in low-rank tensor format. The construction relies on an updated greedy algorithm for the minimization of a suitable distance to the inverse operator. It provides a sequence of approximations defined as the projections of the inverse operator onto an increasing sequence of linear subspaces of operators. These subspaces are obtained by the tensorization of bases of operators constructed from successive rank-one corrections. In order to handle high-order tensors, approximate projections are computed in low-rank Hierarchical Tucker subsets of the successive subspaces of operators. Desired properties such as symmetry or sparsity can be imposed on the approximate inverse operator during the correction step, where an optimal rank-one correction is sought as the tensor product of operators with the desired properties. Numerical examples illustrate the ability of this algorithm to provide efficient preconditioners for linear systems in tensor format, which improve the convergence of iterative solvers and also the quality of the resulting low-rank approximations of the solution.
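    In the matrix (order-two) case, the greedy construction can be pictured as follows: each new rank-one term u v^T is chosen to reduce ||I - (P + u v^T) A||_F, here by a few alternating least-squares sweeps. This is our own simplified dense-linear-algebra sketch; the paper works in low-rank tensor formats with approximate projections instead.

```python
import numpy as np

def greedy_approx_inverse(A, n_terms=15, als_sweeps=5, seed=None):
    """Build P ~ A^{-1} as a sum of rank-one corrections. Each term u v^T
    minimizes ||I - (P + u v^T) A||_F by alternating least squares."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    P = np.zeros_like(A)
    AAt = A @ A.T
    for _ in range(n_terms):
        E = np.eye(n) - P @ A              # residual of the current approximation
        u, v = rng.standard_normal(n), rng.standard_normal(n)
        for _ in range(als_sweeps):
            w = A.T @ v                     # (u v^T) A = u (A^T v)^T
            u = E @ w / (w @ w)             # optimal u for fixed v
            # optimal v for fixed u solves (A A^T) v = A E^T u / (u^T u)
            v = np.linalg.solve(AAt, A @ (E.T @ u)) / (u @ u)
        P += np.outer(u, v)
    return P

rng = np.random.default_rng(0)
A = np.eye(50) + 0.1 * rng.standard_normal((50, 50))
P = greedy_approx_inverse(A, seed=1)
print(np.linalg.norm(np.eye(50) - P @ A, 'fro'))  # shrinks as terms are added
```

    Imposing symmetry or sparsity, as described in the abstract, would amount to constraining the factors of each correction during the alternating step.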

    Towards tensor-based methods for the numerical approximation of the Perron-Frobenius and Koopman operator

    The global behavior of dynamical systems can be studied by analyzing the eigenvalues and corresponding eigenfunctions of linear operators associated with the system. Two important operators which are frequently used to gain insight into the system's behavior are the Perron-Frobenius operator and the Koopman operator. Due to the curse of dimensionality, computing the eigenfunctions of high-dimensional systems is in general infeasible. We propose a tensor-based reformulation of two numerical methods for computing finite-dimensional approximations of the aforementioned infinite-dimensional operators, namely Ulam's method and Extended Dynamic Mode Decomposition (EDMD). The aim of the tensor formulation is to approximate the eigenfunctions by low-rank tensors, potentially resulting in a significant reduction of the time and memory required to solve the resulting eigenvalue problems, provided that such a low-rank tensor decomposition exists. Typically, not all variables of a high-dimensional dynamical system contribute equally to its behavior; often the dynamics can be decomposed into slow and fast processes, which is also reflected in the eigenfunctions. Thus, the weak coupling between different variables might be captured by low-rank tensor cores. We illustrate the efficiency of the tensor-based formulation of Ulam's method and EDMD using simple stochastic differential equations.
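    For orientation, here is a minimal sketch of standard (full-matrix) EDMD on a one-dimensional toy system; the tensor-based reformulation in the paper replaces the dictionary matrices with low-rank tensor representations, which this sketch does not attempt. The dynamics, dictionary, and names are illustrative choices.

```python
import numpy as np

def edmd(X, Y, psi):
    """Standard EDMD: given snapshot pairs (x_k, y_k) with y_k = F(x_k) and a
    dictionary psi, solve Psi(X) K ~ Psi(Y) in the least-squares sense and
    return the eigenvalues/eigenvectors of the Koopman matrix K."""
    PsiX, PsiY = psi(X), psi(Y)             # rows are snapshots
    K = np.linalg.lstsq(PsiX, PsiY, rcond=None)[0]
    return np.linalg.eig(K)

# Toy example: a noisy linear map y = 0.8 x + noise, monomial dictionary
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 2000)
y = 0.8 * x + 0.05 * rng.standard_normal(2000)
psi = lambda z: np.column_stack([z**k for k in range(5)])
eigvals, _ = edmd(x, y, psi)
# for this map the Koopman eigenvalues are approximately 0.8^k, k = 0..4
print(np.sort(np.abs(eigvals))[::-1])
```

    Ulam's method fits the same template with indicator functions of boxes as the dictionary, which is why both methods admit the same tensor-based treatment.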