A semi-Lagrangian Vlasov solver in tensor train format
In this article, we derive a semi-Lagrangian scheme for the solution of the
Vlasov equation represented as a low-parametric tensor. Grid-based methods for
the Vlasov equation have been shown to give accurate results but their use has
mostly been limited to simulations in two-dimensional phase space due to
extensive memory requirements in higher dimensions. Compression of the solution
via high-order singular value decomposition can help in reducing the storage
requirements and the tensor train (TT) format provides efficient basic linear
algebra routines for low-rank representations of tensors. In this paper, we
develop interpolation formulas for a semi-Lagrangian solver in TT format. In
order to efficiently implement the method, we propose a compression of the
matrix representing the interpolation step and an efficient implementation of
the Hadamard product. We show numerical simulations for standard test cases in
two-, four- and six-dimensional phase space. Depending on the test case, the
memory requirements are reduced by a factor in four and a factor
in six dimensions compared to the full-grid method.
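The two TT building blocks the abstract relies on, compression via sequential SVDs and an elementwise (Hadamard) product computed core by core, can be sketched in a few lines of numpy. This is a minimal illustration of the standard TT-SVD and TT Hadamard constructions, not the paper's implementation; all function names are ours.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Compress a full tensor into tensor train (TT) cores via sequential
    truncated SVDs; singular values below eps * s_max are discarded."""
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        new_rank = max(1, int(np.sum(s > eps * s[0])))
        # core k has shape (r_k, n_k, r_{k+1})
        cores.append(u[:, :new_rank].reshape(rank, dims[k], new_rank))
        mat = (s[:new_rank, None] * vt[:new_rank]).reshape(
            new_rank * dims[k + 1], -1)
        rank = new_rank
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full tensor."""
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([-1], [0]))
    return res.squeeze(axis=(0, -1))

def tt_hadamard(cores_a, cores_b):
    """Naive Hadamard product in TT format: the product cores are Kronecker
    products of matching slices, so the TT ranks multiply."""
    out = []
    for a, b in zip(cores_a, cores_b):
        ra, n, rb = a.shape
        sa, _, sb = b.shape
        core = np.einsum('inj,knl->iknjl', a, b).reshape(ra * sa, n, rb * sb)
        out.append(core)
    return out
```

Because the ranks multiply, the naive Hadamard product is exactly the operation whose cost the paper's efficient implementation targets; for a low-rank solution, the TT cores need far less storage than the full grid.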
A literature survey of low-rank tensor approximation techniques
In recent years, low-rank tensor approximation has been established as
a new tool in scientific computing to address large-scale linear and
multilinear algebra problems, which would be intractable by classical
techniques. This survey attempts to give a literature overview of current
developments in this area, with an emphasis on function-related tensors.
On Algorithms for and Computing with the Tensor Ring Decomposition
Tensor decompositions such as the canonical format and the tensor train
format have been widely utilized to reduce storage costs and operational
complexities for high-dimensional data, achieving linear scaling with the input
dimension instead of exponential scaling. In this paper, we investigate even
lower storage-cost representations in the tensor ring format, which is an
extension of the tensor train format with variable end-ranks. Firstly, we
introduce two algorithms for converting a tensor in full format to tensor ring
format with low storage cost. Secondly, we detail a rounding operation for
tensor rings and show how this requires new definitions of common linear
algebra operations in the format to obtain storage-cost savings. Lastly, we
introduce algorithms for transforming the graph structure of graph-based tensor
formats, with orders of magnitude lower complexity than existing approaches in
the literature.
The efficiency of all algorithms is demonstrated on a number of numerical
examples, and in certain cases, we demonstrate significantly higher compression
ratios when compared to previous approaches to using the tensor ring format.
Comment: 24 pages, 3 figures, 6 tables; implementation of the algorithms available
at https://github.com/oscarmickelin/tensor-ring-decompositio
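The tensor ring format closes the train into a loop: the last core's right rank matches the first core's left rank, and the full tensor is recovered by a trace over that ring index. A minimal numpy sketch of this contraction, assuming cores of shape (r_k, n_k, r_{k+1}) with r_d = r_0 (our convention, not the paper's code):

```python
import numpy as np

def tr_to_full(cores):
    """Contract tensor ring cores, each of shape (r_k, n_k, r_{k+1}) with
    r_d = r_0, into a full tensor by tracing over the closed ring index."""
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([-1], [0]))
    # res has shape (r_0, n_0, ..., n_{d-1}, r_0): trace out the ring index
    return np.trace(res, axis1=0, axis2=-1)
```

A tensor train is the special case with end-ranks r_0 = 1, so TT cores pass through this contraction unchanged; allowing r_0 > 1 is what gives the tensor ring format its extra freedom to trade rank between the ends.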