Multi-resolution Low-rank Tensor Formats
We describe a simple, black-box compression format for tensors with a
multiscale structure. By representing the tensor as a sum of compressed tensors
defined on increasingly coarse grids, we capture low-rank structures on each
grid-scale, and we show how this leads to an increase in compression for a
fixed accuracy. We devise an alternating algorithm to represent a given tensor
in the multiresolution format and prove local convergence guarantees. In two
dimensions, we provide examples that show that this approach can beat the
Eckart-Young theorem, and for dimensions higher than two, we achieve higher
compression than the tensor-train format on six real-world datasets. We also
provide results on the closedness and stability of the tensor format and
discuss how to perform common linear algebra operations on the level of the
compressed tensors.
Comment: 29 pages, 9 figures
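The core idea, a sum of low-rank terms living on increasingly coarse grids, can be illustrated in two dimensions. The sketch below is a minimal greedy variant (the paper's alternating algorithm is more involved); the function names and the choice of block-averaging downsampling with nearest-neighbour upsampling are illustrative assumptions, not the paper's exact transfer operators.

```python
import numpy as np

def truncated_svd(A, rank):
    # best rank-`rank` approximation via SVD (Eckart-Young)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def downsample(A):
    # coarsen by averaging 2x2 blocks
    m, n = (A.shape[0] // 2) * 2, (A.shape[1] // 2) * 2
    return A[:m, :n].reshape(m // 2, 2, n // 2, 2).mean(axis=(1, 3))

def upsample_to(B, shape):
    # nearest-neighbour upsampling back to the fine grid
    while B.shape[0] < shape[0] or B.shape[1] < shape[1]:
        B = np.repeat(np.repeat(B, 2, axis=0), 2, axis=1)
    return B[:shape[0], :shape[1]]

def multires_lowrank(A, rank, levels):
    # greedy pass: fit a low-rank term on the current grid,
    # then hand the coarsened residual down to the next level
    terms, residual = [], A.copy()
    for _ in range(levels):
        term = truncated_svd(residual, rank)
        terms.append(term)
        residual = downsample(residual - term)
    return terms

def reconstruct(terms, shape):
    # the approximation is the sum of all terms, upsampled to the fine grid
    return sum(upsample_to(t, shape) for t in terms)
```

Each level stores only the factors of a low-rank term on a grid a quarter the size of the previous one, which is the source of the extra compression at fixed accuracy.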
A semi-Lagrangian Vlasov solver in tensor train format
In this article, we derive a semi-Lagrangian scheme for the solution of the
Vlasov equation represented as a low-parametric tensor. Grid-based methods for
the Vlasov equation have been shown to give accurate results but their use has
mostly been limited to simulations in two dimensional phase space due to
extensive memory requirements in higher dimensions. Compression of the solution
via the higher-order singular value decomposition can help in reducing the storage
requirements and the tensor train (TT) format provides efficient basic linear
algebra routines for low-rank representations of tensors. In this paper, we
develop interpolation formulas for a semi-Lagrangian solver in TT format. In
order to efficiently implement the method, we propose a compression of the
matrix representing the interpolation step and an efficient implementation of
the Hadamard product. We show numerical simulations for standard test cases in
two-, four- and six-dimensional phase space. Depending on the test case, the
memory requirements are reduced by a substantial factor in four and in six
dimensions compared to the full-grid method.
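The TT compression underlying this solver can be sketched with the standard TT-SVD construction: reshape the tensor into a matrix, truncate its SVD, and repeat along the remaining modes. This is a generic illustration of the format, not the paper's semi-Lagrangian scheme; the function names and the uniform rank cap are assumptions.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    # TT-SVD: sequential reshapes + truncated SVDs produce the TT cores
    dims = tensor.shape
    cores, r_prev = [], 1
    C = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]   # carry the remainder to the next mode
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    # contract the cores left to right back into a full tensor
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

Storage drops from the product of the mode sizes to a sum of small core sizes, which is what makes six-dimensional phase-space grids tractable.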
The Tensor Networks Anthology: Simulation techniques for many-body quantum lattice systems
We present a compendium of numerical simulation techniques, based on tensor
network methods, aiming to address problems of many-body quantum mechanics on a
classical computer. The core setting of this anthology is lattice problems in
low spatial dimension at finite size, a physical scenario where tensor network
methods, both Density Matrix Renormalization Group and beyond, have long proven
to be winning strategies. Here we explore in detail the numerical frameworks
and methods employed to deal with low-dimension physical setups, from a
computational physics perspective. We focus on symmetries and closed-system
simulations with arbitrary boundary conditions, while discussing the numerical
data structures and linear algebra manipulation routines involved, which form
the core libraries of any tensor network code. At a higher level, we put the
spotlight on loop-free network geometries, discussing their advantages, and
presenting in detail algorithms to simulate low-energy equilibrium states.
Accompanied by discussions of data structures, numerical techniques and
performance, this anthology serves as a programmer's companion, as well as a
self-contained introduction and review of the basic and selected advanced
concepts in tensor networks, including examples of their applications.
Comment: 115 pages, 56 figures
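A flavour of the linear-algebra routines such an anthology covers can be given with a matrix product state (MPS), the simplest loop-free tensor network: below, the norm of an open-boundary MPS is computed by a left-to-right transfer-matrix contraction, without ever forming the exponentially large state vector. This is a generic textbook construction, not code from the anthology; the core shapes and names are illustrative.

```python
import numpy as np

def mps_to_vector(cores):
    # contract an open-boundary MPS (cores of shape (r_left, phys, r_right))
    # into the full state vector -- exponential cost, reference only
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape(-1)

def mps_norm(cores):
    # <psi|psi> via the transfer matrix: sweep left to right,
    # contracting bra and ket cores over the physical index
    E = np.ones((1, 1))
    for A in cores:
        E = np.einsum('ab,asc,bsd->cd', E, A.conj(), A)
    return np.sqrt(E.reshape(-1)[0].real)
```

The sweep costs only polynomially in the bond dimension, which is exactly the kind of primitive that DMRG-style algorithms are built from.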