Tensor Networks for Big Data Analytics and Large-Scale Optimization Problems
In this paper we review basic and emerging models and associated algorithms
for large-scale tensor networks, especially Tensor Train (TT) decompositions
using novel mathematical and graphical representations. We discuss the concept
of tensorization (i.e., creating very high-order tensors from lower-order
original data) and the super-compression of data achieved via quantized tensor
train (QTT) networks. The purpose of tensorization and quantization is to
achieve, via low-rank tensor approximations, "super" compression and a
meaningful, compact representation of structured data. The main objective of
this paper is to show how tensor networks can be used to solve a wide class of
big data optimization problems (that are far from tractable by classical
numerical methods) by applying tensorization, performing all operations
using relatively small matrices and tensors, and iteratively applying
optimized, approximate tensor contractions.
Keywords: Tensor networks, tensor train (TT) decompositions, matrix product
states (MPS), matrix product operators (MPO), basic tensor operations,
tensorization, distributed representation of data, optimization problems for
very large-scale problems: generalized eigenvalue decomposition (GEVD),
PCA/SVD, canonical correlation analysis (CCA).
Comment: arXiv admin note: text overlap with arXiv:1403.204
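The tensorization and QTT compression described in this abstract can be sketched in a few lines of numpy. The function names `tt_svd` and `tt_to_full` below are our own illustrative choices, not from the paper; the sketch quantizes a length-2^d vector into a d-th-order tensor with mode sizes 2 and peels off TT cores by sequential truncated SVDs:

```python
import numpy as np

def tt_svd(x, eps=1e-10):
    """Tensorize a vector of length 2**d into a d-th-order tensor with mode
    sizes 2, then build a quantized tensor train (QTT) by sequential
    truncated SVDs (illustrative sketch, not the paper's implementation)."""
    d = int(np.log2(x.size))
    tensor = x.reshape([2] * d)          # low-order data -> very high-order tensor
    cores, r = [], 1
    mat = tensor.reshape(r * 2, -1)
    for _ in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))   # low-rank truncation
        cores.append(u[:, :rank].reshape(r, 2, rank))
        mat = (s[:rank, None] * vt[:rank]).reshape(rank * 2, -1)
        r = rank
    cores.append(mat.reshape(r, 2, 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full vector (for verification)."""
    res = cores[0]
    for c in cores[1:]:
        res = np.tensordot(res, c, axes=([res.ndim - 1], [0]))
    return res.reshape(-1)

# Structured data compresses dramatically: this rank-1 signal is stored as
# 10 tiny cores instead of 2**10 = 1024 raw entries.
x = np.ones(2 ** 10)
cores = tt_svd(x)
```

All cores here have TT rank 1, which is the "super" compression the abstract refers to: storage drops from exponential in d to linear in d.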
Tensor Networks for Solving Realistic Time-independent Boltzmann Neutron Transport Equation
Tensor network techniques, known for their low-rank approximation ability
that breaks the curse of dimensionality, are emerging as a foundation of new
mathematical methods for ultra-fast numerical solutions of high-dimensional
Partial Differential Equations (PDEs). Here, we present a mixed Tensor Train
(TT)/Quantized Tensor Train (QTT) approach for the numerical solution of
time-independent Boltzmann Neutron Transport equations (BNTEs) in Cartesian
geometry. Discretizing a realistic three-dimensional (3D) BNTE by (i) diamond
differencing, (ii) multigroup-in-energy, and (iii) discrete ordinate
collocation leads to huge generalized eigenvalue problems that generally
require a matrix-free approach and large computer clusters. Starting from this
discretization, we construct a TT representation of the PDE fields and discrete
operators, followed by a QTT representation of the TT cores and solving the
tensorized generalized eigenvalue problem in a fixed-point scheme with tensor
network optimization techniques. We validate our approach by applying it to two
realistic examples of 3D neutron transport problems, currently solved by the
PARallel TIme-dependent SN (PARTISN) solver. We demonstrate that our TT/QTT
method, executed on a standard desktop computer, leads to a yottabyte-scale
compression of the memory storage and a more than 7500-fold speedup, with a
discrepancy of less than 1e-5 compared to the PARTISN solution.
Comment: 38 pages, 9 figures
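The fixed-point scheme for the tensorized generalized eigenvalue problem has the same structure as classical power iteration for A x = λ B x. The sketch below shows that structure in plain dense numpy; in the paper, A, B, and x are kept in TT/QTT format and the linear solve is replaced by tensor-network optimization, so this is only a loose, full-format analogy with made-up test matrices:

```python
import numpy as np

def generalized_power_iteration(A, B, iters=1000, tol=1e-12):
    """Fixed-point (power) iteration for the dominant eigenpair of
    A x = lam B x, the algebraic structure behind criticality-type
    generalized eigenvalue problems.  Dense sketch only: the paper works
    with TT/QTT-format operators and a matrix-free solve."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    lam = 0.0
    for _ in range(iters):
        y = np.linalg.solve(B, A @ x)    # apply B^{-1} A
        lam_new = np.linalg.norm(y)      # Rayleigh-style eigenvalue estimate
        x = y / lam_new                  # renormalize the iterate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x

# Small synthetic test problem (not from the paper): SPD A, near-identity B.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)
B = np.eye(6) + 0.1 * np.diag(np.arange(6))
lam, x = generalized_power_iteration(A, B)
```

The TT/QTT payoff is that each application of B^{-1}A and each renormalization acts on small cores rather than on the huge assembled matrices.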
A Semismooth Newton Method for Tensor Eigenvalue Complementarity Problem
In this paper, we consider the tensor eigenvalue complementarity problem
which is closely related to the optimality conditions for polynomial
optimization, as well as a class of differential inclusions with nonconvex
processes. By introducing an NCP-function, we reformulate the tensor eigenvalue
complementarity problem as a system of nonlinear equations. We show that this
function is strongly semismooth but not differentiable, in which case the
classical smoothing methods cannot be applied. Furthermore, we propose a damped
semismooth Newton method for the tensor eigenvalue complementarity problem. A
new procedure to evaluate an element of the generalized Jacobian is given, which
turns out to be an element of the B-subdifferential under mild assumptions. As
a result, the convergence of the damped semismooth Newton method is guaranteed
by existing results. The numerical experiments also show that our method is
efficient and promising.
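The reformulation step can be illustrated on the simpler linear complementarity problem (LCP). A minimal sketch, assuming the standard Fischer–Burmeister NCP-function (a common choice; the abstract does not specify which NCP-function is used) and a damped Newton step on a generalized-Jacobian element:

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister NCP-function: fb(a,b)=0  <=>  a>=0, b>=0, a*b=0."""
    return np.sqrt(a * a + b * b) - a - b

def semismooth_newton_lcp(M, q, iters=50, tol=1e-10, sigma=1e-4):
    """Damped semismooth Newton for the LCP  x >= 0, Mx+q >= 0, x'(Mx+q)=0,
    reformulated as the nonlinear system F(x) = fb(x, Mx+q) = 0.
    Illustrative sketch of the approach on the linear case, not the
    paper's tensor eigenvalue algorithm."""
    n = len(q)
    x = np.zeros(n)
    for _ in range(iters):
        w = M @ x + q
        F = fb(x, w)
        if np.linalg.norm(F) < tol:
            break
        r = np.hypot(x, w)
        # Element of the generalized Jacobian; at the kink (x_i = w_i = 0)
        # pick the B-subdifferential element with xi = zeta = 1/sqrt(2).
        da = np.where(r > 0, x / np.maximum(r, 1e-300) - 1, 1 / np.sqrt(2) - 1)
        db = np.where(r > 0, w / np.maximum(r, 1e-300) - 1, 1 / np.sqrt(2) - 1)
        J = np.diag(da) + db[:, None] * M
        d = np.linalg.solve(J, -F)
        # Damping: backtrack on the merit function 0.5*||F||^2.
        t, merit = 1.0, 0.5 * F @ F
        while (0.5 * np.sum(fb(x + t * d, M @ (x + t * d) + q) ** 2)
               > (1 - 2 * sigma * t) * merit and t > 1e-12):
            t *= 0.5
        x = x + t * d
    return x

# Tiny LCP with known solution x* = [0.5, 0].
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
x = semismooth_newton_lcp(M, q)
```

The tensor eigenvalue complementarity problem replaces the linear map Mx+q with a homogeneous polynomial map built from the tensor, but the FB reformulation, the B-subdifferential selection at the kink, and the damping step carry over.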
Towards tensor-based methods for the numerical approximation of the Perron-Frobenius and Koopman operator
The global behavior of dynamical systems can be studied by analyzing the
eigenvalues and corresponding eigenfunctions of linear operators associated
with the system. Two important operators which are frequently used to gain
insight into the system's behavior are the Perron-Frobenius operator and the
Koopman operator. Due to the curse of dimensionality, computing the
eigenfunctions of high-dimensional systems is in general infeasible. We will
propose a tensor-based reformulation of two numerical methods for computing
finite-dimensional approximations of the aforementioned infinite-dimensional
operators, namely Ulam's method and Extended Dynamic Mode Decomposition (EDMD).
The aim of the tensor formulation is to approximate the eigenfunctions by
low-rank tensors, potentially resulting in a significant reduction of the time
and memory required to solve the resulting eigenvalue problems, provided that
such a low-rank tensor decomposition exists. Typically, not all variables of a
high-dimensional dynamical system contribute equally to the system's behavior;
often the dynamics can be decomposed into slow and fast processes, which is
also reflected in the eigenfunctions. Thus, the weak coupling between different
variables might be approximated by low-rank tensor cores. We will illustrate
the efficiency of the tensor-based formulation of Ulam's method and EDMD using
simple stochastic differential equations.
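The full-format EDMD step that the abstract proposes to tensorize fits in a few lines. A minimal numpy sketch (the function name `edmd` and the linear test system are our own; the paper's contribution is replacing the dense Gram and cross-covariance matrices below with low-rank tensor trains):

```python
import numpy as np

def edmd(X, Y, psi):
    """Extended Dynamic Mode Decomposition: approximate the Koopman operator
    on the span of a dictionary psi from snapshot pairs (x_k, y_k = x_{k+1}).
    Full-format sketch; the tensor-based variant stores these matrices as
    low-rank tensor trains."""
    PX = np.array([psi(x) for x in X])   # dictionary evaluated at x_k
    PY = np.array([psi(y) for y in Y])   # dictionary evaluated at x_{k+1}
    G = PX.T @ PX / len(X)               # Gram matrix
    A = PX.T @ PY / len(X)               # cross-covariance matrix
    K = np.linalg.pinv(G) @ A            # finite-dimensional Koopman matrix
    return np.linalg.eig(K)              # eigenvalues / eigenfunction coefficients

# Linear test system x_{k+1} = 0.5 * x_k with monomial dictionary {1, x, x^2}:
# the monomials are exact Koopman eigenfunctions with eigenvalues 1, 0.5, 0.25.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, 200)
Y = 0.5 * X
evals, evecs = edmd(X, Y, lambda x: np.array([1.0, x, x * x]))
```

For a genuinely high-dimensional system the dictionary grows combinatorially, which is exactly where the low-rank TT representation of the eigenfunctions is meant to help.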