A literature survey of low-rank tensor approximation techniques
In recent years, low-rank tensor approximation has been established as
a new tool in scientific computing to address large-scale linear and
multilinear algebra problems, which would be intractable by classical
techniques. This survey attempts to give a literature overview of current
developments in this area, with an emphasis on function-related tensors.
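As a concrete instance of the kind of low-rank approximation the survey covers, the following minimal sketch (all sizes and ranks are illustrative, not taken from the survey) truncates the SVD of one unfolding of a 3-way tensor; by the Eckart-Young theorem, the error equals the norm of the discarded singular values:

```python
import numpy as np

# Illustrative 3-way tensor; sizes are arbitrary.
rng = np.random.default_rng(0)
T = rng.standard_normal((8, 9, 10))

# Mode-1 unfolding (matricization): an 8 x 90 matrix.
M = T.reshape(8, -1)

# Rank-3 truncated SVD of the unfolding.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
r = 3
M_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Eckart-Young: best rank-r approximation in the Frobenius norm,
# with error equal to the norm of the discarded singular values.
err = np.linalg.norm(M - M_r)
print(np.isclose(err, np.sqrt(np.sum(s[r:] ** 2))))  # → True
```

Reshaping `M_r` back to `(8, 9, 10)` yields a tensor whose mode-1 rank is at most 3; the formats discussed in the survey apply such truncations across several modes or tree nodes at once.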
The condition number of join decompositions
The join set of a finite collection of smooth embedded submanifolds of a
mutual vector space is defined as their Minkowski sum. Join decompositions
generalize some ubiquitous decompositions in multilinear algebra, namely tensor
rank, Waring, partially symmetric rank and block term decompositions. This
paper examines the numerical sensitivity of join decompositions to
perturbations; specifically, we consider the condition number for general join
decompositions. It is characterized as a distance to a set of ill-posed points
in a supplementary product of Grassmannians. We prove that this condition
number can be computed efficiently as the smallest singular value of an
auxiliary matrix. For some special join sets, we characterize the behavior of
sequences in the join set that converge to its boundary points. Finally,
we specialize our discussion to the tensor rank and Waring decompositions and
provide several numerical experiments confirming the key results.
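A schematic sketch of the "smallest singular value of an auxiliary matrix" computation mentioned above, under the simplifying assumption that orthonormal bases of the relevant tangent spaces are available as matrix blocks; the random bases and all dimensions here are stand-ins for illustration only, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def orthonormal_basis(n, d, rng):
    """Random n x d matrix with orthonormal columns (illustrative stand-in)."""
    Q, _ = np.linalg.qr(rng.standard_normal((n, d)))
    return Q[:, :d]

n = 20  # ambient dimension (illustrative)
# Stack bases of three tangent spaces side by side into one auxiliary matrix.
U_blocks = [orthonormal_basis(n, 4, rng) for _ in range(3)]
U = np.hstack(U_blocks)

# Condition number as the reciprocal of the smallest singular value:
# it blows up when the tangent spaces nearly share a direction.
sigma_min = np.linalg.svd(U, compute_uv=False).min()
condition = 1.0 / sigma_min
```

Because each block has unit-norm columns, the squared singular values of `U` average to 1, so `sigma_min <= 1` and the condition number is at least 1; ill-posedness corresponds to `sigma_min` approaching 0.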
Higher-order principal component analysis for the approximation of tensors in tree-based low-rank formats
This paper is concerned with the approximation of tensors using tree-based
tensor formats, which are tensor networks whose graphs are dimension partition
trees. We consider Hilbert tensor spaces of multivariate functions defined on a
product set equipped with a probability measure. This includes the case of
multidimensional arrays corresponding to finite product sets. We propose and
analyse an algorithm for the construction of an approximation using only point
evaluations of a multivariate function, or evaluations of some entries of a
multidimensional array. The algorithm is a variant of higher-order singular
value decomposition which constructs a hierarchy of subspaces associated with
the different nodes of the tree and a corresponding hierarchy of interpolation
operators. Optimal subspaces are estimated using empirical principal component
analysis of interpolations of partial random evaluations of the function. The
algorithm is able to provide an approximation in any tree-based format with
either a prescribed rank or a prescribed relative error, with a number of
evaluations of the order of the storage complexity of the approximation format.
Under some assumptions on the estimation of principal components, we prove that
the algorithm provides either a quasi-optimal approximation with a given rank,
or an approximation satisfying the prescribed relative error, up to constants
depending on the tree and the properties of interpolation operators. The
analysis takes into account the discretization errors for the approximation of
infinite-dimensional tensors. Several numerical examples illustrate the main
results and the behavior of the algorithm for the approximation of
high-dimensional functions using hierarchical Tucker or tensor-train
formats, and the approximation of univariate functions using tensorization.
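The tree-based algorithm above generalizes the higher-order singular value decomposition (HOSVD), whose basic form can be sketched for a 3-way array as follows; the sizes and ranks are illustrative and the sketch omits the interpolation and sampling machinery of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((6, 7, 8))  # illustrative 3-way array

def unfold(T, mode):
    """Mode-k unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

ranks = (3, 3, 3)
# Leading left singular vectors of each unfolding span the mode subspaces.
U = [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
     for k, r in enumerate(ranks)]

# Core tensor: project T onto the three estimated subspaces.
G = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])

# Tucker reconstruction from the core and the factor matrices.
T_hat = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

The hierarchical variants in the paper replace the flat set of mode subspaces with a hierarchy attached to the nodes of a dimension partition tree, and estimate each subspace from point evaluations rather than from the full array.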
Theory and Algorithms for Reliable Multimodal Data Analysis, Machine Learning, and Signal Processing
Modern engineering systems collect large volumes of data measurements across diverse sensing modalities. These measurements can naturally be arranged in higher-order arrays of scalars, commonly referred to as tensors. Tucker decomposition (TD) is a standard method for tensor analysis with applications in diverse fields of science and engineering. Despite its success, TD exhibits severe sensitivity to outliers, i.e., heavily corrupted entries that appear sporadically in modern datasets. We study L1-norm TD (L1-TD), a reformulation of TD that promotes robustness. For 3-way tensors, we show, for the first time, that L1-TD admits an exact solution via combinatorial optimization, and we present algorithms for its solution. We propose two novel algorithmic frameworks for approximating the exact solution to L1-TD for general N-way tensors. We also propose a novel algorithm for dynamic L1-TD, i.e., efficient and joint analysis of streaming tensors. Principal-Component Analysis (PCA), a special case of TD, is likewise sensitive to outliers. We consider Lp-quasinorm PCA (Lp-PCA) for
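The combinatorial flavor of exact L1-norm formulations can be illustrated in the simplest special case, rank-1 L1-PCA, where the known identity max over unit vectors q of ||X^T q||_1 = max over sign vectors b of ||X b||_2 permits exact solution by exhaustive search. The data and sizes below are illustrative, and this brute-force sketch is only feasible for a tiny number of samples; it is not the algorithms proposed in the work above:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 8))  # 5 features, 8 samples (N = 8)

# Enumerate all 2^N sign vectors b and keep the one maximizing ||X b||_2.
best_val, best_q = -np.inf, None
for signs in product([-1.0, 1.0], repeat=X.shape[1]):
    v = X @ np.array(signs)
    val = np.linalg.norm(v)
    if val > best_val:
        best_val, best_q = val, v / val  # candidate L1 principal component

# At the optimum, the L1 objective ||X^T q||_1 equals the maximal ||X b||_2.
l1_objective = np.abs(X.T @ best_q).sum()
```

The exponential cost in the number of samples is exactly why the work above develops approximation frameworks for general N-way tensors instead of exhaustive search.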
- …