On the Convergence of Alternating Least Squares Optimisation in Tensor Format Representations
The approximation of tensors is important for the efficient numerical treatment of high-dimensional problems, but it remains an extremely challenging task. One of the most popular approaches to tensor approximation is the alternating least squares (ALS) method. In our study, the convergence of the alternating least squares algorithm is considered. The analysis is done for arbitrary tensor format representations and is based on the multilinearity of the tensor format. In tensor format representation techniques, tensors are approximated by multilinear combinations of objects of lower dimensionality. The resulting reduction of dimensionality reduces not only the amount of required storage but also the computational effort.
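To make the multilinear structure concrete, here is a minimal sketch of ALS for a rank-R canonical polyadic (CP) decomposition of a three-way tensor in NumPy; each subproblem is an ordinary linear least squares problem precisely because the model is multilinear in the factor matrices. The choice of CP as the format, the function names, and the fixed iteration count are illustrative assumptions, not details from the paper.

    # Minimal ALS sketch for a rank-R CP decomposition of a 3-way tensor.
    import numpy as np

    def khatri_rao(A, B):
        """Column-wise Kronecker product of two matrices with R columns."""
        R = A.shape[1]
        return np.stack([np.kron(A[:, r], B[:, r]) for r in range(R)], axis=1)

    def cp_als(T, R, n_iter=50):
        """Approximate T (I x J x K) by sum_r a_r (outer) b_r (outer) c_r."""
        I, J, K = T.shape
        rng = np.random.default_rng(0)
        A = rng.standard_normal((I, R))
        B = rng.standard_normal((J, R))
        C = rng.standard_normal((K, R))
        T1 = T.reshape(I, J * K)                     # mode-1 unfolding
        T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)  # mode-2 unfolding
        T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)  # mode-3 unfolding
        for _ in range(n_iter):
            # Each update is a linear least squares solve in one factor,
            # with the other two factors held fixed (the ALS principle).
            A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
            B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
            C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
        return A, B, C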
Multi-resolution Low-rank Tensor Formats
We describe a simple, black-box compression format for tensors with a
multiscale structure. By representing the tensor as a sum of compressed tensors
defined on increasingly coarse grids, we capture low-rank structures at each grid scale, and we show how this leads to an increase in compression for a
fixed accuracy. We devise an alternating algorithm to represent a given tensor
in the multiresolution format and prove local convergence guarantees. In two
dimensions, we provide examples that show that this approach can beat the
Eckart-Young theorem, and for dimensions higher than two, we achieve higher
compression than the tensor-train format on six real-world datasets. We also
provide results on the closedness and stability of the tensor format and
discuss how to perform common linear algebra operations on the level of the
compressed tensors.
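As a toy two-scale illustration of the idea (not the paper's actual construction), the sketch below represents a 2-D array as a low-rank term fitted on a coarsened grid and upsampled back to the fine grid, plus a low-rank correction on the fine grid, with the two terms updated alternately. Coarsening by averaging 2x2 blocks, piecewise-constant upsampling, and the prescribed ranks are all illustrative assumptions.

    # Toy two-scale low-rank approximation of a matrix (even sizes assumed).
    import numpy as np

    def truncated_svd(M, rank):
        """Nearest rank-`rank` matrix via truncated SVD."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U[:, :rank] * s[:rank] @ Vt[:rank]

    def coarsen(M):
        """Average 2x2 blocks to move to the next coarser grid."""
        m, n = M.shape
        return M.reshape(m // 2, 2, n // 2, 2).mean(axis=(1, 3))

    def refine(M):
        """Piecewise-constant upsampling back to the fine grid."""
        return np.kron(M, np.ones((2, 2)))

    def two_scale_approx(T, coarse_rank, fine_rank, n_sweeps=10):
        fine = np.zeros_like(T)
        for _ in range(n_sweeps):
            # Alternate: fit the coarse term to the residual of the fine
            # term, then fit the fine term to the remaining residual.
            coarse = truncated_svd(coarsen(T - fine), coarse_rank)
            fine = truncated_svd(T - refine(coarse), fine_rank)
        return refine(coarse) + fine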
Rank-1 Tensor Approximation Methods and Application to Deflation
Because of the attractiveness of the canonical polyadic (CP) tensor decomposition in various applications, several algorithms have been designed to compute it, but efficient ones are still lacking. Iterative deflation algorithms based on successive rank-1 approximations can be used to perform this task, since rank-1 approximations are rather easy to compute. We first present an algebraic rank-1 approximation method that performs better than the standard higher-order singular value decomposition (HOSVD) for three-way tensors. Second, we propose a new iterative rank-1 approximation algorithm that improves on any other rank-1 approximation method. Third, we describe a probabilistic framework for studying the convergence of deflation CP decomposition (DCPD) algorithms based on successive rank-1 approximations. A set of computer experiments then validates the theoretical results and demonstrates the efficiency of DCPD algorithms compared to alternative approaches.
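The deflation loop itself is easy to sketch: repeatedly extract a rank-1 term and subtract it from the residual. The rank-1 step below uses a standard higher-order power iteration, which differs from the paper's algebraic method; names and iteration counts are illustrative.

    # Deflation-style CP sketch: subtract successive rank-1 approximations.
    import numpy as np

    def rank1_hopm(T, n_iter=100):
        """Rank-1 term lam * a (outer) b (outer) c via higher-order power iteration."""
        rng = np.random.default_rng(0)
        a = rng.standard_normal(T.shape[0]); a /= np.linalg.norm(a)
        b = rng.standard_normal(T.shape[1]); b /= np.linalg.norm(b)
        c = rng.standard_normal(T.shape[2]); c /= np.linalg.norm(c)
        for _ in range(n_iter):
            a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
            b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
            c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
        lam = np.einsum('ijk,i,j,k->', T, a, b, c)
        return lam, a, b, c

    def deflation_cp(T, R):
        """Greedy DCPD-style loop over R successive rank-1 extractions."""
        residual = T.copy()
        terms = []
        for _ in range(R):
            lam, a, b, c = rank1_hopm(residual)
            terms.append((lam, a, b, c))
            residual = residual - lam * np.einsum('i,j,k->ijk', a, b, c)
        return terms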
Finding a low-rank basis in a matrix subspace
For a given matrix subspace, how can we find a basis that consists of low-rank matrices? This is a generalization of the sparse vector problem. It turns out that when the subspace is spanned by rank-1 matrices, these matrices can be obtained via the tensor CP decomposition. For the higher-rank case, the situation is not as straightforward. In this work we present an algorithm based on a greedy process applicable to higher-rank problems. Our algorithm first estimates the minimum rank by applying soft singular value thresholding to a nuclear norm relaxation, and then computes a matrix with that rank using the method of alternating projections. We provide local convergence results, and compare our algorithm with several alternative approaches. Applications include data compression beyond the classical truncated SVD, computing accurate eigenvectors of a near-multiple eigenvalue, image separation, and graph Laplacian eigenproblems.
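The second phase of the algorithm can be sketched as alternating projections between the given matrix subspace and the set of rank-r matrices; here the rank r is taken as input rather than estimated by the soft-thresholding phase, and all names are illustrative assumptions.

    # Sketch of alternating projections: subspace <-> rank-r matrices.
    import numpy as np

    def project_rank(M, r):
        """Nearest rank-r matrix (Eckart-Young, via truncated SVD)."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U[:, :r] * s[:r] @ Vt[:r]

    def project_subspace(M, basis):
        """Orthogonal projection onto the span of the given basis matrices."""
        B = np.stack([b.ravel() for b in basis], axis=1)  # vectorize basis
        Q, _ = np.linalg.qr(B)                            # orthonormalize
        return (Q @ (Q.T @ M.ravel())).reshape(M.shape)

    def alternating_projections(basis, r, n_iter=200):
        rng = np.random.default_rng(0)
        M = project_subspace(rng.standard_normal(basis[0].shape), basis)
        for _ in range(n_iter):
            # Project onto the rank-r set, then back into the subspace.
            M = project_subspace(project_rank(M, r), basis)
        return M  # a (locally) low-rank member of the subspace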
Alternating least squares as moving subspace correction
In this note we take a new look at the local convergence of alternating
optimization methods for low-rank matrices and tensors. Our abstract
interpretation as sequential optimization on moving subspaces yields insightful
reformulations of some known convergence conditions that focus on the interplay
between the contractivity of classical multiplicative Schwarz methods with
overlapping subspaces and the curvature of low-rank matrix and tensor
manifolds. While the verification of the abstract conditions in concrete
scenarios remains open in most cases, we are able to provide an alternative and
conceptually simple derivation of the asymptotic convergence rate of the
two-sided block power method of numerical linear algebra for computing the dominant
singular subspaces of a rectangular matrix. This method is equivalent to an
alternating least squares method applied to a distance function. The
theoretical results are illustrated and validated by numerical experiments.
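For reference, a minimal sketch of the two-sided block power method: alternate multiplication by A and its transpose with re-orthonormalization, which generically converges to the dominant left and right singular subspaces. Each half-step can be read as a least squares update of one side given the other, matching the ALS interpretation above; the iteration count and names are illustrative.

    # Two-sided block power method for dominant singular subspaces.
    import numpy as np

    def block_power_svd(A, k, n_iter=100):
        m, n = A.shape
        rng = np.random.default_rng(0)
        V, _ = np.linalg.qr(rng.standard_normal((n, k)))
        for _ in range(n_iter):
            # Each half-step re-orthonormalizes one side given the other.
            U, _ = np.linalg.qr(A @ V)
            V, _ = np.linalg.qr(A.T @ U)
        return U, V  # orthonormal bases for the dominant singular subspaces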