On best rank one approximation of tensors
In this paper we suggest a new algorithm for the computation of a best rank
one approximation of tensors, called alternating singular value decomposition.
This method is based on the computation of maximal singular values and the
corresponding singular vectors of matrices. We also introduce a modification
for this method and the alternating least squares method, which ensures that
alternating iterations will always converge to a semi-maximal point. (A
critical point in several vector variables is semi-maximal if it is maximal
with respect to each vector variable, while other vector variables are kept
fixed.) We present several numerical examples that illustrate the computational
performance of the new method in comparison to the alternating least squares
method.

Comment: 17 pages and 6 figures
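The alternating singular value decomposition idea described above can be sketched for a third-order tensor: fix one unit factor, contract the tensor along that mode, and read the other two factors off the leading singular pair of the resulting matrix. A minimal NumPy sketch (an illustrative reading of the approach, not the authors' reference code):

```python
import numpy as np

def asvd_rank_one(T, iters=50, seed=0):
    """Sketch of an alternating-SVD iteration for a best rank-one
    approximation lam * (u x v x w) of a 3rd-order tensor T.
    Illustrative reading of the approach, not the authors' code."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(T.shape[2])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        # Fix w: the top singular pair of the contraction T x_3 w
        # updates (u, v) simultaneously.
        U, _, Vt = np.linalg.svd(np.tensordot(T, w, axes=([2], [0])))
        u, v = U[:, 0], Vt[0]
        # Fix u: the top singular pair of T x_1 u updates (v, w).
        U, _, Vt = np.linalg.svd(np.tensordot(T, u, axes=([0], [0])))
        v, w = U[:, 0], Vt[0]
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)
    return lam, u, v, w
```

On a rank-one input the iteration recovers the tensor exactly; for general tensors it returns a candidate best rank-one approximation whose quality the paper's numerical examples assess.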
Convergence of Alternating Least Squares Optimisation for Rank-One Approximation to High Order Tensors
The approximation of tensors has important applications in various
disciplines, but it remains an extremely challenging task. It is well known
that tensors of higher order can fail to have best low-rank approximations; an
important exception is that a best rank-one approximation always exists.
The most popular approach to low-rank approximation is the alternating least
squares (ALS) method. The convergence of the alternating least squares
algorithm for the rank-one approximation problem is analysed in this paper. In
our analysis we focus on the global convergence and the rate of convergence
of the ALS algorithm. It is shown that the ALS method can converge
sublinearly, Q-linearly, and even Q-superlinearly. Our theoretical results are
illustrated by explicit examples.

Comment: tensor format, tensor representation, alternating least squares
optimisation, orthogonal projection method
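For concreteness, in the rank-one case the ALS iteration analysed here cyclically contracts the tensor with all factors but one; each least squares subproblem then has a closed-form solution. A minimal sketch for a 3rd-order tensor (a hypothetical helper, not the paper's code):

```python
import numpy as np

def als_rank_one(T, iters=200, tol=1e-12, seed=1):
    """Rank-one ALS sketch: each step fixes two unit factors and
    solves the (trivial) least squares problem for the third."""
    rng = np.random.default_rng(seed)
    u, v, w = [rng.standard_normal(n) for n in T.shape]
    v /= np.linalg.norm(v)
    w /= np.linalg.norm(w)
    lam_prev = 0.0
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v)
        lam = np.linalg.norm(w); w /= lam
        if abs(lam - lam_prev) < tol:  # lam is nondecreasing in exact arithmetic
            break
        lam_prev = lam
    return lam, u, v, w
```

The abstract's point is that the rate at which the iterates of such a loop converge can range from sublinear to Q-superlinear, depending on the tensor.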
Towards a Vector Field Based Approach to the Proper Generalized Decomposition (PGD)
A novel algorithm called the Proper Generalized Decomposition (PGD) is widely used by the engineering community to compute the solution of high-dimensional problems. However, it is well known that the bottleneck of its practical implementation is the computation of the so-called best rank-one approximation. Motivated by this fact, we discuss some of the geometrical aspects of the best rank-one approximation procedure. More precisely, our main result is to construct explicitly a vector field over a low-dimensional vector space and to prove that its stationary points can be identified with the critical points of the best rank-one optimization problem. To obtain this result, we endow the set of tensors of fixed rank one with an explicit geometric structure.

This research was funded by the GVA/2019/124 grant from Generalitat Valenciana and by the RTI2018-093521-B-C32 grant from the Ministerio de Ciencia, Innovación y Universidades.
Falco, A.; Hilario Pérez, L.; Montés Sánchez, N.; Mora Aguilar, MC.; Nadal, E. (2021). Towards a Vector Field Based Approach to the Proper Generalized Decomposition (PGD). Mathematics. 9(1):1-14. https://doi.org/10.3390/math9010034
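A matrix analogue may make the stationary-point statement concrete: for the best rank-one problem of maximizing ⟨A, u vᵀ⟩ over unit vectors, the natural sphere-projected gradient field vanishes exactly at singular pairs of A. A small NumPy check (an illustrative matrix analogue only; the paper's construction is for tensors):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))

# Top singular pair of A ...
U, s, Vt = np.linalg.svd(A)
u, v = U[:, 0], Vt[0]
sigma = u @ A @ v  # equals the top singular value s[0]

# ... is a stationary point of the best rank-one problem: the
# sphere-projected gradient of f(u, v) = u^T A v vanishes in
# both vector variables.
grad_u = A @ v - sigma * u   # ~0, since A v = sigma * u
grad_v = A.T @ u - sigma * v # ~0, since A^T u = sigma * v
```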
On orthogonal tensors and best rank-one approximation ratio
As is well known, the smallest possible ratio between the spectral norm and
the Frobenius norm of an $m \times n$ matrix with $m \le n$ is $1/\sqrt{m}$ and
is (up to scalar scaling) attained only by matrices having pairwise orthonormal
rows. In the present paper, the smallest possible ratio between spectral and
Frobenius norms of $n_1 \times \dots \times n_d$ tensors of order $d$, also
called the best rank-one approximation ratio in the literature, is
investigated. The exact value is not known for most configurations of
$n_1 \le \dots \le n_d$. Using a natural definition of orthogonal tensors over the real
field (resp., unitary tensors over the complex field), it is shown that the
obvious lower bound $1/\sqrt{n_1 \cdots n_{d-1}}$ is attained if and only if a
tensor is orthogonal (resp., unitary) up to scaling. Whether or not orthogonal
or unitary tensors exist depends on the dimensions $n_1, \dots, n_d$ and the
field. A connection between the (non)existence of real orthogonal tensors of
order three and the classical Hurwitz problem on composition algebras can be
established: existence of orthogonal tensors of size $\ell \times m \times n$
is equivalent to the admissibility of the triple $(\ell, m, n)$ to the Hurwitz
problem. Some implications for higher-order tensors are then given. For
instance, real orthogonal $n \times \dots \times n$ tensors of order $d \ge 3$
do exist, but only when $n \in \{1, 2, 4, 8\}$. In the complex case, the situation is
more drastic: unitary tensors of size $\ell \times m \times n$ with
$\ell \le m \le n$ exist only when $\ell m \le n$. Finally, some numerical illustrations
for spectral norm computation are presented.
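The matrix case quoted at the start of this abstract is easy to check numerically: a matrix with pairwise orthonormal rows has spectral norm 1 and Frobenius norm $\sqrt{m}$, so the ratio $1/\sqrt{m}$ is attained. A small check (the sizes are arbitrary example choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 7  # m <= n, arbitrary example sizes

# Build an m x n matrix with pairwise orthonormal rows via a QR
# factorization (Q has orthonormal columns, so Q.T has orthonormal rows).
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
A = Q.T

# Spectral norm is 1, Frobenius norm is sqrt(m), so the ratio
# attains the smallest possible value 1/sqrt(m).
ratio = np.linalg.norm(A, 2) / np.linalg.norm(A, 'fro')
```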
Stable, Robust and Super Fast Reconstruction of Tensors Using Multi-Way Projections
In the framework of multidimensional Compressed Sensing (CS), we introduce an
analytical reconstruction formula that allows one to recover an $N$th-order
data tensor
from a reduced set of multi-way compressive measurements by exploiting its low
multilinear-rank structure. Moreover, we show that an interesting property of
multi-way measurements allows us to build the reconstruction based on
compressive linear measurements taken only in two selected modes, independently
of the tensor order $N$. In addition, it is proved that, in the matrix case and
in a particular case with 3rd-order tensors where the same 2D sensing operator
is applied to all mode-3 slices, the proposed reconstruction
is stable in the sense that the approximation
error is comparable to the one provided by the best low-multilinear-rank
approximation, up to a threshold parameter that controls the
approximation error. Through the analysis of the upper bound of the
approximation error we show that, in the 2D case, an optimal value for the
threshold parameter exists, which is confirmed by our
simulation results. On the other hand, our experiments on 3D datasets show that
very good reconstructions are obtained without tuning this
parameter. Our extensive simulation results
demonstrate the stability and robustness of the method when it is applied to
real-world 2D and 3D signals. A comparison with state-of-the-art
sparsity-based CS methods specialized for multidimensional signals is also included. A
very attractive characteristic of the proposed method is that it provides a
direct computation, i.e., it is non-iterative in contrast to all existing
sparsity-based CS algorithms, thus providing super-fast computations, even for
large datasets.

Comment: Submitted to IEEE Transactions on Signal Processing
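The two-mode measurement idea has a simple matrix-case analogue: for a rank-$r$ matrix, row measurements, column measurements, and their common core already determine the matrix in closed form (a generalized Nyström identity). A sketch with hypothetical Gaussian sensing matrices, not the paper's actual operators:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r, k = 40, 30, 3, 6  # rank r, with k >= r measurements per mode

# Hypothetical rank-r signal and Gaussian sensing matrices.
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
Phi = rng.standard_normal((k, n))   # compresses rows (mode 1)
Psi = rng.standard_normal((m, k))   # compresses columns (mode 2)

# Measurements in the two modes plus the doubly compressed core.
Y1 = Phi @ X          # k x m
Y2 = X @ Psi          # n x k
Y = Phi @ X @ Psi     # k x k

# Closed-form, non-iterative reconstruction: since rank(Y) = rank(X) = r
# holds generically, X = Y2 @ pinv(Y) @ Y1 exactly.
X_hat = Y2 @ np.linalg.pinv(Y) @ Y1
err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```

The reconstruction is a single pseudoinverse, mirroring the direct (non-iterative) character of the method described above.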