
    A new convergence proof for the higher-order power method and generalizations

    A proof for the point-wise convergence of the factors in the higher-order power method for tensors towards a critical point is given. It is obtained by applying established results from the theory of Łojasiewicz inequalities to the equivalent, unconstrained alternating least squares algorithm for best rank-one tensor approximation.
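
    For readers who want to experiment with the iteration analysed here, the sketch below is a minimal NumPy implementation of the higher-order power method (rank-one ALS) for a third-order tensor; the function name, random initialization, and stopping rule are illustrative choices, not taken from the paper.

        import numpy as np

        def hopm_rank1(T, iters=200, tol=1e-10, seed=0):
            """Higher-order power method (rank-one ALS) for a 3rd-order tensor.

            Cyclically updates each factor as the tensor contracted with the
            other two factors, then normalizes; returns (lam, a, b, c) with
            T approximately equal to lam * (a outer b outer c).
            """
            rng = np.random.default_rng(seed)
            a = rng.standard_normal(T.shape[0]); a /= np.linalg.norm(a)
            b = rng.standard_normal(T.shape[1]); b /= np.linalg.norm(b)
            c = rng.standard_normal(T.shape[2]); c /= np.linalg.norm(c)
            lam = 0.0
            for _ in range(iters):
                a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
                b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
                c = np.einsum('ijk,i,j->k', T, a, b)
                lam_new = np.linalg.norm(c); c /= lam_new
                if abs(lam_new - lam) < tol:
                    lam = lam_new
                    break
                lam = lam_new
            return lam, a, b, c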

    On best rank one approximation of tensors

    In this paper we suggest a new algorithm for the computation of a best rank one approximation of tensors, called alternating singular value decomposition. This method is based on the computation of maximal singular values and the corresponding singular vectors of matrices. We also introduce a modification for this method and the alternating least squares method, which ensures that alternating iterations will always converge to a semi-maximal point. (A critical point in several vector variables is semi-maximal if it is maximal with respect to each vector variable, while the other vector variables are kept fixed.) We present several numerical examples that illustrate the computational performance of the new method in comparison to the alternating least squares method. Comment: 17 pages and 6 figures
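
    The following rough NumPy sketch illustrates the alternating-SVD idea for a third-order tensor: hold one factor fixed, contract the tensor with it, and replace the other two factors by the leading singular pair of the resulting matrix. The names are illustrative, and the paper's modification that guarantees convergence to a semi-maximal point is not reproduced here.

        import numpy as np

        def top_singular_pair(M):
            """Leading left/right singular vectors of a matrix via a thin SVD."""
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U[:, 0], Vt[0, :]

        def asvd_rank1(T, iters=100, seed=0):
            """Sketch of an alternating-SVD style rank-one approximation."""
            rng = np.random.default_rng(seed)
            c = rng.standard_normal(T.shape[2]); c /= np.linalg.norm(c)
            for _ in range(iters):
                a, b = top_singular_pair(np.einsum('ijk,k->ij', T, c))  # fix c, update (a, b)
                b, c = top_singular_pair(np.einsum('ijk,i->jk', T, a))  # fix a, update (b, c)
                c, a = top_singular_pair(np.einsum('ijk,j->ki', T, b))  # fix b, update (c, a)
            lam = np.einsum('ijk,i,j,k->', T, a, b, c)  # signed weight of the rank-one term
            return lam, a, b, c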

    On convergence of the maximum block improvement method

    The MBI (maximum block improvement) method is a greedy approach to solving optimization problems where the decision variables can be grouped into a finite number of blocks. Assuming that optimizing over one block of variables while fixing all others is relatively easy, the MBI method updates, at each iteration, the block of variables yielding the maximal improvement, which is arguably the most natural and simple way to tackle block-structured problems, with great potential for engineering applications. In this paper we establish global and local linear convergence results for this method. The global convergence is established under the Łojasiewicz inequality assumption, while the local analysis invokes second-order assumptions. We study in particular the tensor optimization model with spherical constraints. Conditions for linear convergence of the famous power method for computing the maximum eigenvalue of a matrix follow in this framework as a special case. The condition is interpreted in various other forms for the rank-one tensor optimization model under spherical constraints. Numerical experiments are presented to support the convergence properties of the MBI method.
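
    As an illustration of the MBI update rule, the sketch below applies it to the rank-one tensor problem of maximizing <T, a outer b outer c> over unit vectors: the best single-block (power-type) update is computed for every block, but only the block whose update yields the largest objective value is accepted. All names and the fixed iteration count are illustrative.

        import numpy as np

        def mbi_rank1(T, iters=200, seed=0):
            """Maximum block improvement for max <T, a x b x c> with unit-norm blocks."""
            rng = np.random.default_rng(seed)
            a, b, c = (rng.standard_normal(n) for n in T.shape)
            a, b, c = (v / np.linalg.norm(v) for v in (a, b, c))
            for _ in range(iters):
                # Candidate update of each block with the other two fixed;
                # the norm of the contraction is the objective value after that update.
                ga = np.einsum('ijk,j,k->i', T, b, c)
                gb = np.einsum('ijk,i,k->j', T, a, c)
                gc = np.einsum('ijk,i,j->k', T, a, b)
                cands = [(np.linalg.norm(ga), 0, ga), (np.linalg.norm(gb), 1, gb),
                         (np.linalg.norm(gc), 2, gc)]
                val, block, vec = max(cands, key=lambda t: t[0])
                vec = vec / val                      # normalize the winning block
                if block == 0:
                    a = vec
                elif block == 1:
                    b = vec
                else:
                    c = vec
            return np.einsum('ijk,i,j,k->', T, a, b, c), a, b, c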

    Rank-1 Tensor Approximation Methods and Application to Deflation

    Because of the attractiveness of the canonical polyadic (CP) tensor decomposition in various applications, several algorithms have been designed to compute it, but efficient ones are still lacking. Iterative deflation algorithms based on successive rank-1 approximations can be used to perform this task, since the latter are rather easy to compute. We first present an algebraic rank-1 approximation method that performs better than the standard higher-order singular value decomposition (HOSVD) for three-way tensors. Second, we propose a new iterative rank-1 approximation algorithm that improves on any other rank-1 approximation method. Third, we describe a probabilistic framework that allows one to study the convergence of deflation CP decomposition (DCPD) algorithms based on successive rank-1 approximations. A set of computer experiments then validates the theoretical results and demonstrates the efficiency of DCPD algorithms compared to other algorithms.
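
    A toy version of the deflation strategy is sketched below: repeatedly compute a rank-one approximation of the current residual and subtract it. The argument rank1_solver is a placeholder for any rank-one routine (for instance the HOPM sketch above); the specific DCPD algorithms analysed in the paper are not reproduced here.

        import numpy as np

        def deflation_cp(T, R, rank1_solver, **solver_kwargs):
            """Greedy deflation: subtract R successive rank-one approximations.

            For tensors, unlike matrices, the deflated terms are in general not
            the exact CP factors; the probabilistic framework in the paper studies
            when this procedure is reliable.
            """
            residual = T.copy()
            terms = []
            for _ in range(R):
                lam, a, b, c = rank1_solver(residual, **solver_kwargs)
                terms.append((lam, a, b, c))
                residual = residual - lam * np.einsum('i,j,k->ijk', a, b, c)
            return terms, residual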

    Convergence of Alternating Least Squares Optimisation for Rank-One Approximation to High Order Tensors

    The approximation of tensors has important applications in various disciplines, but it remains an extremely challenging task. It is well known that tensors of higher order can fail to have best low-rank approximations, with the important exception that best rank-one approximations always exist. The most popular approach to low-rank approximation is the alternating least squares (ALS) method. The convergence of the alternating least squares algorithm for the rank-one approximation problem is analysed in this paper. Our analysis focuses on the global convergence and the rate of convergence of the ALS algorithm. It is shown that the ALS method can converge sublinearly, Q-linearly, and even Q-superlinearly. Our theoretical results are illustrated on explicit examples. Comment: tensor format, tensor representation, alternating least squares optimisation, orthogonal projection method
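
    One simple way to observe these rates on a concrete example is to monitor ratios of successive errors along the ALS iteration. The small helper below is a standard diagnostic and not part of the paper: ratios approaching a constant in (0, 1) indicate Q-linear convergence, ratios tending to 0 suggest superlinear behaviour, and ratios tending to 1 suggest sublinear behaviour.

        import numpy as np

        def error_ratios(errors):
            """Ratios e_{k+1}/e_k for a sequence of errors e_k, e.g. |lam_k - lam_final|
            collected while running a rank-one ALS iteration."""
            e = np.asarray(errors, dtype=float)
            return e[1:] / e[:-1]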

    Alternating least squares as moving subspace correction

    In this note we take a new look at the local convergence of alternating optimization methods for low-rank matrices and tensors. Our abstract interpretation as sequential optimization on moving subspaces yields insightful reformulations of some known convergence conditions that focus on the interplay between the contractivity of classical multiplicative Schwarz methods with overlapping subspaces and the curvature of low-rank matrix and tensor manifolds. While the verification of the abstract conditions in concrete scenarios remains open in most cases, we are able to provide an alternative and conceptually simple derivation of the asymptotic convergence rate of the two-sided block power method of numerical linear algebra for computing the dominant singular subspaces of a rectangular matrix. This method is equivalent to an alternating least squares method applied to a distance function. The theoretical results are illustrated and validated by numerical experiments. Comment: 20 pages, 4 figures
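
    The two-sided block power method referred to here is the classical subspace (orthogonal) iteration applied from both sides of the matrix; a minimal NumPy sketch with illustrative names and a fixed iteration count is given below.

        import numpy as np

        def block_power_svd(A, r, iters=100, seed=0):
            """Two-sided block power method for the dominant r-dimensional singular subspaces.

            Alternates U <- orth(A V) and V <- orth(A^T U); generically the column
            spans of U and V converge to the dominant left/right singular subspaces.
            """
            rng = np.random.default_rng(seed)
            V, _ = np.linalg.qr(rng.standard_normal((A.shape[1], r)))
            U = None
            for _ in range(iters):
                U, _ = np.linalg.qr(A @ V)
                V, _ = np.linalg.qr(A.T @ U)
            return U, V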

    Guaranteed Non-Orthogonal Tensor Decomposition via Alternating Rank-1 Updates

    In this paper, we provide local and global convergence guarantees for recovering CP (Candecomp/Parafac) tensor decomposition. The main step of the proposed algorithm is a simple alternating rank-1 update, which is the alternating version of the tensor power iteration adapted for asymmetric tensors. Local convergence guarantees are established for third-order tensors of rank k in d dimensions, when k = o(d^{1.5}) and the tensor components are incoherent. Thus, we can recover overcomplete tensor decompositions. We also strengthen the results to global convergence guarantees under the stricter rank condition k ≤ βd (for an arbitrary constant β > 1) through a simple initialization procedure where the algorithm is initialized by the top singular vectors of random tensor slices. Furthermore, approximate local convergence guarantees for p-th order tensors are also provided under the rank condition k = o(d^{p/2}). The guarantees also include a tight perturbation analysis given a noisy tensor. Comment: We have added an additional sub-algorithm to remove the (approximate) residual error left after the tensor power iteration.
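
    The slice-based initialization mentioned in the abstract can be sketched roughly as follows: contract the tensor with a random vector along one mode, take the leading singular pair of the resulting matrix as the first two components, and obtain the third by one contraction. This is an illustrative variant only; it would then feed into alternating rank-1 power-type updates such as those sketched earlier.

        import numpy as np

        def slice_svd_init(T, seed=0):
            """Initialize (a, b, c) from the top singular pair of a random slice combination."""
            rng = np.random.default_rng(seed)
            theta = rng.standard_normal(T.shape[2])
            M = np.einsum('ijk,k->ij', T, theta)           # random combination of mode-3 slices
            U, _, Vt = np.linalg.svd(M, full_matrices=False)
            a, b = U[:, 0], Vt[0, :]
            c = np.einsum('ijk,i,j->k', T, a, b)
            c /= np.linalg.norm(c)
            return a, b, c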

    Multi-resolution Low-rank Tensor Formats

    We describe a simple, black-box compression format for tensors with a multiscale structure. By representing the tensor as a sum of compressed tensors defined on increasingly coarse grids, we capture low-rank structures on each grid scale, and we show how this leads to an increase in compression for a fixed accuracy. We devise an alternating algorithm to represent a given tensor in the multiresolution format and prove local convergence guarantees. In two dimensions, we provide examples that show that this approach can beat the Eckart-Young theorem, and for dimensions higher than two, we achieve higher compression than the tensor-train format on six real-world datasets. We also provide results on the closedness and stability of the tensor format and discuss how to perform common linear algebra operations on the level of the compressed tensors. Comment: 29 pages, 9 figures
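
    A drastically simplified two-scale illustration of the idea, for a matrix (the two-dimensional case), is sketched below: approximate a block-averaged coarse version by a truncated SVD, prolong it back to the fine grid, and approximate only the remaining residual at full resolution. The coarsening (average pooling) and prolongation (piecewise-constant) choices are assumptions made for this sketch; it is not the paper's alternating algorithm or format.

        import numpy as np

        def truncated_svd(A, r):
            """Best rank-r approximation of a matrix (Eckart-Young)."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            return (U[:, :r] * s[:r]) @ Vt[:r, :]

        def two_scale_lowrank(A, r_coarse, r_fine, factor=2):
            """Toy two-scale approximation: coarse low-rank term plus fine low-rank residual."""
            m, n = A.shape
            mc, nc = m // factor, n // factor
            coarse = A[:mc * factor, :nc * factor].reshape(mc, factor, nc, factor).mean(axis=(1, 3))
            approx = np.zeros(A.shape)
            approx[:mc * factor, :nc * factor] = np.kron(truncated_svd(coarse, r_coarse),
                                                         np.ones((factor, factor)))
            approx += truncated_svd(A - approx, r_fine)    # fine-scale residual correction
            return approx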

    Dictionary-based Tensor Canonical Polyadic Decomposition

    To ensure interpretability of the extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which constrains one factor to be built exactly from atoms of a known dictionary. A new formulation of sparse coding is proposed which makes dictionary-based canonical polyadic decomposition tractable for high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
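
    A greatly simplified sketch of the dictionary-constrained ALS idea follows: the constrained factor is updated by replacing each column of its unconstrained least-squares estimate with the most correlated dictionary atom, while the other factors keep their ordinary ALS updates. The actual sparse-coding formulation and algorithms of the paper are more careful than this greedy projection; all names here are illustrative.

        import numpy as np

        def khatri_rao(X, Y):
            """Column-wise Khatri-Rao product of X (I x R) and Y (J x R) -> (I*J x R)."""
            return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

        def dictionary_cpd(T, D, R, iters=50, seed=0):
            """ALS-type CP decomposition with the first factor restricted to atoms of D."""
            rng = np.random.default_rng(seed)
            I, J, K = T.shape
            Dn = D / np.linalg.norm(D, axis=0)             # normalized dictionary atoms
            A = Dn[:, rng.choice(D.shape[1], R, replace=False)]
            B = rng.standard_normal((J, R))
            C = rng.standard_normal((K, R))
            T1 = T.reshape(I, J * K)                       # mode-1 unfolding
            T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)    # mode-2 unfolding
            T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)    # mode-3 unfolding
            for _ in range(iters):
                # Unconstrained least-squares update for A, then projection onto the dictionary.
                A_ls = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
                A = Dn[:, np.argmax(np.abs(Dn.T @ A_ls), axis=0)]
                B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
                C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
            return A, B, C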