
    A Nonlinear GMRES Optimization Algorithm for Canonical Tensor Decomposition

    A new algorithm is presented for computing a canonical rank-R tensor approximation that has minimal distance to a given tensor in the Frobenius norm, where the canonical rank-R tensor consists of the sum of R rank-one components. Each iteration of the method consists of three steps. In the first step, a tentative new iterate is generated by a stand-alone one-step process, for which we use alternating least squares (ALS). In the second step, an accelerated iterate is generated by a nonlinear generalized minimal residual (GMRES) approach, recombining previous iterates in an optimal way and essentially using the stand-alone one-step process as a preconditioner. In particular, we use the nonlinear extension of GMRES proposed by Washio and Oosterlee in [ETNA Vol. 15 (2003), pp. 165-185] for nonlinear partial differential equation problems. In the third step, a line search is performed for globalization. The resulting nonlinear GMRES (N-GMRES) optimization algorithm is applied to dense and sparse tensor decomposition test problems. The numerical tests show that ALS accelerated by N-GMRES may significantly outperform both stand-alone ALS and a standard nonlinear conjugate gradient optimization method, especially when highly accurate stationary points are desired for difficult problems. The proposed N-GMRES optimization algorithm is based on general concepts and may be applied to other nonlinear optimization problems.
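    The three-step structure lends itself to a compact sketch. Below is a minimal NumPy illustration of one N-GMRES iteration on a flattened parameter vector; the names `f`, `grad`, and `als_sweep` are placeholders for the objective, its gradient, and the stand-alone one-step process, not code from the paper.

```python
import numpy as np

def n_gmres_step(f, grad, als_sweep, x_hist, g_hist):
    """One N-GMRES iteration (illustrative sketch).
    x_hist, g_hist: lists of previous iterates and their gradients."""
    # Step 1: tentative iterate from the stand-alone one-step process (ALS).
    x_bar = als_sweep(x_hist[-1])
    g_bar = grad(x_bar)
    # Step 2: recombine previous iterates; choose alpha to minimize the
    # linearized gradient norm || g_bar + sum_i alpha_i (g_bar - g_i) ||_2.
    G = np.stack([g_bar - gi for gi in g_hist], axis=1)
    alpha, *_ = np.linalg.lstsq(G, -g_bar, rcond=None)
    x_acc = x_bar + sum(a * (x_bar - xi) for a, xi in zip(alpha, x_hist))
    # Step 3: backtracking line search from x_bar toward x_acc (globalization).
    t = 1.0
    while f(x_bar + t * (x_acc - x_bar)) > f(x_bar) and t > 1e-8:
        t *= 0.5
    return x_bar + t * (x_acc - x_bar)
```

    A practical implementation would cap the number of stored iterates to a small window and restart when the least-squares problem becomes ill-conditioned.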

    Objective acceleration for unconstrained optimization

    Acceleration schemes can dramatically improve existing optimization procedures. In most of the work on these schemes, such as nonlinear Generalized Minimal Residual (N-GMRES), acceleration is based on minimizing the $\ell_2$ norm of some target on subspaces of $\mathbb{R}^n$. There are many numerical examples that show how accelerating general-purpose and domain-specific optimizers with N-GMRES results in large improvements. We propose a natural modification to N-GMRES which significantly improves its performance in a testing environment originally used to advocate N-GMRES. Our proposed approach, which we refer to as O-ACCEL (Objective Acceleration), is novel in that it minimizes an approximation to the objective function on subspaces of $\mathbb{R}^n$. We prove that O-ACCEL reduces to the Full Orthogonalization Method for linear systems when the objective is quadratic, which differentiates it from existing acceleration methods. Comparisons with L-BFGS and N-CG indicate the competitiveness of O-ACCEL. As it can be combined with domain-specific optimizers, it may also be beneficial in areas where L-BFGS or N-CG are not suitable.
    Comment: 18 pages, 6 figures, 5 tables
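    For intuition, one plausible instantiation of "minimizing an approximation to the objective on a subspace" is sketched below: build a quadratic model of the objective from previous iterates via the secant relation H(x_i - x_k) ≈ g_i - g_k and minimize it over the spanned directions. This is an assumption-laden sketch, not the paper's code, and all names are hypothetical.

```python
import numpy as np

def o_accel_step(x_hist, g_hist):
    """Sketch of an O-ACCEL-style subspace step: minimize an approximation
    to the objective over x_k + span{x_i - x_k}, with gradient differences
    serving as secant approximations of Hessian actions (an assumption)."""
    xk, gk = x_hist[-1], g_hist[-1]
    D = np.stack([xi - xk for xi in x_hist[:-1]], axis=1)  # directions
    Y = np.stack([gi - gk for gi in g_hist[:-1]], axis=1)  # secant: H @ D
    # Model: f(xk + D a) ~ f(xk) + gk^T D a + 0.5 a^T (D^T Y) a;
    # its stationarity condition is (D^T Y) a = -D^T gk.
    a, *_ = np.linalg.lstsq(D.T @ Y, -D.T @ gk, rcond=None)
    return xk + D @ a
```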

    Low-rank approximate inverse for preconditioning tensor-structured linear systems

    In this paper, we propose an algorithm for the construction of low-rank approximations of the inverse of an operator given in low-rank tensor format. The construction relies on an updated greedy algorithm for the minimization of a suitable distance to the inverse operator. It provides a sequence of approximations that are defined as the projections of the inverse operator onto an increasing sequence of linear subspaces of operators. These subspaces are obtained by the tensorization of bases of operators that are constructed from successive rank-one corrections. In order to handle high-order tensors, approximate projections are computed in low-rank Hierarchical Tucker subsets of the successive subspaces of operators. Some desired properties such as symmetry or sparsity can be imposed on the approximate inverse operator during the correction step, where an optimal rank-one correction is sought as the tensor product of operators with the desired properties. Numerical examples illustrate the ability of this algorithm to provide efficient preconditioners for linear systems in tensor format that improve the convergence of iterative solvers and also the quality of the resulting low-rank approximations of the solution.
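    For intuition only, here is a toy analogue of one greedy rank-one correction in the plain matrix (order-two) case, reducing ||I - PA||_F by alternating least squares over the two factors. The paper's actual algorithm works in low-rank tensor formats and can impose properties such as symmetry or sparsity on the correction; none of that is reflected in this sketch.

```python
import numpy as np

def rank_one_inverse_update(A, P, n_als=10, seed=0):
    """Add a rank-one term u v^T to the approximate inverse P so as to
    reduce ||I - (P + u v^T) A||_F (toy order-two analogue, ALS over u, v)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    R = np.eye(n) - P @ A                # current residual
    v = rng.standard_normal(n)
    for _ in range(n_als):
        w = A.T @ v                      # with w = A^T v, the update is u w^T
        u = R @ w / (w @ w)              # optimal u for fixed v
        w = R.T @ u / (u @ u)            # optimal w for fixed u
        v = np.linalg.lstsq(A.T, w, rcond=None)[0]  # recover v from A^T v = w
    return P + np.outer(u, v)
```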

    A literature survey of low-rank tensor approximation techniques

    In recent years, low-rank tensor approximation has been established as a new tool in scientific computing for addressing large-scale linear and multilinear algebra problems that would be intractable by classical techniques. This survey attempts to give an overview of the literature on current developments in this area, with an emphasis on function-related tensors.

    FINAL REPORT (MILESTONE DATE 9/30/13) FOR SUBCONTRACT NO. B603393: "CLUSTERING AND RANDOMIZATION FOR LARGE GRAPHS AND HYPERGRAPHS"
