
    Two-sided Grassmann-Rayleigh quotient iteration

    The two-sided Rayleigh quotient iteration proposed by Ostrowski computes a pair of corresponding left-right eigenvectors of a matrix C. We propose a Grassmannian version of this iteration, i.e., its iterates are pairs of p-dimensional subspaces instead of one-dimensional subspaces in the classical case. The new iteration generically converges locally cubically to the pairs of left-right p-dimensional invariant subspaces of C. Moreover, Grassmannian versions of the Rayleigh quotient iteration are given for the generalized Hermitian eigenproblem, the Hamiltonian eigenproblem and the skew-Hamiltonian eigenproblem. Comment: The text is identical to a manuscript that was submitted for publication on 19 April 200
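
    As context for the Grassmannian generalization above, the classical (vector) two-sided Rayleigh quotient iteration of Ostrowski is easy to state. The sketch below is a minimal NumPy version of that classical iteration, not of the paper's block/Grassmannian algorithm; the starting vectors, tolerance and iteration cap are illustrative assumptions.

```python
import numpy as np

def two_sided_rqi(C, x0, y0, tol=1e-12, maxit=50):
    """Sketch of the classical two-sided Rayleigh quotient iteration.

    Returns an approximate eigenvalue rho with right (x) and left (y)
    eigenvector approximations for a diagonalizable matrix C. Near
    convergence C - rho*I becomes nearly singular, so a production code
    would guard or regularize the linear solves.
    """
    n = C.shape[0]
    x = x0 / np.linalg.norm(x0)
    y = y0 / np.linalg.norm(y0)
    I = np.eye(n)
    rho = (y.conj() @ C @ x) / (y.conj() @ x)            # two-sided Rayleigh quotient
    for _ in range(maxit):
        if np.linalg.norm(C @ x - rho * x) < tol * np.linalg.norm(C):
            break
        x = np.linalg.solve(C - rho * I, x)              # right-vector update
        y = np.linalg.solve((C - rho * I).conj().T, y)   # left-vector update
        x /= np.linalg.norm(x)
        y /= np.linalg.norm(y)
        rho = (y.conj() @ C @ x) / (y.conj() @ x)
    return rho, x, y
```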

    Convergence of Gradient Descent for Low-Rank Matrix Approximation

    This paper provides a proof of global convergence of gradient search for low-rank matrix approximation. Such approximations have recently been of interest for large-scale problems, as well as for dictionary learning for sparse signal representations and matrix completion. The proof is based on interpreting the problem as an optimization on the Grassmann manifold and on the Fubini-Study distance on this space.
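
    One standard way to cast this problem on the Grassmann manifold (shown here as an illustration of the setting, not as the paper's exact algorithm or step-size analysis) is to minimize f(U) = ||A - U Uᵀ A||_F² over orthonormal U, a function that depends only on span(U). A minimal NumPy sketch with an assumed fixed step size:

```python
import numpy as np

def grassmann_gradient_descent(A, p, steps=500, eta=None, seed=0):
    """Gradient descent for min_U ||A - U U^T A||_F^2 with U^T U = I_p.

    The objective depends only on span(U), so this is an optimization on
    the Grassmann manifold. QR re-orthonormalization serves as a retraction.
    The fixed step size eta is an illustrative choice, not a rule taken
    from the paper.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((n, p)))
    B = A @ A.T                      # f(U) = ||A||_F^2 - trace(U^T B U)
    if eta is None:
        eta = 1.0 / np.linalg.norm(B, 2)
    for _ in range(steps):
        egrad = -2.0 * B @ U                     # Euclidean gradient
        rgrad = egrad - U @ (U.T @ egrad)        # project onto horizontal space
        U, _ = np.linalg.qr(U - eta * rgrad)     # descend and retract
    return U
```

    Generically the iteration converges to the span of the p leading left singular vectors of A, so U Uᵀ A recovers the best rank-p approximation in the Frobenius norm.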

    Gradient-type subspace iteration methods for the symmetric eigenvalue problem

    This paper explores variants of the subspace iteration algorithm for computing approximate invariant subspaces. The standard subspace iteration approach is revisited, and new variants that exploit gradient-type techniques combined with a Grassmann manifold viewpoint are developed. A gradient method as well as a conjugate gradient technique are described. Convergence of the gradient-based algorithm is analyzed, and a few numerical experiments are reported, indicating that the proposed algorithms are sometimes superior to a standard Chebyshev-based subspace iteration when compared in terms of the number of matrix-vector products, while not requiring the estimation of optimal parameters. An important ingredient in achieving this performance is the accurate and efficient implementation of an exact line search. In addition, new convergence proofs are presented for the non-accelerated gradient method, including local exponential convergence when started in an O(√δ) neighbourhood of the dominant subspace, where δ is the spectral gap. Comment: 29 pages
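
    For reference, the standard subspace iteration that the paper takes as its baseline alternates a block multiplication by A with re-orthonormalization, typically followed by a Rayleigh-Ritz projection. A minimal NumPy sketch for a symmetric A (fixed iteration count assumed; no Chebyshev acceleration, locking or gradient-type refinement):

```python
import numpy as np

def subspace_iteration(A, p, iters=100, seed=0):
    """Basic (orthogonal) subspace iteration for a symmetric matrix A.

    Returns an orthonormal basis of an approximate dominant invariant
    subspace of dimension p, together with Ritz values from a final
    Rayleigh-Ritz step. Convergence is linear, with rate governed by
    |lambda_{p+1}| / |lambda_p|.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    V, _ = np.linalg.qr(rng.standard_normal((n, p)))
    for _ in range(iters):
        V, _ = np.linalg.qr(A @ V)     # block power step + re-orthonormalization
    # Rayleigh-Ritz: project A onto span(V) and diagonalize the small matrix
    H = V.T @ A @ V
    ritz_vals, W = np.linalg.eigh(H)
    return V @ W, ritz_vals
```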

    Riemannian preconditioning

    This paper exploits a basic connection between sequential quadratic programming and Riemannian gradient optimization to address the general question of selecting a metric in Riemannian optimization, in particular when the Riemannian structure is sought on a quotient manifold. The proposed method is shown to be particularly insightful and efficient in quadratic optimization with orthogonality and/or rank constraints, which covers most current applications of Riemannian optimization in matrix manifolds. Belgium Science Policy Office, FNRS (Belgium). This is the author accepted manuscript; the final version is available from the Society for Industrial and Applied Mathematics via http://dx.doi.org/10.1137/14097086

    Geodesic Convexity of the Symmetric Eigenvalue Problem and Convergence of Riemannian Steepest Descent

    We study the convergence of the Riemannian steepest descent algorithm on the Grassmann manifold for minimizing the block version of the Rayleigh quotient of a symmetric and positive semi-definite matrix. Even though this problem is non-convex in the Euclidean sense and only very locally convex in the Riemannian sense, we discover a structure for this problem that is similar to geodesic strong convexity, namely, weak-strong convexity. This allows us to apply similar arguments from convex optimization when studying the convergence of the steepest descent algorithm, but with initialization conditions that do not depend on the eigengap δ. When δ > 0, we prove exponential convergence rates, while otherwise the convergence is algebraic. Additionally, we prove that this problem is geodesically convex in a neighbourhood of the global minimizer of radius O(√δ).
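
    Riemannian steepest descent for the block Rayleigh quotient f(U) = trace(UᵀAU) moves along Grassmann geodesics. The sketch below shows one such step using the closed-form geodesic of Edelman, Arias and Smith; the step size t is an assumed input, and the paper's step-size rules and weak-strong convexity analysis are not reproduced here.

```python
import numpy as np

def block_rayleigh_geodesic_step(A, U, t):
    """One Riemannian steepest-descent step for f(U) = trace(U^T A U).

    U is an n x p matrix with orthonormal columns representing a point on
    the Grassmann manifold. The descent direction is the negative
    Riemannian gradient, and the update follows the closed-form Grassmann
    geodesic (Edelman, Arias, Smith). The step size t is an assumed input.
    """
    egrad = 2.0 * A @ U                        # Euclidean gradient of trace(U^T A U)
    rgrad = egrad - U @ (U.T @ egrad)          # Riemannian gradient (horizontal part)
    H = -rgrad                                 # steepest-descent direction
    W, s, Vt = np.linalg.svd(H, full_matrices=False)   # thin SVD of the tangent vector
    U_new = (U @ Vt.T) @ np.diag(np.cos(s * t)) @ Vt + W @ np.diag(np.sin(s * t)) @ Vt
    U_new, _ = np.linalg.qr(U_new)             # re-orthonormalize against rounding drift
    return U_new
```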

    A new approach to numerical algorithms

    In this paper we develop a new Lanczos algorithm on the Grassmann manifold. This work comes in the wake of the article by A. Edelman, T. A. Arias and S. T. Smith, "The geometry of algorithms with orthogonality constraints".

    GrassmannOptim: An R Package for Grassmann Manifold Optimization

    The optimization of a real-valued objective function f(U), where U is a p × d, p > d, semi-orthogonal matrix such that UᵀU = I_d and f is invariant under right orthogonal transformations of U, is often referred to as Grassmann manifold optimization. Manifold optimization appears in a wide variety of computational problems in the applied sciences. In this article, we present GrassmannOptim, an R package for Grassmann manifold optimization. The implementation uses gradient-based algorithms and embeds a stochastic gradient method for global search. We describe the algorithms, provide some illustrative examples on the relevance of manifold optimization and, finally, show some practical uses of the package.
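
    GrassmannOptim itself is implemented in R; the defining invariance above (f depends on U only through span(U)) can be checked in a few lines of NumPy. The objective below is an arbitrary example chosen for illustration, not one shipped with the package.

```python
import numpy as np

# Example objective on semi-orthogonal U (p x d, U^T U = I_d): it depends
# on U only through the projector U U^T, hence only on span(U).
def f(U, A):
    return np.trace(U.T @ A @ U)

rng = np.random.default_rng(1)
p, d = 6, 2
A = rng.standard_normal((p, p)); A = A + A.T        # symmetric test matrix
U, _ = np.linalg.qr(rng.standard_normal((p, d)))    # semi-orthogonal U
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))    # right orthogonal transform

# Invariance under U -> U Q: f is really a function on the Grassmann manifold
print(np.isclose(f(U, A), f(U @ Q, A)))             # True
```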

    Accelerated Line-search and Trust-region Methods

    • …