
    Randomized Riemannian Preconditioning for Orthogonality Constrained Problems

    Optimization problems with (generalized) orthogonality constraints are prevalent across science and engineering. For example, in computational science they arise in the symmetric (generalized) eigenvalue problem, in nonlinear eigenvalue problems, and in electronic structure computations, to name a few. In statistics and machine learning, they arise, for example, in canonical correlation analysis and in linear discriminant analysis. In this article, we consider using randomized preconditioning in the context of optimization problems with generalized orthogonality constraints. Our proposed algorithms are based on Riemannian optimization on the generalized Stiefel manifold equipped with a non-standard preconditioned geometry, which requires developing the geometric components needed for algorithms based on this approach. Furthermore, we perform an asymptotic convergence analysis of the preconditioned algorithms, which helps characterize the quality of a given preconditioner using second-order information. Finally, for the problems of canonical correlation analysis and linear discriminant analysis, we develop randomized preconditioners along with corresponding bounds on the relevant condition number.
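    To make the setting concrete, here is a minimal NumPy sketch of Riemannian gradient descent for the generalized eigenvalue problem min tr(X^T A X) subject to X^T B X = I on the generalized Stiefel manifold, using the exact B-induced metric and a Cholesky-based retraction. The article's contribution is, roughly, to replace the exact B-solves defining this geometry with a cheap randomized preconditioner; that part is not reproduced here, and all names, step-size choices, and iteration counts below are illustrative assumptions, not the authors' code.

import numpy as np

def retract(Y, B):
    # B-orthonormalize the columns of Y so that X^T B X = I
    # (a Cholesky-based retraction onto the generalized Stiefel manifold).
    L = np.linalg.cholesky(Y.T @ B @ Y)
    return np.linalg.solve(L, Y.T).T          # Y @ inv(L).T

def riemannian_gd(A, B, p, iters=500):
    # Minimize trace(X^T A X) s.t. X^T B X = I; minimizers span the
    # eigenvectors of A x = lambda B x with the p smallest eigenvalues.
    n = A.shape[0]
    X = retract(np.random.default_rng(0).standard_normal((n, p)), B)
    step = 0.5 / np.linalg.norm(np.linalg.solve(B, A), 2)   # crude fixed step size
    for _ in range(iters):
        # Riemannian gradient w.r.t. the B-induced metric:
        # grad = B^{-1} (2 A X) - X sym(X^T (2 A X)); here X^T A X is symmetric.
        AX = A @ X
        grad = 2.0 * (np.linalg.solve(B, AX) - X @ (X.T @ AX))
        X = retract(X - step * grad, B)
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, p = 100, 3
    M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)
    M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)
    X = riemannian_gd(A, B, p)
    # Approximate p smallest generalized eigenvalues (illustration only;
    # convergence speed depends on the spectral gaps).
    print(np.sort(np.diag(X.T @ A @ X)))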

    Riemannian preconditioning

    This paper exploits a basic connection between sequential quadratic programming and Riemannian gradient optimization to address the general question of selecting a metric in Riemannian optimization, in particular when the Riemannian structure is sought on a quotient manifold. The proposed method is shown to be particularly insightful and efficient in quadratic optimization with orthogonality and/or rank constraints, which covers most current applications of Riemannian optimization in matrix manifolds. [Funding: Belgium Science Policy Office, FNRS (Belgium).] This is the author accepted manuscript; the final version is available from the Society for Industrial and Applied Mathematics via http://dx.doi.org/10.1137/14097086
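    A compact way to state the connection the abstract refers to, in generic notation rather than the paper's: for min_x f(x) subject to c(x) = 0 with Lagrangian L(x, lambda) = f(x) - lambda^T c(x), the SQP/Newton step is driven by the Hessian of the Lagrangian restricted to the linearized constraint set. Choosing the Riemannian metric so that, on the tangent space T_x of the constraint manifold,

        g_x(\xi, \eta) \;\approx\; \xi^{\top} \nabla_{xx}^{2} L(x, \lambda_x)\, \eta,
        \qquad \xi, \eta \in T_x,

    makes the Riemannian gradient direction under g approximate the projected Newton (SQP) direction, so plain Riemannian gradient descent inherits Newton-like local behavior. This is the sense in which the choice of metric acts as a preconditioner.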

    Riemannian Acceleration with Preconditioning for symmetric eigenvalue problems

    In this paper, we propose a Riemannian Acceleration with Preconditioning (RAP) method for symmetric eigenvalue problems, one of the most important geodesically convex optimization problems on Riemannian manifolds, and obtain acceleration. First, preconditioning for symmetric eigenvalue problems is discussed from the Riemannian-manifold viewpoint. To obtain local geodesic convexity, we develop the leading angle to measure the quality of a preconditioner for symmetric eigenvalue problems. A new Riemannian acceleration, the Locally Optimal Riemannian Accelerated Gradient (LORAG) method, is proposed to handle the merely local geodesic convexity of symmetric eigenvalue problems. Using techniques similar to those for RAGD and to the analysis of locally convex optimization in Euclidean space, we analyze the convergence of LORAG. Combining the local geodesic convexity of symmetric eigenvalue problems under preconditioning with LORAG, we propose RAP and prove its acceleration. Additionally, when a Schwarz preconditioner, in particular an overlapping or non-overlapping domain decomposition method, is applied to elliptic eigenvalue problems, we obtain a convergence rate of $1-C\kappa^{-1/2}$, where $C$ is a constant independent of the mesh sizes and the eigenvalue gap, $\kappa=\kappa_{\nu}\lambda_{2}/(\lambda_{2}-\lambda_{1})$, $\kappa_{\nu}$ is the parameter from the stable decomposition, and $\lambda_{1}$ and $\lambda_{2}$ are the two smallest eigenvalues of the elliptic operator. Numerical results show the power of Riemannian acceleration and preconditioning. Comment: due to arXiv's abstract length limit, the abstract here is shorter than in the PDF.
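    The "locally optimal" flavor of LORAG is in the same family as the classical locally optimal preconditioned CG idea for symmetric eigenvalue problems. Below is a minimal single-vector sketch of that classical scheme, shown only to illustrate how a preconditioner T approximating A^{-1} enters such iterations; it is not the LORAG/RAP algorithm from the paper, and the Jacobi preconditioner used in the demo is purely illustrative.

import numpy as np

def lopcg_smallest(A, T=None, iters=200, seed=0):
    # Locally optimal preconditioned CG iteration for the smallest eigenpair
    # of a symmetric matrix A: each step performs Rayleigh-Ritz on the span of
    # the current iterate x, the preconditioned residual T r, and the previous
    # update direction p.
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    p = None
    for _ in range(iters):
        lam = x @ A @ x
        r = A @ x - lam * x                  # eigen-residual
        w = r if T is None else T @ r        # preconditioned residual
        cols = [x, w] if p is None else [x, w, p]
        Q, _ = np.linalg.qr(np.column_stack(cols))
        H = Q.T @ A @ Q
        vals, vecs = np.linalg.eigh(H)
        x_new = Q @ vecs[:, 0]               # Ritz vector of the smallest Ritz value
        p = x_new - (x @ x_new) * x          # keep the part of the step orthogonal to x
        x = x_new
    return x @ A @ x, x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 300
    G = rng.standard_normal((n, n))
    A = (G + G.T) / 2 + n * np.eye(n)        # symmetric positive definite test matrix
    T = np.diag(1.0 / np.diag(A))            # toy Jacobi preconditioner (illustrative)
    lam, _ = lopcg_smallest(A, T)
    print(lam, np.linalg.eigvalsh(A)[0])     # the two values should roughly agree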

    Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure

    The numerical solution of partial differential equations on high-dimensional domains gives rise to computationally challenging linear systems. When using standard discretization techniques, the size of the linear system grows exponentially with the number of dimensions, making the use of classic iterative solvers infeasible. During the last few years, low-rank tensor approaches have been developed that make it possible to mitigate this curse of dimensionality by exploiting the underlying structure of the linear operator. In this work, we focus on tensors represented in the Tucker and tensor train formats. We propose two preconditioned gradient methods on the corresponding low-rank tensor manifolds: a Riemannian version of the preconditioned Richardson method as well as an approximate Newton scheme based on the Riemannian Hessian. For the latter, considerable attention is given to the efficient solution of the resulting Newton equation. In numerical experiments, we compare the efficiency of our Riemannian algorithms with other established tensor-based approaches such as a truncated preconditioned Richardson method and the alternating linear scheme. The results show that our approximate Riemannian Newton scheme is significantly faster in cases when the application of the linear operator is expensive. Comment: 24 pages, 8 figures.
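    As a point of reference for the kind of iteration the abstract compares against, here is a minimal sketch of a truncated Richardson method for a tensor-structured system, written for the two-dimensional (matrix) case so the low-rank truncation is just an SVD. The shifted operator, rank, and step size are illustrative assumptions; the paper's Riemannian and approximate Newton variants on Tucker/TT manifolds are not reproduced here.

import numpy as np

def truncate(X, r):
    # Best rank-r approximation via truncated SVD (the matrix analogue of the
    # HOSVD / TT-rounding used for Tucker and tensor-train formats).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def truncated_richardson(A, C, r, iters=200):
    # Solve the structured system (I kron A + A kron I) vec(X) = vec(C), i.e.
    # the Lyapunov-like equation A X + X A = C, by Richardson iteration with a
    # scalar step size and rank truncation after every step.
    lam = np.linalg.eigvalsh(A)
    omega = 1.0 / (lam[0] + lam[-1])         # simplest possible "preconditioner"
    X = np.zeros_like(C)
    for _ in range(iters):
        R = C - (A @ X + X @ A)              # residual of the structured system
        X = truncate(X + omega * R, r)       # low-rank truncation step
    return X

if __name__ == "__main__":
    n = 100
    # 1D Laplacian-like matrix, shifted by the identity so this toy iteration
    # stays well conditioned; in practice that role is played by a real
    # preconditioner, which is the point of the paper.
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + np.eye(n)
    b = np.linspace(0.0, 1.0, n)
    C = np.outer(b, b)                       # rank-1 right-hand side
    X = truncated_richardson(A, C, r=10)
    print(np.linalg.norm(A @ X + X @ A - C) / np.linalg.norm(C))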

    Faster Randomized Methods for Orthogonality Constrained Problems

    Recent literature has advocated the use of randomized methods for accelerating the solution of various matrix problems arising throughout data science and computational science. One popular strategy for leveraging randomization is to use it as a way to reduce problem size. However, methods based on this strategy lack sufficient accuracy for some applications. Randomized preconditioning is another approach for leveraging randomization, which provides higher accuracy. The main challenge in using randomized preconditioning is the need for an underlying iterative method, so it has so far been applied almost exclusively to solving regression problems and linear systems. In this article, we show how to expand the application of randomized preconditioning to another important set of problems prevalent across data science: optimization problems with (generalized) orthogonality constraints. We demonstrate our approach, which is based on the framework of Riemannian optimization and Riemannian preconditioning, on the problem of computing the dominant canonical correlations and on the Fisher linear discriminant analysis problem. For both problems, we evaluate the effect of preconditioning on the computational costs and asymptotic convergence, and demonstrate empirically the utility of our approach.
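    The basic randomized-preconditioning ingredient that work of this kind builds on can be illustrated in a few lines: sketch a tall data matrix, then use the sketched Gram matrix (plus an optional ridge term) as a cheap surrogate for the exact one inside an iterative method. The Gaussian sketch, the sketch size, and the conditioning check below are illustrative assumptions, not the authors' exact construction for CCA or LDA.

import numpy as np

def randomized_gram_preconditioner(Z, sketch_size, ridge=0.0, seed=0):
    # Z is a tall n-by-d matrix.  Build M = (S Z)^T (S Z) + ridge * I from a
    # Gaussian sketch S as a cheap surrogate for B = Z^T Z + ridge * I, and
    # return its upper-triangular Cholesky factor R, so that M = R^T R and
    # applying M^{-1} costs two triangular solves.
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((sketch_size, Z.shape[0])) / np.sqrt(sketch_size)
    SZ = S @ Z
    M = SZ.T @ SZ + ridge * np.eye(Z.shape[1])
    return np.linalg.cholesky(M).T

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d = 20000, 50
    Z = rng.standard_normal((n, d)) @ np.diag(np.logspace(0, 3, d))  # ill-conditioned columns
    B = Z.T @ Z
    R = randomized_gram_preconditioner(Z, sketch_size=4 * d)
    Ri = np.linalg.inv(R)
    w = np.linalg.eigvalsh(Ri.T @ B @ Ri)      # spectrum of the preconditioned matrix
    print("cond(B)              =", np.linalg.cond(B))
    print("cond(preconditioned) =", w[-1] / w[0])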

    A Riemannian View on Shape Optimization

    Shape optimization based on the shape calculus is numerically mostly performed by means of steepest descent methods. This paper provides a novel framework for analyzing shape-Newton optimization methods by exploiting a Riemannian perspective. A Riemannian shape Hessian is defined, yielding often-sought properties such as symmetry and quadratic convergence for Newton optimization methods. Comment: 15 pages, 1 figure, 1 table. Forschungsbericht / Universität Trier, Mathematik, Informatik 2012,
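    For context on the abstract's claim about symmetry and quadratic convergence, the generic Riemannian Newton step it builds on can be written, in standard notation not specific to the shape-calculus setting, as

        \operatorname{Hess} f(x_k)[\xi_k] \;=\; -\operatorname{grad} f(x_k),
        \qquad x_{k+1} \;=\; R_{x_k}(\xi_k),

    where grad f and Hess f are the Riemannian gradient and Hessian and R is a retraction. The Riemannian Hessian is self-adjoint with respect to the metric by construction, and the iteration converges locally quadratically under standard assumptions, which are the properties a properly defined Riemannian shape Hessian is meant to deliver.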