
    Rieoptax: Riemannian Optimization in JAX

    We present Rieoptax, an open-source Python library for Riemannian optimization in JAX. We show that many differential-geometric primitives, such as the Riemannian exponential and logarithm maps, are usually faster in Rieoptax than in existing Python frameworks, on both CPU and GPU. We support a range of basic and advanced stochastic optimization solvers, including Riemannian stochastic gradient, stochastic variance reduction, and adaptive gradient methods. A distinguishing feature of the proposed toolbox is that we also support differentially private optimization on Riemannian manifolds.
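
    Below is a minimal, hedged sketch of the kind of primitive the library provides: a Riemannian gradient step on the unit sphere written directly in JAX. It does not use Rieoptax's actual API; the function names (sphere_exp, riemannian_grad, rsgd_step) and the Rayleigh-quotient toy problem are illustrative assumptions only.

    # Minimal JAX sketch (NOT Rieoptax's API): one Riemannian gradient step
    # on the unit sphere, using the sphere's exponential map.
    import jax
    import jax.numpy as jnp

    def sphere_exp(x, v):
        # Exponential map on the unit sphere: move from x along tangent vector v.
        n = jnp.linalg.norm(v)
        return jnp.where(n > 1e-12, jnp.cos(n) * x + jnp.sin(n) * v / (n + 1e-30), x)

    def riemannian_grad(f, x):
        # Project the Euclidean gradient of f onto the tangent space at x.
        g = jax.grad(f)(x)
        return g - jnp.dot(g, x) * x

    def rsgd_step(f, x, lr=0.1):
        # One Riemannian (stochastic) gradient descent step via the exponential map.
        return sphere_exp(x, -lr * riemannian_grad(f, x))

    # Toy usage: leading eigenvector of A by minimizing the negative Rayleigh quotient.
    A = jnp.array([[2.0, 0.3], [0.3, 1.0]])
    f = lambda x: -x @ A @ x
    x = jnp.array([1.0, 0.0])
    for _ in range(100):
        x = rsgd_step(f, x)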

    Riemannian Acceleration with Preconditioning for symmetric eigenvalue problems

    In this paper, we propose a Riemannian Acceleration with Preconditioning (RAP) method for symmetric eigenvalue problems, one of the most important geodesically convex optimization problems on Riemannian manifolds, and establish its acceleration. First, preconditioning for symmetric eigenvalue problems is discussed from the Riemannian-manifold viewpoint. To obtain local geodesic convexity, we develop the leading angle to measure the quality of the preconditioner for symmetric eigenvalue problems. A new Riemannian acceleration, the Locally Optimal Riemannian Accelerated Gradient (LORAG) method, is proposed to handle the local geodesic convexity of symmetric eigenvalue problems. Using techniques similar to those for RAGD and the analysis of locally convex optimization in Euclidean space, we analyze the convergence of LORAG. Combining the local geodesic convexity of symmetric eigenvalue problems under preconditioning with LORAG, we propose the Riemannian Acceleration with Preconditioning (RAP) and prove its acceleration. Additionally, when the Schwarz preconditioner, in particular an overlapping or non-overlapping domain decomposition method, is applied to elliptic eigenvalue problems, we also obtain a convergence rate of $1-C\kappa^{-1/2}$, where $C$ is a constant independent of the mesh sizes and the eigenvalue gap, $\kappa=\kappa_{\nu}\lambda_{2}/(\lambda_{2}-\lambda_{1})$, $\kappa_{\nu}$ is the parameter from the stable decomposition, and $\lambda_{1}$ and $\lambda_{2}$ are the two smallest eigenvalues of the elliptic operator. Numerical results show the power of Riemannian acceleration and preconditioning.
    Comment: Due to the abstract length limit on arXiv, the abstract here is shorter than in the PDF.
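
    For orientation only, here is a hedged sketch of plain preconditioned Riemannian gradient descent for the smallest eigenvalue on the sphere. It is not the paper's LORAG or RAP algorithm; it only illustrates where a preconditioner enters a Riemannian iteration for the symmetric eigenvalue problem. The function name, step size, and the Jacobi preconditioner choice are assumptions.

    # Hedged sketch (NOT the paper's LORAG/RAP): preconditioned Riemannian
    # gradient descent on the sphere for the smallest eigenvalue of A.
    import numpy as np

    def precond_rgrad_step(A, M_inv, x, lr=0.1):
        r = A @ x - (x @ A @ x) * x      # Riemannian gradient of the Rayleigh quotient
        d = M_inv(r)                     # apply the preconditioner
        d = d - (x @ d) * x              # project back onto the tangent space at x
        y = x - lr * d                   # take the step ...
        return y / np.linalg.norm(y)     # ... and retract to the sphere

    rng = np.random.default_rng(0)
    G = rng.standard_normal((50, 50))
    A = G @ G.T + np.eye(50)
    M_inv = lambda v: v / np.diag(A)     # simple Jacobi (diagonal) preconditioner
    x = rng.standard_normal(50)
    x /= np.linalg.norm(x)
    for _ in range(500):
        x = precond_rgrad_step(A, M_inv, x)
    smallest_eig_estimate = x @ A @ x    # Rayleigh quotient at the final iterate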

    Strong Convexity of Sets in Riemannian Manifolds

    Convex curvature properties are important in designing and analyzing convex optimization algorithms in the Hilbertian or Riemannian settings. In the Hilbertian setting, strongly convex sets are well studied. Herein, we propose various definitions of strong convexity for uniquely geodesic sets in a Riemannian manifold. We study their relationships, propose tools to determine whether a set is geodesically strongly convex, and analyze the convergence of optimization algorithms over such sets. In particular, we demonstrate that the Riemannian Frank-Wolfe algorithm enjoys a global linear convergence rate when the Riemannian scaling inequalities hold.
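
    The following is a hedged structural sketch of a Riemannian Frank-Wolfe iteration, intended only to make the algorithm's shape concrete; it is not the paper's exact method. The callables exp, log, rgrad, and lmo (the linear minimization oracle over the geodesically convex feasible set) are assumed to be problem-specific and user-supplied.

    # Hedged structural sketch of Riemannian Frank-Wolfe (not the paper's exact method).
    # exp(x, v): exponential map; log(x, z): logarithm map; rgrad(x): Riemannian gradient;
    # lmo(x, g): returns a point z in the feasible set minimizing <g, log(x, z)>.
    def riemannian_frank_wolfe(x0, exp, log, rgrad, lmo, n_iters=100):
        x = x0
        for k in range(n_iters):
            g = rgrad(x)                   # Riemannian gradient at the current iterate
            z = lmo(x, g)                  # Frank-Wolfe vertex inside the g-convex set
            gamma = 2.0 / (k + 2)          # standard open-loop step size
            x = exp(x, gamma * log(x, z))  # move along the geodesic from x toward z
        return x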

    Curvature and complexity: Better lower bounds for geodesically convex optimization

    We study the query complexity of geodesically convex (g-convex) optimization on a manifold. To isolate the effect of that manifold's curvature, we primarily focus on hyperbolic spaces. In a variety of settings (smooth or not; strongly g-convex or not; high- or low-dimensional), known upper bounds worsen with curvature. It is natural to ask whether this is warranted or an artifact. For many such settings, we propose a first set of lower bounds which indeed confirm that (negative) curvature is detrimental to complexity. To do so, we build on recent lower bounds (Hamilton and Moitra, 2021; Criscitiello and Boumal, 2022) for the particular case of smooth, strongly g-convex optimization. Using a number of techniques, we also secure lower bounds which capture dependence on the condition number and optimality gap, which was not previously the case. We suspect these bounds are not optimal. We conjecture optimal ones, and support them with a matching lower bound for a class of algorithms that includes subgradient descent, and a lower bound for a related game. Lastly, to pinpoint the difficulty of proving lower bounds, we study how negative curvature influences (and sometimes obstructs) interpolation with g-convex functions.
    Comment: v1 to v2: renamed the method of Rusciano (2019) from "center-of-gravity method" to "centerpoint method".