Accelerated Methods for Riemannian Min-Max Optimization Ensuring Bounded Geometric Penalties

Abstract

In this work, we study optimization problems of the form $\min_x \max_y f(x, y)$, where $f(x, y)$ is defined on a product Riemannian manifold $\mathcal{M} \times \mathcal{N}$ and is $\mu_x$-strongly geodesically convex (g-convex) in $x$ and $\mu_y$-strongly g-concave in $y$, for $\mu_x, \mu_y \geq 0$. We design accelerated methods when $f$ is $(L_x, L_y, L_{xy})$-smooth and $\mathcal{M}$, $\mathcal{N}$ are Hadamard. To that aim we introduce new g-convex optimization results, of independent interest: we show global linear convergence for metric-projected Riemannian gradient descent and improve existing accelerated methods by reducing geometric constants. Additionally, we complete the analysis of two previous works applying to the Riemannian min-max case by removing an assumption about iterates staying in a pre-specified compact set.

Comment: added weakly-convex analysis and some remarks.
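To illustrate the kind of method analyzed, here is a minimal sketch of metric-projected Riemannian gradient descent on a toy one-dimensional Hadamard manifold (the positive reals with metric $g_x = dx^2/x^2$). The objective, step size, and constraint set are illustrative assumptions, not taken from the paper; on this manifold $f(x) = (\log x)^2$ is strongly g-convex and the metric projection onto an interval reduces to clamping.

```python
import math

def exp_map(x, v):
    """Exponential map at x for the metric dx^2/x^2: Exp_x(v) = x * e^{v/x}."""
    return x * math.exp(v / x)

def riem_grad(x):
    """Riemannian gradient of f(x) = (log x)^2, i.e. x^2 * f'(x) = 2 x log x."""
    return 2.0 * x * math.log(x)

def project(x, lo, hi):
    """Metric projection onto the g-convex set [lo, hi]; on a 1-D manifold
    this is just clamping to the nearest endpoint."""
    return min(max(x, lo), hi)

def projected_rgd(x0, step=0.25, iters=50, lo=0.5, hi=4.0):
    """Metric-projected Riemannian gradient descent (illustrative parameters)."""
    x = x0
    for _ in range(iters):
        x = exp_map(x, -step * riem_grad(x))  # Riemannian gradient step
        x = project(x, lo, hi)                # metric projection
    return x

print(projected_rgd(4.0))  # converges to the g-convex minimizer x = 1
```

In log coordinates this update is plain gradient descent on a strongly convex quadratic, which is why the iterates contract linearly toward $x = 1$, matching the global linear convergence the abstract claims for this method class.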
