Doubly Smoothed GDA: Global Convergent Algorithm for Constrained Nonconvex-Nonconcave Minimax Optimization

Abstract

Nonconvex-nonconcave minimax optimization has received intense attention over the last decade due to its broad applications in machine learning. Unfortunately, most existing algorithms cannot be guaranteed to converge globally and may even suffer from limit cycles. To address this issue, we propose a novel single-loop algorithm called the doubly smoothed gradient descent ascent method (DSGDA), which naturally balances the primal and dual updates. The proposed DSGDA escapes limit cycles in various challenging nonconvex-nonconcave examples from the literature, including Forsaken, Bilinearly-coupled minimax, Sixth-order polynomial, and PolarGame. We further show that under a one-sided Kurdyka-\L{}ojasiewicz condition with exponent $\theta \in (0,1)$ (resp. a convex primal/concave dual function), DSGDA can find a game-stationary point with an iteration complexity of $\mathcal{O}(\epsilon^{-2\max\{2\theta,1\}})$ (resp. $\mathcal{O}(\epsilon^{-4})$). These match the best results for single-loop algorithms that solve nonconvex-concave or convex-nonconcave minimax problems, or problems satisfying the rather restrictive one-sided Polyak-\L{}ojasiewicz condition. Our work demonstrates, for the first time, the possibility of having a simple and unified single-loop algorithm for solving nonconvex-nonconcave, nonconvex-concave, and convex-nonconcave minimax problems.
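
To make the "doubly smoothed" idea concrete, the sketch below shows one plausible single-loop update in which both the descent and the ascent steps are regularized toward slowly updated anchor points. This is an illustrative sketch, not the paper's algorithm or constants: the toy bilinear objective f(x, y) = x*y, the step sizes, the smoothing weights, and the anchor rates are all assumptions chosen only so the example runs. Plain GDA orbits the saddle point of this bilinear game, whereas the anchored updates damp the rotation and converge to the origin.

# Toy bilinear game f(x, y) = x * y: plain GDA cycles around the saddle point
# (0, 0).  Anchoring BOTH players to slowly moving reference points (z, v)
# -- the double-smoothing idea -- damps the rotation.
# All constants below are illustrative assumptions, not the paper's choices.

def grad_x(x, y):   # partial derivative of f(x, y) = x * y w.r.t. x
    return y

def grad_y(x, y):   # partial derivative of f(x, y) = x * y w.r.t. y
    return x

c, alpha = 0.1, 0.1     # primal / dual step sizes
r1, r2 = 1.0, 1.0       # proximal smoothing weights for x and y
beta, mu = 0.01, 0.01   # how fast the anchors chase the iterates

x, y = 1.0, 1.0         # primal / dual iterates
z, v = x, y             # anchor (reference) points

for _ in range(50_000):
    # Descent step on the surrogate f(x, y) + (r1/2)(x - z)^2 - (r2/2)(y - v)^2
    x = x - c * (grad_x(x, y) + r1 * (x - z))
    # Ascent step on the same surrogate (using the fresh x, single-loop style)
    y = y + alpha * (grad_y(x, y) - r2 * (y - v))
    # Drag both anchors slowly toward the current iterates
    z = z + beta * (x - z)
    v = v + mu * (y - v)

print(x, y)  # both values should be close to the saddle point (0, 0)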
