Nonconvex-nonconcave minimax optimization has received intense attention over
the last decade due to its broad applications in machine learning.
Unfortunately, most existing algorithms lack global convergence guarantees and
may even suffer from limit cycles. To address this issue, we propose a
novel single-loop algorithm, the doubly smoothed gradient descent ascent
(DSGDA) method, which naturally balances the primal and dual updates. The
proposed DSGDA eliminates limit cycles in various challenging
nonconvex-nonconcave examples in the literature, including Forsaken,
Bilinearly-coupled minimax, Sixth-order polynomial, and PolarGame. We further
show that under a one-sided Kurdyka-Łojasiewicz condition with exponent
θ ∈ (0, 1) (resp. a convex primal/concave dual function), DSGDA can find a
game-stationary point with an iteration complexity of
O(ϵ^{−2 max{2θ, 1}}) (resp.
O(ϵ^{−4})). These match the best results for single-loop
algorithms that solve nonconvex-concave or convex-nonconcave minimax problems,
or problems satisfying the rather restrictive one-sided Polyak-\L{}ojasiewicz
condition. Our work demonstrates, for the first time, the possibility of having
a simple and unified single-loop algorithm for solving nonconvex-nonconcave,
nonconvex-concave, and convex-nonconcave minimax problems.
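The abstract does not spell out the update rule, so the following is only an illustrative sketch of a doubly smoothed GDA-style single-loop iteration, not the paper's exact algorithm: each player takes a gradient step while being pulled toward an exponentially averaged anchor (z for the primal, v for the dual). All names and parameter values (s, r, beta) are assumptions chosen for the demo. On the bilinear game f(x, y) = xy, plain simultaneous GDA spirals outward, while the anchored iteration contracts to the equilibrium (0, 0).

```python
def dsgda_sketch(grad_x, grad_y, x, y, steps=10000, s=0.05, r=1.0, beta=0.05):
    """Illustrative doubly smoothed GDA-style iteration (not the paper's
    exact update); s, r, beta are assumed demo parameters."""
    z, v = x, y  # smoothing anchors for the primal and dual variables
    for _ in range(steps):
        # descent on x, pulled toward the anchor z
        x = x - s * (grad_x(x, y) + r * (x - z))
        # ascent on y, pulled toward the anchor v (uses the fresh x)
        y = y + s * (grad_y(x, y) - r * (y - v))
        # anchors track the iterates via exponential averaging
        z = z + beta * (x - z)
        v = v + beta * (y - v)
    return x, y

# Bilinear game f(x, y) = x * y
gx = lambda x, y: y   # ∂f/∂x
gy = lambda x, y: x   # ∂f/∂y

x_s, y_s = dsgda_sketch(gx, gy, 1.0, 1.0)

# Plain simultaneous GDA for comparison, same step size
x, y = 1.0, 1.0
for _ in range(10000):
    x, y = x - 0.05 * y, y + 0.05 * x

print(abs(x_s), abs(y_s))          # smoothed iterates: both near zero
print((x * x + y * y) ** 0.5)      # plain GDA: radius blows up
```

The anchors act as a proximal-style regularizer on both sides, which is one way to see why the cycling behavior of GDA on rotational vector fields is suppressed.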