Zeroth-Order Alternating Gradient Descent Ascent Algorithms for a Class of Nonconvex-Nonconcave Minimax Problems
In this paper, we consider a class of nonconvex-nonconcave minimax problems,
i.e., NC-PL minimax problems, whose objective functions satisfy the
Polyak-Łojasiewicz (PL) condition with respect to the inner variable. We
propose a zeroth-order alternating gradient descent ascent (ZO-AGDA) algorithm
and a zeroth-order variance-reduced alternating gradient descent ascent
(ZO-VRAGDA) algorithm for solving NC-PL minimax problems under the
deterministic and the stochastic setting, respectively. The number of
iterations the ZO-AGDA and ZO-VRAGDA algorithms need to obtain an
$\epsilon$-stationary point of an NC-PL minimax problem is upper bounded by …
and …, respectively. To the best of our knowledge, these are the first two
zeroth-order algorithms with an iteration-complexity guarantee for solving
NC-PL minimax problems.
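The core idea of the abstract above can be sketched in a few lines: estimate gradients from function values only (zeroth-order) and alternate a descent step on the inner variable with an ascent step on the outer one. This is a minimal illustration, not the paper's ZO-AGDA; the toy objective `f`, the coordinate-wise two-point estimator, and the step sizes are all illustrative assumptions.

```python
import numpy as np

def f(x, y):
    # Hypothetical smooth toy objective, minimized over x, maximized over y.
    return x[0] ** 2 + 2.0 * x[0] * y[0] - 2.0 * y[0] ** 2

def zo_grad(fn, z, mu=1e-5):
    """Coordinate-wise two-point zeroth-order gradient estimate of fn at z,
    using only function evaluations (no analytic gradient)."""
    g = np.zeros_like(z)
    for i in range(z.size):
        e = np.zeros_like(z)
        e[i] = mu
        g[i] = (fn(z + e) - fn(z - e)) / (2.0 * mu)
    return g

def zo_agda(x, y, eta_x=0.05, eta_y=0.1, iters=500):
    """Alternating gradient descent (on x) / ascent (on y), zeroth-order."""
    for _ in range(iters):
        x = x - eta_x * zo_grad(lambda u: f(u, y), x)  # descent step on x
        y = y + eta_y * zo_grad(lambda v: f(x, v), y)  # ascent step on y
    return x, y

x, y = zo_agda(np.array([1.0]), np.array([1.0]))
# For this toy problem the iterates contract toward the saddle point (0, 0).
```

The variance-reduced variant (ZO-VRAGDA) would replace each per-iteration estimate with an estimator that reuses a periodically refreshed full (or large-batch) estimate in the stochastic setting; that machinery is omitted here.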
Stochastic Frank-Wolfe Methods for Nonconvex Optimization
We study Frank-Wolfe methods for nonconvex stochastic and finite-sum
optimization problems. Frank-Wolfe methods (in the convex case) have gained
tremendous recent interest in machine learning and optimization communities due
to their projection-free property and their ability to exploit structured
constraints. However, our understanding of these algorithms in the nonconvex
setting is fairly limited. In this paper, we propose nonconvex stochastic
Frank-Wolfe methods and analyze their convergence properties. For objective
functions that decompose into a finite-sum, we leverage ideas from variance
reduction techniques for convex optimization to obtain new variance reduced
nonconvex Frank-Wolfe methods that have provably faster convergence than the
classical Frank-Wolfe method. Finally, we show that the faster convergence
rates of our variance reduced methods also translate into improved convergence
rates for the stochastic setting.
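The projection-free property mentioned above is the defining feature of Frank-Wolfe: each iteration solves a *linear* problem over the constraint set (the linear minimization oracle, LMO) instead of projecting. The sketch below shows the classical deterministic method on a simplex constraint; the objective, LMO, and step-size schedule are standard textbook choices for illustration, not the stochastic or variance-reduced variants the abstract proposes.

```python
import numpy as np

def fw_minimize(grad, lmo, x0, iters=200):
    """Classical Frank-Wolfe: move toward the LMO solution each step,
    staying feasible by convex combination -- no projection needed."""
    x = x0
    for k in range(iters):
        g = grad(x)
        s = lmo(g)                # s = argmin_{s in C} <g, s>
        gamma = 2.0 / (k + 2.0)   # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Illustrative example: minimize ||x - b||^2 over the probability simplex.
# The simplex LMO just picks the vertex with the most negative gradient entry.
b = np.array([0.1, 0.7, 0.2])
grad = lambda x: 2.0 * (x - b)
lmo = lambda g: np.eye(g.size)[np.argmin(g)]
x = fw_minimize(grad, lmo, np.ones(3) / 3.0)
```

A stochastic version would replace `grad(x)` with a mini-batch estimate, and the variance-reduced finite-sum versions discussed in the abstract periodically recompute a full gradient to control the estimator's variance.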