Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions
Generative Adversarial Networks (GANs) are a novel class of deep generative
models that has recently gained significant attention. GANs learn complex,
high-dimensional distributions implicitly over images, audio, and other data.
However, training GANs poses major challenges, namely mode collapse,
non-convergence, and instability, which stem from inappropriate network
architecture design, choice of objective function, and selection of the
optimization algorithm. Recently, to address these challenges, several
solutions for better design and optimization of GANs have been investigated,
based on re-engineered network architectures, new objective functions, and
alternative optimization algorithms. To the best of our knowledge, there is no existing
survey that has particularly focused on broad and systematic developments of
these solutions. In this study, we perform a comprehensive survey of the
advancements in GAN design and optimization solutions proposed to handle these
challenges. We first identify key research issues within each design and
optimization technique and then propose a new taxonomy to structure solutions
by key research issues. In accordance with the taxonomy, we provide a detailed
discussion of the different GAN variants proposed within each solution and
their relationships. Finally, based on the insights gained, we present
promising research directions in this rapidly growing field.

Comment: 42 pages, 13 figures, tables
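For background on the objective functions whose design the survey examines, the original GAN minimax losses can be sketched as follows. This is a minimal illustrative sketch, not code from the survey; the function name and the probability inputs are hypothetical.

```python
import numpy as np

# Minimal sketch of the GAN discriminator/generator losses (original
# minimax formulation); inputs are hypothetical discriminator outputs.
def gan_losses(d_real, d_fake, eps=1e-12):
    """d_real: D(x) on real samples; d_fake: D(G(z)) on generated samples.
    Both are arrays of probabilities in (0, 1)."""
    # Discriminator maximizes log D(x) + log(1 - D(G(z))), i.e. minimizes:
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    # Non-saturating generator loss, commonly used instead of the raw
    # minimax generator term to mitigate vanishing gradients:
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss

d_loss, g_loss = gan_losses(np.array([0.9, 0.8]), np.array([0.2, 0.1]))
```

When the discriminator is maximally uncertain (all outputs 0.5), the discriminator loss equals 2 log 2, the value at the minimax saddle point.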
Competitive Gradient Descent
We introduce a new algorithm for the numerical computation of Nash equilibria
of competitive two-player games. Our method is a natural generalization of
gradient descent to the two-player setting where the update is given by the
Nash equilibrium of a regularized bilinear local approximation of the
underlying game. It avoids oscillatory and divergent behaviors seen in
alternating gradient descent. Using numerical experiments and rigorous
analysis, we provide a detailed comparison to methods based on \emph{optimism}
and \emph{consensus} and show that our method avoids making any unnecessary
changes to the gradient dynamics while achieving exponential (local)
convergence for (locally) convex-concave zero-sum games. Convergence and
stability properties of our method are robust to strong interactions between
the players, without adapting the stepsize, which is not the case with previous
methods. In our numerical experiments on non-convex-concave problems, existing
methods are prone to divergence and instability due to their sensitivity to
interactions among the players, whereas we never observe divergence of our
algorithm. The ability to choose larger stepsizes furthermore allows our
algorithm to achieve faster convergence, as measured by the number of model
evaluations.

Comment: Appeared in NeurIPS 2019. This version corrects an error in Theorem
2.2. Source code used for the numerical experiments can be found under
http://github.com/f-t-s/CGD. A high-level overview of this work can be found
under http://f-t-s.github.io/projects/cgd
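For a zero-sum bilinear game f(x, y) = xᵀAy, the CGD update (the Nash equilibrium of the regularized bilinear local approximation described above) has a closed form. The sketch below is an illustrative specialization under that zero-sum assumption, not the authors' released code; the game matrix and step size are arbitrary choices.

```python
import numpy as np

# Sketch of one competitive gradient descent (CGD) step on the zero-sum
# bilinear game f(x, y) = x^T A y, where x minimizes and y maximizes f.
def cgd_step(x, y, A, eta):
    n, m = A.shape
    # Closed-form solution of the regularized bilinear local game: each
    # player's move is preconditioned by (I + eta^2 * cross-terms), which
    # damps the rotational dynamics that make simultaneous GD oscillate.
    dx = -eta * np.linalg.solve(
        np.eye(n) + eta**2 * A @ A.T, A @ y + eta * A @ (A.T @ x))
    dy = eta * np.linalg.solve(
        np.eye(m) + eta**2 * A.T @ A, A.T @ x - eta * A.T @ (A @ y))
    return x + dx, y + dy

# On the scalar game f(x, y) = x * y, simultaneous gradient descent cycles
# around the Nash equilibrium at the origin, while CGD contracts toward it.
x, y = np.array([1.0]), np.array([1.0])
A = np.eye(1)
for _ in range(200):
    x, y = cgd_step(x, y, A, eta=0.5)
```

Note the relatively large step size eta = 0.5 still contracts, consistent with the abstract's claim that stability holds without adapting the step size to the interaction strength.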
Differentiable Game Mechanics
Deep learning is built on the foundational guarantee that gradient descent on
an objective function converges to local minima. Unfortunately, this guarantee
fails in settings, such as generative adversarial nets, that exhibit multiple
interacting losses. The behavior of gradient-based methods in games is not well
understood -- and is becoming increasingly important as adversarial and
multi-objective architectures proliferate. In this paper, we develop new tools
to understand and control the dynamics in n-player differentiable games.
The key result is to decompose the game Jacobian into two components. The
first, symmetric component, is related to potential games, which reduce to
gradient descent on an implicit function. The second, antisymmetric component,
relates to Hamiltonian games, a new class of games that obey a conservation law
akin to conservation laws in classical mechanical systems. The decomposition
motivates Symplectic Gradient Adjustment (SGA), a new algorithm for finding
stable fixed points in differentiable games. Basic experiments show SGA is
competitive with recently proposed algorithms for finding stable fixed points
in GANs -- while at the same time being applicable to, and having guarantees
in, much more general cases.

Comment: JMLR 2019, journal version of arXiv:1802.0564
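The decomposition and adjustment described above can be sketched for the scalar zero-sum game f(x, y) = xy, whose Jacobian is purely antisymmetric (a Hamiltonian game, so the symmetric potential part vanishes). The step size and adjustment coefficient below are illustrative choices, not values from the paper.

```python
import numpy as np

# Sketch of Symplectic Gradient Adjustment (SGA) on the two-player zero-sum
# game with losses (f, -f), f(x, y) = x * y; scalar players for clarity.
def sga_step(x, y, eta=0.1, lam=1.0):
    # Simultaneous gradient of the losses: xi = (df/dx, d(-f)/dy) = (y, -x).
    xi = np.array([y, -x])
    # Game Jacobian of xi. Here it is purely antisymmetric, so the
    # symmetric (potential-game) component S is zero.
    J = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])
    S = 0.5 * (J + J.T)           # symmetric component (zero here)
    A = 0.5 * (J - J.T)           # antisymmetric (Hamiltonian) component
    # SGA adjusts the dynamics so that descent also follows the gradient
    # of the conserved Hamiltonian, pulling trajectories toward the
    # stable fixed point instead of cycling around it.
    xi_adj = xi + lam * A.T @ xi
    return x - eta * xi_adj[0], y - eta * xi_adj[1]

x, y = 1.0, 1.0
for _ in range(500):
    x, y = sga_step(x, y)
```

Unadjusted simultaneous gradient descent on this game orbits the origin at constant radius; with the adjustment, the iterates spiral into the stable fixed point at (0, 0).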