Differentiable Game Mechanics
Deep learning is built on the foundational guarantee that gradient descent on
an objective function converges to local minima. Unfortunately, this guarantee
fails in settings, such as generative adversarial nets, that exhibit multiple
interacting losses. The behavior of gradient-based methods in games is not well
understood -- and is becoming increasingly important as adversarial and
multi-objective architectures proliferate. In this paper, we develop new tools
to understand and control the dynamics in n-player differentiable games.
The key result is to decompose the game Jacobian into two components. The
first, symmetric component, is related to potential games, which reduce to
gradient descent on an implicit function. The second, antisymmetric component,
relates to Hamiltonian games, a new class of games that obey a conservation law
akin to conservation laws in classical mechanical systems. The decomposition
motivates Symplectic Gradient Adjustment (SGA), a new algorithm for finding
stable fixed points in differentiable games. Basic experiments show SGA is
competitive with recently proposed algorithms for finding stable fixed points
in GANs -- while at the same time being applicable to, and having guarantees
in, much more general cases.
Comment: JMLR 2019, journal version of arXiv:1802.0564
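To make the decomposition concrete, the sketch below implements the adjustment the abstract describes on a toy two-player bilinear game: compute the simultaneous gradient, split the game Jacobian into symmetric and antisymmetric parts, and descend along the gradient adjusted by the antisymmetric component. The loss functions, step size, and the fixed lambda_ coefficient are illustrative assumptions (the paper chooses the sign of the coefficient adaptively), not the authors' reference implementation.

```python
import jax
import jax.numpy as jnp

# Toy bilinear game: player 1 minimises x*y, player 2 minimises -x*y.
# Plain simultaneous gradient descent cycles around the origin here;
# the symplectic adjustment pulls the iterates toward the fixed point.
def loss_1(x, y):
    return x * y

def loss_2(x, y):
    return -x * y

def simultaneous_grad(params):
    x, y = params
    g1 = jax.grad(loss_1, argnums=0)(x, y)   # dL1/dx
    g2 = jax.grad(loss_2, argnums=1)(x, y)   # dL2/dy
    return jnp.array([g1, g2])

def sga_step(params, lr=0.01, lambda_=1.0):
    xi = simultaneous_grad(params)
    # Game Jacobian J = d(xi)/d(params); antisymmetric part A = (J - J^T)/2.
    J = jax.jacobian(simultaneous_grad)(params)
    A = 0.5 * (J - J.T)
    # Adjust the simultaneous gradient with A^T xi before descending.
    # lambda_ is fixed to +1 for simplicity in this sketch.
    adjusted = xi + lambda_ * (A.T @ xi)
    return params - lr * adjusted

params = jnp.array([1.0, 1.0])
for _ in range(500):
    params = sga_step(params)
print(params)  # spirals in toward the stable fixed point at (0, 0)
```

In this purely antisymmetric (Hamiltonian) example the unadjusted dynamics conserve a quantity and orbit the origin, which is why the adjustment term is what produces convergence.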