Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
A central challenge to many fields of science and engineering involves
minimizing non-convex error functions over continuous, high-dimensional spaces.
Gradient descent or quasi-Newton methods are almost ubiquitously used to
perform such minimizations, and it is often thought that a main source of
difficulty for these local methods to find the global minimum is the
proliferation of local minima with much higher error than the global minimum.
Here we argue, based on results from statistical physics, random matrix theory,
neural network theory, and empirical evidence, that a deeper and more profound
difficulty originates from the proliferation of saddle points, not local
minima, especially in high-dimensional problems of practical interest. Such
saddle points are surrounded by high error plateaus that can dramatically slow
down learning, and give the illusory impression of the existence of a local
minimum. Motivated by these arguments, we propose a new approach to
second-order optimization, the saddle-free Newton method, that can rapidly
escape high-dimensional saddle points, unlike gradient descent and quasi-Newton
methods. We apply this algorithm to deep or recurrent neural network training,
and provide numerical evidence for its superior optimization performance.
Comment: The theoretical review and analysis in this article draw heavily from
arXiv:1405.4604 [cs.LG].
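The core idea can be summarized in a few lines: rather than the Newton step $-H^{-1}\nabla f$, which is attracted to saddle points, the saddle-free Newton update rescales the gradient by the inverse absolute Hessian, so negative-curvature directions are descended instead of ascended. Below is a minimal sketch of that rescaling; the toy cost, starting point, and damping constant are illustrative assumptions, not values from the paper.

```python
import numpy as np

def saddle_free_newton_step(grad, hess, damping=1e-3):
    """Return the update -(|H| + damping*I)^{-1} grad, with |H| = V diag(|lambda|) V^T."""
    eigvals, eigvecs = np.linalg.eigh(hess)      # H = V diag(lambda) V^T
    abs_eigvals = np.abs(eigvals) + damping      # |lambda| + damping for numerical stability
    return -eigvecs @ ((eigvecs.T @ grad) / abs_eigvals)

# Toy cost with a strict saddle at the origin (illustrative, not from the paper).
def grad_f(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])         # gradient of f(x, y) = x^2 - y^2

def hess_f(p):
    return np.array([[2.0, 0.0], [0.0, -2.0]])   # one positive, one negative eigenvalue

p = np.array([1e-3, 1e-3])                       # start very close to the saddle
for _ in range(20):
    p = p + saddle_free_newton_step(grad_f(p), hess_f(p))
print(p)  # x shrinks toward 0 while |y| grows: the iterate escapes the saddle
```

On this toy function the plain Newton step would jump straight to the saddle at the origin, whereas the rescaled step pushes the iterate away along the negative-curvature direction y.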
Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions
Given a non-convex, twice-differentiable cost function f, we prove that the set of
initial conditions such that gradient descent converges to saddle points where
$\nabla^2 f$ has at least one strictly negative eigenvalue has (Lebesgue) measure
zero, even for cost functions f with non-isolated critical points, answering an
open question in [Lee, Simchowitz, Jordan, Recht, COLT 2016].
Moreover, this result extends to forward-invariant convex subspaces, allowing
for weak (non-globally Lipschitz) smoothness assumptions. Finally, we produce
an upper bound on the allowable step size.
Comment: 2 figures.
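A quick numerical check of the measure-zero statement: on a cost with a strict saddle, gradient descent started from randomly drawn initial points escapes the saddle in every trial. The test function f(x, y) = x^2 - y^2, the step size, and the iteration budget below are illustrative choices, not quantities from the paper; the step size is simply taken well below 1/L for this f, where L is the Lipschitz constant of its gradient.

```python
import numpy as np

def grad_f(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])        # gradient of f(x, y) = x^2 - y^2

rng = np.random.default_rng(0)
step = 0.1                                      # illustrative step size, well below 1/L = 0.5 here
trials, escaped = 1000, 0
for _ in range(trials):
    p = rng.normal(scale=1e-3, size=2)          # random initialization near the saddle at the origin
    for _ in range(200):
        p = p - step * grad_f(p)                # plain gradient descent
    if abs(p[1]) > 1.0:                         # the negative-curvature coordinate blew up
        escaped += 1
print(f"{escaped}/{trials} random initializations escaped the strict saddle")
```

Because the initializations converging to the saddle form a measure-zero set, every random trial is expected to escape, which is what the count printed above reflects.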