Escaping Saddle-Points Faster under Interpolation-like Conditions
In this paper, we show that, under over-parametrization, several standard
stochastic optimization algorithms escape saddle-points and converge to
local-minimizers much faster. One of the fundamental aspects of
over-parametrized models is that they are capable of interpolating the training
data. We show that, under interpolation-like assumptions satisfied by the
stochastic gradients in an over-parametrization setting, the first-order oracle
complexity of the Perturbed Stochastic Gradient Descent (PSGD) algorithm to reach
an $\epsilon$-local-minimizer matches the corresponding deterministic rate of
$\tilde{\mathcal{O}}(1/\epsilon^{2})$. We next analyze the Stochastic
Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions,
and show that its oracle complexity to reach an $\epsilon$-local-minimizer is
$\tilde{\mathcal{O}}(1/\epsilon^{5/2})$. While this complexity is
better than the corresponding complexity of either PSGD or SCRN without
interpolation-like assumptions, it does not match the rate of
$\mathcal{O}(1/\epsilon^{3/2})$ corresponding to the deterministic
Cubic-Regularized Newton method. It appears that further Hessian-based
interpolation-like assumptions are necessary to bridge this gap. We also
discuss the corresponding improved complexities in the zeroth-order settings.

Comment: To appear in NeurIPS, 2020
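
As a hedged illustration of the first result, here is a minimal sketch of a perturbed stochastic gradient loop of the kind the abstract analyzes: plain SGD steps, plus isotropic noise injected from a small ball whenever the observed gradient is small, the standard mechanism for escaping saddle-points. The interface and parameter names (stoch_grad, eta, radius, perturb_interval) are hypothetical, not taken from the paper.

import numpy as np

def psgd(stoch_grad, x0, eta=0.01, radius=0.1, grad_tol=1e-3,
         perturb_interval=100, n_iters=10_000, seed=0):
    # Perturbed SGD sketch (hypothetical interface): stoch_grad(x) returns an
    # unbiased stochastic gradient; under interpolation-like conditions its
    # noise vanishes at stationary points of the full objective.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    last_perturb = -perturb_interval
    for t in range(n_iters):
        g = stoch_grad(x)
        # A small gradient may signal a saddle-point: add a perturbation drawn
        # uniformly from a ball of the given radius (scale a unit direction by
        # U^(1/d)), at most once per perturb_interval steps.
        if np.linalg.norm(g) <= grad_tol and t - last_perturb >= perturb_interval:
            direction = rng.normal(size=x.shape)
            direction /= np.linalg.norm(direction)
            x = x + radius * rng.uniform() ** (1.0 / x.size) * direction
            last_perturb = t
            g = stoch_grad(x)  # re-evaluate at the perturbed point
        x = x - eta * g
    return x

For instance, on f(x, y) = (x^2 - 1)^2 + y^2, whose origin is a strict saddle, calling psgd(lambda z: np.array([4*z[0]*(z[0]**2 - 1), 2*z[1]]), np.zeros(2)) uses an exact gradient (the limiting case that interpolation-like conditions approximate at stationary points), so it is the injected perturbation that moves the iterate off the saddle toward one of the minimizers at (+1, 0) and (-1, 0).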
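
In the same spirit, one SCRN iteration can be sketched as follows: draw a stochastic gradient g and a stochastic Hessian H, approximately minimize the cubic model m(s) = g^T s + (1/2) s^T H s + (M/6)||s||^3, and move to x + s. This is a generic illustration under an assumed interface (stoch_hess, inner_iters, and inner_lr are hypothetical names), not the paper's exact procedure.

import numpy as np

def scrn_step(x, stoch_grad, stoch_hess, M=1.0, inner_iters=50, inner_lr=0.01):
    # One Stochastic Cubic-Regularized Newton step (sketch): minimize the
    # cubic model m(s) approximately by gradient descent in s, then update x.
    g = stoch_grad(x)
    H = stoch_hess(x)
    s = np.zeros_like(x, dtype=float)
    for _ in range(inner_iters):
        # grad m(s) = g + H s + (M/2) ||s|| s
        grad_m = g + H @ s + 0.5 * M * np.linalg.norm(s) * s
        s = s - inner_lr * grad_m
    return x + s

Solving the cubic subproblem only approximately (here by plain gradient descent on m) is a common practical choice; M plays the role of the cubic-regularization constant, typically tied to the Lipschitz constant of the Hessian.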