Stochastic Gradient Descent (SGD) is an out-of-equilibrium algorithm used
extensively to train artificial neural networks. However, very little is known
about the extent to which SGD is crucial to the success of this technology and,
in particular, how effective it is at optimizing high-dimensional non-convex
cost functions compared to other optimization algorithms such as Gradient
Descent (GD). In this work we leverage dynamical mean field theory to analyze
its performance exactly in the high-dimensional limit. We consider the problem
of recovering a hidden high-dimensional non-linearly encrypted signal, a
prototypical hard high-dimensional non-convex optimization problem. We compare the
performance of SGD with that of GD and show that SGD largely outperforms GD. In
particular, a power-law fit of the relaxation times of these algorithms shows
that the recovery threshold for SGD with small batch size is smaller than the
corresponding threshold for GD.
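
To make the comparison concrete, here is a minimal NumPy sketch contrasting full-batch GD with small-batch SGD on a phase-retrieval-style signal-recovery loss. The specific loss, model, dimensions, and hyperparameters below are illustrative assumptions for a toy experiment, not the exact setup analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 400                       # signal dimension, number of measurements
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)      # hidden unit-norm signal
A = rng.standard_normal((m, n)) / np.sqrt(n)
y = (A @ x_star) ** 2                 # non-linear ("encrypted") observations

def grad(x, idx):
    """Gradient of 0.25 * mean((y - (A x)^2)^2) restricted to batch idx."""
    r = (A[idx] @ x) ** 2 - y[idx]
    return (A[idx].T @ (r * (A[idx] @ x))) / len(idx)

def run(batch_size, lr=0.05, steps=20000):
    # random initialization; toy scales, not the paper's high-dimensional limit
    x = rng.standard_normal(n) / np.sqrt(n)
    for _ in range(steps):
        idx = rng.choice(m, batch_size, replace=False)
        x -= lr * grad(x, idx)
    # overlap with the hidden signal, up to the global sign symmetry
    return abs(x @ x_star) / np.linalg.norm(x)

print("GD  overlap:", run(batch_size=m))   # full batch recovers plain GD
print("SGD overlap:", run(batch_size=8))   # small batch: stochastic gradients
```

An overlap near 1 indicates successful recovery of the hidden signal; varying the ratio m/n and the batch size in such a toy run mimics the kind of recovery-threshold comparison the abstract describes.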