Every Local Minimum Value is the Global Minimum Value of Induced Model in Non-convex Machine Learning
For nonconvex optimization in machine learning, this article proves that
every local minimum achieves the globally optimal value of the perturbable
gradient basis model at any differentiable point. As a result, nonconvex
machine learning is theoretically as well supported as convex machine learning with
a handcrafted basis in terms of the loss at differentiable local minima, except
in the case when a preference is given to the handcrafted basis over the
perturbable gradient basis. The proofs of these results are derived under mild
assumptions. Accordingly, the proven results are directly applicable to many
machine learning models, including practical deep neural networks, without any
modification of practical methods. Furthermore, as special cases of our general
results, this article improves or complements several state-of-the-art
theoretical results on deep neural networks, deep residual networks, and
overparameterized deep neural networks with a unified proof technique and novel
geometric insights. A special case of our results also contributes to the
theoretical foundation of representation learning.

Comment: Neural Computation, MIT Press
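The core claim can be sanity-checked numerically in a very reduced setting. The sketch below is not the paper's code and omits the "perturbable" refinement: assuming a squared loss and a tiny one-hidden-unit model f(x; w, v) = v·tanh(wx) (both hypothetical choices made only for illustration), it runs plain gradient descent to an approximate stationary point and compares the loss there with the global minimum of the convex least-squares problem over the plain gradient basis {∂f/∂w, ∂f/∂v} evaluated at that point.

```python
# Hedged illustration only (not the paper's code): for squared loss, the loss at an
# (approximately) differentiable local minimum of a tiny nonconvex model should match
# the global minimum of the convex least-squares problem over the gradient basis
# {df/dw, df/dv} evaluated at that point. The "perturbable" part of the paper's
# construction is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 20 scalar inputs and targets (arbitrary choices for the sketch).
X = rng.normal(size=20)
Y = np.sin(2.0 * X)

def f(theta, x):
    # One-hidden-unit nonconvex model: f(x; w, v) = v * tanh(w * x).
    w, v = theta
    return v * np.tanh(w * x)

def loss(theta):
    return 0.5 * np.mean((f(theta, X) - Y) ** 2)

def grad_basis(theta, x):
    # Per-example gradient of f w.r.t. the parameters: the "gradient basis".
    w, v = theta
    return np.stack([v * (1 - np.tanh(w * x) ** 2) * x,   # df/dw
                     np.tanh(w * x)], axis=-1)            # df/dv

# Plain gradient descent to an approximate stationary point (local minimum in practice).
theta = np.array([0.5, 0.5])
for _ in range(20000):
    w, v = theta
    r = f(theta, X) - Y
    g = np.array([np.mean(r * v * (1 - np.tanh(w * X) ** 2) * X),
                  np.mean(r * np.tanh(w * X))])
    theta -= 0.05 * g

# Global minimum of the induced linear model over the gradient basis: a convex
# least-squares problem, solved exactly in closed form.
Phi = grad_basis(theta, X)                       # shape (n, 2)
a, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
induced_loss = 0.5 * np.mean((Phi @ a - Y) ** 2)

print("loss at approximate local minimum :", loss(theta))
print("global minimum over gradient basis:", induced_loss)
```

Because gradient descent only reaches an approximate stationary point, the two printed values should agree up to a small numerical gap. The paper's theorem covers far more general models and losses under mild assumptions; this sketch merely illustrates the statement in the simplest case.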