Ill-Posedness and Optimization Geometry for Nonlinear Neural Network Training
In this work we analyze the role nonlinear activation functions play at
stationary points of dense neural network training problems. We consider a
training formulation with a generic least squares loss. We show that the
nonlinear activation functions used in the network construction play a critical
role in classifying stationary points of the loss landscape. We show that for
shallow dense networks, the nonlinear activation function determines the
Hessian nullspace in the vicinity of global minima (if they exist), and
therefore determines the ill-posedness of the training problem. Furthermore,
for shallow nonlinear networks we show that the zeros of the activation
function and its derivatives can lead to spurious local minima, and discuss
conditions for strict saddle points. We extend these results to deep dense
neural networks, showing that the last activation function plays an important
role in classifying stationary points because of how it enters the gradient
through the chain rule.
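
To make the setting concrete, the following sketch (our notation and a standard formulation; the paper's precise architecture and conditions may differ) spells out the shallow least squares problem and why the activation's zeros govern the Hessian nullspace at a zero-residual global minimum:

\[
L(\theta) \;=\; \tfrac{1}{2}\sum_{i=1}^{n} \bigl\| W_2\,\sigma(W_1 x_i) - y_i \bigr\|^{2},
\qquad \theta = (W_1, W_2).
\]

At a global minimum with zero residuals, the Hessian reduces to its Gauss-Newton term,

\[
\nabla^{2} L(\theta) \;=\; \sum_{i=1}^{n} J_i^{\top} J_i,
\qquad
J_i = \frac{\partial}{\partial \theta}\, W_2\,\sigma(W_1 x_i),
\]

so the Hessian nullspace is \(\bigcap_{i} \ker J_i\). The blocks of \(J_i\) involve \(\sigma(W_1 x_i)\) (derivatives with respect to \(W_2\)) and \(W_2\,\operatorname{diag}\bigl(\sigma'(W_1 x_i)\bigr)\) (derivatives with respect to \(W_1\)), so zeros of \(\sigma\) and \(\sigma'\) shrink the row space of each \(J_i\), enlarging the nullspace and hence the ill-posedness of the training problem.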
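The same point can be checked numerically. Below is a minimal, hypothetical JAX sketch (our construction, not the paper's code): labels are chosen so that the current parameters form an exact zero-residual global minimum, and the count of near-zero Hessian eigenvalues then exposes the nullspace described above.

import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

jax.config.update("jax_enable_x64", True)  # float64 for a clean spectrum

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)

d, h, m, n = 3, 8, 2, 4                    # input dim, hidden width, output dim, samples
W1 = jax.random.normal(k1, (h, d))
W2 = jax.random.normal(k2, (m, h))
X = jax.random.normal(k3, (n, d))

def predict(params, X):
    W1, W2 = params
    return jnp.tanh(X @ W1.T) @ W2.T       # shallow dense net f(x) = W2 tanh(W1 x)

# Choose labels so the current parameters are an exact global minimum
# (zero residual), the regime in which the Hessian equals its Gauss-Newton term.
Y = predict((W1, W2), X)

flat, unravel = ravel_pytree((W1, W2))     # flatten parameters into one vector

def loss(p):
    r = predict(unravel(p), X) - Y
    return 0.5 * jnp.sum(r ** 2)           # generic least squares loss

H = jax.hessian(loss)(flat)                # full Hessian over all 40 parameters
eigs = jnp.linalg.eigvalsh(H)
print("parameter count:", flat.size)
print("near-zero Hessian eigenvalues:", int((jnp.abs(eigs) < 1e-10).sum()))

With 40 parameters but only n * m = 8 residual components, the Gauss-Newton Hessian has rank at most 8, so at least 32 eigenvalues vanish; zeros of the activation or its derivative at the data would lower the rank, and enlarge the nullspace, further.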