Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution
This work characterizes the effect of depth on the optimization landscape of
linear regression, showing that, despite their nonconvexity, deeper models have
a more desirable optimization landscape. We consider a robust and
over-parameterized setting, where a subset of the measurements is grossly
corrupted with noise and the true linear model is captured via an $N$-layer
linear neural network. On the negative side, we show that this problem
\textit{does not} have a benign landscape: given any $N\geq 1$, with constant
probability, there exists a solution corresponding to the ground truth that is
neither a local nor a global minimum. However, on the positive side, we prove that,
for any $N$-layer model with $N\geq 2$, a simple sub-gradient method becomes
oblivious to such ``problematic'' solutions; instead, it converges to a
balanced solution that is not only close to the ground truth but also enjoys a
flat local landscape, thereby obviating the need for ``early stopping''. Lastly,
we empirically verify that the desirable optimization landscape of deeper
models extends to other robust learning tasks, including deep matrix recovery
and deep ReLU networks with $\ell_1$-loss.
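
As a concrete illustration of the setting described above (a minimal sketch, not the authors' code), the following generates robust linear regression data with a grossly corrupted subset of measurements, parameterizes the regressor as an over-parameterized $N$-layer linear network $\theta = W_{N-1}\cdots W_1 w$, and runs a plain sub-gradient method on the $\ell_1$-loss. The dimensions, depth, corruption rate, initialization scale, and constant step size are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
k_theta, k_X, k_mask, k_noise, k_init = jax.random.split(key, 5)

d, m, N = 20, 500, 3                    # dimension, #measurements, depth (assumed)
theta_star = jax.random.normal(k_theta, (d,))
X = jax.random.normal(k_X, (m, d))
y = X @ theta_star
bad = jax.random.uniform(k_mask, (m,)) < 0.2   # 20% gross corruption (assumed)
y = jnp.where(bad, y + 10.0 * jax.random.normal(k_noise, (m,)), y)

# Over-parameterized N-layer linear model: theta = W_{N-1} ... W_1 w,
# with square matrix factors and a vector first factor.
init_keys = jax.random.split(k_init, N)
params = [jax.random.normal(k, (d, d)) / jnp.sqrt(d) for k in init_keys[:-1]]
params.append(jax.random.normal(init_keys[-1], (d,)) / jnp.sqrt(d))

def end_to_end(params):
    theta = params[-1]                  # the vector factor w
    for W in params[:-1]:               # apply the matrix factors in turn
        theta = W @ theta
    return theta

def l1_loss(params):
    return jnp.mean(jnp.abs(X @ end_to_end(params) - y))

@jax.jit
def step(params, lr=0.05):
    # autodiff of the nonsmooth l1-loss yields a valid sub-gradient
    grads = jax.grad(l1_loss)(params)
    return [p - lr * g for p, g in zip(params, grads)]

for _ in range(3000):
    params = step(params)

print("final l1 loss:", l1_loss(params))
print("recovery error:", jnp.linalg.norm(end_to_end(params) - theta_star))
```

Varying the assumed depth $N$ in this sketch (e.g., $N = 1$ versus $N \geq 2$) is one way to probe the abstract's claim that the sub-gradient method on deeper models lands near the ground truth despite the corrupted measurements.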