Multilevel Minimization for Deep Residual Networks
We present a new multilevel minimization framework for the training of deep
residual networks (ResNets), which has the potential to significantly reduce
training time and effort. Our framework is based on the dynamical systems
viewpoint, which formulates a ResNet as the discretization of an initial value
problem. The training process is then formulated as a time-dependent optimal
control problem, which we discretize using different time-discretization
parameters, eventually generating a multilevel hierarchy of auxiliary networks
with different resolutions. The training of the original ResNet is then
enhanced by training the auxiliary networks with reduced resolutions. By
design, our framework is independent of the training strategy chosen on each
level of the multilevel hierarchy. By means of
numerical examples, we analyze the convergence behavior of the proposed method
and demonstrate its robustness. For our examples we employ multilevel
gradient-based methods. Comparisons with standard single-level methods show a
speedup of more than a factor of three while achieving the same validation accuracy.
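To make the dynamical-systems viewpoint concrete, the following minimal Python/NumPy sketch (not the authors' implementation; the block update, coarsening rule, and interpolation are assumptions) treats a ResNet forward pass as an explicit Euler discretization of an ODE on [0, T] and builds coarser auxiliary networks by doubling the time step.

```python
# Minimal sketch of the dynamical-systems view of a ResNet:
# x_{k+1} = x_k + h * f(x_k, W_k) is an explicit Euler step of an ODE on [0, T].
# Coarser auxiliary networks use fewer blocks (a larger step h); their weights
# can be interpolated back to the fine level.
import numpy as np

def resnet_forward(x, weights, T=1.0):
    """Explicit-Euler ResNet: one residual block per time step."""
    h = T / len(weights)             # time step / "resolution" of this level
    for W in weights:
        x = x + h * np.tanh(x @ W)   # residual update x_{k+1} = x_k + h f(x_k, W_k)
    return x

def coarsen(weights):
    """Next-coarser level: keep every other block (step size doubles)."""
    return weights[::2]

def prolongate(coarse_weights):
    """Map coarse weights to the fine level by piecewise-constant
    interpolation in time (each coarse block is reused for two fine blocks)."""
    return [W for W in coarse_weights for _ in range(2)]

# Usage: a three-level hierarchy of auxiliary networks with 8, 4, and 2 blocks.
rng = np.random.default_rng(0)
d = 4
fine = [0.1 * rng.standard_normal((d, d)) for _ in range(8)]
levels = [fine, coarsen(fine), coarsen(coarsen(fine))]
x0 = rng.standard_normal((1, d))
for k, w in enumerate(levels):
    out = resnet_forward(x0, w)
    print(f"level {k}: {len(w)} blocks, output norm {np.linalg.norm(out):.3f}")
```

In a multilevel training cycle, the cheaper coarse networks are trained first and their corrections are prolongated to the fine network, which is the source of the reported speedup.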
Stochastic Training of Neural Networks via Successive Convex Approximations
This paper proposes a new family of algorithms for training neural networks
(NNs). These are based on recent developments in the field of non-convex
optimization, going under the general name of successive convex approximation
(SCA) techniques. The basic idea is to iteratively replace the original
(non-convex, high-dimensional) learning problem with a sequence of (strongly
convex) approximations, which are both accurate and simple to optimize.
Unlike similar ideas (e.g., quasi-Newton algorithms), the
approximations can be constructed using only first-order information of the
neural network function, in a stochastic fashion, while exploiting the overall
structure of the learning problem for a faster convergence. We discuss several
use cases, based on different choices for the loss function (e.g., squared loss
and cross-entropy loss), and for the regularization of the NN's weights. We
experiment on several medium-sized benchmark problems, and on a large-scale
dataset involving simulated physical data. The results show how the algorithm
outperforms state-of-the-art techniques, providing faster convergence to a
better minimum. Additionally, we show how the algorithm can be easily
parallelized over multiple computational units without hindering its
performance. In particular, each computational unit can optimize a tailored
surrogate function defined on a randomly assigned subset of the input
variables, whose dimension can be selected depending entirely on the available
computational power.

Comment: Preprint submitted to IEEE Transactions on Neural Networks and Learning Systems
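The sketch below illustrates one form of a successive convex approximation step on a toy least-squares problem (a hedged example: the surrogate, curvature parameter tau, regularizer, and step-size rule are assumed choices, not the paper's exact algorithm). The non-convex loss is replaced at the current iterate by a strongly convex surrogate built only from a stochastic gradient, minimized in closed form, and the iterate moves toward that minimizer with a diminishing step size.

```python
# One SCA iteration: at w_t, with stochastic gradient g_t, minimize the
# strongly convex surrogate
#   f_t(w) = g_t^T (w - w_t) + (tau/2)||w - w_t||^2 + lam ||w||^2,
# whose minimizer is w_hat = (tau * w_t - g_t) / (tau + 2*lam),
# then update w_{t+1} = w_t + gamma_t (w_hat - w_t) with diminishing gamma_t.
import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 20
X = rng.standard_normal((n, d))
y = np.tanh(X @ rng.standard_normal(d)) + 0.1 * rng.standard_normal(n)

def stochastic_grad(w, batch=64):
    """Mini-batch gradient of the squared loss of a one-neuron tanh model."""
    idx = rng.choice(n, batch, replace=False)
    Xb, yb = X[idx], y[idx]
    pred = np.tanh(Xb @ w)
    return Xb.T @ ((pred - yb) * (1.0 - pred**2)) / batch

w = np.zeros(d)
tau, lam = 1.0, 1e-3                            # surrogate curvature, L2 weight
for t in range(1, 201):
    g = stochastic_grad(w)
    w_hat = (tau * w - g) / (tau + 2.0 * lam)   # closed-form surrogate minimizer
    gamma = 1.0 / (1.0 + 0.1 * t)               # diminishing step size
    w = w + gamma * (w_hat - w)
print("final training loss:", np.mean((np.tanh(X @ w) - y) ** 2))
```

The parallel variant described in the abstract would assign each computational unit a random subset of the weight coordinates and let it minimize its own surrogate over that block only.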