Asynchronous Optimization Methods for Efficient Training of Deep Neural Networks with Guarantees
Asynchronous distributed algorithms are a popular way to reduce
synchronization costs in large-scale optimization, and in particular for neural
network training. However, for nonsmooth and nonconvex objectives, few
convergence guarantees exist beyond cases where closed-form proximal operator
solutions are available. As most popular contemporary deep neural networks lead
to nonsmooth and nonconvex objectives, there is now a pressing need for such
convergence guarantees. In this paper, we analyze for the first time the
convergence of stochastic asynchronous optimization for this general class of
objectives. In particular, we focus on stochastic subgradient methods allowing
for block variable partitioning, where the shared-memory-based model is
asynchronously updated by concurrent processes. To this end, we first introduce
a probabilistic model which captures key features of real asynchronous
scheduling between concurrent processes; under this model, we establish
convergence with probability one to an invariant set for stochastic subgradient
methods with momentum.
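As a rough illustration of the setting analyzed here (an assumption for exposition, not the paper's exact algorithm or implementation), the sketch below runs several worker threads that each own one block of a shared parameter vector and update it asynchronously with momentum on stochastic subgradients; the nonsmooth loss, block assignment, batch size, and step sizes are all placeholder choices.

```python
# Illustrative sketch only (an assumption, not the paper's exact algorithm or
# implementation): asynchronous stochastic subgradient descent with momentum,
# where each worker thread owns one block of a shared parameter vector.
import threading
import numpy as np

dim, n_samples, n_workers, n_steps = 16, 200, 4, 500
data_rng = np.random.default_rng(0)
A = data_rng.standard_normal((n_samples, dim))
b = data_rng.standard_normal(n_samples)

params = np.zeros(dim)                               # shared-memory model
blocks = np.array_split(np.arange(dim), n_workers)   # block variable partitioning

def block_subgradient(w, block, batch):
    # Subgradient of the nonsmooth loss f(w) = mean(|A w - b|) on a minibatch,
    # restricted to the coordinates of one block.
    residual_sign = np.sign(A[batch] @ w - b[batch])
    return A[batch][:, block].T @ residual_sign / len(batch)

def worker(block, seed, lr=1e-3, beta=0.9):
    rng = np.random.default_rng(seed)
    velocity = np.zeros(len(block))                  # per-block momentum buffer
    for _ in range(n_steps):
        batch = rng.choice(n_samples, size=32, replace=False)
        g = block_subgradient(params, block, batch)  # reads possibly stale shared state
        velocity = beta * velocity + g               # momentum on the stochastic subgradient
        params[block] -= lr * velocity               # asynchronous in-place block update

threads = [threading.Thread(target=worker, args=(blk, i)) for i, blk in enumerate(blocks)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("final objective:", np.abs(A @ params - b).mean())
```

Because the blocks are disjoint, each thread writes only its own coordinates while reading a possibly stale copy of the rest of the model, which is the kind of asynchronous interaction the probabilistic scheduling model is meant to capture.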
From the practical perspective, one issue with the family of methods we
consider is that it is not efficiently supported by machine learning
frameworks, as they mostly focus on distributed data-parallel strategies. To
address this, we propose a new implementation strategy for shared-memory based
training of deep neural networks, whereby concurrent parameter servers are
utilized to train a partitioned but shared model in single- and multi-GPU
settings. Based on this implementation, we achieve an average 1.2x speed-up in
comparison to state-of-the-art training methods for popular image
classification tasks without compromising accuracy.
Catalyst Acceleration for Gradient-Based Non-Convex Optimization
We introduce a generic scheme to solve nonconvex optimization problems using
gradient-based algorithms originally designed for minimizing convex functions.
Even though these methods may originally require convexity to operate, the
proposed approach allows one to use them on weakly convex objectives, which
covers a large class of non-convex functions typically appearing in machine
learning and signal processing. In general, the scheme is guaranteed to produce
a stationary point with a worst-case efficiency typical of first-order methods,
and when the objective turns out to be convex, it automatically accelerates in
the sense of Nesterov and achieves near-optimal convergence rate in function
values. These properties are achieved without assuming any knowledge about the
convexity of the objective, by automatically adapting to the unknown weak
convexity constant. We conclude the paper by showing promising experimental
results obtained by applying our approach to incremental algorithms such as
SVRG and SAGA for sparse matrix factorization and for learning neural networks.
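The following schematic sketch (an assumption for illustration, not the authors' exact scheme) shows the generic idea: an inexact proximal-point outer loop that repeatedly hands a quadratically regularized subproblem to a method built for convex objectives. The regularization weight kappa, the fixed extrapolation rule, and the plain gradient-descent inner solver are placeholders; the paper instead adapts kappa to the unknown weak-convexity constant and uses incremental solvers such as SVRG and SAGA.

```python
# Schematic sketch of a Catalyst-style wrapper (an assumption, not the authors'
# exact scheme). If kappa exceeds the weak-convexity constant, each regularized
# subproblem is convex, so a convex solver can be applied to it.
import numpy as np

def catalyst_nonconvex(grad_f, x0, kappa=3.0, n_outer=50, inner_steps=200, lr=1e-2):
    x = np.asarray(x0, dtype=float)
    x_prev = x.copy()
    for k in range(n_outer):
        beta = k / (k + 3.0)                  # placeholder Nesterov-style extrapolation
        y = x + beta * (x - x_prev)           # prox-center for this outer iteration
        z = y.copy()
        for _ in range(inner_steps):          # placeholder inner solver: gradient descent
            g = grad_f(z) + kappa * (z - y)   # gradient of the convex subproblem
            z = z - lr * g
        x_prev, x = x, z                      # accept the approximate proximal point
    return x

# Toy usage on a separable nonconvex test function.
f = lambda x: np.sum(x**2 + 0.5 * np.sin(3.0 * x))
grad_f = lambda x: 2.0 * x + 1.5 * np.cos(3.0 * x)
x_hat = catalyst_nonconvex(grad_f, x0=np.ones(5))
print("f(x_hat) =", f(x_hat))
```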
Deterministic Nonsmooth Nonconvex Optimization
We study the complexity of optimizing nonsmooth nonconvex Lipschitz functions
by producing $(\delta,\epsilon)$-stationary points. Several recent works have
presented randomized algorithms that produce such points using
$\widetilde{O}(\delta^{-1}\epsilon^{-3})$ first-order oracle calls, independent of the
dimension $d$. It has been an open problem as to whether a similar result can
be obtained via a deterministic algorithm. We resolve this open problem,
showing that randomization is necessary to obtain a dimension-free rate. In
particular, we prove a lower bound of $\Omega(d)$ for any deterministic
algorithm. Moreover, we show that unlike smooth or convex optimization, access
to function values is required for any deterministic algorithm to halt within
any finite time.
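For reference, the $(\delta,\epsilon)$-stationarity notion used throughout is the standard one due to Goldstein from the nonsmooth optimization literature (stated here for context, not quoted from the paper):

```latex
% x is (\delta,\epsilon)-stationary for a Lipschitz function f if the
% \delta-subdifferential (the convex hull of Clarke subgradients taken over a
% \delta-ball around x) contains an element of norm at most \epsilon.
\[
  \partial_{\delta} f(x) \;=\; \operatorname{conv}\Bigl(\,\bigcup_{y:\,\|y-x\|\le \delta} \partial f(y)\Bigr),
  \qquad
  x \text{ is } (\delta,\epsilon)\text{-stationary}
  \;\iff\;
  \min_{g \in \partial_{\delta} f(x)} \|g\| \;\le\; \epsilon .
\]
```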
On the other hand, we prove that if the function is even slightly smooth,
then the dimension-free rate of $\widetilde{O}(\delta^{-1}\epsilon^{-3})$ can be
obtained by a deterministic algorithm with merely a logarithmic dependence on
the smoothness parameter. Motivated by these findings, we turn to study the
complexity of deterministically smoothing Lipschitz functions. Though there are
efficient black-box randomized smoothings, we start by showing that no such
deterministic procedure can smooth functions in a meaningful manner, resolving
an open question. We then bypass this impossibility result for the structured
case of ReLU neural networks. To that end, in a practical white-box setting in
which the optimizer is granted access to the network's architecture, we propose
a simple, dimension-free, deterministic smoothing that provably preserves
$(\delta,\epsilon)$-stationary points. Our method applies to a variety of
architectures of arbitrary depth, including ResNets and ConvNets. Combined with
our algorithm, this yields the first deterministic dimension-free algorithm for
optimizing ReLU networks, circumventing our lower bound.
Comment: This work supersedes arXiv:2209.12463 and arXiv:2209.10346 [Section 3], with major additional results.
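The abstract does not spell out the smoothing construction itself; purely as an illustration of what white-box access to the architecture enables (an assumed stand-in, not the paper's method), the sketch below swaps every ReLU in a small fully connected network for a smooth surrogate, here a softplus with a temperature parameter beta.

```python
# Illustrative sketch only: white-box smoothing of a ReLU network by replacing
# each ReLU with a smooth surrogate. The softplus surrogate and the choice of
# beta are assumptions for illustration, not the construction from the paper.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def softplus(z, beta=50.0):
    # Smooth surrogate (1/beta) * log(1 + exp(beta * z)); approaches ReLU as beta grows.
    return np.logaddexp(0.0, beta * z) / beta

def forward(x, weights, activation):
    # A small fully connected network; white-box access means the activation
    # can be swapped at every layer.
    h = x
    for W in weights[:-1]:
        h = activation(W @ h)
    return weights[-1] @ h

rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((8, 8)), rng.standard_normal((1, 8))]
x = rng.standard_normal(4)

print("original ReLU network:", forward(x, weights, relu))
print("smoothed network:     ", forward(x, weights, softplus))
```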