Delayed Stochastic Algorithms for Distributed Weakly Convex Optimization
This paper studies delayed stochastic algorithms for weakly convex
optimization in a distributed network with workers connected to a master node.
More specifically, we consider a structured stochastic weakly convex objective
function which is the composition of a convex function and a smooth nonconvex
function. Recently, Xu et al. (2022) showed that an inertial stochastic
subgradient method converges at a rate that suffers a significant penalty
from the maximum information delay. To
alleviate this issue, we propose a new delayed stochastic prox-linear
method in which the master performs the proximal update of
the parameters and the workers only need to linearly approximate the inner
smooth function. Somewhat surprisingly, we show that the delays only affect the
higher-order term of the complexity rate and hence are negligible after a
certain number of iterations. Moreover, to further improve the
empirical performance, we propose a delayed extrapolated prox-linear
method, which employs Polyak-type momentum to speed up convergence.
Building on the tools developed for analyzing the delayed prox-linear
method, we also develop an improved analysis of the delayed stochastic
subgradient method. In particular, for general weakly convex problems, we
show that its convergence depends only on the expected delay
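To make the prox-linear template concrete, the sketch below simulates one worker with delayed reads on the classic weakly convex composite from phase retrieval, f(x) = E_i |c_i(x)| with c_i(x) = (a_i^T x)^2 - b_i: the worker linearizes the smooth inner map c_i at a stale iterate, and the master solves the resulting proximal subproblem, which admits a closed form for this outer function. The problem instance, step size, fixed delay, and warm start are illustrative assumptions of this sketch, not details from the paper.

```python
import numpy as np

# Hedged sketch of a delayed stochastic prox-linear step (single-process
# simulation of the master/worker exchange; all constants are assumptions).
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = (A @ x_true) ** 2                  # noiseless phase-retrieval measurements

def loss(x):
    return np.mean(np.abs((A @ x) ** 2 - b))

def prox_linear_step(x, x_stale, i, gamma):
    """Master update around the current iterate x, using the worker's
    linearization of c_i at the (possibly delayed) iterate x_stale:
        argmin_y |c_i(x_stale) + g^T (y - x_stale)| + ||y - x||^2 / (2*gamma)
    """
    a = A[i]
    r = (a @ x_stale) ** 2 - b[i]      # c_i(x_stale)
    g = 2.0 * (a @ x_stale) * a        # grad c_i(x_stale)
    gg = g @ g
    if gg == 0.0:
        return x
    # The minimizer moves only along g: y = x - t*g, with t clipped so the
    # step stays inside the proximal "trust region" of radius gamma.
    t = np.clip((g @ x + r - g @ x_stale) / gg, -gamma, gamma)
    return x - t * g

tau, gamma, T = 5, 0.01, 4000          # fixed delay tau (an assumption)
x = x_true + 0.3 * rng.normal(size=d)  # warm start near the solution
history = [x]
for it in range(T):
    i = rng.integers(n)                # worker picks a random sample
    x_stale = history[max(0, len(history) - 1 - tau)]  # delayed read
    x = prox_linear_step(x, x_stale, i, gamma)
    history.append(x)

print(f"initial loss {loss(history[0]):.4f} -> final loss {loss(x):.4f}")
```

Note how the worker's only job is the linearization (residual and gradient of the inner map at its stale copy), while the convex proximal subproblem is solved at the master; with a different outer function the closed-form step would be replaced by that function's proximal map.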