Linearly Convergent Algorithm with Variance Reduction for Distributed Stochastic Optimization
This paper considers distributed stochastic strongly convex optimization,
where agents connected over a network aim to cooperatively minimize the average
of all agents' local cost functions. Due to the stochasticity of gradient
estimation and the distributed nature of the local objectives, fast linearly
convergent distributed algorithms have not yet been achieved. This work proposes a novel
distributed stochastic gradient tracking algorithm with variance reduction,
where each local gradient is estimated by averaging a batch of sampled
gradients whose size increases over iterations. With an undirected connected
communication graph and a geometrically increasing batch size, the iterates
are shown to converge in mean to the optimal solution at a geometric rate,
i.e., to achieve linear convergence. The
iteration, communication, and oracle complexities for obtaining an
ε-optimal solution are established as well. In particular, the oracle
complexity (number of sampled gradients) is of the same order as that of
centralized approaches.
Hence, the proposed scheme is communication-efficient without requiring extra
sampled gradients. Numerical simulations are given to demonstrate the
theoretical results.
Comment: 8 pages, 3 figures; the paper will be presented at the European
Control Conference, 202
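The scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the quadratic local costs, the ring communication graph, the step size `alpha`, the batch-growth factor `rho`, and the noise model (batch averaging shrinking the gradient-noise standard deviation by 1/sqrt(batch)) are all illustrative assumptions. It combines the two ingredients named in the abstract: gradient tracking over a doubly stochastic mixing matrix, and variance reduction via a geometrically increasing batch size.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3                              # number of agents, problem dimension

# Illustrative strongly convex local costs: f_i(x) = 0.5 * ||A_i x - b_i||^2
A = [rng.standard_normal((5, d)) for _ in range(n)]
b = [rng.standard_normal(5) for _ in range(n)]

# Minimizer of the average cost (1/n) * sum_i f_i, for checking convergence
H = sum(Ai.T @ Ai for Ai in A) / n
c = sum(Ai.T @ bi for Ai, bi in zip(A, b)) / n
x_star = np.linalg.solve(H, c)

# Doubly stochastic mixing matrix for an undirected ring graph
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

def stoch_grad(i, x, batch):
    """Noisy gradient oracle: averaging `batch` i.i.d. samples reduces the
    noise standard deviation by a factor 1/sqrt(batch) (variance reduction)."""
    g = A[i].T @ (A[i] @ x - b[i])
    return g + (0.1 / np.sqrt(batch)) * rng.standard_normal(d)

alpha, rho = 0.02, 1.05                  # assumed step size and batch growth rate
x = np.zeros((n, d))                     # row i = agent i's iterate
g_old = np.array([stoch_grad(i, x[i], 1) for i in range(n)])
y = g_old.copy()                         # gradient trackers, initialized locally

for k in range(400):
    x = W @ x - alpha * y                # consensus step + descent along tracker
    batch = np.ceil(rho ** k)            # geometrically increasing batch size
    g_new = np.array([stoch_grad(i, x[i], batch) for i in range(n)])
    y = W @ y + g_new - g_old            # gradient-tracking update
    g_old = g_new

err = np.linalg.norm(x.mean(axis=0) - x_star)
```

Because the batch size grows geometrically, the gradient-noise variance decays geometrically, which is what lets the mean error contract at a geometric (linear) rate rather than the sublinear rate of constant-batch stochastic gradient methods; the price is the growing number of sampled gradients per iteration, which the abstract's oracle-complexity bound accounts for.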