Distributed Online Optimization via Gradient Tracking with Adaptive Momentum
This paper deals with a network of computing agents aiming to solve an online
optimization problem in a distributed fashion, i.e., by means of local
computation and communication, without any central coordinator. We propose the
gradient tracking with adaptive momentum estimation (GTAdam) distributed
algorithm, which combines a gradient tracking mechanism with first and second
order momentum estimates of the gradient. The algorithm is analyzed in the
online setting for strongly convex and smooth cost functions. We prove that the
average dynamic regret is bounded and that the convergence rate is linear. The
algorithm is tested on a time-varying classification problem, on a (moving)
target localization problem, and in a stochastic optimization setup from image
classification. In these multi-agent learning experiments, GTAdam outperforms
state-of-the-art distributed optimization methods.
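The abstract does not spell out the update equations, but the described structure (a consensus step on the iterates, a gradient tracker for the network-wide average gradient, and Adam-style moment estimates applied to the tracked gradient) suggests a per-agent scheme like the following minimal Python sketch. All names, defaults, and the exact placement of the momentum step are our own illustrative assumptions, not the paper's notation; `W` is assumed to be a doubly stochastic mixing matrix.

```python
import numpy as np

def gtadam_sketch(grads, W, x0, alpha=0.05, beta1=0.9, beta2=0.999,
                  eps=1e-8, iters=200):
    """One plausible reading of a GTAdam-style update (illustrative only).

    grads: list of local gradient oracles grad_i(x), one per agent.
    W:     (n, n) doubly stochastic mixing matrix of the network.
    x0:    (n, d) array of initial local iterates.
    """
    n, d = x0.shape
    x = x0.copy()
    g = np.stack([grads[i](x[i]) for i in range(n)])  # local gradients
    s = g.copy()                                      # gradient trackers
    m = np.zeros_like(x)                              # 1st-moment estimate
    v = np.zeros_like(x)                              # 2nd-moment estimate
    for k in range(1, iters + 1):
        # Adam-style momentum on the tracked gradient, with bias correction.
        m = beta1 * m + (1 - beta1) * s
        v = beta2 * v + (1 - beta2) * s**2
        m_hat = m / (1 - beta1**k)
        v_hat = v / (1 - beta2**k)
        step = m_hat / (np.sqrt(v_hat) + eps)
        # Consensus on the iterates plus a local adaptive-descent step.
        x = W @ x - alpha * step
        # Gradient-tracking update: mix trackers, add local gradient change.
        g_new = np.stack([grads[i](x[i]) for i in range(n)])
        s = W @ s + g_new - g
        g = g_new
    return x

# Toy usage: each agent holds a quadratic f_i(x) = ||x - c_i||^2 / 2,
# so the network-wide minimizer is the mean of the c_i.
rng = np.random.default_rng(0)
c = rng.normal(size=(4, 3))
grads = [lambda x, ci=ci: x - ci for ci in c]
W = np.full((4, 4), 0.25)  # complete graph with uniform weights
x_final = gtadam_sketch(grads, W, np.zeros((4, 3)))
```

In this toy run all agents' iterates should approach `c.mean(axis=0)`, illustrating both consensus (agents agree) and optimality (they agree on the minimizer of the sum of local costs).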