Accelerated Multi-Agent Optimization Method over Stochastic Networks

Abstract

We propose a distributed method to solve a multi-agent optimization problem with a strongly convex cost function and equality coupling constraints. The method is based on Nesterov's accelerated gradient approach and works over stochastically time-varying communication networks. Under the standard assumptions of Nesterov's method, we show that the sequence of expected dual values converges to the optimal value at a rate of $\mathcal{O}(1/k^2)$. Furthermore, we provide a simulation study of solving an optimal power flow problem on a well-known benchmark case.

Comment: to appear at the 59th Conference on Decision and Control
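To make the setting in the abstract concrete, the sketch below applies Nesterov-style accelerated gradient ascent to the dual of a toy equality-coupled problem with strongly convex (quadratic) agent costs. The problem data, the two-agent setup, and the centralized update are illustrative assumptions only; they are not the paper's algorithm, its stochastic-network communication model, or its benchmark.

```python
import numpy as np

# Hypothetical two-agent problem (not from the paper):
# minimize sum_i 0.5*a[i]*x_i**2 + c[i]*x_i  subject to  x_1 + x_2 = d.
a = np.array([2.0, 4.0])   # quadratic curvatures (strong convexity parameters)
c = np.array([1.0, -3.0])  # linear cost terms
d = 1.5                    # right-hand side of the coupling constraint

def primal_argmin(lam):
    """Each agent minimizes f_i(x_i) + lam*x_i locally (closed form here)."""
    return -(c + lam) / a

def dual_gradient(lam):
    """Gradient of the concave dual: constraint residual at the local minimizers."""
    return primal_argmin(lam).sum() - d

# Lipschitz constant of the dual gradient for this separable quadratic problem.
L = np.sum(1.0 / a)

# Nesterov/FISTA-style accelerated gradient ascent on the dual variable.
lam = lam_prev = 0.0
y, t = lam, 1.0
for k in range(200):
    lam = y + (1.0 / L) * dual_gradient(y)                 # gradient ascent step
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))      # momentum parameter update
    y = lam + ((t - 1.0) / t_next) * (lam - lam_prev)      # extrapolation step
    lam_prev, t = lam, t_next

print("dual variable:", lam, "primal solution:", primal_argmin(lam))
```

Strong convexity of the agents' costs makes the dual function smooth, which is what permits the accelerated $\mathcal{O}(1/k^2)$ rate on the dual values that the abstract refers to; the paper's contribution is obtaining such a guarantee in expectation when the extrapolation and gradient exchange happen over a stochastically time-varying network rather than centrally as above.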
