Maximal Dissent: A State-Dependent Way to Agree in Distributed Convex Optimization

Abstract

Consider a set of agents collaboratively solving a distributed convex optimization problem, asynchronously, under stringent communication constraints. In such situations, when an agent is activated and may communicate with only one of its neighbors, we would like to pick the neighbor holding the most informative local estimate. We propose new algorithms in which the agents with maximal dissent average their estimates, yielding an information-mixing mechanism that often converges to an optimal solution faster than randomized gossip. The core idea is that averaging the states of the two neighboring agents whose local estimates are farthest apart produces the largest possible immediate reduction of the quadratic Lyapunov function used to establish convergence to the set of optimal solutions. As a broader contribution, we prove the convergence of max-dissent subgradient methods using a unified framework that can also be applied to other state-dependent distributed optimization algorithms. Our proof technique bypasses the need to establish information flow between any two agents within a time interval of uniform length by carefully studying convergence properties of the Lyapunov function used in our analysis.
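To make the mixing mechanism concrete, the following is a minimal Python sketch of one round of a max-dissent gossip subgradient method, not the authors' reference implementation: the pair of neighbors whose estimates are farthest apart averages its states, after which, in this illustrative variant, each endpoint takes a subgradient step on its private objective. The names max_dissent_round, edges, subgrad, and alpha are placeholders. The averaging step is what the Lyapunov argument exploits: replacing x_i and x_j by their mean decreases the quadratic Lyapunov function sum_k ||x_k - x*||^2 by exactly ||x_i - x_j||^2 / 2, so the max-dissent pair yields the largest single-edge decrease available in the network.

    import numpy as np

    def max_dissent_round(x, edges, subgrad, alpha):
        """One round of a max-dissent gossip subgradient sketch.

        x       : list of per-agent estimate vectors (numpy arrays)
        edges   : list of (i, j) neighbor pairs of an undirected graph
        subgrad : subgrad(k, v) -> a subgradient of agent k's local f_k at v
        alpha   : step size for this round
        """
        # Select the max-dissent edge: the neighboring pair whose
        # local estimates are farthest apart.
        i, j = max(edges, key=lambda e: np.linalg.norm(x[e[0]] - x[e[1]]))

        # Averaging the pair reduces sum_k ||x_k - x*||^2 by
        # ||x_i - x_j||^2 / 2, the largest single-edge reduction.
        avg = 0.5 * (x[i] + x[j])

        # Each endpoint then takes a subgradient step on its own objective.
        x[i] = avg - alpha * subgrad(i, avg)
        x[j] = avg - alpha * subgrad(j, avg)
        return x

    # Tiny demo: 3 agents on a path graph minimizing sum_k ||v - c_k||^2,
    # whose optimum is the mean of the c_k.
    c = [np.array([0.0]), np.array([1.0]), np.array([5.0])]
    x = [v.copy() for v in c]
    edges = [(0, 1), (1, 2)]
    g = lambda k, v: 2.0 * (v - c[k])  # gradient of f_k(v) = ||v - c_k||^2
    for t in range(1, 2001):
        x = max_dissent_round(x, edges, g, alpha=1.0 / (t + 10.0))
    print([v.item() for v in x])  # estimates should cluster near the mean, 2.0

Which agents take subgradient steps in a given round depends on the activation model; the variant above is chosen only to keep the sketch self-contained.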
