An Analysis Tool for Push-Sum Based Distributed Optimization
The push-sum algorithm is arguably the most important distributed averaging
approach over directed graphs and has been applied to a variety of problems,
including distributed optimization. This paper establishes the explicit
absolute probability sequence for the push-sum algorithm and, based on it,
constructs quadratic Lyapunov functions for push-sum based distributed
optimization algorithms. As illustrative examples, the proposed novel analysis
tool can improve the convergence rates of the subgradient-push and stochastic
gradient-push, two important algorithms for distributed convex optimization
over unbalanced directed graphs. Specifically, the paper proves that the
subgradient-push algorithm converges at a rate of $O(1/\sqrt{t})$ for general
convex functions and the stochastic gradient-push algorithm converges at a
rate of $O(1/t)$ for strongly convex functions, over time-varying unbalanced
directed graphs. Each rate matches the state-of-the-art rate of its
single-agent counterpart and is therefore optimal, closing the theoretical gap
between the centralized and push-sum based (sub)gradient methods. The paper
further proposes a heterogeneous push-sum based subgradient algorithm in which
each agent can arbitrarily switch between subgradient-push and
push-subgradient. The heterogeneous algorithm thus subsumes both
subgradient-push and push-subgradient as special cases, and still converges to
an optimal point at an optimal rate. The proposed tool can also be extended to
analyze distributed weighted averaging.
Comment: arXiv admin note: substantial text overlap with arXiv:2203.16623,
arXiv:2303.1706
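The mechanism behind push-sum is compact enough to sketch: every agent mixes a value and a scalar weight through the same column-stochastic matrix, and the ratio of the two de-biases the unbalanced mixing; subgradient-push then perturbs the value with a step along a local subgradient. A minimal sketch under assumed problem data (the 3-agent digraph, quadratic costs, and step-size schedule below are illustrative, not the paper's):

```python
# A minimal sketch of subgradient-push on a hypothetical 3-agent directed
# graph with quadratic local costs (neither is from the paper).
import numpy as np

n = 3
# Column-stochastic mixing matrix: each column sums to 1, but rows need not,
# which is exactly the imbalance that the weight y below corrects for.
A = np.array([[0.5, 0.0, 0.3],
              [0.5, 0.4, 0.0],
              [0.0, 0.6, 0.7]])

b = np.array([1.0, 2.0, 6.0])          # local costs f_i(z) = 0.5 * (z - b_i)^2
grad = lambda z: z - b                 # elementwise (sub)gradients of the f_i

x = np.zeros(n)                        # push-sum numerators
y = np.ones(n)                         # push-sum weights
for t in range(1, 5001):
    alpha = 1.0 / np.sqrt(t)           # O(1/sqrt(t)) step-size for convex costs
    x, y = A @ x, A @ y                # both variables mix through the same A
    z = x / y                          # the ratio de-biases the unbalanced mixing
    x = x - alpha * grad(z)            # subgradient step on the numerator

print(z, "->", b.mean())               # each z_i approaches argmin sum_i f_i = 3.0
```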
FROST -- Fast row-stochastic optimization with uncoordinated step-sizes
In this paper, we discuss distributed optimization over directed graphs,
where doubly-stochastic weights cannot be constructed. Most of the existing
algorithms overcome this issue by applying push-sum consensus, which utilizes
column-stochastic weights. The formulation of column-stochastic weights
requires each agent to know (at least) its out-degree, which may be impractical
in, e.g., broadcast-based communication protocols. In contrast, we describe
FROST (Fast Row-stochastic-Optimization with uncoordinated STep-sizes), an
optimization algorithm applicable to directed graphs that does not require
knowledge of out-degrees; its implementation is straightforward, as each agent
locally assigns weights to the incoming information and locally
chooses a suitable step-size. We show that FROST converges linearly to the
optimal solution for smooth and strongly-convex functions given that the
largest step-size is positive and sufficiently small.
Comment: Submitted for journal publication, currently under review
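FROST runs three coupled recursions: consensus on the iterates with locally chosen step-sizes, a companion recursion that estimates the left Perron eigenvector of the row-stochastic matrix, and gradient tracking rescaled by that estimate. A minimal sketch of the three updates follows; the graph, costs, and step-sizes are assumptions for illustration, not the authors' code:

```python
# A minimal sketch of the FROST recursions for scalar decision variables,
# with a hypothetical row-stochastic matrix R and quadratic local costs.
import numpy as np

n = 3
# Row-stochastic weights: each agent normalizes over what it *receives*,
# so no out-degree knowledge is needed.
R = np.array([[0.6, 0.0, 0.4],
              [0.5, 0.5, 0.0],
              [0.0, 0.3, 0.7]])

b = np.array([1.0, 2.0, 6.0])          # local costs f_i(x) = 0.5 * (x - b_i)^2
grad = lambda x: x - b                 # elementwise gradients of the f_i

alpha = np.array([0.05, 0.08, 0.03])   # uncoordinated, locally chosen step-sizes
x = np.zeros(n)
Y = np.eye(n)                          # row i of Y: agent i's eigenvector estimate
z = grad(x) / np.diag(Y)               # tracked gradients, rescaled by [Y_i]_i

for k in range(2000):
    g_old = grad(x) / np.diag(Y)       # rescaled gradient at the current iterate
    x = R @ x - alpha * z              # consensus step plus local descent
    Y = R @ Y                          # estimates the left Perron eigenvector of R
    z = R @ z + grad(x) / np.diag(Y) - g_old   # rescaled gradient tracking

print(x, "->", b.mean())               # converges to argmin sum_i f_i = 3.0
```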
Distributed Nonconvex Multiagent Optimization Over Time-Varying Networks
We study nonconvex distributed optimization in multiagent networks where the
communication between nodes is modeled as a time-varying sequence of arbitrary
digraphs. We introduce a novel broadcast-based distributed algorithmic
framework for the (constrained) minimization of the sum of a smooth (possibly
nonconvex and nonseparable) function, i.e., the agents' sum-utility, plus a
convex (possibly nonsmooth and nonseparable) regularizer. The latter is usually
employed to enforce some structure in the solution, typically sparsity. The
proposed method hinges on Successive Convex Approximation (SCA) techniques
coupled with i) a tracking mechanism that locally estimates the gradients of
the agents' cost functions; and ii) a novel broadcast protocol to disseminate
information and distribute the computation among the agents.
Asymptotic convergence to stationary solutions is established. A key feature of
the proposed algorithm is that its implementation requires neither
doubly-stochastic consensus matrices (column stochasticity suffices) nor
knowledge of the graph sequence. To the best of our knowledge, the proposed
framework is the first broadcast-based distributed algorithm for convex and
nonconvex constrained optimization over arbitrary, time-varying digraphs.
Numerical results show that our algorithm outperforms current schemes on both
convex and nonconvex problems.
Comment: Copyright 2001 SS&C. Published in the Proceedings of the 50th annual
Asilomar conference on signals, systems, and computers, Nov. 6-9, 2016, CA,
US
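When the regularizer is the l1 norm, the strongly convex surrogate subproblem at the heart of the SCA step has a closed form: the minimizer is an elementwise soft-threshold of a gradient step. A minimal sketch of that single per-agent step, with illustrative names and data (not the paper's notation):

```python
# A minimal sketch of the strongly convex SCA subproblem each agent solves
# when the regularizer is the l1 norm: linearize the smooth (possibly
# nonconvex) cost at the current point v, add a proximal term
# (tau/2)*||x - v||^2, and the minimizer is an elementwise soft-threshold.
import numpy as np

def soft_threshold(u, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def sca_step(v, g, tau, lam):
    """argmin_x  g.(x - v) + (tau/2) * ||x - v||^2 + lam * ||x||_1."""
    return soft_threshold(v - g / tau, lam / tau)

# One agent's update: v is its current iterate and g its local estimate of
# the sum-gradient (supplied in the paper by the tracking mechanism).
v = np.array([0.8, -0.1, 2.5])
g = np.array([0.3, 0.2, -1.0])
print(sca_step(v, g, tau=2.0, lam=0.1))   # -> [0.6, -0.15, 2.95]
```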