Distributed optimization over time-varying directed graphs
We consider distributed optimization by a collection of nodes, each having
access to its own convex function, whose collective goal is to minimize the sum
of the functions. The communications between nodes are described by a
time-varying sequence of directed graphs, which is uniformly strongly
connected. For such communications, assuming that every node knows its
out-degree, we develop a broadcast-based algorithm, termed the
subgradient-push, which steers every node to an optimal value under a standard
assumption of subgradient boundedness. The subgradient-push requires no
knowledge of either the number of agents or the graph sequence to implement.
Our analysis shows that the subgradient-push algorithm converges at a rate of
$O(\ln t/\sqrt{t})$, where the constant depends on the initial values at the
nodes, the subgradient norms, and, more interestingly, on both the consensus
speed and the imbalances of influence among the nodes.
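To make the update concrete, here is a minimal sketch of a subgradient-push iteration in Python. The interfaces (subgrads, neighbors_seq, alphas) are hypothetical names for illustration; the real algorithm runs via local message passing, not a central loop, and each node is assumed to be its own out-neighbor:

```python
import numpy as np

def subgradient_push(subgrads, x0, neighbors_seq, alphas):
    """Sketch of subgradient-push over a time-varying digraph.

    subgrads[i](z)   -- subgradient oracle for node i's convex function f_i
    x0               -- (n, d) array of initial values, one row per node
    neighbors_seq[t] -- out-neighbor lists at time t (self-loops included)
    alphas           -- diminishing step sizes, e.g. 1/sqrt(t+1)
    """
    n, d = x0.shape
    x = x0.astype(float).copy()
    y = np.ones(n)                      # push-sum weights
    for t, alpha in enumerate(alphas):
        out = neighbors_seq[t]          # out[i] = out-neighbors of node i
        w_new = np.zeros_like(x)
        y_new = np.zeros(n)
        for i in range(n):
            deg = len(out[i])           # node i only needs its own out-degree
            for j in out[i]:            # "broadcast" x_i/deg and y_i/deg
                w_new[j] += x[i] / deg
                y_new[j] += y[i] / deg
        z = w_new / y_new[:, None]      # de-biased local estimates
        for i in range(n):
            x[i] = w_new[i] - alpha * subgrads[i](z[i])
        y = y_new
    return z
```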
Distributed Dictionary Learning
The paper studies distributed Dictionary Learning (DL) problems where the
learning task is distributed over a multi-agent network with time-varying
(nonsymmetric) connectivity. This formulation is relevant, for instance, in
big-data scenarios where massive amounts of data are collected/stored in
different spatial locations and it is infeasible to aggregate and/or process
all the data in a fusion center, due to resource limitations, communication
overhead or privacy considerations. We develop a general distributed
algorithmic framework for the (nonconvex) DL problem and establish its
asymptotic convergence. The new method hinges on Successive Convex
Approximation (SCA) techniques coupled with i) a gradient tracking mechanism
instrumental to locally estimate the missing global information; and ii) a
consensus step, as a mechanism to distribute the computations among the agents.
To the best of our knowledge, this is the first distributed algorithm with
provable convergence for the DL problem and, more generally, for bi-convex
optimization problems over (time-varying) directed graphs.
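The gradient-tracking ingredient at the core of the method can be sketched as follows. This is a deliberate simplification assuming a static, doubly-stochastic mixing matrix W and a smooth local objective; the paper itself handles time-varying directed graphs and the nonconvex DL objective via SCA surrogates, and grads and all variable names here are assumptions:

```python
import numpy as np

def gradient_tracking(grads, x0, W, alpha, iters):
    """Minimal gradient-tracking sketch on a static network.

    grads[i](x) -- local gradient oracle of agent i
    W           -- doubly-stochastic mixing matrix (simplification)
    """
    n, d = x0.shape
    x = x0.copy()
    g = np.stack([grads[i](x[i]) for i in range(n)])
    y = g.copy()                    # y_i tracks the network-average gradient
    for _ in range(iters):
        x_new = W @ x - alpha * y   # consensus step plus local descent
        g_new = np.stack([grads[i](x_new[i]) for i in range(n)])
        y = W @ y + g_new - g       # tracking update: inject gradient change
        x, g = x_new, g_new
    return x
```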
Projected Push-Pull For Distributed Constrained Optimization Over Time-Varying Directed Graphs (extended version)
We introduce the Projected Push-Pull algorithm that enables multiple agents
to solve a distributed constrained optimization problem with private cost
functions and global constraints, in a collaborative manner. Our algorithm
employs projected gradient descent to deal with constraints and a lazy update
rule to control the trade-off between the consensus and optimization steps in
the protocol. We prove that our algorithm achieves geometric convergence over
time-varying directed graphs while ensuring that the decision variable always
stays within the constraint set. We derive explicit bounds for step sizes that
guarantee geometric convergence, based on the strong convexity and smoothness
of the cost functions and on graph properties. Moreover, we provide additional
theoretical results on the usefulness of lazy updates, revealing the challenges
in the analysis of any gradient tracking method that uses projection operators
in a distributed constrained optimization setting. We validate our theoretical
results with numerical studies over different graph types, showing that our
algorithm achieves geometric convergence empirically.
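As a rough illustration, a projected push-pull style iteration might look like the sketch below, which omits the paper's lazy-update rule and time-varying graphs; R, C, project, and grads are assumed interfaces, not the paper's notation:

```python
import numpy as np

def projected_push_pull(grads, project, x0, R, C, alpha, iters):
    """Hedged sketch of a projected push-pull style iteration.

    R          -- row-stochastic matrix (pull: mix decision variables)
    C          -- column-stochastic matrix (push: gradient tracking)
    project(x) -- Euclidean projection onto the global constraint set
    """
    n, d = x0.shape
    x = np.stack([project(x0[i]) for i in range(n)])   # start feasible
    g = np.stack([grads[i](x[i]) for i in range(n)])
    y = g.copy()                                       # gradient tracker
    for _ in range(iters):
        # projection keeps every decision variable inside the constraint set
        x_new = np.stack([project(v) for v in (R @ x - alpha * y)])
        g_new = np.stack([grads[i](x_new[i]) for i in range(n)])
        y = C @ y + g_new - g
        x, g = x_new, g_new
    return x
```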
An Analysis Tool for Push-Sum Based Distributed Optimization
The push-sum algorithm is probably the most important distributed averaging
approach over directed graphs, which has been applied to various problems
including distributed optimization. This paper establishes the explicit
absolute probability sequence for the push-sum algorithm, and based on which,
constructs quadratic Lyapunov functions for push-sum based distributed
optimization algorithms. As illustrative examples, the proposed novel analysis
tool can improve the convergence rates of the subgradient-push and stochastic
gradient-push, two important algorithms for distributed convex optimization
over unbalanced directed graphs. Specifically, the paper proves that the
subgradient-push algorithm converges at a rate of for general
convex functions and stochastic gradient-push algorithm converges at a rate of
for strongly convex functions, over time-varying unbalanced directed
graphs. Both rates are respectively the same as the state-of-the-art rates of
their single-agent counterparts and thus optimal, which closes the theoretical
gap between the centralized and push-sum based (sub)gradient methods. The paper
further proposes a heterogeneous push-sum based subgradient algorithm in which
each agent can arbitrarily switch between subgradient-push and
push-subgradient. The heterogeneous algorithm thus subsumes both
subgradient-push and push-subgradient as special cases, and still converges to
an optimal point at an optimal rate. The proposed tool can also be extended to
analyze distributed weighted averaging.
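For reference, the plain push-sum averaging iteration that the analysis tool targets can be sketched as follows; neighbors_seq is an assumed interface, each node is taken to be its own out-neighbor, and the given graph sequence is cycled for simplicity:

```python
import numpy as np

def push_sum_average(values, neighbors_seq, iters):
    """Push-sum averaging over a sequence of directed graphs.

    values           -- (n,) initial scalars; the goal is their average
    neighbors_seq[t] -- out-neighbor lists at time t (self-loops included)
    """
    n = len(values)
    x = np.asarray(values, dtype=float).copy()
    y = np.ones(n)                       # auxiliary push-sum weights
    for t in range(iters):
        out = neighbors_seq[t % len(neighbors_seq)]   # cycle graph sequence
        x_new = np.zeros(n)
        y_new = np.zeros(n)
        for i in range(n):
            share_x = x[i] / len(out[i])  # split mass over out-neighbors
            share_y = y[i] / len(out[i])
            for j in out[i]:
                x_new[j] += share_x
                y_new[j] += share_y
        x, y = x_new, y_new
    return x / y          # each ratio converges to the network-wide average
```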
Discretized Distributed Optimization over Dynamic Digraphs
We consider a discrete-time model of continuous-time distributed optimization
over dynamic directed graphs (digraphs) with applications to distributed
learning. Our optimization algorithm works over general strongly connected
dynamic networks under switching topologies, e.g., in mobile multi-agent
systems and volatile networks due to link failures. In contrast to many
existing lines of work, no bi-stochastic weight design on the links is
required. Much of the existing literature requires the link weights to be
stochastic, produced by specific weight-design algorithms that must run both
at initialization and whenever the network topology changes. This paper
eliminates the need for such algorithms and paves the way for distributed
optimization over time-varying digraphs. We derive bounds on the
gradient-tracking step size and the discrete time step that guarantee
convergence, and we prove dynamic stability using
arguments from consensus algorithms, matrix perturbation theory, and Lyapunov
theory. In particular, this work improves on existing stochastic-weight
methods for undirected networks in the case of link removal or packet drops:
whereas the existing literature may need to rerun time-consuming and
computationally complex weight-design algorithms, the proposed strategy works
as long as the underlying network remains weight-symmetric and balanced. The
proposed optimization framework finds applications in distributed
classification and learning.
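A hedged sketch of what an Euler-discretized gradient-tracking dynamic of this flavor could look like is given below; W, alpha, delta, and grads are assumed names, the Laplacian-based form is an illustration rather than the paper's exact dynamics, and the paper's stability bounds dictate how small alpha and delta must be:

```python
import numpy as np

def discretized_gt(grads, x0, W, alpha, delta, iters):
    """Euler discretization of a continuous-time gradient-tracking flow.

    W     -- weight matrix of the (weight-symmetric, balanced) digraph
    alpha -- gradient-tracking step size; delta -- discrete time step
    """
    n, d = x0.shape
    L = np.diag(W.sum(axis=1)) - W       # graph Laplacian of the digraph
    x = x0.copy()
    g = np.stack([grads[i](x[i]) for i in range(n)])
    y = g.copy()                         # tracker of the average gradient
    for _ in range(iters):
        x_new = x + delta * (-L @ x - alpha * y)   # consensus + descent flow
        g_new = np.stack([grads[i](x_new[i]) for i in range(n)])
        y = y + delta * (-L @ y) + (g_new - g)     # discretized tracking flow
        x, g = x_new, g_new
    return x
```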