Parallel and distributed optimization methods for estimation and control in networks
System performance for networks composed of interconnected subsystems can be
increased if the traditionally separated subsystems are jointly optimized.
Recently, parallel and distributed optimization methods have emerged as a
powerful tool for solving estimation and control problems in large-scale
networked systems. In this paper we review and analyze the
optimization-theoretic concepts of parallel and distributed methods for solving
coupled optimization problems and demonstrate how several estimation and
control problems related to complex networked systems can be formulated in
these settings. The paper presents a systematic framework for exploiting the
potential of the decomposition structures as a way to obtain different parallel
algorithms, each with a different trade-off among convergence speed, amount of
message passing, and distributed computation architecture. Several specific
applications from estimation and process control are included to demonstrate
the power of the approach. Comment: 36 pages
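The decomposition idea surveyed above can be sketched with a minimal dual-decomposition example. The data, step size, and quadratic objectives below are assumptions chosen for illustration, not an instance from the paper:

```python
import numpy as np

# Toy dual-decomposition sketch (illustrative; data and step size are
# assumptions): minimize sum_i (x_i - a_i)^2 subject to sum_i x_i = c.
# A price (dual variable) lam coordinates the subsystems; each local
# problem is solved in parallel given the current price.

a = np.array([1.0, 2.0, 3.0])   # hypothetical local targets
c = 3.0                         # shared coupling budget
lam = 0.0                       # dual variable (price)
step = 0.4                      # dual-ascent step size

for _ in range(200):
    # local solves: argmin_x (x - a_i)^2 + lam * x  ->  x_i = a_i - lam / 2
    x = a - lam / 2.0
    # price update driven by the coupling-constraint residual
    lam += step * (x.sum() - c)
```

With these numbers the dual update is a contraction, so the iterates settle at x = (0, 1, 2), where the coupling constraint holds exactly; the trade-off between dual step size and convergence speed mirrors the trade-offs the survey discusses.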
Distributed Subgradient Projection Algorithm over Directed Graphs
We propose a distributed algorithm, termed the Directed-Distributed Projected
Subgradient (D-DPS), to solve a constrained optimization problem over a
multi-agent network, where the goal of agents is to collectively minimize the
sum of locally known convex functions. Each agent in the network owns only its
local objective function, constrained to a commonly known convex set. We focus
on the circumstance when communications between agents are described by a
directed network. The D-DPS introduces an additional variable for each agent to
overcome the asymmetry caused by the directed communication network. The
convergence analysis shows that D-DPS converges at a rate of $O(\ln k/\sqrt{k})$, where $k$ is the number of iterations.
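The problem setup can be sketched as follows. This is a plain consensus-based projected subgradient over an undirected (doubly stochastic) network, not the D-DPS itself, whose extra per-agent variable handles directed links; the data, mixing matrix, and constraint set are assumptions for illustration:

```python
import numpy as np

# Sketch of the setup only (undirected mixing, not D-DPS): three agents
# minimize f(x) = sum_i |x - b_i| over the common convex set X = [0, 1].

b = np.array([0.2, 0.5, 0.9])            # hypothetical local data
W = np.array([[0.50, 0.25, 0.25],        # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
x = np.zeros(3)                          # one estimate per agent

for k in range(1, 5001):
    x = W @ x                            # consensus (mixing) step
    g = np.sign(x - b)                   # local subgradients of |x - b_i|
    x = np.clip(x - g / np.sqrt(k), 0.0, 1.0)   # projected subgradient step
```

With diminishing steps the agents agree and approach the minimizer of the sum, the median of b.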
Distributed Subgradient Projection Algorithm over Directed Graphs: Alternate Proof
We propose Directed-Distributed Projected Subgradient (D-DPS) to solve a
constrained optimization problem over a multi-agent network, where the goal of
agents is to collectively minimize the sum of locally known convex functions.
Each agent in the network owns only its local objective function, constrained
to a commonly known convex set. We focus on the circumstance when
communications between agents are described by a \emph{directed} network. The
D-DPS uses surplus consensus to overcome the asymmetry caused by the
directed communication network. The analysis shows the convergence rate to be
$O(\ln k/\sqrt{k})$. Comment: Disclaimer: This manuscript provides an alternate approach to prove
the results in \textit{C. Xi and U. A. Khan, Distributed Subgradient
Projection Algorithm over Directed Graphs, in IEEE Transactions on Automatic
Control}. The changes, colored in blue, lead to a tighter result in
Theorem~1. arXiv admin note: text overlap with arXiv:1602.0065
Fenchel Dual Gradient Methods for Distributed Convex Optimization over Time-varying Networks
Among the large collection of existing distributed algorithms for convex
multi-agent optimization, only a handful provide convergence rate guarantees
on agent networks with time-varying topologies, and those that do restrict the
problem to be unconstrained. Motivated by this, we develop a
family of distributed Fenchel dual gradient methods for solving constrained,
strongly convex but not necessarily smooth multi-agent optimization problems
over time-varying undirected networks. The proposed algorithms are constructed
based on the application of weighted gradient methods to the Fenchel dual of
the multi-agent optimization problem, and can be implemented in a fully
decentralized fashion. We show that the proposed algorithms drive all the
agents to both primal and dual optimality asymptotically under a minimal
connectivity condition and at sublinear rates under a standard connectivity
condition. Finally, the competitive convergence performance of the distributed
Fenchel dual gradient methods is demonstrated via simulations.
Fast Convergence Rates of Distributed Subgradient Methods with Adaptive Quantization
We study distributed optimization problems over a network when the
communication between the nodes is constrained, and so information that is
exchanged between the nodes must be quantized. Recent advances using the
distributed gradient algorithm with a quantization scheme at a fixed resolution
have established convergence, but at rates significantly slower than when the
communications are unquantized.
In this paper, we introduce a novel quantization method, which we refer to as
adaptive quantization, that allows us to match the convergence rates under
perfect communications. Our approach adjusts the quantization scheme used by
each node as the algorithm progresses: as we approach the solution, we become
more certain about where the state variables are localized, and adapt the
quantizer codebook accordingly.
We bound the convergence rates of the proposed method as a function of the
communication bandwidth, the underlying network topology, and structural
properties of the constituent objective functions. In particular, we show that
if the objective functions are convex or strongly convex, then using adaptive
quantization does not affect the rate of convergence of the distributed
subgradient methods when the communications are quantized, except for a
constant that depends on the resolution of the quantizer. To the best of our
knowledge, the rates achieved in this paper are better than any existing work
in the literature for distributed gradient methods under finite communication
bandwidths. We also provide numerical simulations that compare convergence
properties of the distributed gradient methods with and without quantization
for solving distributed regression problems for both quadratic and absolute
loss functions. Comment: arXiv admin note: text overlap with arXiv:1810.1156
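The adaptive idea — shrink the quantizer's range as the iterates localize — can be sketched on a two-node quantized averaging problem. The bit budget, shrink factor, and update rule below are illustrative assumptions, not the paper's design:

```python
import numpy as np

# Adaptive-quantization sketch (parameters are assumptions): two nodes
# exchange b-bit quantized states; each node's quantizer is re-centered
# at its last transmitted value and its range shrinks geometrically, so
# resolution improves as the iterates localize.

def quantize(v, center, radius, bits):
    # uniform quantizer with 2**bits levels on [center - radius, center + radius]
    levels = 2 ** bits
    v = np.clip(v, center - radius, center + radius)
    step = 2 * radius / (levels - 1)
    return center - radius + np.round((v - (center - radius)) / step) * step

x = np.array([0.0, 10.0])       # initial node states
centers = x.copy()              # each node's last transmitted value
radius, shrink, bits = 10.0, 0.8, 3

for _ in range(60):
    q = quantize(x, centers, radius, bits)   # quantized broadcast
    centers = q                              # re-center around what was sent
    x = 0.5 * x + 0.5 * q[::-1]              # mix with the neighbor's quantized state
    radius *= shrink                         # adapt (shrink) the quantizer range
```

The nodes agree on a common value near the initial average despite the finite-rate channel; with a fixed (non-adaptive) range, the disagreement would instead be limited by the fixed quantization step.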
Accelerated Distributed Dual Averaging over Evolving Networks of Growing Connectivity
We consider the problem of accelerating distributed optimization in
multi-agent networks by sequentially adding edges. Specifically, we extend the
distributed dual averaging (DDA) subgradient algorithm to evolving networks of
growing connectivity and analyze the corresponding improvement in convergence
rate. It is known that the convergence rate of DDA is influenced by the
algebraic connectivity of the underlying network, where better connectivity
leads to faster convergence. However, the impact of network topology design on
the convergence rate of DDA has not been fully understood. In this paper, we
begin by designing network topologies via edge selection and scheduling. For
edge selection, we determine the best set of candidate edges that achieves the
optimal tradeoff between the growth of network connectivity and the usage of
network resources. The dynamics of network evolution are then induced by edge
scheduling. Further, we provide a tractable approach to analyze the improvement
in the convergence rate of DDA induced by the growth of network connectivity.
Our analysis reveals the connection between network topology design and the
convergence rate of DDA, and provides quantitative evaluation of DDA
acceleration for distributed optimization that is absent in the existing
analysis. Lastly, numerical experiments show that DDA can be significantly
accelerated using a sequence of well-designed networks, and our theoretical
predictions are well matched to its empirical convergence behavior.
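The edge-selection step can be sketched as a greedy search for the non-edge that most increases the algebraic connectivity (Fiedler value). The path graph and one-shot greedy rule here are assumptions for illustration, not the paper's selection procedure:

```python
import numpy as np
from itertools import combinations

# Greedy edge-selection sketch (illustrative): on a 5-node path graph,
# add the single non-edge that most increases the Fiedler value, i.e.
# the second-smallest eigenvalue of the graph Laplacian.

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

def fiedler(n, edges):
    return np.sort(np.linalg.eigvalsh(laplacian(n, edges)))[1]

n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]      # path graph on 5 nodes
candidates = [e for e in combinations(range(n), 2) if e not in edges]
best = max(candidates, key=lambda e: fiedler(n, edges + [e]))
```

Since DDA's convergence rate improves with algebraic connectivity, each such greedy addition tightens the rate bound; scheduling these additions over time yields the evolving-network sequence the abstract studies.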
A unitary distributed subgradient method for multi-agent optimization with different coupling sources
In this work, we first consider distributed convex constrained optimization
problems where the objective function is encoded by multiple local and possibly
nonsmooth objectives privately held by a group of agents, and propose a
distributed subgradient method with double averaging that only requires peer-to-peer communication and local computation to
solve the global problem. The algorithmic framework builds on dual methods and
dynamic average consensus; the sequence of test points is formed by iteratively
minimizing a local dual model of the overall objective where the coefficients,
i.e., approximated subgradients of the objective, are supplied by the dynamic
average consensus scheme. We theoretically show that the proposed method enjoys
non-ergodic convergence properties, i.e., the local minimizing sequence itself
is convergent, a distinct feature that cannot be found in existing results.
Specifically, we establish a sublinear convergence rate in terms of objective
function error. Then, extensions are made to tackle
distributed optimization problems with coupled functional constraints by
combining the proposed method with dual decomposition. This is made possible by
Lagrangian relaxation that transforms the coupling in constraints of the primal
problem into that in cost functions of the dual, thus allowing us to solve the
dual problem via the proposed method. Both the dual objective error and the
quadratic penalty for the coupled constraint are proved to converge at a
sublinear rate, and the primal objective error asymptotically
vanishes. Numerical experiments and comparisons are conducted to illustrate the
advantage of the proposed algorithms and validate our theoretical findings. Comment: 15 pages, 2 figures
Approximate Projection Methods for Decentralized Optimization with Functional Constraints
We consider distributed convex optimization problems that involve a separable
objective function and nontrivial functional constraints, such as Linear Matrix
Inequalities (LMIs). We propose a decentralized and computationally inexpensive
algorithm which is based on the concept of approximate projections. Our
algorithm is a consensus-based method in that, at every iteration,
each agent performs a consensus update of its decision variables followed by an
optimization step of its local objective function and local constraints. Unlike
other methods, the last step of our method is not a Euclidean projection onto
the feasible set, but instead a subgradient step in the direction that
minimizes the local constraint violation. We propose two different averaging
schemes to mitigate the disagreements among the agents' local estimates. We
show that the algorithms converge almost surely, i.e., every agent agrees on
the same optimal solution, under the assumption that the objective and
constraint functions may be nondifferentiable and have bounded subgradients. We
provide simulation results on a decentralized optimal gossip
averaging problem, which involves SDP constraints, to complement our
theoretical results.
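The violation-reducing subgradient step that replaces the exact projection can be sketched on an assumed toy instance with a single linear constraint (not the LMI-constrained problems of the abstract):

```python
import numpy as np

# Approximate-projection sketch (illustrative toy instance):
#   minimize ||x - a||^2   subject to   g(x) = x[0] + x[1] - 1 <= 0.
# Instead of an exact Euclidean projection onto the feasible set, take a
# Polyak-style subgradient step that reduces the constraint violation.

a = np.array([1.0, 1.0])
x = np.zeros(2)

for k in range(1, 2001):
    x = x - (1.0 / k) * 2 * (x - a)      # gradient step on the objective
    viol = x[0] + x[1] - 1.0             # constraint value g(x)
    if viol > 0:
        d = np.array([1.0, 1.0])         # subgradient of g at x
        x = x - (viol / d.dot(d)) * d    # violation-reducing step
```

The iterates reach the constrained optimum (0.5, 0.5). For this affine constraint the violation-reducing step happens to coincide with the exact halfspace projection; for general convex constraints (such as LMIs) it only approximately reduces the violation, which is exactly why it is computationally cheap.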
Subgradient-Free Stochastic Optimization Algorithm for Non-smooth Convex Functions over Time-Varying Networks
In this paper we consider a distributed stochastic optimization problem
without the gradient/subgradient information for the local objective functions,
subject to local convex constraints. The objective functions may be non-smooth
and observed with stochastic noises, and the network for the distributed design
is time-varying. By adding the stochastic dithers into the local objective
functions and constructing the randomized differences motivated by the
Kiefer-Wolfowitz algorithm, we propose a distributed subgradient-free algorithm
to find the global minimizer with local observations. Moreover, we prove that
the consensus of estimates and global minimization can be achieved with
probability one over the time-varying network, and then obtain the convergence
rate of the mean average of estimates as well. Finally, we give a numerical
example to illustrate the effectiveness of the proposed algorithm.
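The randomized-difference idea can be sketched in one dimension. The objective, noise level, dither distribution, and step-size sequences below are assumptions for illustration, not the paper's algorithm or conditions:

```python
import numpy as np

# Kiefer-Wolfowitz-style gradient-free sketch (illustrative): minimize a
# scalar function using only noisy function evaluations and randomized
# two-point differences with a shrinking perturbation width.

rng = np.random.default_rng(0)
f = lambda x: (x - 3.0) ** 2            # assumed objective; minimizer at 3

x = 0.0
for k in range(1, 20001):
    delta = rng.choice([-1.0, 1.0])     # random perturbation direction
    c = 1.0 / k ** 0.25                 # shrinking difference width
    noise = rng.normal(scale=0.01, size=2)   # observation noise on each query
    # randomized difference as a surrogate subgradient
    g = ((f(x + c * delta) + noise[0]) - (f(x - c * delta) + noise[1])) / (2 * c) * delta
    x -= (1.0 / k) * g                  # diminishing-step descent
```

Only function values are queried; no gradient or subgradient of f is ever used, matching the information structure of the abstract's setting.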
Graph Balancing for Distributed Subgradient Methods over Directed Graphs
We consider a multi-agent optimization problem where a set of agents
collectively solves a global optimization problem with the objective function
given by the sum of locally known convex functions. We focus on the case when
information exchange among agents takes place over a directed network and
propose a distributed subgradient algorithm in which each agent performs local
processing based on information obtained from its incoming neighbors. Our
algorithm uses weight balancing to overcome the asymmetries caused by the
directed communication network, i.e., agents scale their outgoing information
with dynamically updated weights that converge to balancing weights of the
graph. We show that both the objective function values and the consensus
violation, at the ergodic average of the estimates generated by the algorithm,
converge at a sublinear rate in the number of iterations. A special case of our
algorithm provides a new distributed method to compute average consensus over
directed graphs.
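The balancing condition itself can be sketched as follows. Each node assigns one weight to all of its outgoing edges, and the digraph is weight-balanced when every node's total incoming weight equals its total outgoing weight. The small graph and the fixed-point iteration here are illustrative assumptions, not the paper's dynamic update:

```python
import numpy as np

# Weight-balancing sketch (illustrative): find per-node outgoing weights
# x[i] so that incoming weight equals outgoing weight at every node of a
# strongly connected digraph.

A = np.array([[0, 1, 1],     # A[i, j] = 1 iff there is an edge i -> j
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
d_out = A.sum(axis=1)        # out-degrees

M = A.T / d_out              # column-stochastic: column j is row j of A / d_out[j]
v = np.ones(3) / 3.0
for _ in range(100):
    v = M @ v                # converges to the Perron vector of M

x = v / d_out                # balancing weight per outgoing edge of each node
in_w = A.T @ x               # total incoming weight at each node
out_w = d_out * x            # total outgoing weight at each node
```

At the fixed point in_w equals out_w, i.e., the weighted digraph is balanced; a distributed scheme reaches the same condition with each node updating its own weight from local information.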