Subgradient-Free Stochastic Optimization Algorithm for Non-smooth Convex Functions over Time-Varying Networks
In this paper we consider a distributed stochastic optimization problem
without the gradient/subgradient information for the local objective functions,
subject to local convex constraints. The objective functions may be non-smooth
and observed with stochastic noises, and the network for the distributed design
is time-varying. By adding the stochastic dithers into the local objective
functions and constructing the randomized differences motivated by the
Kiefer-Wolfowitz algorithm, we propose a distributed subgradient-free algorithm
to find the global minimizer with local observations. Moreover, we prove that
the consensus of estimates and global minimization can be achieved with
probability one over the time-varying network, and then obtain the convergence
rate of the mean average of estimates as well. Finally, we give a numerical
example to illustrate the effectiveness of the proposed algorithm.
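The Kiefer-Wolfowitz-style randomized-difference construction mentioned above can be sketched in a few lines. This is a minimal illustration under assumed names (`f` for a noisy local objective, `c` for the dither magnitude, a Rademacher perturbation direction), not the paper's exact scheme:

```python
import numpy as np

def randomized_difference_estimate(f, x, c, rng):
    """Two-point Kiefer-Wolfowitz-style gradient estimate of f at x.

    Uses only function evaluations (no subgradient oracle); c is the
    perturbation magnitude. Names are illustrative, not the paper's.
    """
    delta = rng.choice([-1.0, 1.0], size=x.shape)  # random Rademacher direction
    # Difference of two (possibly noisy) evaluations along +/- c*delta:
    return (f(x + c * delta) - f(x - c * delta)) / (2.0 * c) * delta
```

In expectation this recovers a smoothed gradient of `f`, which is what lets a subgradient-free scheme proceed with only noisy objective observations.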
Distributed Subgradient Projection Algorithm over Directed Graphs: Alternate Proof
We propose Directed-Distributed Projected Subgradient (D-DPS) to solve a
constrained optimization problem over a multi-agent network, where the goal of
agents is to collectively minimize the sum of locally known convex functions.
Each agent in the network owns only its local objective function, constrained
to a commonly known convex set. We focus on the circumstance when
communications between agents are described by a \emph{directed} network. The
D-DPS uses surplus consensus to overcome the asymmetry caused by the
directed communication network. The analysis shows the convergence rate to be .
Comment: Disclaimer: This manuscript provides an alternate approach to prove
the results in \textit{C. Xi and U. A. Khan, Distributed Subgradient
Projection Algorithm over Directed Graphs, in IEEE Transactions on Automatic
Control}. The changes, colored in blue, result in a tighter result in
Theorem~1. arXiv admin note: text overlap with arXiv:1602.0065
Privacy Preservation in Distributed Subgradient Optimization Algorithms
Privacy preservation is becoming an increasingly important issue in data
mining and machine learning. In this paper, we consider the privacy preserving
features of distributed subgradient optimization algorithms. We first show that
a well-known distributed subgradient synchronous optimization algorithm, in
which all agents make their optimization updates simultaneously at all times,
is not privacy preserving in the sense that a malicious agent can learn other
agents' subgradients asymptotically. Then we propose a distributed subgradient
projection asynchronous optimization algorithm without relying on any existing
privacy preservation technique, where agents can exchange data between
neighbors directly. In contrast to synchronous algorithms, in the new
asynchronous algorithm agents make their optimization updates asynchronously.
The introduced projection operation and asynchronous optimization mechanism can
guarantee that the proposed asynchronous optimization algorithm is privacy
preserving. Moreover, we establish the optimal convergence of the newly
proposed algorithm. The proposed privacy preservation techniques shed light on
developing other privacy preserving distributed optimization algorithms.
Distributed Discrete-time Optimization in Multi-agent Networks Using only Sign of Relative State
This paper proposes distributed discrete-time algorithms to cooperatively
solve an additive cost optimization problem in multi-agent networks. The
striking feature lies in the use of only the sign of relative state information
between neighbors, which substantially differentiates our algorithms from
others in the existing literature. We first interpret the proposed algorithms
in terms of the penalty method in optimization theory and then perform
non-asymptotic analysis to study convergence for static network graphs.
Compared with the celebrated distributed subgradient algorithms, which
use the exact relative state information, the convergence speed is essentially
unaffected by the loss of information. We also study how introducing noise
into the relative state information and randomly activated graphs affect the
performance of our algorithms. Finally, we validate the theoretical results on
a class of distributed quantile regression problems.
Comment: Part of this work has been presented in the American Control Conference
(ACC) 2018; first version posted on arXiv in Sep. 2017; IEEE Transactions on
Automatic Control, 201
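The sign-of-relative-state update described above can be sketched as one synchronous round with scalar states. The specific update form and names (`alpha` for the consensus gain, `step` for the subgradient step-size) are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def sign_consensus_step(x, subgrads, adjacency, alpha, step):
    """One round of a sign-based distributed update with scalar states.

    Each agent i observes only sign(x_j - x_i) from its neighbors, then
    takes a step along its local subgradient. Names are illustrative.
    """
    n = len(x)
    x_new = x.copy()
    for i in range(n):
        # Only the sign of the relative state is available to agent i.
        pull = sum(np.sign(x[j] - x[i]) for j in range(n) if adjacency[i][j])
        x_new[i] = x[i] + alpha * pull - step * subgrads[i](x[i])
    return x_new
```

The `alpha * pull` term acts as the penalty enforcing consensus, matching the penalty-method interpretation mentioned in the abstract.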
Graph Balancing for Distributed Subgradient Methods over Directed Graphs
We consider a multi-agent optimization problem where a set of agents
collectively solves a global optimization problem with the objective function
given by the sum of locally known convex functions. We focus on the case when
information exchange among agents takes place over a directed network and
propose a distributed subgradient algorithm in which each agent performs local
processing based on information obtained from his incoming neighbors. Our
algorithm uses weight balancing to overcome the asymmetries caused by the
directed communication network, i.e., agents scale their outgoing information
with dynamically updated weights that converge to balancing weights of the
graph. We show that both the objective function values and the consensus
violation, at the ergodic average of the estimates generated by the algorithm,
converge with rate , where is the number of
iterations. A special case of our algorithm provides a new distributed method
to compute average consensus over directed graphs.
Distributed Convex Optimization With Coupling Constraints Over Time-Varying Directed Graphs
This paper considers a distributed convex optimization problem over a
time-varying multi-agent network, where each agent has its own decision
variables that should be set so as to minimize its individual objective subject
to local constraints and global coupling equality constraints. Over directed
graphs, a distributed algorithm is proposed that incorporates the push-sum
protocol into dual subgradient methods. Under the convexity assumption, the
optimality of the primal and dual variables and the vanishing of the constraint
violations are first established. Then the explicit convergence rates of the
proposed algorithm are obtained. Finally, some numerical experiments on the
economic dispatch problem are provided to demonstrate the efficacy of the
proposed algorithm.
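The push-sum protocol that this algorithm incorporates can be illustrated in its basic average-consensus form; this is a standard sketch under a column-stochastic mixing matrix `W`, not the paper's full dual subgradient method:

```python
import numpy as np

def push_sum_average(values, W, num_iters):
    """Push-sum average consensus with a column-stochastic matrix W.

    Each agent tracks a value x_i and a weight y_i; the ratio x_i / y_i
    converges to the network-wide average on a strongly connected digraph.
    """
    x = np.array(values, dtype=float)
    y = np.ones_like(x)
    for _ in range(num_iters):
        x = W @ x  # column-stochastic mixing preserves sum(x)
        y = W @ y  # and sum(y), so the ratio corrects the imbalance bias
    return x / y
```

Because column stochasticity only requires each agent to scale what it sends out, push-sum works over directed graphs where doubly stochastic weights cannot be arranged.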
Distributed Multi-Agent Optimization with State-Dependent Communication
We study distributed algorithms for solving global optimization problems in
which the objective function is the sum of local objective functions of agents
and the constraint set is given by the intersection of local constraint sets of
agents. We assume that each agent knows only his own local objective function
and constraint set, and exchanges information with the other agents over a
randomly varying network topology to update his information state. We assume a
state-dependent communication model over this topology: communication is
Markovian with respect to the states of the agents and the probability with
which the links are available depends on the states of the agents. In this
paper, we study a projected multi-agent subgradient algorithm under
state-dependent communication. The algorithm involves each agent performing a
local averaging to combine his estimate with the other agents' estimates,
taking a subgradient step along his local objective function, and projecting
the estimates on his local constraint set. The state-dependence of the
communication introduces significant challenges and couples the study of
information exchange with the analysis of subgradient steps and projection
errors. We first show that the multi-agent subgradient algorithm when used with
a constant stepsize may result in the agent estimates to diverge with
probability one. Under some assumptions on the stepsize sequence, we provide
convergence rate bounds on a "disagreement metric" between the agent estimates.
Our bounds are time-nonhomogeneous in the sense that they depend on the initial
starting time. Despite this, we show that agent estimates reach an almost sure
consensus and converge to the same optimal solution of the global optimization
problem with probability one under different assumptions on the local
constraint sets and the stepsize sequence.
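The three-part update described above (local averaging, subgradient step, projection) can be sketched as one synchronous round; the names and the row-stochastic mixing matrix `W` are assumptions for illustration, and the state-dependent link activation the paper analyzes is omitted:

```python
import numpy as np

def projected_subgradient_round(estimates, W, subgrads, projections, step):
    """One round of a projected multi-agent subgradient method.

    Each agent averages neighbor estimates (row-stochastic W), steps along
    its local subgradient, and projects onto its local constraint set.
    """
    mixed = W @ estimates                      # local averaging
    updated = []
    for i, v in enumerate(mixed):
        v = v - step * subgrads[i](v)          # local subgradient step
        updated.append(projections[i](v))      # projection onto X_i
    return np.array(updated)
```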
On distributed convex optimization under inequality and equality constraints via primal-dual subgradient methods
We consider a general multi-agent convex optimization problem where the
agents are to collectively minimize a global objective function subject to a
global inequality constraint, a global equality constraint, and a global
constraint set. The objective function is defined by a sum of local objective
functions, while the global constraint set is produced by the intersection of
local constraint sets. In particular, we study two cases: one where the
equality constraint is absent, and the other where the local constraint sets
are identical. We devise two distributed primal-dual subgradient algorithms
which are based on the characterization of the primal-dual optimal solutions as
the saddle points of the Lagrangian and penalty functions. These algorithms can
be implemented over networks with changing topologies but satisfying a standard
connectivity property, and allow the agents to asymptotically agree on optimal
solutions and optimal values of the optimization problem under Slater's
condition.
Comment: 44 pages
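The saddle-point characterization underlying these methods can be illustrated by a single Arrow-Hurwicz-style primal-dual subgradient step for one inequality constraint g(x) <= 0. This is a generic centralized sketch, not either of the paper's two distributed algorithms:

```python
def primal_dual_step(x, mu, grad_f, g, grad_g, step):
    """One subgradient step on the Lagrangian L(x, mu) = f(x) + mu * g(x).

    Descend in the primal variable x, ascend in the multiplier mu, and
    project mu back onto [0, inf). Scalar case for clarity.
    """
    x_new = x - step * (grad_f(x) + mu * grad_g(x))   # primal descent
    mu_new = max(0.0, mu + step * g(x))               # dual ascent + projection
    return x_new, mu_new
```

At a saddle point of the Lagrangian, both updates stall simultaneously, which is exactly the primal-dual optimality characterization the abstract refers to.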
Distributed Optimization over Directed Graphs with Row Stochasticity and Constraint Regularity
This paper deals with an optimization problem over a network of agents, where
the cost function is the sum of the individual objectives of the agents and the
constraint set is the intersection of local constraints. Most existing methods
employing subgradient and consensus steps for solving this problem require the
weight matrix associated with the network to be column stochastic or even
doubly stochastic, conditions that can be hard to arrange in directed networks.
Moreover, known convergence analyses for distributed subgradient methods vary
depending on whether the problem is unconstrained or constrained, and whether
the local constraint sets are identical or nonidentical and compact. The main
goals of this paper are: (i) removing the common column stochasticity
requirement; (ii) relaxing the compactness assumption, and (iii) providing a
unified convergence analysis. Specifically, assuming the communication graph to
be fixed and strongly connected and the weight matrix to (only) be row
stochastic, a distributed projected subgradient algorithm and its variation are
presented to solve the problem for cost functions that are convex and Lipschitz
continuous. Based on a regularity assumption on the local constraint sets, a
unified convergence analysis is given that can be applied to both unconstrained
and constrained problems and without assuming compactness of the constraint
sets or an interior point in their intersection. Further, we also establish an
upper bound on the absolute objective error evaluated at each agent's available
local estimate under a nonincreasing step size sequence. This bound allows us
to analyze the convergence rate of both algorithms.
Comment: 14 pages, 3 figures
FROST -- Fast row-stochastic optimization with uncoordinated step-sizes
In this paper, we discuss distributed optimization over directed graphs,
where doubly-stochastic weights cannot be constructed. Most of the existing
algorithms overcome this issue by applying push-sum consensus, which utilizes
column-stochastic weights. The formulation of column-stochastic weights
requires each agent to know (at least) its out-degree, which may be
impractical in, e.g., broadcast-based communication protocols. In contrast, we describe
FROST (Fast Row-stochastic-Optimization with uncoordinated STep-sizes), an
optimization algorithm applicable to directed graphs that does not require the
knowledge of out-degrees; its implementation is straightforward, as
each agent locally assigns weights to the incoming information and locally
chooses a suitable step-size. We show that FROST converges linearly to the
optimal solution for smooth and strongly-convex functions given that the
largest step-size is positive and sufficiently small.
Comment: Submitted for journal publication, currently under review