A randomized primal distributed algorithm for partitioned and big-data non-convex optimization
In this paper we consider a distributed optimization scenario in which the
aggregate objective function to minimize is partitioned, big-data and possibly
non-convex. Specifically, we focus on a set-up in which the dimension of the
decision variable depends on the network size as well as the number of local
functions, but each local function handled by a node depends only on a (small)
portion of the entire optimization variable. This problem set-up has been shown
to appear in many interesting network application scenarios. As the main
contribution of this paper, we develop a simple, primal distributed algorithm to solve the
optimization problem, based on a randomized descent approach, which works under
asynchronous gossip communication. We prove that the proposed asynchronous
algorithm is a proper, ad-hoc version of a coordinate descent method and thus
converges to a stationary point. To show the effectiveness of the proposed
algorithm, we also present numerical simulations on a non-convex quadratic
program, which confirm the theoretical results.
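The randomized primal update described above reduces, per activation, to a gradient step on a single node's block of the variable. A minimal sketch of that block-coordinate building block, on a stand-in convex quadratic of our own choosing (the paper treats partitioned, possibly non-convex costs over a network; the sizes, seed, and step size below are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, blk = 4, 2      # hypothetical: 4 nodes, each owning a 2-D block of x
dim = n_nodes * blk

# Stand-in cost f(x) = 0.5 x'Qx - b'x with Q positive definite.
A = rng.standard_normal((dim, dim))
Q = A @ A.T + dim * np.eye(dim)
b = rng.standard_normal(dim)

f = lambda x: 0.5 * x @ Q @ x - b @ x
step = 1.0 / np.linalg.norm(Q, 2)    # conservative 1/L step size

x = rng.standard_normal(dim)
f0 = f(x)
for _ in range(2000):
    i = rng.integers(n_nodes)          # a random node "wakes up" (gossip-style)
    s = slice(i * blk, (i + 1) * blk)
    x[s] -= step * (Q[s] @ x - b[s])   # descent on that node's block only

x_star = np.linalg.solve(Q, b)         # centralized minimizer for comparison
```

Each update touches only one block, mirroring a node that acts on its own portion of the decision variable when its local timer fires.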
A Distributed Asynchronous Method of Multipliers for Constrained Nonconvex Optimization
This paper presents a fully asynchronous and distributed approach for
tackling optimization problems in which both the objective function and the
constraints may be nonconvex. In the considered network setting each node is
active upon triggering of a local timer and has access only to a portion of the
objective function and to a subset of the constraints. In the proposed
technique, based on the method of multipliers, each node performs, when it
wakes up, either a descent step on a local augmented Lagrangian or an ascent
step on the local multiplier vector. Nodes realize when to switch from the
descent step to the ascent one through an asynchronous distributed logic-AND,
which detects when all the nodes have reached a predefined tolerance in the
minimization of the augmented Lagrangian. It is shown that the resulting
distributed algorithm is equivalent to a block coordinate descent for the
minimization of the global augmented Lagrangian. This allows one to extend the
properties of the centralized method of multipliers to the considered
distributed framework. Two application examples are presented to validate the
proposed approach: a distributed source localization problem and the parameter
estimation of a neural network.
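The alternation between a descent phase on the augmented Lagrangian and an ascent phase on the multipliers is the classical method of multipliers. A minimal centralized sketch on an assumed equality-constrained quadratic (the paper distributes the descent phase across nodes with a logic-AND switch; here one loop plays both roles, and the problem data, penalty rho, and iteration count are our choices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 5))   # equality constraints Ax = b (assumed data)
b = rng.standard_normal(2)

rho = 10.0          # penalty parameter of the augmented Lagrangian
lam = np.zeros(2)   # multiplier vector
x = np.zeros(5)

# Problem (our choice): min 0.5*||x||^2  s.t.  Ax = b.
for _ in range(50):
    # Descent phase: minimize L_rho(x, lam) = 0.5*||x||^2 + lam'(Ax - b)
    # + 0.5*rho*||Ax - b||^2 (closed form here, since the problem is quadratic).
    x = np.linalg.solve(np.eye(5) + rho * A.T @ A, A.T @ (rho * b - lam))
    # Ascent phase: multiplier step once the inner tolerance is reached.
    lam += rho * (A @ x - b)

# Minimum-norm solution of Ax = b for comparison (this problem's optimum).
x_star = A.T @ np.linalg.solve(A @ A.T, b)
```

The multiplier step is exactly a gradient ascent on the dual with step size rho, which is why convergence properties of the centralized method carry over once all nodes agree that the inner minimization has converged.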
Asynchronous Distributed Optimization Via Randomized Dual Proximal Gradient
In this paper we consider distributed optimization problems in which the cost function is separable, i.e., a sum of possibly non-smooth functions all sharing a common variable, and can be split into a strongly convex term and a convex one. The second term is typically used to encode constraints or to regularize the solution. We propose a class of distributed optimization algorithms based on proximal gradient methods applied to the dual problem. We show that, by choosing suitable primal variable copies, the dual problem is itself separable when written in terms of conjugate functions, and the dual variables can be stacked into non-overlapping blocks associated with the computing nodes. We first show that a weighted proximal gradient on the dual function leads to a synchronous distributed algorithm with local dual proximal gradient updates at each node. Then, as the main contribution of this paper, we develop asynchronous versions of the algorithm in which the node updates are triggered by local timers without any global iteration counter. The algorithms are shown to be proper randomized block-coordinate proximal gradient updates on the dual function.
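Both the synchronous and asynchronous variants above boil down to (block-coordinate) proximal gradient steps on a composite function F(y) = h(y) + sum_i g_i(y_i) with h smooth. A minimal sketch of that update on a stand-in composite problem (h quadratic, g_i an l1 penalty per block; this instance, and all sizes and seeds, are our assumptions, not the paper's actual dual):

```python
import numpy as np

rng = np.random.default_rng(2)
n_blocks, blk = 3, 2
dim = n_blocks * blk
M = rng.standard_normal((40, dim))   # tall matrix -> strongly convex smooth part
d = rng.standard_normal(40)

# F(y) = 0.5*||My - d||^2 + 0.1*||y||_1, split into h + sum of block l1 terms.
F = lambda y: 0.5 * np.sum((M @ y - d) ** 2) + 0.1 * np.sum(np.abs(y))
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # prox of t*||.||_1

a = 1.0 / np.linalg.norm(M, 2) ** 2   # 1/L step size
y = np.zeros(dim)
F0 = F(y)
for _ in range(3000):
    i = rng.integers(n_blocks)                  # random block = one node's timer
    s = slice(i * blk, (i + 1) * blk)
    grad = M[:, s].T @ (M @ y - d)              # block of the smooth gradient
    y[s] = soft(y[s] - a * grad, 0.1 * a)       # proximal (soft-threshold) step
```

Because the blocks are non-overlapping, each step only needs one node's dual block, which is what makes the gossip/timer-triggered implementation possible.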
On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
In this paper we propose a distributed dual gradient algorithm for minimizing
linearly constrained separable convex problems and analyze its rate of
convergence. In particular, we prove that under the assumption of strong
convexity and Lipschitz continuity of the gradient of the primal objective
function we have a global error bound type property for the dual problem. Using
this error bound property we devise a fully distributed dual gradient scheme,
i.e. a gradient scheme based on a weighted step size, for which we derive
global linear rate of convergence for both dual and primal suboptimality and
for primal feasibility violation. Many real applications, e.g. distributed
model predictive control, network utility maximization or optimal power flow,
can be posed as linearly constrained separable convex problems for which dual
gradient-type methods from the literature have a sublinear convergence rate. In the
present paper we prove for the first time that a linear convergence rate can in
fact be achieved for such algorithms when they are used to solve these
applications. Numerical simulations are also provided to confirm our theory.
Comment: 14 pages, 4 figures, submitted to Automatica Journal, February 2014. We revised the paper, adding more simulations and checking for typos.
Randomized dual proximal gradient for large-scale distributed optimization
In this paper we consider distributed optimization problems in which the cost
function is separable (i.e., a sum of possibly non-smooth functions all
sharing a common variable) and can be split into a strongly convex term and a
convex one. The second term is typically used to encode constraints or to
regularize the solution. We propose an asynchronous, distributed optimization
algorithm over an undirected topology, based on a proximal gradient update on
the dual problem. We show that by means of a proper choice of primal
variables, the dual problem is separable and the dual variables can be stacked
into separate blocks. This allows us to show that a distributed
gossip update can be obtained by means of a randomized block-coordinate
proximal gradient on the dual function.
Uncertain Multi-Agent Systems with Distributed Constrained Optimization Missions and Event-Triggered Communications: Application to Resource Allocation
This paper deals with solving distributed optimization problems with equality
constraints by a class of uncertain nonlinear heterogeneous dynamic multi-agent
systems. It is assumed that each agent with an uncertain dynamic model has
limited information about the main problem and limited access to the
information of the state variables of the other agents. A distributed algorithm
that guarantees cooperative solving of the constrained optimization problem by
the agents is proposed. With this algorithm, the agents do not need to
broadcast their data continuously. It is shown that the proposed algorithm can
be useful in solving resource allocation problems.
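A standard way to see the resource-allocation connection is a price (dual) iteration for min sum_i 0.5*a_i*(x_i - c_i)^2 subject to sum_i x_i = R: each agent best-responds to a common price, and the price moves with the budget mismatch. A minimal sketch with made-up data (the paper's agents run uncertain continuous-time dynamics with event-triggered communication; none of that is modelled here):

```python
import numpy as np

# Agents' quadratic costs 0.5*a_i*(x_i - c_i)^2 and budget R (all assumed data).
a = np.array([1.0, 2.0, 4.0])   # cost curvatures
c = np.array([3.0, 1.0, 2.0])   # preferred set-points
R = 4.0                         # total resource budget

lam, step = 0.0, 0.5
for _ in range(500):
    x = c - lam / a                  # each agent's best response to price lam
    lam += step * (x.sum() - R)      # price rises while demand exceeds budget

lam_star = (c.sum() - R) / (1.0 / a).sum()   # closed-form optimal price
```

At the fixed point all agents face the same price, so their marginal costs are equal, the textbook optimality condition for this allocation problem.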
Distributed event-triggered aggregative optimization with applications to price-based energy management
This paper studies a distributed continuous-time aggregative optimization
problem, which is a fundamental problem in the price-based energy management.
The objective of the distributed aggregative optimization is to minimize the
sum of local objective functions, which have a specific expression that relies
on agents' own decisions and the aggregation of all agents' decisions. To solve
the problem, a novel distributed continuous-time algorithm is proposed by
combining gradient dynamics with a dynamic average consensus estimator in a
two-time scale. The exponential convergence of the proposed algorithm is
established under the assumption of a convex global cost function by virtue of
the stability theory of singular perturbation systems. Motivated by practical
applications, the implementation of the continuous-time algorithm with
event-triggered communication is investigated. Simulations on the price-based
energy management of distributed energy resources are given to illustrate the
proposed method.
Comment: 7 pages, 7 figures.
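An Euler-discretized sketch of the two ingredients above, gradient dynamics driven by a dynamic-average-consensus estimate of the aggregate, on a toy aggregative cost f_i(x_i, s) = 0.5*(x_i - d_i)^2 + 0.5*s^2 with s the mean of all decisions (the data, graph, and gains are our assumptions; the paper works in continuous time with event-triggered communication):

```python
import numpy as np

N = 4
d = np.array([1.0, -2.0, 3.0, 0.5])   # agents' local data (assumed)

# Metropolis weights on a 4-node ring (symmetric, doubly stochastic).
W = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]]) / 3.0

gamma = 0.05      # gradient step (the slow time scale)
x = d.copy()
s = x.copy()      # each agent's running estimate of the aggregate mean(x)
for _ in range(5000):
    # Gradient of the sum w.r.t. x_i is (x_i - d_i) + mean(x); use estimate s.
    x_new = x - gamma * ((x - d) + s)
    s = W @ s + (x_new - x)           # dynamic average consensus update
    x = x_new

# Setting the gradient to zero gives x_i = d_i - mean(x), so x* = d - mean(d)/2.
x_star = d - d.mean() / 2.0
```

Initializing s = x keeps mean(s) = mean(x) at every step (W is doubly stochastic), which is the invariant that lets the local estimates track the true aggregate.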