A randomized primal distributed algorithm for partitioned and big-data non-convex optimization
In this paper we consider a distributed optimization scenario in which the
aggregate objective function to minimize is partitioned, big-data and possibly
non-convex. Specifically, we focus on a set-up in which the dimension of the
decision variable depends on the network size as well as the number of local
functions, but each local function handled by a node depends only on a (small)
portion of the entire optimization variable. This problem set-up has been shown
to appear in many interesting network application scenarios. As the main
contribution of the paper, we develop a simple primal distributed algorithm to solve the
optimization problem, based on a randomized descent approach, which works under
asynchronous gossip communication. We prove that the proposed asynchronous
algorithm is a proper, ad-hoc version of a coordinate descent method and thus
converges to a stationary point. To show the effectiveness of the proposed
algorithm, we also present numerical simulations on a non-convex quadratic
program, which confirm the theoretical results.
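A minimal sketch of the idea (the toy sizes, the block-tridiagonal quadratic,
and the box projection are assumptions, not the paper's setting): each node
stores only its own block of the decision variable and, on a random
asynchronous wakeup, takes a block-coordinate gradient step that reads only
the neighboring blocks.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 6, 4                          # nodes and per-node block size (toy)
    # f(x) = sum_i 0.5*x_i'D_i x_i + sum_i x_i'E_i x_{i+1}: possibly
    # indefinite (non-convex), but block i's gradient needs only blocks
    # i-1, i, i+1 -- the partitioned structure the abstract describes.
    D = [0.5 * (M + M.T) for M in rng.standard_normal((n, d, d))]
    E = list(rng.standard_normal((n - 1, d, d)))

    x = list(rng.standard_normal((n, d)))
    step = 1e-2
    for _ in range(20_000):
        i = rng.integers(n)              # asynchronous wakeup of one node
        g = D[i] @ x[i]                  # gradient of f w.r.t. block x_i
        if i > 0:
            g += E[i - 1].T @ x[i - 1]
        if i < n - 1:
            g += E[i] @ x[i + 1]
        x[i] = np.clip(x[i] - step * g, -1.0, 1.0)   # projected block step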
A Duality-Based Approach for Distributed Optimization with Coupling Constraints
In this paper we consider a distributed optimization scenario in which a set
of agents has to solve a convex optimization problem with separable cost
function, local constraint sets and a coupling inequality constraint. We
propose a novel distributed algorithm based on a relaxation of the primal
problem and an elegant exploration of duality theory. Despite its complex
derivation based on several duality steps, the distributed algorithm has a very
simple and intuitive structure. That is, each node solves a local version of
the relaxed problem and updates suitable dual variables. We prove
the correctness of the algorithm and show its effectiveness via numerical
computations.
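As a toy rendering of the dual machinery (the quadratic costs, the single
linear coupling, and one shared multiplier standing in for the nodes' local
dual variables are all assumptions), consider min sum_i 0.5*||x_i - c_i||^2
subject to sum_i a_i'x_i <= b: each local solve is in closed form, and a
projected dual subgradient step updates the multiplier.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d, b = 5, 3, 1.0
    c = rng.standard_normal((n, d))      # targets of the local quadratics
    a = rng.standard_normal((n, d))      # coefficients of the coupling

    mu = 0.0
    for k in range(1, 2001):
        x = c - mu * a                   # argmin of f_i(x_i) + mu*a_i'x_i
        viol = (a * x).sum() - b         # coupling-constraint violation
        mu = max(0.0, mu + viol / k)     # projected dual subgradient step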
Distributed Partitioned Big-Data Optimization via Asynchronous Dual Decomposition
In this paper we consider a novel partitioned framework for distributed
optimization in peer-to-peer networks. In several important applications the
agents of a network have to solve an optimization problem with two key
features: (i) the dimension of the decision variable depends on the network
size, and (ii) cost function and constraints have a sparsity structure related
to the communication graph. For this class of problems a straightforward
application of existing consensus methods would show two inefficiencies: poor
scalability and redundancy of shared information. We propose an asynchronous
distributed algorithm, based on dual decomposition and coordinate methods, to
solve partitioned optimization problems. We show that, by exploiting the
problem structure, the solution can be partitioned among the nodes, so that
each node just stores a local copy of a portion of the decision variable
(rather than a copy of the entire decision vector) and solves a small-scale
local problem.
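A small sketch of the partitioned storage pattern on a path graph (the
quadratic costs and the closed-form solves are illustrative assumptions): node
i keeps only its own block x[i] and a local copy z[i] of its neighbor's block,
and the dual variable of a randomly awakened edge enforces consistency between
the copy and the neighbor, so no node ever touches the full decision vector.

    import numpy as np

    rng = np.random.default_rng(2)
    n, d = 6, 3
    c = rng.standard_normal((n, d))      # target for each node's own block
    t = rng.standard_normal((n - 1, d))  # target for each neighbor copy

    lam = np.zeros((n - 1, d))           # one dual vector per edge (i, i+1)
    alpha = 0.5
    for _ in range(5000):
        i = rng.integers(n - 1)          # asynchronous wakeup of edge (i, i+1)
        z_i = t[i] - lam[i]              # node i's copy of block x_{i+1}
        x_next = c[i + 1] + lam[i]       # node i+1's own block
        lam[i] += alpha * (z_i - x_next) # dual ascent on z_i = x_{i+1}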
A Primal Decomposition Method with Suboptimality Bounds for Distributed Mixed-Integer Linear Programming
In this paper we deal with a network of agents seeking to solve in a
distributed way Mixed-Integer Linear Programs (MILPs) with a coupling
constraint (modeling a limited shared resource) and local constraints. MILPs
are NP-hard problems and several challenges arise in a distributed framework,
so that looking for suboptimal solutions is of interest. To achieve this goal,
the presence of a linear coupling calls for tailored decomposition approaches.
We propose a fully distributed algorithm based on a primal decomposition
approach and a suitable tightening of the coupling constraints. Agents
repeatedly update local allocation vectors, which converge to an optimal
resource allocation of an approximate version of the original problem. Based on
such allocation vectors, agents are able to (locally) compute a mixed-integer
solution, which is guaranteed to be feasible after a sufficiently large time.
Asymptotic and finite-time suboptimality bounds are established for the
computed solution. Numerical simulations highlight the efficacy of the proposed
methodology. Comment: 57th IEEE Conference on Decision and Control.
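The allocation mechanism can be mimicked on a scalar toy instance (the
closed-form local prices, the pairwise gossip exchange, and the final rounding
are assumptions made to keep the sketch self-contained, not the paper's
tightening scheme): the allocations y[i] always sum to the shared resource b,
resource flows toward agents whose local dual price is higher, and a feasible
mixed-integer point is read off at the end.

    import numpy as np

    rng = np.random.default_rng(3)
    n, b = 6, 2.0
    c = -rng.random(n)                   # reward |c[i]| if agent i activates
    a = 1.0 + rng.random(n)              # resource used if agent i activates

    y = np.full(n, b / n)                # allocations, sum(y) == b throughout
    step = 0.05
    for _ in range(3000):
        # Dual price of  min c_i*x  s.t.  a_i*x <= y_i, 0 <= x <= 1.
        mu = np.where(y < a, -c / a, 0.0)
        i, j = rng.choice(n, size=2, replace=False)   # gossiping pair
        shift = np.clip(step * (mu[i] - mu[j]), -y[i], y[j])
        y[i] += shift                    # resource moves to the higher price,
        y[j] -= shift                    # total allocation stays equal to b
    x = (y >= a).astype(int)             # feasible rounding: a_i*x_i <= y_i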
A duality-based approach for distributed min-max optimization with application to demand side management
In this paper we consider a distributed optimization scenario in which a set
of processors aims at minimizing the maximum of a collection of "separable
convex functions" subject to local constraints. This set-up is motivated by
peak-demand minimization problems in smart grids. Here, the goal is to minimize
the peak value over a finite horizon with: (i) the demand at each time instant
being the sum of contributions from different devices, and (ii) the local
states at different time instants being coupled through local dynamics. The
min-max structure and the double coupling (through the devices and over the
time horizon) make this problem challenging in a distributed set-up (e.g.,
well-known distributed dual decomposition approaches cannot be applied). We
propose a distributed algorithm based on the combination of duality methods and
properties from min-max optimization. Specifically, we derive a series of
equivalent problems by introducing ad-hoc slack variables and by moving back
and forth between primal and dual formulations. To the resulting problem we apply a
dual subgradient method, which turns out to be a distributed algorithm. We
prove the correctness of the proposed algorithm and show its effectiveness via
numerical computations.
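A simplified stand-in conveys the flavor of the derivation (the cheapest-slot
local solve and the multiplicative price update are swapped-in choices, namely
entropic mirror ascent, not the paper's dual subgradient scheme): the epigraph
form of the peak constrains the dual prices to the probability simplex, each
device schedules its energy in the currently cheapest slot, and ascent on the
prices flattens the aggregate profile, tracked here by the running average of
the schedules.

    import numpy as np

    rng = np.random.default_rng(4)
    n, T = 8, 6                          # devices and time slots (toy sizes)
    E = 1.0 + rng.random(n)              # energy each device must deliver

    lam = np.full(T, 1.0 / T)            # dual prices on the simplex
    avg = np.zeros(T)                    # running average of the schedules
    iters, eta = 2000, 0.1
    for _ in range(iters):
        load = np.zeros(T)
        for i in range(n):               # local solve: all energy goes into
            s = np.argmin(lam + 1e-9 * rng.random(T))   # the cheapest slot
            load[s] += E[i]              # (tiny noise breaks price ties)
        avg += load / iters
        lam *= np.exp(eta * load)        # entropic mirror-ascent update
        lam /= lam.sum()                 # renormalize onto the simplex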
Distributed Big-Data Optimization via Block Communications
We study distributed multi-agent large-scale optimization problems, wherein
the cost function is composed of a smooth, possibly nonconvex sum-utility plus a
DC (Difference-of-Convex) regularizer. We consider the scenario where the
dimension of the optimization variables is so large that optimizing and/or
transmitting the entire set of variables could cause unaffordable computation
and communication overhead. To address this issue, we propose the first
distributed algorithm whereby agents optimize and communicate only a portion of
their local variables. The scheme hinges on successive convex approximation
(SCA) to handle the nonconvexity of the objective function, coupled with a
novel block-signal tracking scheme, aiming at locally estimating the average of
the agents' gradients. Asymptotic convergence to stationary solutions of the
nonconvex problem is established. Numerical results on a sparse regression
problem show the effectiveness of the proposed algorithm and the impact of the
block size on its practical convergence speed and communication cost.
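The communication pattern in isolation is easy to visualize (the ring mixing
matrix and the toy sizes are assumptions; the SCA and tracking machinery of
the paper is omitted): in each round all agents exchange a single randomly
selected block, so every message carries only a fraction of the variable, yet
all blocks still reach consensus.

    import numpy as np

    rng = np.random.default_rng(5)
    n, B, d = 6, 4, 3                    # agents, blocks, entries per block
    I = np.eye(n)                        # lazy mixing matrix on a ring graph
    W = 0.5 * I + 0.25 * (np.roll(I, 1, axis=0) + np.roll(I, -1, axis=0))
    x = rng.standard_normal((n, B, d))

    for _ in range(400):
        blk = rng.integers(B)            # the one block exchanged this round
        x[:, blk] = W @ x[:, blk]        # consensus step on that block only
    # every block of every agent is now close to its network-wide average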
Asynchronous Distributed Optimization Via Randomized Dual Proximal Gradient
In this paper we consider distributed optimization problems in which the cost
function is separable, i.e., a sum of possibly non-smooth functions all sharing
a common variable, and can be split into a strongly convex term and a convex
one. The second term is typically used to encode constraints or to regularize
the solution. We propose a class of distributed optimization algorithms based
on proximal gradient methods applied to the dual problem. We show that, by
choosing suitable primal variable copies, the dual problem is itself separable
when written in terms of conjugate functions, and the dual variables can be
stacked into non-overlapping blocks associated with the computing nodes. We
first show that a weighted proximal gradient on the dual function leads to a
synchronous distributed algorithm with local dual proximal gradient updates at
each node. Then, as the main contribution of the paper, we develop
asynchronous versions of the algorithm in which the node updates are triggered
by local timers without any global iteration counter. The algorithms are shown
to be proper randomized block-coordinate proximal gradient updates on the dual
function.
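The core primitive, a randomized block-coordinate proximal gradient step, can
be sketched on a stand-in composite objective (the lasso-type function below
replaces the actual dual function, and the sizes and step size are
assumptions): when a block's local timer fires, only that block takes a
gradient step on the smooth term followed by the proximal step on the
nonsmooth one.

    import numpy as np

    rng = np.random.default_rng(6)
    n_blocks, d = 5, 4                   # dual blocks, one per node (toy)
    A = rng.standard_normal((30, n_blocks * d))
    b = rng.standard_normal(30)
    rho = 0.1                            # weight of the nonsmooth l1 term
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the smooth term

    z = np.zeros(n_blocks * d)
    for _ in range(5000):
        i = rng.integers(n_blocks)       # block i's local timer fires
        sl = slice(i * d, (i + 1) * d)
        g = A[:, sl].T @ (A @ z - b)     # block gradient of 0.5*||Az - b||^2
        w = z[sl] - step * g
        z[sl] = np.sign(w) * np.maximum(np.abs(w) - step * rho, 0.0)  # prox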
Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging
In this paper, we study distributed big-data nonconvex optimization in
multi-agent networks. We consider the (constrained) minimization of the sum of
a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a
convex (possibly) nonsmooth regularizer. Our interest is in big-data problems
wherein there is a large number of variables to optimize. If treated by means
of standard distributed optimization algorithms, these large-scale problems may
be intractable, due to the prohibitive local computation and communication
burden at each node. We propose a novel distributed solution method whereby at
each iteration agents optimize and then communicate (in an uncoordinated
fashion) only a subset of their decision variables. To deal with non-convexity
of the cost function, the novel scheme hinges on Successive Convex
Approximation (SCA) techniques coupled with i) a tracking mechanism
instrumental to locally estimate gradient averages; and ii) a novel block-wise
consensus-based protocol to perform local block-averaging operations and
gradient tracking. Asymptotic convergence to stationary solutions of the
nonconvex problem is established. Finally, numerical results show the
effectiveness of the proposed algorithm and highlight how the block dimension
impacts the communication overhead and practical convergence speed.
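Stripped of the network, the convexification step amounts to minimizing a
strongly convex surrogate: the smooth nonconvex term is linearized and
augmented with a proximal term, while the convex regularizer is kept as is.
The sketch below applies it to an assumed nonconvex sigmoid-fitting cost with
an l1 regularizer, for which each surrogate has a closed-form soft-threshold
minimizer.

    import numpy as np

    rng = np.random.default_rng(7)
    m, d, rho, tau = 5, 10, 0.1, 2.0     # data size, dim, l1 and prox weights
    A = rng.standard_normal((m, d))
    b = rng.random(m)                    # targets in (0, 1) for the sigmoids

    def grad_f(x):
        # Gradient of the nonconvex term f(x) = sum_j (sigmoid(a_j'x) - b_j)^2.
        s = 1.0 / (1.0 + np.exp(-(A @ x)))
        return A.T @ (2.0 * (s - b) * s * (1.0 - s))

    x = np.zeros(d)
    for _ in range(500):
        w = x - grad_f(x) / tau          # minimizer of linearized + prox part
        x = np.sign(w) * np.maximum(np.abs(w) - rho / tau, 0.0)  # l1 prox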