    A randomized primal distributed algorithm for partitioned and big-data non-convex optimization

    In this paper we consider a distributed optimization scenario in which the aggregate objective function to minimize is partitioned, big-data and possibly non-convex. Specifically, we focus on a set-up in which the dimension of the decision variable depends on the network size as well as on the number of local functions, but each local function handled by a node depends only on a (small) portion of the entire optimization variable. This problem set-up has been shown to appear in many interesting network application scenarios. As the main contribution of the paper, we develop a simple, primal distributed algorithm to solve the optimization problem, based on a randomized descent approach, which works under asynchronous gossip communication. We prove that the proposed asynchronous algorithm is a proper, ad-hoc version of a coordinate descent method and thus converges to a stationary point. To show the effectiveness of the proposed algorithm, we also present numerical simulations on a non-convex quadratic program, which confirm the theoretical results.
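
    To make the descent mechanism concrete, here is a minimal sketch of a randomized coordinate-descent step of the kind described above: at each tick one randomly awakened node updates only its own block of the decision variable. The quadratic cost, the scalar blocks and the step size are illustrative assumptions (and the toy problem is convex for readability), not the paper's exact set-up.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy partitioned cost 0.5 x'Ax - b'x with sparse (tridiagonal)
        # coupling: each block of the gradient touches only its neighbors.
        n = 10
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = rng.standard_normal(n)

        x = np.zeros(n)
        step = 0.4                                 # assumed constant step size

        # Idealized asynchronous gossip: one node wakes up at random and
        # descends along its own coordinate only.
        for _ in range(5000):
            i = rng.integers(n)
            x[i] -= step * (A[i] @ x - b[i])       # local block update

        print("gradient norm:", np.linalg.norm(A @ x - b))  # ~0 at a stationary point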

    A Duality-Based Approach for Distributed Optimization with Coupling Constraints

    In this paper we consider a distributed optimization scenario in which a set of agents has to solve a convex optimization problem with a separable cost function, local constraint sets and a coupling inequality constraint. We propose a novel distributed algorithm based on a relaxation of the primal problem and an elegant exploration of duality theory. Despite its complex derivation based on several duality steps, the distributed algorithm has a very simple and intuitive structure: each node solves a local version of the original problem relaxation and updates suitable dual variables. We prove the correctness of the algorithm and show its effectiveness via numerical computations.
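
    The duality mechanism underlying this structure can be illustrated with a small, assumed example (not the paper's relaxation-based algorithm): agents with separable quadratic costs and a single coupling inequality, where each agent minimizes its local Lagrangian in closed form and a dual subgradient step updates the common multiplier.

        import numpy as np

        rng = np.random.default_rng(1)
        N, c = 5, 2.0
        a = rng.uniform(0.5, 1.5, N)     # illustrative local cost parameters

        mu, step = 0.0, 0.1              # multiplier of sum_i x_i <= c
        for _ in range(500):
            # local subproblem argmin_x 0.5*(x - a_i)^2 + mu*x  =  a_i - mu
            x = a - mu
            # projected dual subgradient step on the coupling constraint
            mu = max(0.0, mu + step * (x.sum() - c))

        print("sum(x):", x.sum(), "(budget", c, ")  multiplier:", mu)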

    Distributed Partitioned Big-Data Optimization via Asynchronous Dual Decomposition

    In this paper we consider a novel partitioned framework for distributed optimization in peer-to-peer networks. In several important applications the agents of a network have to solve an optimization problem with two key features: (i) the dimension of the decision variable depends on the network size, and (ii) the cost function and constraints have a sparsity structure related to the communication graph. For this class of problems a straightforward application of existing consensus methods would exhibit two inefficiencies: poor scalability and redundancy of shared information. We propose an asynchronous distributed algorithm, based on dual decomposition and coordinate methods, to solve partitioned optimization problems. We show that, by exploiting the problem structure, the solution can be partitioned among the nodes, so that each node just stores a local copy of a portion of the decision variable (rather than a copy of the entire decision vector) and solves a small-scale local problem.
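
    As an illustration of the storage saving (a two-node, assumed toy, not the paper's algorithm): a single component shared by two neighbors is kept as two local copies, and an edge dual variable prices their disagreement, so neither node ever stores the full decision vector.

        # Dual decomposition on one shared component between two neighbors.
        a1, a2 = 1.0, 3.0            # illustrative local cost parameters
        lam, step = 0.0, 0.2         # edge dual variable and step size

        for _ in range(300):
            y1 = a1 - lam            # node 1: argmin 0.5*(y - a1)^2 + lam*y
            y2 = a2 + lam            # node 2: argmin 0.5*(y - a2)^2 - lam*y
            lam += step * (y1 - y2)  # dual ascent on the copy constraint y1 = y2

        print("copies:", y1, y2, " (agree at (a1 + a2) / 2 =", (a1 + a2) / 2, ")")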

    A Primal Decomposition Method with Suboptimality Bounds for Distributed Mixed-Integer Linear Programming

    In this paper we deal with a network of agents seeking to solve, in a distributed way, Mixed-Integer Linear Programs (MILPs) with a coupling constraint (modeling a limited shared resource) and local constraints. MILPs are NP-hard problems and several challenges arise in a distributed framework, so that looking for suboptimal solutions is of interest. To achieve this goal, the presence of a linear coupling calls for tailored decomposition approaches. We propose a fully distributed algorithm based on a primal decomposition approach and a suitable tightening of the coupling constraints. Agents repeatedly update local allocation vectors, which converge to an optimal resource allocation of an approximate version of the original problem. Based on such allocation vectors, agents are able to (locally) compute a mixed-integer solution, which is guaranteed to be feasible after a sufficiently large time. Asymptotic and finite-time suboptimality bounds are established for the computed solution. Numerical simulations highlight the efficacy of the proposed methodology. (57th IEEE Conference on Decision and Control)
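
    The allocation-update mechanism can be sketched on a continuous toy (an assumed simplification: the mixed-integer local problems and the constraint tightening are omitted). Each agent holds an allocation y[i] of a shared budget, solves its local problem, and reports the price of its allocation constraint; allocations then drift toward the agents with the highest prices.

        import numpy as np

        rng = np.random.default_rng(2)
        N, budget = 4, 3.0
        a = rng.uniform(1.0, 2.0, N)       # local targets; sum(a) > budget binds
        y = np.full(N, budget / N)         # initial resource allocation
        step = 0.1

        for _ in range(400):
            # local problem min_x 0.5*(x - a_i)^2 s.t. x <= y_i has
            # allocation-constraint multiplier mu_i = max(a_i - y_i, 0)
            mu = np.maximum(a - y, 0.0)
            y = y + step * mu              # subgradient step on the master problem
            y += (budget - y.sum()) / N    # re-project onto sum(y) = budget

        print("allocation:", np.round(y, 3),
              " prices:", np.round(np.maximum(a - y, 0.0), 3))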

    A duality-based approach for distributed min-max optimization with application to demand side management

    In this paper we consider a distributed optimization scenario in which a set of processors aims at minimizing the maximum of a collection of separable convex functions subject to local constraints. This set-up is motivated by peak-demand minimization problems in smart grids. Here, the goal is to minimize the peak value over a finite horizon with: (i) the demand at each time instant being the sum of contributions from different devices, and (ii) the local states at different time instants being coupled through local dynamics. The min-max structure and the double coupling (through the devices and over the time horizon) make this problem challenging in a distributed set-up (e.g., well-known distributed dual decomposition approaches cannot be applied). We propose a distributed algorithm based on the combination of duality methods and properties from min-max optimization. Specifically, we derive a series of equivalent problems by introducing ad-hoc slack variables and by going back and forth between primal and dual formulations. On the resulting problem we apply a dual subgradient method, which turns out to be a distributed algorithm. We prove the correctness of the proposed algorithm and show its effectiveness via numerical computations.
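
    The slack-variable step mentioned above is, in essence, the epigraph trick: min_x max_i g_i(x) becomes min {t : g_i(x) <= t for all i}, whose dual places a weight vector on the probability simplex. A small assumed example of the resulting saddle-point dynamics (not the paper's distributed algorithm):

        import numpy as np

        rng = np.random.default_rng(3)
        m, n = 6, 2
        anchors = rng.standard_normal((m, n))

        def g(x):                           # m convex pieces; their max is the "peak"
            return 0.5 * np.sum((x - anchors) ** 2, axis=1)

        x = np.zeros(n)
        lam = np.full(m, 1.0 / m)           # dual weights on the simplex
        for _ in range(3000):
            x -= 0.1 * (x - lam @ anchors)  # gradient of sum_i lam_i g_i(x)
            lam *= np.exp(0.1 * g(x))       # exponentiated-gradient ascent
            lam /= lam.sum()

        print("peak value max_i g_i(x):", g(x).max())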

    Distributed Big-Data Optimization via Block Communications

    We study distributed multi-agent large-scale optimization problems, wherein the cost function is composed of a smooth, possibly nonconvex sum-utility plus a DC (Difference-of-Convex) regularizer. We consider the scenario where the dimension of the optimization variables is so large that optimizing and/or transmitting the entire set of variables could cause unaffordable computation and communication overhead. To address this issue, we propose the first distributed algorithm whereby agents optimize and communicate only a portion of their local variables. The scheme hinges on successive convex approximation (SCA) to handle the nonconvexity of the objective function, coupled with a novel block-signal tracking scheme aiming at locally estimating the average of the agents' gradients. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Numerical results on a sparse regression problem show the effectiveness of the proposed algorithm and the impact of the block size on its practical convergence speed and communication cost.
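
    The tracking ingredient can be sketched in isolation (assumed simplifications: full variables instead of blocks, a plain gradient step instead of the SCA surrogate). Each agent mixes with its neighbors through a doubly stochastic matrix W and maintains a tracker Y of the network-average gradient:

        import numpy as np

        rng = np.random.default_rng(4)
        N, n = 5, 3
        a = rng.standard_normal((N, n))    # agent i's cost: 0.5*||x - a_i||^2
        W = np.full((N, N), 1.0 / N)       # mixing matrix (complete graph)

        def grads(X):
            return X - a                   # stacked local gradients

        X = np.zeros((N, n))
        Y = grads(X)                       # tracker starts at the local gradients
        step = 0.2
        for _ in range(200):
            X_new = W @ X - step * Y               # consensus + descent step
            Y = W @ Y + grads(X_new) - grads(X)    # dynamic average tracking
            X = X_new

        print("consensus spread:", np.ptp(X, axis=0).max())
        print("agent 0 estimate:", X[0], " vs mean(a):", a.mean(axis=0))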

    Asynchronous Distributed Optimization Via Randomized Dual Proximal Gradient

    In this paper we consider distributed optimization problems in which the cost function is separable, i.e., a sum of possibly non-smooth functions all sharing a common variable, and can be split into a strongly convex term and a convex one. The second term is typically used to encode constraints or to regularize the solution. We propose a class of distributed optimization algorithms based on proximal gradient methods applied to the dual problem. We show that, by choosing suitable primal variable copies, the dual problem is itself separable when written in terms of conjugate functions, and the dual variables can be stacked into non-overlapping blocks associated with the computing nodes. We first show that a weighted proximal gradient on the dual function leads to a synchronous distributed algorithm with local dual proximal gradient updates at each node. Then, as the main contribution of the paper, we develop asynchronous versions of the algorithm in which the node updates are triggered by local timers without any global iteration counter. The algorithms are shown to be proper randomized block-coordinate proximal gradient updates on the dual function.
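
    The basic building block here is the proximal gradient step itself; a minimal standalone instance (an assumed lasso-type toy, applied to a primal cost rather than to the paper's dual function) looks as follows:

        import numpy as np

        rng = np.random.default_rng(5)
        A = rng.standard_normal((20, 8))
        b = rng.standard_normal(20)
        gamma = 0.5                        # weight of the non-smooth ||x||_1 term

        def soft_threshold(v, t):          # prox of t*||.||_1
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
        x = np.zeros(8)
        for _ in range(500):
            # gradient step on 0.5*||Ax - b||^2, then prox of the l1 term
            x = soft_threshold(x - A.T @ (A @ x - b) / L, gamma / L)

        print("objective:",
              0.5 * np.sum((A @ x - b) ** 2) + gamma * np.abs(x).sum())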

    Randomized dual proximal gradient for large-scale distributed optimization

    In this paper we consider distributed optimization problems in which the cost function is separable (i.e., a sum of possibly non-smooth functions all sharing a common variable) and can be split into a strongly convex term and a convex one. The second term is typically used to encode constraints or to regularize the solution. We propose an asynchronous, distributed optimization algorithm over an undirected topology, based on a proximal gradient update on the dual problem. We show that, by means of a proper choice of primal variables, the dual problem is separable and the dual variables can be stacked into separate blocks. This allows us to show that a distributed gossip update can be obtained by means of a randomized block-coordinate proximal gradient on the dual function.
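
    The randomized block-coordinate flavor can be sketched on the same kind of toy as above (assumed, scalar blocks): per iteration a single randomly chosen coordinate takes a proximal gradient step, mirroring how one gossiping node updates its own dual block while the others stay put.

        import numpy as np

        rng = np.random.default_rng(6)
        A = rng.standard_normal((20, 8))
        b = rng.standard_normal(20)
        gamma, n = 0.5, 8

        def soft_threshold(v, t):          # prox of t*||.||_1
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        Li = np.sum(A ** 2, axis=0)        # per-coordinate Lipschitz constants
        x = np.zeros(n)
        for _ in range(4000):
            i = rng.integers(n)            # one randomly awakened block
            g_i = A[:, i] @ (A @ x - b)    # partial gradient of the smooth term
            x[i] = soft_threshold(x[i] - g_i / Li[i], gamma / Li[i])

        print("objective:",
              0.5 * np.sum((A @ x - b) ** 2) + gamma * np.abs(x).sum())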