17,081 research outputs found

    Distributed Optimization over Directed Graphs with Row Stochasticity and Constraint Regularity

    This paper deals with an optimization problem over a network of agents, where the cost function is the sum of the individual objectives of the agents and the constraint set is the intersection of local constraints. Most existing methods employing subgradient and consensus steps for solving this problem require the weight matrix associated with the network to be column stochastic or even doubly stochastic, conditions that can be hard to arrange in directed networks. Moreover, known convergence analyses for distributed subgradient methods vary depending on whether the problem is unconstrained or constrained, and whether the local constraint sets are identical or nonidentical and compact. The main goals of this paper are: (i) removing the common column stochasticity requirement; (ii) relaxing the compactness assumption; and (iii) providing a unified convergence analysis. Specifically, assuming the communication graph to be fixed and strongly connected and the weight matrix to (only) be row stochastic, a distributed projected subgradient algorithm and a variation of it are presented to solve the problem for cost functions that are convex and Lipschitz continuous. Based on a regularity assumption on the local constraint sets, a unified convergence analysis is given that can be applied to both unconstrained and constrained problems without assuming compactness of the constraint sets or an interior point in their intersection. Further, we establish an upper bound on the absolute objective error evaluated at each agent's available local estimate under a nonincreasing step size sequence. This bound allows us to analyze the convergence rate of both algorithms. Comment: 14 pages, 3 figures
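    As a rough illustration of the update pattern described above (consensus with a row-stochastic weight matrix, then a subgradient step and a projection), here is a minimal sketch on a toy problem; the directed cycle, the objectives f_i(x) = |x - c_i|, and the step sizes are illustrative choices, not the paper's setup. Note that with only row-stochastic weights, a plain scheme like this is known to be biased toward a weighted optimum, which is part of what the paper's analysis addresses.

    ```python
    import numpy as np

    n = 4                                   # number of agents
    c = np.array([1.0, 2.0, 3.0, 4.0])      # per-agent targets: f_i(x) = |x - c_i|
    lo, hi = -5.0, 5.0                      # identical box constraints, for simplicity

    # Row-stochastic weights for a directed cycle 0 -> 1 -> 2 -> 3 -> 0:
    # each row mixes an agent's own value with its single in-neighbor's.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.5             # in-neighbor on the directed cycle

    x = np.zeros(n)                         # one scalar estimate per agent
    for k in range(1, 2001):
        alpha = 1.0 / np.sqrt(k)            # nonincreasing step size, as in the paper
        v = W @ x                           # consensus step (row-stochastic mixing)
        g = np.sign(v - c)                  # subgradient of |v_i - c_i|
        x = np.clip(v - alpha * g, lo, hi)  # subgradient step + projection onto X_i

    print("agent estimates:", x)            # estimates cluster near a consensus point
    ```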

    Zeroth Order Nonconvex Multi-Agent Optimization over Networks

    In this paper, we consider distributed optimization problems over a multi-agent network, where each agent can only partially evaluate the objective function and is allowed to exchange messages with its immediate neighbors. In contrast to existing work on distributed optimization, our focus is on a class of non-convex problems, under the challenging setting where each agent can only access the zeroth-order information (i.e., the functional values) of its local functions. For different types of network topologies, such as undirected connected networks or star networks, we develop efficient distributed algorithms and rigorously analyze their convergence and rate of convergence (to the set of stationary solutions). Numerical results are provided to demonstrate the efficiency of the proposed algorithms.
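    The core primitive in zeroth-order methods of this kind is a gradient estimate built purely from function evaluations. Below is a minimal sketch of a standard two-point estimator (not necessarily the exact estimator or parameters used in the paper), applied to a toy nonconvex function; the smoothing radius mu, the step size, and the objective are illustrative.

    ```python
    import numpy as np

    def zo_grad(f, x, mu=1e-4, rng=np.random.default_rng(0)):
        """Estimate grad f(x) from two function values along a random direction."""
        u = rng.standard_normal(x.shape)
        return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

    f = lambda x: np.sum(x**2) + np.cos(x[0])   # toy (nonconvex) local objective
    x = np.ones(3)
    for k in range(500):
        x -= 0.05 * zo_grad(f, x)               # plain zeroth-order descent
    print(x)                                    # approaches a stationary point
    ```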

    Initialization-free Distributed Algorithms for Optimal Resource Allocation with Feasibility Constraints and its Application to Economic Dispatch of Power Systems

    In this paper, the distributed resource allocation optimization problem is investigated. The allocation decisions are made to minimize the sum of all the agents' local objective functions while satisfying both the global network resource constraint and the local allocation feasibility constraints. Here the data corresponding to each agent in this separable optimization problem, such as the network resources, the local allocation feasibility constraint, and the local objective function, is only accessible to that individual agent and cannot be shared with others, which poses new challenges in this distributed optimization problem. Based on either projection or differentiated projection, two classes of continuous-time algorithms are proposed to solve this distributed optimization problem in an initialization-free and scalable manner. Thus, no re-initialization is required even if the operating environment or network configuration changes, making it possible to achieve a "plug-and-play" optimal operation of networked heterogeneous agents. Algorithm convergence is guaranteed for strictly convex objective functions, and exponential convergence is proved for strongly convex functions without local constraints. The proposed algorithm is then applied to the distributed economic dispatch problem in power grids, to demonstrate how it can achieve the global optimum in a scalable way, even when the generation cost, system load, or network configuration is changing. Comment: 13 pages, 7 figures
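    To make the continuous-time idea concrete, here is a rough Euler discretization of saddle-point-style dynamics for the allocation problem min sum_i f_i(x_i) subject to sum_i x_i = sum_i d_i: each agent runs primal descent toward grad f_i(x_i) = lambda_i while local dual copies lambda_i reach consensus. This is only a sketch in the spirit of the paper (the local feasibility constraints and projection machinery are omitted), and the quadratic costs, Laplacian, and step dt are made-up toy data.

    ```python
    import numpy as np

    n = 3
    a = np.array([1.0, 0.5, 2.0])            # f_i(x) = a_i x^2 + b_i x
    b = np.array([0.2, 0.1, 0.3])
    d = np.array([4.0, 3.0, 3.0])            # local resources; total demand is 10
    L = np.array([[ 2., -1., -1.],           # Laplacian of a connected graph
                  [-1.,  2., -1.],
                  [-1., -1.,  2.]])

    x = np.zeros(n)                          # allocations (any initialization works)
    lam = np.zeros(n)                        # local copies of the dual variable
    z = np.zeros(n)                          # auxiliary states enforcing feasibility

    dt = 0.01
    for _ in range(50000):
        grad = 2 * a * x + b
        x += dt * (lam - grad)                   # primal flow: drive grad f_i -> lam_i
        lam += dt * (-L @ lam - L @ z + d - x)   # dual flow + consensus on lam
        z += dt * (L @ lam)                      # at equilibrium, forces sum(x) = sum(d)

    print("allocation:", x, "total:", x.sum())   # total approaches sum(d) = 10
    ```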

    On the Sublinear Regret of Distributed Primal-Dual Algorithms for Online Constrained Optimization

    This paper introduces consensus-based primal-dual methods for distributed online optimization where the time-varying system objective function $f_t(\mathbf{x})$ is given as the sum of local agents' objective functions, i.e., $f_t(\mathbf{x}) = \sum_i f_{i,t}(\mathbf{x}_i)$, and the system constraint function $\mathbf{g}(\mathbf{x})$ is given as the sum of local agents' constraint functions, i.e., $\mathbf{g}(\mathbf{x}) = \sum_i \mathbf{g}_i(\mathbf{x}_i) \preceq \mathbf{0}$. At each stage, each agent commits to an adaptive decision pertaining only to the past and locally available information, and incurs a new cost function reflecting the change in the environment. Our algorithm uses weighted averaging of the iterates for each agent to keep local estimates of the global constraints and dual variables. We show that the algorithm achieves a regret of order $O(\sqrt{T})$ with the time horizon $T$, in scenarios when the underlying communication topology is time-varying and jointly connected. The regret is measured with respect to both the cost function value and the constraint violation. Numerical results for online routing in wireless multi-hop networks with uncertain channel rates are provided to illustrate the performance of the proposed algorithm.
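    The update pattern behind such methods is a gradient step on the Lagrangian in the primal followed by projected ascent in the dual, with a step size decaying like $1/\sqrt{t}$ to obtain $O(\sqrt{T})$-type regret. The single-node sketch below shows only this pattern; the paper's consensus averaging of constraint estimates and dual variables across agents is omitted, and the random linear costs, box constraint, and step sizes are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T, d = 1000, 2
    x, lam = np.zeros(d), 0.0
    for t in range(1, T + 1):
        c = rng.normal(size=d)               # adversarial-ish linear cost f_t(x) = c.x
        g = lambda x: np.sum(x) - 1.0        # constraint g(x) = sum(x) - 1 <= 0
        eta = 1.0 / np.sqrt(t)               # decaying step size
        grad_L = c + lam * np.ones(d)        # gradient of the Lagrangian in x
        x = np.clip(x - eta * grad_L, -1, 1) # primal step, projected onto a box
        lam = max(0.0, lam + eta * g(x))     # dual ascent, projected onto lam >= 0

    print("final decision:", x, "final dual:", lam)
    ```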

    Distributed Approximate Newton Algorithms and Weight Design for Constrained Optimization

    Motivated by economic dispatch and linearly-constrained resource allocation problems, this paper proposes a class of novel Distributed Approx-Newton algorithms that approximate the standard Newton optimization method. We first develop the notion of an optimal edge weighting for the communication graph over which agents implement the second-order algorithm, and propose a convex approximation for the nonconvex weight design problem. We next build on the optimal weight design to develop a discrete Distributed Approx-Newton algorithm which converges linearly to the optimal solution for economic dispatch problems with unknown cost functions and relaxed local box constraints. For the full box-constrained problem, we develop a continuous Distributed Approx-Newton algorithm which is inspired by first-order saddle-point methods and rigorously prove its convergence to the primal and dual optimizers. A main property of each of these distributed algorithms is that they only require agents to exchange constant-size communication messages, which lends itself to scalable implementations. Simulations demonstrate that the Distributed Approx-Newton algorithms with our weight design have superior convergence properties compared to existing weighting strategies for first-order saddle-point and gradient descent methods. Comment: arXiv admin note: substantial text overlap with arXiv:1703.0786
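    A common way to make a Newton step distributable, plausibly related to what "Approx-Newton" refers to here (the paper's own construction and weight design are not reproduced), is to expand the inverse Hessian in a truncated matrix series in which each extra term costs one more round of neighbor communication. A minimal sketch with a made-up diagonally dominant Hessian:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    B = rng.uniform(0.0, 1.0, size=(5, 5))
    B = (B + B.T) / 2.0
    np.fill_diagonal(B, 0.0)
    H = np.diag(B.sum(axis=1) + 1.0) - B     # SPD, diagonally dominant "Hessian"
    g = rng.normal(size=5)                   # "gradient"

    # Split H = D - B with D = diag(H); then H^{-1} = sum_k (D^{-1} B)^k D^{-1},
    # valid here since the spectral radius of D^{-1} B is below 1. Truncating
    # at K terms gives an approximate Newton direction computable with K rounds
    # of neighbor-to-neighbor messages in a networked setting.
    D_inv = np.diag(1.0 / np.diag(H))
    M = D_inv @ B
    term = D_inv @ g
    direction = term.copy()
    for _ in range(10):                      # K = 10 truncation
        term = M @ term
        direction += term

    print(np.linalg.norm(direction - np.linalg.solve(H, g)))  # truncation error
    ```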

    Multi-agent constrained optimization of a strongly convex function over time-varying directed networks

    We consider cooperative multi-agent consensus optimization problems over both static and time-varying communication networks, where only local communications are allowed. The objective is to minimize the sum of agent-specific, possibly non-smooth, composite convex functions over agent-specific private conic constraint sets; hence, the optimal consensus decision should lie in the intersection of these private sets. Assuming the sum function is strongly convex, we provide convergence rates in suboptimality, infeasibility, and consensus violation, and examine the effect of the underlying network topology on the convergence rates of the proposed decentralized algorithms.

    A primal-dual method for conic constrained distributed optimization problems

    We consider cooperative multi-agent consensus optimization problems over an undirected network of agents, where only those agents connected by an edge can directly communicate. The objective is to minimize the sum of agent-specific composite convex functions over agent-specific private conic constraint sets; hence, the optimal consensus decision should lie in the intersection of these private sets. We provide convergence rates in suboptimality, infeasibility, and consensus violation; examine the effect of the underlying network topology on the convergence rates of the proposed decentralized algorithms; and show how to extend these methods to handle time-varying communication networks and to solve problems with resource sharing constraints.
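    At its core, a primal-dual method for a conic constraint alternates a primal gradient step on the Lagrangian with a dual step projected onto the dual cone. The sketch below uses the simplest cone (the nonnegative orthant, so the projection is max(0, .)) and a toy linear constraint; it illustrates the iteration shape only, not the paper's decentralized algorithm or step-size rules.

    ```python
    import numpy as np

    c = np.array([2.0, -1.0, 0.5])
    grad_f = lambda x: 2.0 * (x - c)          # f(x) = ||x - c||^2
    A, b = np.ones((1, 3)), np.array([1.0])   # conic constraint A x - b <= 0

    x, y = np.zeros(3), np.zeros(1)
    tau = sigma = 0.1                         # illustrative step sizes
    for _ in range(5000):
        x = x - tau * (grad_f(x) + A.T @ y)          # primal gradient step
        y = np.maximum(0.0, y + sigma * (A @ x - b)) # dual step, projected onto K*

    print("x:", x, "constraint value:", (A @ x - b)[0])  # ~0 at the active optimum
    ```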

    Gradient-Free Multi-Agent Nonconvex Nonsmooth Optimization

    In this paper, we consider the problem of minimizing the sum of nonconvex and possibly nonsmooth functions over a connected multi-agent network, where the agents have partial knowledge about the global cost function and can only access the zeroth-order information (i.e., the functional values) of their local cost functions. We propose and analyze a distributed primal-dual gradient-free algorithm for this challenging problem. We show that by appropriately choosing the parameters, the proposed algorithm converges to the set of first-order stationary solutions with a provable global sublinear convergence rate. Numerical experiments demonstrate the effectiveness of our proposed method for optimizing nonconvex and nonsmooth problems over a network. Comment: Long version of CDC paper

    Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging

    In this paper, we study distributed big-data nonconvex optimization in multi-agent networks. We consider the (constrained) minimization of the sum of a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a convex (possibly) nonsmooth regularizer. Our interest is in big-data problems wherein there is a large number of variables to optimize. If treated by means of standard distributed optimization algorithms, these large-scale problems may be intractable, due to the prohibitive local computation and communication burden at each node. We propose a novel distributed solution method whereby at each iteration agents optimize and then communicate (in an uncoordinated fashion) only a subset of their decision variables. To deal with the non-convexity of the cost function, the novel scheme hinges on Successive Convex Approximation (SCA) techniques coupled with i) a tracking mechanism instrumental in locally estimating gradient averages; and ii) a novel block-wise consensus-based protocol to perform local block-averaging operations and gradient tracking. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Finally, numerical results show the effectiveness of the proposed algorithm and highlight how the block dimension impacts the communication overhead and practical convergence speed.
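    The communication-saving idea is that per iteration each agent mixes and updates only one block of its decision vector. A minimal sketch of that pattern on a toy quadratic consensus problem follows; the SCA surrogates, gradient tracking, and uncoordinated block choices of the paper are all omitted, and the averaging matrix, block sizes, and step sizes are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_agents, n_blocks, blk = 4, 10, 50          # decision vector: 10 blocks of 50
    dim = n_blocks * blk
    targets = rng.normal(size=(n_agents, dim))   # f_i(x) = ||x - t_i||^2
    X = np.zeros((n_agents, dim))                # row i: agent i's local copy
    W = np.full((n_agents, n_agents), 0.25)      # doubly stochastic mixing matrix

    for k in range(5000):
        j = rng.integers(n_blocks)               # block updated this round (a real
        sl = slice(j * blk, (j + 1) * blk)       # run would use per-agent choices)
        X[:, sl] = W @ X[:, sl]                  # consensus on the chosen block only
        step = 0.1 / np.sqrt(k + 1)              # diminishing step size
        X[:, sl] -= step * 2.0 * (X[:, sl] - targets[:, sl])

    # All rows approach the average target (the minimizer of the sum), while
    # only one block's worth of data is exchanged per iteration.
    print(np.abs(X - targets.mean(axis=0)).max())
    ```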

    Distributed Subgradient Projection Algorithm over Directed Graphs: Alternate Proof

    We propose Directed-Distributed Projected Subgradient (D-DPS) to solve a constrained optimization problem over a multi-agent network, where the goal of the agents is to collectively minimize the sum of locally known convex functions. Each agent in the network owns only its local objective function, constrained to a commonly known convex set. We focus on the circumstance when communication between agents is described by a \emph{directed} network. D-DPS uses surplus consensus to overcome the asymmetry caused by the directed communication network. The analysis shows the convergence rate to be $O(\frac{\ln k}{\sqrt{k}})$. Comment: Disclaimer: This manuscript provides an alternate approach to prove the results in \textit{C. Xi and U. A. Khan, Distributed Subgradient Projection Algorithm over Directed Graphs, in IEEE Transactions on Automatic Control}. The changes, colored in blue, result in a tighter result in Theorem~1. arXiv admin note: text overlap with arXiv:1602.0065
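    Surplus consensus is the ingredient that restores exact averaging on directed graphs: each agent carries a second "surplus" state, values are mixed with a row-stochastic matrix while surpluses are routed with a column-stochastic one, and the invariant sum(x) + sum(s) pins the limit to the true average. Below is a minimal sketch of this mechanism (not the full D-DPS algorithm) on a directed cycle, with an illustrative coupling gain eps; surplus-consensus analyses require eps to be sufficiently small.

    ```python
    import numpy as np

    n = 5
    # Directed cycle i -> i+1: R (row stochastic) mixes each agent with its
    # in-neighbor; C (column stochastic) forwards surplus to the out-neighbor.
    R = 0.5 * np.eye(n)
    C = 0.5 * np.eye(n)
    for i in range(n):
        R[i, (i - 1) % n] = 0.5
        C[(i + 1) % n, i] = 0.5

    x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])   # initial values; exact average is 4
    s = np.zeros(n)                            # surplus states
    eps = 0.1                                  # small coupling gain (assumption)
    for _ in range(3000):
        x_new = R @ x + eps * s
        s = C @ s - (x_new - x)                # preserves sum(x) + sum(s)
        x = x_new

    print(x)                                   # entries approach the exact average
    ```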