
    Distributed Multi-Agent Optimization with State-Dependent Communication

    We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents, and the probability with which the links are available depends on the states of the agents. In this paper, we study a projected multi-agent subgradient algorithm under state-dependent communication. The algorithm involves each agent performing a local averaging to combine his estimate with the other agents' estimates, taking a subgradient step along his local objective function, and projecting the estimates on his local constraint set. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm, when used with a constant stepsize, may cause the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a "disagreement metric" between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that the agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
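    The per-agent iteration described above (average, subgradient step, project) is easy to sketch. Below is a minimal illustrative Python sketch, not the paper's code: it uses a fixed doubly stochastic averaging matrix in place of the paper's random, state-dependent links, box constraints as a stand-in for general local constraint sets, and a diminishing stepsize, since the abstract notes that a constant stepsize may cause the estimates to diverge.

```python
import numpy as np

def projected_multiagent_subgradient(x0, subgrads, boxes, W, stepsize, T):
    """Each agent i: averages estimates with row i of W, takes a
    subgradient step on its local objective f_i, and projects onto its
    local box constraint (a stand-in for a general constraint set)."""
    x = np.array(x0, dtype=float)      # (m, n): one estimate per agent
    m = x.shape[0]
    for k in range(T):
        v = W @ x                      # local averaging of estimates
        for i in range(m):
            g = subgrads[i](v[i])      # subgradient of f_i at the average
            lo, hi = boxes[i]
            x[i] = np.clip(v[i] - stepsize(k) * g, lo, hi)  # projection
    return x

# Toy example: agents minimize (x-1)^2 and (x+1)^2 over [-2, 2];
# the global optimum of the summed objective is x = 0.
subgrads = [lambda z: 2 * (z - 1.0), lambda z: 2 * (z + 1.0)]
boxes = [(-2.0, 2.0), (-2.0, 2.0)]
W = np.array([[0.5, 0.5], [0.5, 0.5]])  # fixed doubly stochastic weights
x = projected_multiagent_subgradient(
    np.array([[2.0], [-2.0]]), subgrads, boxes, W,
    stepsize=lambda k: 1.0 / (k + 1), T=500)
print(x)  # both agents' estimates approach 0
```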

    Primal Recovery from Consensus-Based Dual Decomposition for Distributed Convex Optimization

    Dual decomposition has been successfully employed in a variety of distributed convex optimization problems solved by a network of computing and communicating nodes. Often, when the cost function is separable but the constraints are coupled, the dual decomposition scheme involves local parallel subgradient calculations and a global subgradient update performed by a master node. In this paper, we propose a consensus-based dual decomposition to remove the need for such a master node and still enable the computing nodes to generate an approximate dual solution for the underlying convex optimization problem. In addition, we provide a primal recovery mechanism to allow the nodes to have access to approximate near-optimal primal solutions. Our scheme is based on a constant stepsize choice, and convergence of the dual and primal objectives is achieved up to a bounded error floor that depends on the stepsize and on the number of consensus steps among the nodes.
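    The structure of such a scheme can be illustrated on a small assumed problem (this is a hedged sketch of the general idea, not the paper's algorithm): minimize sum_i (x_i - c_i)^2 subject to the coupling constraint sum_i x_i = b. Each node keeps a local copy of the dual variable, runs a few consensus rounds on it in place of the master node's update, and recovers an approximate primal solution via running averages.

```python
import numpy as np

def consensus_dual_decomposition(c, b, W, alpha, n_consensus, T):
    """Sketch: min sum_i (x_i - c_i)^2  s.t.  sum_i x_i = b.
    lam[i] is node i's local dual estimate; consensus rounds with the
    averaging matrix W replace the master node; alpha is the constant
    stepsize; x_avg is the ergodic average used for primal recovery."""
    m = len(c)
    lam = np.zeros(m)                 # local dual estimates
    x_avg = np.zeros(m)               # running averages (primal recovery)
    for t in range(1, T + 1):
        # Local primal minimization: argmin_x (x - c_i)^2 + lam_i * x
        x = c - lam / 2.0
        # Local dual subgradient of the coupling constraint
        g = x - b / m
        lam = lam + alpha * g         # constant-stepsize dual update
        for _ in range(n_consensus):  # consensus on the dual variables
            lam = W @ lam
        x_avg += (x - x_avg) / t      # running average of primal iterates
    return x_avg, lam

c = np.array([1.0, 2.0, 3.0]); b = 3.0
W = np.full((3, 3), 1.0 / 3.0)        # complete-graph averaging matrix
x_hat, lam = consensus_dual_decomposition(c, b, W, alpha=0.05,
                                          n_consensus=2, T=2000)
print(x_hat)  # approaches [0, 1, 2], the constrained optimum
```

    Consistent with the abstract, the residual error of such a scheme shrinks with the stepsize and with the number of consensus steps per iteration.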

    A Distributed Newton Method for Network Utility Maximization

    Most existing work uses dual decomposition and subgradient methods to solve Network Utility Maximization (NUM) problems in a distributed manner, but these methods suffer from slow convergence. This work develops an alternative fast-converging distributed Newton-type algorithm for solving network utility maximization problems with self-concordant utility functions. By using novel matrix splitting techniques, both primal and dual updates for the Newton step can be computed using iterative schemes in a decentralized manner with limited information exchange. Similarly, the stepsize can be obtained via an iterative consensus-based averaging scheme. We show that even when the Newton direction and the stepsize in our method are computed within some error (due to finite truncation of the iterative schemes), the resulting objective function value still converges superlinearly to an explicitly characterized error neighborhood. Simulation results demonstrate significant convergence rate improvement of our algorithm relative to the existing subgradient methods based on dual decomposition.
    Comment: 27 pages, 4 figures, LIDS report, submitted to CDC 201
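    The matrix-splitting idea at the core of such methods can be shown in isolation (an illustrative stand-in, not the paper's exact scheme): an inexact Newton direction d ≈ -H⁻¹g is computed by splitting H = D + R with D diagonal and iterating d ← D⁻¹(-g - Rd), so each coordinate of d is updated using only its own row of H, the kind of locality that enables a decentralized implementation with limited information exchange.

```python
import numpy as np

def split_newton_direction(H, g, n_iter=100):
    """Inexact Newton direction via a Jacobi-style matrix splitting:
    H = D + R (D diagonal), iterate d <- D^{-1}(-g - R d). Truncating
    the iteration gives the Newton direction 'within some error', as
    in the abstract."""
    Dinv = 1.0 / np.diag(H)
    R = H - np.diag(np.diag(H))
    d = np.zeros_like(g)
    for _ in range(n_iter):
        d = Dinv * (-g - R @ d)       # fixed-point iteration
    return d

# Diagonally dominant Hessian so this particular splitting converges.
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
g = np.array([1.0, -2.0, 0.5])
d = split_newton_direction(H, g)
print(np.allclose(H @ d, -g, atol=1e-8))  # True: d solves H d = -g
```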

    An Online Parallel and Distributed Algorithm for Recursive Estimation of Sparse Signals

    In this paper, we consider a recursive estimation problem for linear regression where the signal to be estimated admits a sparse representation and measurement samples are only sequentially available. We propose a convergent parallel estimation scheme that consists of solving a sequence of ℓ1-regularized least-squares problems approximately. The proposed scheme is novel in three aspects: i) all elements of the unknown vector variable are updated in parallel at each time instance, and the convergence speed is much faster than state-of-the-art schemes which update the elements sequentially; ii) both the update direction and stepsize of each element have simple closed-form expressions, so the algorithm is suitable for online (real-time) implementation; and iii) the stepsize is designed to accelerate convergence yet does not suffer from the parameter tuning commonly required in the literature. Both centralized and distributed implementation schemes are discussed. The attractive features of the proposed algorithm are also demonstrated numerically.
    Comment: Part of this work has been presented at The Asilomar Conference on Signals, Systems, and Computers, Nov. 201
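    A generic online, fully parallel ℓ1 scheme of this flavor can be sketched as follows (an ISTA-style stand-in, not the paper's exact update or stepsize rule; the trace-based stepsize proxy is an assumption for the example): accumulate the least-squares sufficient statistics recursively as samples arrive, then update every coordinate in parallel with one closed-form proximal-gradient (soft-thresholding) step.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding, the prox operator of tau*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def online_parallel_l1(samples, n, lam=0.1, n_inner=1):
    """Online l1-regularized least squares. Measurements (a_t, y_t)
    arrive sequentially; Q = sum a a^T and r = sum y a are updated
    recursively, and all n coordinates of x are updated in parallel."""
    Q = np.zeros((n, n)); r = np.zeros(n); x = np.zeros(n); t = 0
    for a, y in samples:
        t += 1
        Q += np.outer(a, a)           # recursive update of sum a a^T
        r += y * a                    # recursive update of sum y a
        L = np.trace(Q) / t + 1e-12   # crude stepsize proxy (assumption)
        for _ in range(n_inner):
            grad = (Q @ x - r) / t    # gradient of the averaged LS loss
            x = soft_threshold(x - grad / L, lam / L)  # parallel update
    return x

# Toy run: recover a 2-sparse vector from noisy streaming measurements.
rng = np.random.default_rng(0)
n = 20
x_true = np.zeros(n); x_true[[3, 11]] = [1.0, -2.0]
stream = ((a, a @ x_true + 0.01 * rng.standard_normal())
          for a in rng.standard_normal((300, n)))
print(np.round(online_parallel_l1(stream, n, lam=0.05), 2))
```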