
    Distributed Basis Pursuit

    We propose a distributed algorithm for solving the optimization problem Basis Pursuit (BP). BP finds the least L1-norm solution of the underdetermined linear system Ax = b and is used, for example, for reconstruction in compressed sensing. Our algorithm solves BP on a distributed platform such as a sensor network and is designed to minimize communication between nodes. The algorithm only requires the network to be connected, has no notion of a central processing node, and no node has access to the entire matrix A at any time. We consider two scenarios in which either the columns or the rows of A are distributed among the compute nodes. Our algorithm, named D-ADMM, is a decentralized implementation of the alternating direction method of multipliers. We show through numerical simulation that our algorithm requires considerably less communication between the nodes than state-of-the-art algorithms.
    Comment: Preprint of the journal version of the paper; IEEE Transactions on Signal Processing, Vol. 60, Issue 4, April, 201

    Privacy-Preserving Decentralized Optimization and Event Localization

    This dissertation considers decentralized optimization and its applications. On the one hand, we address privacy preservation for decentralized optimization, where N agents cooperatively minimize the sum of N convex functions private to these individual agents. In most existing decentralized optimization approaches, participating agents exchange and disclose their states explicitly, which may not be desirable when the states contain sensitive information of individual agents. The problem is more acute when adversaries exist that try to steal information from other participating agents. To address this issue, we first propose two privacy-preserving decentralized optimization approaches, based on ADMM (alternating direction method of multipliers) and the subgradient method, respectively, by leveraging partially homomorphic cryptography. To our knowledge, this is the first time that cryptographic techniques are incorporated in a fully decentralized setting to enable privacy preservation in decentralized optimization in the absence of any third party or aggregator. To facilitate the incorporation of encryption in a fully decentralized manner, we also introduce a new ADMM that allows time-varying penalty matrices, and we rigorously prove that it has a convergence rate of O(1/t). However, given that encryption-based algorithms unavoidably bring extra computational and communication overhead to real-time optimization [61], we then propose another novel privacy solution for decentralized optimization, based on function decomposition and ADMM, that enables privacy without incurring large communication or computational overhead. On the other hand, we address the application of decentralized optimization to the event localization problem, which plays a fundamental role in many wireless sensor network applications such as environmental monitoring, homeland security, medical treatment, and health care. The event localization problem is essentially a non-convex and non-smooth problem.
We address this problem in two ways. First, a completely decentralized solution based on augmented Lagrangian methods and ADMM is proposed to solve the non-smooth and non-convex problem directly, rather than using conventional convex relaxation techniques. However, this algorithm requires the target event to be within the convex hull of the deployed sensors. To address this issue, we propose two further scalable distributed algorithms based on ADMM and convex relaxation, which do not require the target event to be within the convex hull of the deployed sensors. Simulation results confirm the effectiveness of the proposed algorithms.
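To illustrate the underlying setting only — N agents cooperatively minimizing a sum of private local costs by mixing with neighbors — here is a minimal decentralized gradient sketch on a ring of five agents with scalar quadratic costs (problem data, mixing weights, and step-size schedule are all assumed, and none of the dissertation's privacy mechanisms appear here):

```python
import numpy as np

# Five agents on a ring; agent i privately holds a_i and the local cost
# f_i(x) = (x - a_i)^2.  The network-wide minimizer of sum_i f_i is mean(a).
N = 5
a = np.array([1.0, 3.0, 5.0, 7.0, 9.0])

# Doubly stochastic mixing matrix: each agent averages with its two neighbors
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

x = np.zeros(N)  # x[i] is agent i's local estimate of the common decision
for t in range(1, 20001):
    grads = 2.0 * (x - a)            # local gradients, computed privately
    x = W @ x - grads / (t + 10)     # mix with neighbors, then descend
```

Note that the agents here exchange their states `x[i]` in the clear at every iteration — precisely the disclosure that the dissertation's encryption-based and function-decomposition-based schemes are designed to avoid.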

    A Distributed Asynchronous Method of Multipliers for Constrained Nonconvex Optimization

    This paper presents a fully asynchronous and distributed approach for tackling optimization problems in which both the objective function and the constraints may be nonconvex. In the considered network setting each node is active upon triggering of a local timer and has access only to a portion of the objective function and to a subset of the constraints. In the proposed technique, based on the method of multipliers, each node performs, when it wakes up, either a descent step on a local augmented Lagrangian or an ascent step on the local multiplier vector. Nodes realize when to switch from the descent step to the ascent one through an asynchronous distributed logic-AND, which detects when all the nodes have reached a predefined tolerance in the minimization of the augmented Lagrangian. It is shown that the resulting distributed algorithm is equivalent to a block coordinate descent for the minimization of the global augmented Lagrangian. This allows one to extend the properties of the centralized method of multipliers to the considered distributed framework. Two application examples are presented to validate the proposed approach: a distributed source localization problem and the parameter estimation of a neural network.
    Comment: arXiv admin note: substantial text overlap with arXiv:1803.0648
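The descent/ascent alternation in the method of multipliers can be illustrated on a toy *centralized* equality-constrained problem: descend on the augmented Lagrangian until a tolerance is met (the role played by the distributed logic-AND in the paper), then take one ascent step on the multiplier. A minimal sketch with an assumed example problem, not the paper's asynchronous algorithm:

```python
import numpy as np

# Toy problem: min (x - 2)^2 + (y - 1)^2  s.t.  x + y = 2
# (optimum: x = 1.5, y = 0.5, multiplier lam = 1).
rho = 10.0   # penalty parameter of the augmented Lagrangian
lam = 0.0    # multiplier estimate
z = np.array([0.0, 0.0])  # decision variables (x, y)

def grad_L(z, lam):
    # gradient of L_rho(z, lam) = f(z) + lam*h(z) + (rho/2)*h(z)^2,
    # with constraint function h(z) = x + y - 2
    h = z[0] + z[1] - 2.0
    return np.array([2.0 * (z[0] - 2.0) + lam + rho * h,
                     2.0 * (z[1] - 1.0) + lam + rho * h])

for _ in range(50):                        # outer loop: multiplier updates
    for _ in range(2000):                  # inner loop: descent on L_rho
        g = grad_L(z, lam)
        if np.linalg.norm(g) < 1e-9:       # tolerance reached: switch phases
            break
        z = z - 0.01 * g
    lam = lam + rho * (z[0] + z[1] - 2.0)  # ascent step on the multiplier
```

In the paper's distributed version, the inner minimization is carried out block-wise by asynchronous nodes, and the switch from the descent phase to the ascent phase is triggered only once the logic-AND confirms that every node has met its local tolerance.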