144,395 research outputs found

    Distributed Optimization via Local Computation Algorithms

    We propose a new approach to distributed optimization based on local computation algorithms, an emerging area of theoretical computer science. The approach is fundamentally different from existing methodologies and provides a number of benefits, such as robustness to link failure and adaptivity in dynamic settings. Specifically, we develop an algorithm, LOCO, that, given a convex optimization problem P with n variables and a "sparse" linear constraint matrix with m constraints, provably finds a solution as good as that of the best online algorithm for P using only O(log(n+m)) messages with high probability. The approach is not iterative, and communication is restricted to a localized neighborhood. In addition to analytic results, we show numerically that the performance improvements over classical approaches for distributed optimization are significant; for example, LOCO uses orders of magnitude less communication than ADMM.
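    The abstract describes LOCO only at a high level. As a minimal sketch of the local-computation viewpoint it builds on (a toy illustration, not the authors' algorithm), the Python snippet below answers a query for a single coordinate of a sparse convex program by solving only the subproblem induced by a bounded-radius neighborhood of that variable in the variable-constraint graph. The quadratic objective, the exploration radius, and the helper names are illustrative assumptions.

        # Toy sketch: estimate one coordinate of
        #   min  c.x + 0.5*||x||^2   s.t.  A x <= b
        # from the subproblem induced by a small graph neighborhood of
        # variable j, with all variables outside the neighborhood fixed at 0.
        import numpy as np
        from scipy.optimize import minimize

        def neighborhood(A, j, radius):
            # Variables and constraints within `radius` hops of variable j in
            # the bipartite variable-constraint graph given by A's sparsity.
            vars_, cons = {j}, set()
            for _ in range(radius):
                cons |= {i for i in range(A.shape[0])
                         if np.any(A[i, sorted(vars_)] != 0)}
                vars_ |= {v for i in cons for v in np.nonzero(A[i])[0]}
            return sorted(vars_), sorted(cons)

        def local_query(c, A, b, j, radius=2):
            V, C = neighborhood(A, j, radius)
            Aloc, bloc, cloc = A[np.ix_(C, V)], b[C], c[V]
            obj = lambda z: cloc @ z + 0.5 * z @ z
            constr = ([{"type": "ineq", "fun": lambda z: bloc - Aloc @ z}]
                      if C else [])
            sol = minimize(obj, np.zeros(len(V)), constraints=constr)
            return sol.x[V.index(j)]

    In a distributed implementation, exploring this neighborhood is what bounds the number of messages: on a sufficiently sparse constraint graph the neighborhood, and hence the communication, stays small.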

    Logarithmic Communication for Distributed Optimization in Multi-Agent Systems

    Classically, the design of multi-agent systems is approached using techniques from distributed optimization such as dual descent and consensus algorithms. Such algorithms depend on convergence to global consensus before any individual agent can determine its local action. This leads to challenges with respect to communication overhead and robustness, and improving algorithms along these measures has been a focus of the community for decades. This paper presents a new approach to multi-agent system design based on ideas from the emerging field of local computation algorithms. The framework we develop, LOcal Convex Optimization (LOCO), is the first local computation algorithm for convex optimization problems and can be applied in a wide variety of settings. We demonstrate the generality of the framework via applications to Network Utility Maximization (NUM) and the distributed training of Support Vector Machines (SVMs), providing numerical results that illustrate the improvement over classical distributed optimization approaches in each case.
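    For context on the classical baseline the abstract contrasts LOCO with, the sketch below implements dual descent for a NUM toy instance: each source maximizes a logarithmic utility given the current path price, and link prices are updated by projected gradient ascent on the dual. The routing matrix, capacities, and step size are made-up illustrative values.

        # Classical dual decomposition for Network Utility Maximization:
        #   maximize sum_s log(x_s)  subject to  R x <= cap.
        import numpy as np

        R = np.array([[1.0, 1.0, 0.0],    # routing matrix: links x sources
                      [0.0, 1.0, 1.0]])
        cap = np.array([1.0, 2.0])        # link capacities
        lam = np.ones(R.shape[0])         # link prices (dual variables)
        step = 0.05

        for _ in range(2000):
            q = R.T @ lam                     # path price seen by each source
            x = 1.0 / np.maximum(q, 1e-9)     # source rate: argmax log(x) - q*x
            lam = np.maximum(lam + step * (R @ x - cap), 0.0)  # dual ascent

        print("rates:", x, "prices:", lam)

    The rates are meaningful only after the price iteration has converged network-wide, which is exactly the consensus-before-action bottleneck the abstract describes.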

    Accuracy-aware privacy mechanisms for distributed computation

    Distributed computing systems involve a network of devices or agents that use locally stored private information to solve a common problem. Distributed algorithms fundamentally require communication between devices, leaving the system vulnerable to "privacy attacks" perpetrated by adversarial agents. In this dissertation, we focus on designing privacy-preserving distributed algorithms for (a) solving distributed optimization problems, (b) computing equilibria of network aggregate games, and (c) solving distributed systems of linear equations. Specifically, we propose a privacy definition for distributed computation, "non-identifiability", that allows us to simultaneously guarantee privacy and the accuracy of the computed solution. This definition involves showing that the information observed by the adversary is compatible with several distributed computing problems; the associated ambiguity provides privacy.

    Distributed Optimization: We propose the Function Sharing strategy, which uses correlated random functions to obfuscate private objective functions, followed by a standard distributed optimization algorithm. We characterize a tight graph connectivity condition for proving privacy via non-identifiability of local objective functions. We also prove correctness of our algorithm and show that we can achieve privacy and accuracy simultaneously.

    Network Aggregate Games: We design a distributed Nash equilibrium computation algorithm for network aggregate games. Our algorithm uses locally balanced correlated random perturbations to hide the information shared with neighbors for aggregate estimation. This step is followed by descent along the negative gradient of the local cost function. We show that if the graph of non-adversarial agents is connected and non-bipartite, then our algorithm keeps private local cost information non-identifiable while asymptotically converging to the accurate Nash equilibrium.

    Average Consensus and System of Linear Equations: Finally, we design a finite-time algorithm for solving the average consensus problem over directed graphs with information-theoretic privacy. We use this algorithm to solve a distributed system of linear equations in finite time while protecting the privacy of local equations. We characterize the computation, communication, memory, and iteration costs of our algorithm, along with the graph conditions that guarantee information-theoretic privacy of local data.
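    The Function Sharing strategy admits a compact illustration. In the sketch below (quadratic objectives and a centralized loop standing in for the distributed optimizer are assumptions, not the dissertation's setup), each agent adds a random linear perturbation to its private objective, with the perturbations chosen to cancel across agents, so the sum objective, and hence its optimizer, is unchanged.

        # Function-sharing sketch: obfuscate private objectives
        #   f_i(x) = 0.5 * a_i * (x - t_i)^2
        # with correlated random linear functions s_i(x) = p_i * x, where
        # sum_i p_i = 0, then optimize the sum of the obfuscated functions.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 5
        a = rng.uniform(1, 3, n)      # private curvatures
        t = rng.uniform(-1, 1, n)     # private targets

        p = rng.normal(size=n)
        p -= p.mean()                 # enforce sum_i p_i = 0

        grad = lambda i, x: a[i] * (x - t[i]) + p[i]  # gradient of f_i + s_i

        x = 0.0                       # stand-in for a distributed method
        for _ in range(500):
            x -= 0.01 * sum(grad(i, x) for i in range(n))

        print("recovered optimum:", x, "true optimum:", (a @ t) / a.sum())

    The perturbation shifts each agent's reported gradient, so the private target t_i cannot be recovered from local observations; this is the flavor of ambiguity that the non-identifiability definition formalizes.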

    Distributed Partitioned Big-Data Optimization via Asynchronous Dual Decomposition

    In this paper we consider a novel partitioned framework for distributed optimization in peer-to-peer networks. In several important applications, the agents of a network have to solve an optimization problem with two key features: (i) the dimension of the decision variable depends on the network size, and (ii) the cost function and constraints have a sparsity structure related to the communication graph. For this class of problems, a straightforward application of existing consensus methods would exhibit two inefficiencies: poor scalability and redundancy of shared information. We propose an asynchronous distributed algorithm, based on dual decomposition and coordinate methods, to solve partitioned optimization problems. We show that, by exploiting the problem structure, the solution can be partitioned among the nodes, so that each node stores only a local copy of a portion of the decision variable (rather than a copy of the entire decision vector) and solves a small-scale local problem.
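    A minimal sketch of the partitioned idea on an assumed chain topology (illustrative objectives and constraints, not the paper's algorithm): each node owns one coordinate of the decision vector and the dual variables of its incident coupling constraints, and dual updates wake up one edge at a time to mimic asynchrony.

        # Partitioned dual decomposition on a chain: node i owns x_i with
        # objective 0.5*(x_i - t_i)^2 and shares the coupling constraint
        # x_i + x_{i+1} <= b_i with its right-hand neighbor.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 6
        t = rng.uniform(0, 2, n)      # local targets
        b = np.full(n - 1, 1.5)       # edge capacities
        lam = np.zeros(n - 1)         # one dual variable per coupling edge
        step = 0.2

        def x_of(i):
            # Node i's primal update given the duals of its incident edges.
            incident = (lam[i - 1] if i > 0 else 0.0) \
                     + (lam[i] if i < n - 1 else 0.0)
            return t[i] - incident

        for _ in range(5000):
            e = rng.integers(n - 1)                  # one edge wakes up (async)
            viol = x_of(e) + x_of(e + 1) - b[e]      # constraint violation
            lam[e] = max(lam[e] + step * viol, 0.0)  # projected dual ascent

        x = np.array([x_of(i) for i in range(n)])
        print("x:", np.round(x, 3),
              "max violation:", (x[:-1] + x[1:] - b).max().round(3))

    Each node touches only its own coordinate and at most two dual variables, which is the storage and communication saving the partitioned framework targets.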
    • …