
    Distributed Subgradient Projection Algorithm over Directed Graphs

    We propose a distributed algorithm, termed the Directed-Distributed Projected Subgradient (D-DPS), to solve a constrained optimization problem over a multi-agent network, where the goal of the agents is to collectively minimize the sum of locally known convex functions. Each agent in the network owns only its local objective function, constrained to a commonly known convex set. We focus on the case in which communication between agents is described by a directed network. The D-DPS augments each agent's state with an additional variable to overcome the asymmetry caused by the directed communication network. The convergence analysis shows that D-DPS converges at a rate of $O(\frac{\ln k}{\sqrt{k}})$, where $k$ is the number of iterations.
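    To make the shared structure of such methods concrete, here is a minimal, hedged sketch of the generic consensus-plus-projected-subgradient iteration on a toy problem, using doubly stochastic weights on an undirected ring. It is not D-DPS itself: the auxiliary (surplus) variable that D-DPS adds for directed graphs is omitted, and all problem data are made up.

```python
# Minimal sketch of a generic distributed projected subgradient iteration
# (doubly stochastic weights, undirected ring). NOT the D-DPS update itself:
# D-DPS additionally maintains an auxiliary ("surplus") variable per agent
# to cope with directed, non-doubly-stochastic communication.
import numpy as np

n, T = 5, 2000
targets = np.linspace(-2.0, 2.0, n)          # f_i(x) = |x - targets[i]|
X_lo, X_hi = -1.0, 1.0                        # common constraint set [X_lo, X_hi]

# doubly stochastic weights on a ring (self plus two neighbours)
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

x = np.zeros(n)
for k in range(1, T + 1):
    alpha = 1.0 / np.sqrt(k)                  # diminishing step size
    v = W @ x                                 # consensus (mixing) step
    g = np.sign(v - targets)                  # subgradient of |v - targets[i]|
    x = np.clip(v - alpha * g, X_lo, X_hi)    # projection onto [X_lo, X_hi]

print("agent estimates:", x)                  # all close to the minimizer of sum_i |x - t_i|
```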

    Distributed Subgradient Projection Algorithm over Directed Graphs: Alternate Proof

    We propose the Directed-Distributed Projected Subgradient (D-DPS) algorithm to solve a constrained optimization problem over a multi-agent network, where the goal of the agents is to collectively minimize the sum of locally known convex functions. Each agent in the network owns only its local objective function, constrained to a commonly known convex set. We focus on the case in which communication between agents is described by a directed network. D-DPS incorporates surplus consensus to overcome the asymmetry caused by the directed communication network. The analysis shows the convergence rate to be $O(\frac{\ln k}{\sqrt{k}})$.
    Comment: Disclaimer: This manuscript provides an alternate approach to prove the results in C. Xi and U. A. Khan, "Distributed Subgradient Projection Algorithm over Directed Graphs," IEEE Transactions on Automatic Control. The changes, colored in blue, lead to a tighter result in Theorem 1. arXiv admin note: text overlap with arXiv:1602.0065

    Distributed Optimization over Directed Graphs with Row Stochasticity and Constraint Regularity

    This paper deals with an optimization problem over a network of agents, where the cost function is the sum of the individual objectives of the agents and the constraint set is the intersection of local constraints. Most existing methods employing subgradient and consensus steps for solving this problem require the weight matrix associated with the network to be column stochastic or even doubly stochastic, conditions that can be hard to arrange in directed networks. Moreover, known convergence analyses for distributed subgradient methods vary depending on whether the problem is unconstrained or constrained, and whether the local constraint sets are identical or nonidentical and compact. The main goals of this paper are: (i) removing the common column stochasticity requirement; (ii) relaxing the compactness assumption; and (iii) providing a unified convergence analysis. Specifically, assuming the communication graph to be fixed and strongly connected and the weight matrix to (only) be row stochastic, a distributed projected subgradient algorithm and a variation of it are presented to solve the problem for cost functions that are convex and Lipschitz continuous. Based on a regularity assumption on the local constraint sets, a unified convergence analysis is given that applies to both unconstrained and constrained problems, without assuming compactness of the constraint sets or the existence of an interior point in their intersection. Further, we establish an upper bound on the absolute objective error evaluated at each agent's available local estimate under a nonincreasing step size sequence. This bound allows us to analyze the convergence rate of both algorithms.
    Comment: 14 pages, 3 figures
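    The practical appeal of row stochasticity is that each agent can form its own row of the weight matrix locally, with no coordination on outgoing weights. The snippet below only illustrates this point on a hypothetical 4-node digraph; it does not implement the paper's algorithm.

```python
# Illustrative only: each agent builds its OWN row of the weight matrix by
# uniformly weighting itself and its in-neighbours, which yields a
# row-stochastic W without any coordination. (Column or doubly stochastic
# weights would require agents to agree on their outgoing weights.)
import numpy as np

# directed edges j -> i mean "i receives from j"; a hypothetical 4-node digraph
in_neighbors = {0: [3], 1: [0], 2: [0, 1], 3: [1, 2]}
n = len(in_neighbors)

W = np.zeros((n, n))
for i, nbrs in in_neighbors.items():
    deg = len(nbrs) + 1                 # in-neighbours plus a self-loop
    W[i, i] = 1.0 / deg
    for j in nbrs:
        W[i, j] = 1.0 / deg

assert np.allclose(W.sum(axis=1), 1.0)    # rows sum to one: row stochastic
print(np.allclose(W.sum(axis=0), 1.0))    # columns generally do NOT -> False here
```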

    Distributed Autonomous Online Learning: Regrets and Intrinsic Privacy-Preserving Properties

    Online learning has become increasingly popular for handling massive data. The sequential nature of online learning, however, requires a centralized learner to store data and update parameters. In this paper, we consider online learning with distributed data sources. The autonomous learners update local parameters based on local data sources and periodically exchange information with a small subset of neighbors in a communication network. We derive a regret bound for strongly convex functions that generalizes the work by Ram et al. (2010) for convex functions. Most importantly, we show that our algorithm has intrinsic privacy-preserving properties, and we prove necessary and sufficient conditions for privacy preservation in the network. These conditions imply that for networks with greater-than-one connectivity, a malicious learner cannot reconstruct the subgradients (and sensitive raw data) of other learners, which makes our algorithm appealing in privacy-sensitive applications.
    Comment: 25 pages, 2 figures
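    As a rough illustration of the pattern described above (local updates on private data plus periodic exchanges with neighbors), here is a toy sketch with made-up quadratic losses, a 1/(mu*t) step size typical for strongly convex problems, and an arbitrary averaging period; the regret analysis itself is in the paper.

```python
# Toy sketch: each learner runs online gradient steps on its local strongly
# convex losses and periodically averages with neighbours. Illustrative only;
# the losses, step sizes, and averaging period are made-up choices.
import numpy as np

rng = np.random.default_rng(0)
n, T, mu = 4, 500, 1.0
W = np.full((n, n), 1.0 / n)            # complete-graph averaging for simplicity
theta = np.zeros(n)

for t in range(1, T + 1):
    y = rng.normal(1.0, 0.5, size=n)    # each learner's private observation at time t
    grad = mu * (theta - y)             # gradient of (mu/2)*(theta_i - y_i)^2
    theta = theta - (1.0 / (mu * t)) * grad
    if t % 10 == 0:                     # periodic information exchange
        theta = W @ theta

print(theta)                            # estimates concentrate near the common mean 1.0
```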

    Approximate Projection Methods for Decentralized Optimization with Functional Constraints

    We consider distributed convex optimization problems that involve a separable objective function and nontrivial functional constraints, such as Linear Matrix Inequalities (LMIs). We propose a decentralized and computationally inexpensive algorithm based on the concept of approximate projections. Our algorithm is a consensus-based method in that, at every iteration, each agent performs a consensus update of its decision variables followed by an optimization step on its local objective function and local constraints. Unlike other methods, the last step of our method is not a Euclidean projection onto the feasible set, but rather a subgradient step in the direction that minimizes the local constraint violation. We propose two different averaging schemes to mitigate the disagreements among the agents' local estimates. We show that the algorithms converge almost surely, i.e., every agent agrees on the same optimal solution, allowing the objective and constraint functions to be nondifferentiable as long as their subgradients are bounded. We provide simulation results on a decentralized optimal gossip averaging problem, which involves SDP constraints, to complement our theoretical results.
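    The constraint-violation subgradient step can be illustrated with a classical Polyak-style feasibility step for a single convex constraint g(x) <= 0. The helper name and the quadratic constraint below are made up, and the sketch shows only this one ingredient, not the full algorithm.

```python
# Polyak-style approximate projection step for a constraint g(x) <= 0:
# instead of an exact Euclidean projection, move along a subgradient of g
# by an amount proportional to the current violation. Illustrative sketch.
import numpy as np

def approx_project(x, g, g_subgrad):
    """One feasibility step toward {x : g(x) <= 0}."""
    viol = max(g(x), 0.0)
    if viol == 0.0:
        return x                        # already feasible, nothing to do
    d = g_subgrad(x)
    return x - (viol / np.dot(d, d)) * d

# example: unit-ball constraint g(x) = ||x||^2 - 1 <= 0 (a simple stand-in constraint)
g = lambda x: float(np.dot(x, x) - 1.0)
g_sub = lambda x: 2.0 * x

x = np.array([3.0, 4.0])
for _ in range(30):
    x = approx_project(x, g, g_sub)
print(x, g(x))                          # g(x) is now approximately zero (near-feasible)
```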

    Distributed Subgradient-based Multi-agent Optimization with More General Step Sizes

    A wider selection of step sizes is explored for the distributed subgradient algorithm for multi-agent optimization problems, for both time-invariant and time-varying communication topologies. The square-summability requirement on the step sizes commonly adopted in the literature is removed: the step sizes are only required to be positive, vanishing and non-summable. It is proved that, in both unconstrained and constrained optimization problems, the agents' estimates reach consensus and converge to the optimal solution under this more general choice of step sizes. In both cases the key step is to show that a weighted average of the agents' estimates approaches the optimal solution, though via different arguments. In the unconstrained case, this is proved by analyzing how the distance from the weighted average to the optimal solution evolves and showing that the weighted average comes arbitrarily close to the optimal solution. In the constrained case, it is achieved by analyzing how the distance from the agents' estimates to the optimal solution evolves and exploiting the boundedness of the constraint sets. The optimal convergence of the agents' estimates then follows because consensus is reached in both cases. These results are valid both for a strongly connected time-invariant graph and for time-varying balanced graphs that are jointly strongly connected.
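    A concrete member of the wider step-size class is alpha_k = 1/sqrt(k): it is positive, vanishing and non-summable, yet not square summable (the sum of 1/k diverges), so it is excluded by the classical conditions. The toy sketch below, with made-up quadratic objectives and a complete communication graph, only illustrates that such a choice is workable in practice.

```python
# Toy check: distributed subgradient with alpha_k = 1/sqrt(k), which is
# vanishing and non-summable but NOT square summable, still drives the
# agents close to the optimum of a simple problem. Illustrative only.
import numpy as np

n, T = 4, 20000
a = np.array([0.0, 1.0, 2.0, 3.0])      # f_i(x) = 0.5*(x - a_i)^2, optimum = mean(a)
W = np.full((n, n), 1.0 / n)            # doubly stochastic (complete graph)

x = np.zeros(n)
for k in range(1, T + 1):
    x = W @ x - (1.0 / np.sqrt(k)) * (x - a)   # mixing step + subgradient step
print(x, "target:", a.mean())
```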

    Fenchel Dual Gradient Methods for Distributed Convex Optimization over Time-varying Networks

    Among the large collection of existing distributed algorithms for convex multi-agent optimization, only a handful provide convergence rate guarantees on agent networks with time-varying topologies, and these typically require the problem to be unconstrained. Motivated by this, we develop a family of distributed Fenchel dual gradient methods for solving constrained, strongly convex but not necessarily smooth multi-agent optimization problems over time-varying undirected networks. The proposed algorithms are constructed by applying weighted gradient methods to the Fenchel dual of the multi-agent optimization problem, and they can be implemented in a fully decentralized fashion. We show that the proposed algorithms drive all the agents to both primal and dual optimality asymptotically under a minimal connectivity condition, and at sublinear rates under a standard connectivity condition. Finally, the competitive convergence performance of the distributed Fenchel dual gradient methods is demonstrated via simulations.
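    The dual viewpoint can be illustrated with classical dual decomposition for the edge-consensus reformulation min sum_i f_i(x_i) subject to x_i = x_j on every edge: each dual iteration performs local primal minimizations followed by a gradient ascent step on the edge multipliers. The sketch below uses made-up quadratic objectives and a fixed path graph; the paper's Fenchel dual gradient methods go further (weighted dual gradient steps, constraints, time-varying graphs).

```python
# Toy dual gradient sketch for  min sum_i 0.5*(x_i - a_i)^2  s.t. x_i = x_j
# on the edges of a fixed path graph. Each dual step: local primal minimisation,
# then gradient ascent on the edge multipliers (the consensus residuals).
# Illustrative of the dual-gradient viewpoint only, not the paper's algorithm.
import numpy as np

a = np.array([0.0, 1.0, 2.0, 3.0])
edges = [(0, 1), (1, 2), (2, 3)]        # path graph
lam = np.zeros(len(edges))              # one multiplier per edge
step = 0.3

for _ in range(500):
    # local primal minimisers: x_i = a_i - (B @ lam)_i for quadratic f_i
    s = np.zeros_like(a)
    for e, (i, j) in enumerate(edges):
        s[i] += lam[e]
        s[j] -= lam[e]
    x = a - s
    # dual gradient = consensus residual on each edge
    for e, (i, j) in enumerate(edges):
        lam[e] += step * (x[i] - x[j])

print(x)                                # all entries close to mean(a) = 1.5
```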

    Privacy Preservation in Distributed Subgradient Optimization Algorithms

    Privacy preservation is becoming an increasingly important issue in data mining and machine learning. In this paper, we consider the privacy-preserving features of distributed subgradient optimization algorithms. We first show that a well-known synchronous distributed subgradient optimization algorithm, in which all agents make their optimization updates simultaneously at all times, is not privacy preserving, in the sense that a malicious agent can learn other agents' subgradients asymptotically. We then propose an asynchronous distributed subgradient projection algorithm that does not rely on any existing privacy-preservation technique and in which agents exchange data directly with their neighbors. In contrast to the synchronous algorithm, agents in the new algorithm make their optimization updates asynchronously. The introduced projection operation and the asynchronous update mechanism guarantee that the proposed algorithm is privacy preserving. Moreover, we establish the optimal convergence of the newly proposed algorithm. The proposed privacy-preservation techniques shed light on developing other privacy-preserving distributed optimization algorithms.
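    The vulnerability of the synchronous scheme can be made concrete in a two-agent toy example: if a malicious neighbor observes everything an honest agent receives, it can invert the (publicly known) update rule to recover that agent's subgradient. The weights, step size, and loss below are hypothetical.

```python
# Two-agent illustration of the reconstruction risk in a synchronous scheme:
# if agent 0 (malicious) sees everything agent 1 receives, it can invert the
# public update rule  x1(k+1) = w10*x0(k) + w11*x1(k) - alpha_k*g1(k)
# to recover agent 1's subgradient g1(k). Hypothetical weights and step sizes.
import numpy as np

w10, w11 = 0.5, 0.5
alpha = lambda k: 1.0 / np.sqrt(k)

# what agent 1 actually does (its subgradient depends on its private data)
def agent1_update(x0, x1, k, private_target=2.0):
    g1 = np.sign(x1 - private_target)          # subgradient of |x1 - private_target|
    return w10 * x0 + w11 * x1 - alpha(k) * g1, g1

x0, x1 = 0.0, 1.0
x1_next, true_g1 = agent1_update(x0, x1, k=3)

# what the malicious agent can compute from observed states alone
recovered_g1 = (w10 * x0 + w11 * x1 - x1_next) / alpha(3)
print(true_g1, recovered_g1)                   # both -1.0 (up to floating point)
```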

    Online Distributed Optimization on Dynamic Networks

    This paper presents a distributed optimization scheme over a network of agents in the presence of cost uncertainties and switching communication topologies. Inspired by recent advances in distributed convex optimization, we propose a distributed algorithm based on dual subgradient averaging. The objective of this algorithm is to cooperatively minimize a global cost function. Furthermore, the algorithm adapts the weights on the communication links in the network to the varying reliability of neighboring agents. A convergence rate analysis as a function of the underlying network topology is then presented, followed by simulation results for representative classes of sensor networks.
    Comment: Submitted to The IEEE Transactions on Automatic Control, 201
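    The dual subgradient averaging pattern, in the spirit of distributed dual averaging, keeps a running sum of local subgradients, mixes it over the network, and maps it back to the feasible set through a proximal/projection step. The sketch below uses fixed uniform weights and a box constraint purely as an illustration; the paper's algorithm instead adapts the link weights online.

```python
# Sketch of the dual-averaging pattern: mix the accumulated (sub)gradients over
# the network, then map back to the feasible set via a proximal step (here a
# simple Euclidean projection onto a box). Fixed uniform weights; illustrative only.
import numpy as np

n, T = 4, 3000
targets = np.array([-1.0, 0.0, 1.0, 2.0])     # f_i(x) = |x - targets[i]|
W = np.full((n, n), 1.0 / n)                  # doubly stochastic mixing matrix
z = np.zeros(n)                               # accumulated dual variables
x = np.zeros(n)

for t in range(1, T + 1):
    g = np.sign(x - targets)                  # local subgradients
    z = W @ z + g                             # dual averaging: mix, then accumulate
    alpha = 1.0 / np.sqrt(t)
    x = np.clip(-alpha * z, -5.0, 5.0)        # prox step with psi(x) = 0.5*||x||^2

print(x)                                      # iterates hover around the minimizing region of sum_i |x - t_i|
```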

    Graph Balancing for Distributed Subgradient Methods over Directed Graphs

    We consider a multi-agent optimization problem where a set of agents collectively solves a global optimization problem with the objective function given by the sum of locally known convex functions. We focus on the case when information exchange among agents takes place over a directed network and propose a distributed subgradient algorithm in which each agent performs local processing based on information obtained from its in-neighbors. Our algorithm uses weight balancing to overcome the asymmetries caused by the directed communication network: agents scale their outgoing information with dynamically updated weights that converge to balancing weights of the graph. We show that both the objective function values and the consensus violation, at the ergodic average of the estimates generated by the algorithm, converge with rate $O(\frac{\log T}{\sqrt{T}})$, where $T$ is the number of iterations. A special case of our algorithm provides a new distributed method to compute average consensus over directed graphs.
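    What "balancing weights" means can be checked directly: if pi is the stationary distribution of a row-stochastic matrix P adapted to the digraph, the edge weights w_ij = pi_i * P_ij give every node equal total incoming and outgoing weight. The paper obtains such weights with a distributed, dynamically updated scheme; the snippet below only verifies the balance property centrally on a small made-up digraph.

```python
# Centralised check of what "balancing weights" means on a directed graph:
# with P row stochastic and pi its stationary distribution (pi P = pi),
# the weights w_ij = pi_i * P_ij are balanced: at every node, total incoming
# weight equals total outgoing weight. Illustrative check only; the paper
# computes such weights with a distributed, dynamically updated scheme.
import numpy as np

# hypothetical strongly connected 3-node digraph, uniform weights per out-neighbourhood
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])

# stationary distribution: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

W = pi[:, None] * P                     # w_ij = pi_i * P_ij
print(np.allclose(W.sum(axis=0), W.sum(axis=1)))   # balanced: prints True
```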