
    On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems

    In this paper we propose a distributed dual gradient algorithm for minimizing linearly constrained separable convex problems and analyze its rate of convergence. In particular, we prove that under the assumptions of strong convexity and Lipschitz continuity of the gradient of the primal objective function, a global error bound property holds for the dual problem. Using this error bound property we devise a fully distributed dual gradient scheme, i.e. a gradient scheme based on a weighted step size, for which we derive a global linear rate of convergence for both dual and primal suboptimality and for primal feasibility violation. Many real applications, e.g. distributed model predictive control, network utility maximization, or optimal power flow, can be posed as linearly constrained separable convex problems for which dual gradient type methods from the literature have only a sublinear convergence rate. In the present paper we prove for the first time that linear convergence can in fact be achieved by such algorithms when they are used to solve these applications. Numerical simulations are also provided to confirm our theory.
    Comment: 14 pages, 4 figures; submitted to Automatica, February 2014. arXiv admin note: substantial text overlap with arXiv:1401.4398. We revised the paper, adding more simulations and checking for typos.
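    The setting above is the separable problem min_x sum_i f_i(x_i) subject to sum_i A_i x_i = b, where each agent updates its own block and a dual multiplier enforces the coupling constraint. The following is a minimal sketch of a plain dual gradient loop for strongly convex quadratic local objectives; the function name, the fixed scalar step, and the quadratic form are illustrative assumptions, not the paper's weighted-step scheme.

```python
import numpy as np

# Hypothetical quadratic instance: min sum_i 0.5*x_i'Q_i x_i + q_i'x_i
# subject to the coupling constraint sum_i A_i x_i = b.
def dual_gradient(Q, q, A, b, step=0.1, iters=500):
    lam = np.zeros(b.shape[0])          # dual multiplier for the coupling constraint
    for _ in range(iters):
        # Each agent minimizes its local Lagrangian independently.
        x = [np.linalg.solve(Q_i, -(q_i + A_i.T @ lam))
             for Q_i, q_i, A_i in zip(Q, q, A)]
        residual = sum(A_i @ x_i for A_i, x_i in zip(A, x)) - b
        lam = lam + step * residual     # dual ascent on the constraint residual
    return x, lam
```

    For a small enough step size (below the inverse of the dual gradient's Lipschitz constant), the constraint residual drives both primal feasibility violation and suboptimality toward zero, which is the quantity whose linear decay the paper establishes under its error bound property.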

    Distributed Optimization: Convergence Conditions from a Dynamical System Perspective

    This paper explores the fundamental properties of distributed minimization of a sum of functions, with each function known only to one node, under a pre-specified level of node knowledge and computational capacity. We define the optimization information each node receives from its objective function, the neighboring information each node receives from its neighbors, and the computational capacity each node can exploit in controlling its state. It is proven that there exist a neighboring information rule and a control law guaranteeing global optimal consensus if and only if the solution sets of the local objective functions have a nonempty intersection, for fixed strongly connected graphs. We then show that for any tolerated error, there is a control law guaranteeing global optimal consensus within this error for fixed, bidirectional, connected graphs under mild conditions. For time-varying graphs, we show that optimal consensus can always be achieved as long as the graph is uniformly jointly strongly connected and the nonempty intersection condition holds. The results illustrate that a nonempty intersection of the local optimal solution sets is a critical condition for successful distributed optimization for a large class of algorithms.
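    The paper studies its control laws abstractly through existence conditions rather than a fixed formula. As a concrete reference point, here is a hedged sketch of the standard distributed gradient/consensus update on a fixed graph, i.e. the generic form such control laws refine; the weight matrix W, step size alpha, and function names are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def consensus_gradient_step(x, W, grads, alpha):
    """x: (n, d) node states; W: (n, n) row-stochastic mixing weights
    matching the communication graph; grads: per-node gradient callables."""
    mixed = W @ x                                        # neighboring-information step
    local = np.stack([g(xi) for g, xi in zip(grads, x)])
    return mixed - alpha * local                         # optimization-information step
```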

    Network Inference via the Time-Varying Graphical Lasso

    Many important problems can be modeled as a system of interconnected entities, where each entity is recording time-dependent observations or measurements. In order to spot trends, detect anomalies, and interpret the temporal dynamics of such data, it is essential to understand the relationships between the different entities and how these relationships evolve over time. In this paper, we introduce the time-varying graphical lasso (TVGL), a method of inferring time-varying networks from raw time series data. We cast the problem in terms of estimating a sparse time-varying inverse covariance matrix, which reveals a dynamic network of interdependencies between the entities. Since dynamic network inference is a computationally expensive task, we derive a scalable message-passing algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem efficiently. We also discuss several extensions, including a streaming algorithm to update the model and incorporate new observations in real time. Finally, we evaluate our TVGL algorithm on both real and synthetic datasets, obtaining interpretable results and outperforming state-of-the-art baselines in terms of both accuracy and scalability.
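    The ADMM decomposition referenced above alternates a log-determinant proximal step with elementwise shrinkage. Below is a minimal sketch of that building block for a single covariance snapshot, i.e. the classic graphical lasso ADMM; TVGL additionally couples consecutive snapshots with a temporal penalty, which this sketch omits. Function names and the default parameters (lam, rho, iters) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(A, k):
    return np.sign(A) * np.maximum(np.abs(A) - k, 0.0)

def graphical_lasso_admm(S, lam, rho=1.0, iters=200):
    """ADMM for one sparse inverse-covariance estimate:
    min -logdet(Theta) + tr(S @ Theta) + lam * ||Theta||_1."""
    p = S.shape[0]
    Z = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(iters):
        # Theta-update: closed form via eigendecomposition of rho*(Z - U) - S.
        d, V = np.linalg.eigh(rho * (Z - U) - S)
        theta_eigs = (d + np.sqrt(d**2 + 4 * rho)) / (2 * rho)
        Theta = V @ np.diag(theta_eigs) @ V.T
        Z = soft_threshold(Theta + U, lam / rho)   # Z-update: elementwise shrinkage
        U = U + Theta - Z                          # scaled dual update
    return Z
```

    Zeros in the returned estimate correspond to absent edges in the inferred network; the time-varying version adds a penalty on differences between consecutive Z matrices to encourage smooth or piecewise-constant evolution.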

    Reciprocity-driven Sparse Network Formation

    A resource exchange network is considered, where exchanges among nodes are based on reciprocity. Peers receive from the network an amount of resources commensurate with their contribution. We assume the network is fully connected and impose sparsity constraints on peer interactions. Finding the sparsest exchanges that achieve a desired level of reciprocity is in general NP-hard. To capture near-optimal allocations, we introduce variants of the Eisenberg-Gale convex program with sparsity penalties. We derive decentralized algorithms whereby peers approximately compute the sparsest allocations by reweighted l1 minimization. The algorithms implement new proportional-response dynamics with nonlinear pricing. The trade-off between sparsity and reciprocity and the properties of graphs induced by sparse exchanges are examined.
    Comment: 19 pages
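    Proportional-response dynamics re-split each peer's capacity in proportion to what it received from each neighbor in the previous round. The sketch below shows only that plain dynamic, without the paper's sparsity penalties, reweighted l1 terms, or nonlinear pricing; the array layout and names (X, capacity) are assumptions for illustration.

```python
import numpy as np

def proportional_response(X, capacity, iters=100, eps=1e-12):
    """X[i, j]: amount peer i currently sends to peer j (zero diagonal);
    capacity[i]: total resources peer i can contribute per round."""
    for _ in range(iters):
        received = X.T                                   # received[i, j] = what j sent to i
        share = received / (received.sum(axis=1, keepdims=True) + eps)
        X = capacity[:, None] * share                    # reciprocate proportionally
    return X
```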