
    Distributed Big-Data Optimization via Block Communications

    We study distributed multi-agent large-scale optimization problems in which the cost function is the sum of a smooth, possibly nonconvex, sum-utility term and a DC (Difference-of-Convex) regularizer. We consider the scenario where the dimension of the optimization variable is so large that optimizing and/or transmitting the entire set of variables would incur unaffordable computation and communication overhead. To address this issue, we propose the first distributed algorithm in which agents optimize and communicate only a portion of their local variables. The scheme hinges on successive convex approximation (SCA) to handle the nonconvexity of the objective function, coupled with a novel block-signal tracking scheme that locally estimates the average of the agents' gradients. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Numerical results on a sparse regression problem show the effectiveness of the proposed algorithm and the impact of the block size on its practical convergence speed and communication cost.
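
    The block-wise update-and-communicate pattern described in this abstract can be conveyed with a toy sketch. The Python snippet below is illustrative only, not the paper's algorithm: it runs consensus sparse regression on a ring graph, uses an l1 term where the paper allows a general DC regularizer, and uses a soft-threshold step where the paper solves an SCA subproblem. All names and parameters (n_agents, n_blocks, step, lam) are assumptions chosen for the demo.

    import numpy as np

    # Toy sketch of "optimize and communicate only one block per round":
    # each round, agents update a single coordinate block, average ONLY that
    # block with neighbors, and run gradient tracking on that block alone.
    rng = np.random.default_rng(0)
    n_agents, dim, n_blocks = 5, 20, 4
    blocks = np.array_split(np.arange(dim), n_blocks)
    x_star = rng.standard_normal(dim) * (rng.random(dim) < 0.3)  # sparse ground truth
    A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
    b = [Ai @ x_star + 0.01 * rng.standard_normal(10) for Ai in A]
    lam, step = 0.1, 0.01

    W = np.zeros((n_agents, n_agents))            # ring-graph averaging weights
    for i in range(n_agents):
        W[i, [i, (i - 1) % n_agents, (i + 1) % n_agents]] = 1.0 / 3.0

    def grad(i, x):                               # local smooth gradient
        return A[i].T @ (A[i] @ x - b[i])

    def soft(v, t):                               # proximal step for the l1 term
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x = np.zeros((n_agents, dim))
    y = np.stack([grad(i, x[i]) for i in range(n_agents)])  # gradient-tracking variable

    for k in range(500):
        blk = blocks[k % n_blocks]                # only this block is touched this round
        x_new = x.copy()
        x_new[:, blk] = soft(x[:, blk] - step * y[:, blk], step * lam)
        x_new[:, blk] = W @ x_new[:, blk]         # communicate ONLY the selected block
        g_old = np.stack([grad(i, x[i])[blk] for i in range(n_agents)])
        g_new = np.stack([grad(i, x_new[i])[blk] for i in range(n_agents)])
        y[:, blk] = W @ y[:, blk] + g_new - g_old # block-wise average-gradient tracking
        x = x_new

    Shrinking the block size reduces the per-round communication payload at the cost of touching each coordinate less often, which is the speed/cost trade-off the abstract's experiments explore.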

    Nested Distributed Gradient Methods with Adaptive Quantized Communication

    In this paper, we consider minimizing a sum of local convex objective functions in a distributed setting where communication can be costly. We propose and analyze a class of nested distributed gradient methods with adaptive quantized communication (NEAR-DGD+Q). We show the effect of performing multiple quantized communication steps on the rate of convergence and on the size of the neighborhood of convergence, and prove R-linear convergence to the exact solution as the number of consensus steps increases under adaptive quantization. We test the performance of the method, as well as some practical variants, on quadratic functions, and show the effects of multiple quantized communication steps in terms of iterations/gradient evaluations, communication, and cost.
    Comment: 9 pages, 2 figures. arXiv admin note: text overlap with arXiv:1709.0299
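
    A minimal sketch of the nested "compute, then take several quantized communication steps" pattern follows. It is a hedged illustration under assumed parameters (quadratic local costs, a complete-graph mixing matrix W, step size alpha, and a geometric quantization schedule), not the NEAR-DGD+Q method's tuned implementation.

    import numpy as np

    # Each outer iteration: one local gradient step, then several consensus
    # rounds where agents exchange QUANTIZED iterates; the quantizer's
    # resolution is refined adaptively across iterations.
    rng = np.random.default_rng(1)
    n, dim = 4, 3
    Q = [q.T @ q + np.eye(dim) for q in (rng.standard_normal((dim, dim)) for _ in range(n))]
    p = [rng.standard_normal(dim) for _ in range(n)]
    W = np.full((n, n), 1.0 / n)                  # doubly stochastic mixing (complete graph)
    alpha = 0.05

    def quantize(v, delta):
        return delta * np.round(v / delta)        # uniform quantizer with resolution delta

    x = np.zeros((n, dim))
    for k in range(200):
        grads = np.stack([Q[i] @ x[i] - p[i] for i in range(n)])
        y = x - alpha * grads                     # local gradient step
        rounds = min(k + 1, 5)                    # number of nested consensus rounds grows
        delta_k = max(0.5 ** k, 1e-12)            # adaptively finer quantization
        for _ in range(rounds):                   # multiple quantized communication steps
            y = W @ quantize(y, delta_k)
        x = y

    Coarse quantization saves bits per message but enlarges the neighborhood of convergence; refining the quantizer while increasing the number of consensus rounds is what recovers exact convergence in the analysis.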

    On the Connectivity of Unions of Random Graphs

    Graph-theoretic tools and techniques have seen wide use in the multi-agent systems literature, and the unpredictable nature of some multi-agent communications has been successfully modeled using random communication graphs. Across both network control and network optimization, a common assumption is that the union of agents' communication graphs is connected across any finite interval of some prescribed length, and some convergence results explicitly depend upon this length. Despite the prevalence of this assumption and the prevalence of random graphs in studying multi-agent systems, to the best of our knowledge, there has not been a study dedicated to determining how many random graphs must be in a union before it is connected. To address this point, this paper solves two related problems. The first bounds the number of random graphs required in a union before its expected algebraic connectivity exceeds the minimum needed for connectedness. The second bounds the probability that a union of random graphs is connected. The random graph model used is the Erdős–Rényi model, and, in solving these problems, we also bound the expectation and variance of the algebraic connectivity of unions of such graphs. Numerical results for several use cases are given to supplement the theoretical developments made.
    Comment: 16 pages, 3 tables; accepted to 2017 IEEE Conference on Decision and Control (CDC)
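
    The question the paper bounds analytically can be probed empirically with a short Monte Carlo experiment: keep unioning independent Erdős–Rényi samples until the algebraic connectivity, the second-smallest eigenvalue of the graph Laplacian, becomes positive. The parameters n, p, and trials below are arbitrary choices for illustration, and the printed average is an empirical estimate, not one of the paper's bounds.

    import numpy as np

    # Monte Carlo sketch: how many G(n, p) samples must be unioned before the
    # union is connected, checked via the algebraic connectivity lambda_2.
    rng = np.random.default_rng(2)
    n, p, trials = 20, 0.05, 200

    def er_adjacency(n, p):
        U = rng.random((n, n)) < p
        A = np.triu(U, 1)
        return (A | A.T).astype(float)            # symmetric 0/1 adjacency matrix

    def algebraic_connectivity(A):
        L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
        return np.sort(np.linalg.eigvalsh(L))[1]  # lambda_2 (Fiedler value)

    counts = []
    for _ in range(trials):
        A, k = np.zeros((n, n)), 0
        while algebraic_connectivity(A) <= 1e-9:  # union graphs until connected
            A = np.maximum(A, er_adjacency(n, p)) # graph union = OR of edge sets
            k += 1
        counts.append(k)
    print(f"mean number of graphs until connected: {np.mean(counts):.2f}")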

    Bregman Proximal Gradient Algorithm with Extrapolation for a class of Nonconvex Nonsmooth Minimization Problems

    In this paper, we consider an accelerated method for solving nonconvex and nonsmooth minimization problems. We propose a Bregman Proximal Gradient algorithm with extrapolation (BPGe). This algorithm extends and accelerates the Bregman Proximal Gradient algorithm (BPG), which circumvents the restrictive global Lipschitz gradient continuity assumption needed in Proximal Gradient algorithms (PG). The BPGe algorithm is more general than the recently introduced Proximal Gradient algorithm with extrapolation (PGe) and, thanks to the extrapolation step, converges faster than the BPG algorithm. Analyzing its convergence, we prove that, with properly chosen parameters, any limit point of the sequence generated by BPGe is a stationary point of the problem. Moreover, assuming the Kurdyka-Łojasiewicz property, we prove that the whole sequence generated by BPGe converges to a stationary point. Finally, to illustrate the potential of the new method, we apply BPGe to two important practical problems that arise in many fundamental applications and that do not satisfy the global Lipschitz gradient continuity assumption: Poisson linear inverse problems and quadratic inverse problems. In these tests, the accelerated BPGe algorithm shows faster convergence than BPG, yielding an interesting new algorithm.
    Comment: Preprint submitted for publication, February 14, 201
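
    To make the Bregman step concrete, here is a minimal sketch for the Poisson linear inverse problem mentioned above, using the Burg entropy h(x) = -sum_j log x_j as the Bregman kernel; with that kernel the update has the closed form x_new = 1/(1/y + lam*grad_f(y)). The extrapolation weight theta, the safeguards, and the zero nonsmooth term are simplifying assumptions, so this is a sketch of the idea rather than the authors' exact BPGe implementation.

    import numpy as np

    # Bregman proximal gradient with extrapolation for
    # f(x) = sum((Ax) - b*log(Ax)) over x > 0, with Burg entropy kernel,
    # so no global Lipschitz gradient continuity is required.
    rng = np.random.default_rng(3)
    m, d = 40, 10
    A = rng.random((m, d)) + 0.1                  # nonnegative forward operator
    x_true = rng.random(d) + 0.5
    b = rng.poisson(A @ x_true).astype(float) + 1e-3   # Poisson counts, kept positive

    def grad_f(x):                                # gradient of sum((Ax) - b*log(Ax))
        return A.T @ (1.0 - b / (A @ x))

    lam = 1.0 / b.sum()       # f is (sum b)-smooth relative to the Burg entropy
    theta = 0.3               # extrapolation weight (illustrative choice)
    x_prev = x = np.ones(d)
    for k in range(300):
        y = np.maximum(x + theta * (x - x_prev), 1e-8)  # extrapolate, stay in domain
        # Bregman step: solve grad_h(x_new) = grad_h(y) - lam * grad_f(y),
        # where grad_h(x) = -1/x, giving x_new = 1 / (1/y + lam * grad_f(y)).
        x_prev, x = x, 1.0 / np.maximum(1.0 / y + lam * grad_f(y), 1e-8)

    Setting theta = 0 recovers the plain BPG iteration; the extrapolated point y is what gives the method its acceleration, analogously to how PGe accelerates PG in the Euclidean setting.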