
    Distributed optimization for multi-agent system over unbalanced graphs with linear convergence rate

    Distributed optimization over unbalanced graphs is an important problem in multi-agent systems. Most of the literature, by introducing auxiliary variables, uses the Push-Sum scheme to handle the widespread case of unbalanced graphs with only a row- or column-stochastic weight matrix. However, the introduced auxiliary dynamics add computation and communication overhead. In this paper, based on the in-degree and out-degree information of each agent, we propose a distributed optimization algorithm that reduces the computation and communication complexity of the conventional Push-Sum scheme. Furthermore, with the aid of small-gain theory, we prove the linear convergence rate of the proposed algorithm.
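
A minimal sketch of the plain push-sum averaging that this line of work builds on: agents on a directed ring reach the network-wide average using only a column-stochastic weight matrix, with a correction weight compensating for the graph imbalance. The graph, weights, and names below are illustrative, not taken from the paper, which modifies this scheme to cut its overhead.

```python
import numpy as np

def push_sum_average(C, x0, iters=200):
    """C: column-stochastic weight matrix of a strongly connected digraph.
    x0: initial values, one per agent. Returns the push-sum ratio estimates."""
    x = np.array(x0, dtype=float)   # numerator state (mass being mixed)
    w = np.ones(len(x0))            # correction weights for graph imbalance
    for _ in range(iters):
        x = C @ x                   # each agent pushes shares of its value
        w = C @ w                   # ... and of its correction weight
    return x / w                    # each ratio converges to mean(x0)

# 3-agent directed ring: each agent keeps half and pushes half to its successor.
C = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
est = push_sum_average(C, [1.0, 2.0, 6.0])   # all entries approach 3.0
```

Note that C is column-stochastic but not row-stochastic, which is exactly the unbalanced setting: without the ratio correction x/w, the iterates x alone would not converge to the average.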

    Smoothing algorithms for nonsmooth and nonconvex minimization over the Stiefel manifold

    We consider a class of nonsmooth and nonconvex optimization problems over the Stiefel manifold, where the objective function is the sum of a nonconvex smooth function and a nonsmooth Lipschitz continuous convex function composed with a linear mapping. We propose three numerical algorithms for solving this problem, by combining smoothing methods and some existing algorithms for smooth optimization over the Stiefel manifold. In particular, we approximate the aforementioned nonsmooth convex function by its Moreau envelope in our smoothing methods, and prove that the Moreau envelope has many favorable properties. Thanks to this and the scheme for updating the smoothing parameter, we show that any accumulation point of the solution sequence generated by the proposed algorithms is a stationary point of the original optimization problem. Numerical experiments on building graph Fourier bases are conducted to demonstrate the efficiency of the proposed algorithms.
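
The Moreau envelope used in the smoothing step can be illustrated on the simplest nonsmooth convex function, f(y) = |y|, whose envelope is the Huber function. This one-dimensional sketch (the choice of f and all names are ours, not the paper's) computes the envelope in closed form via the proximal point, showing how a kink is replaced by a smooth quadratic cap.

```python
import numpy as np

def moreau_envelope_abs(x, mu):
    """Moreau envelope of f(y) = |y| with parameter mu > 0.
    env(x) = min_y |y| + (x - y)^2 / (2*mu); the minimizer is the
    soft-threshold (proximal point), and the envelope equals the Huber
    function: x^2/(2*mu) for |x| <= mu, and |x| - mu/2 otherwise."""
    p = np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)  # prox point of |.|
    return np.abs(p) + (x - p) ** 2 / (2.0 * mu)

env_inside = moreau_envelope_abs(0.5, 1.0)   # quadratic branch: 0.5^2/2
env_outside = moreau_envelope_abs(2.0, 1.0)  # linear branch: 2 - 1/2
```

As mu shrinks, the envelope converges to |x| from below while keeping a Lipschitz gradient of modulus 1/mu, which is the property the smoothing-parameter update schedule exploits.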

    Convergence rate analysis of a subgradient averaging algorithm for distributed optimisation with different constraint sets

    We consider a multi-agent setting with agents exchanging information over a network to solve a convex constrained optimisation problem in a distributed manner. We analyse a new algorithm based on local subgradient exchange under undirected time-varying communication. First, we prove asymptotic convergence of the iterates to a minimum of the given optimisation problem for time-varying step-sizes of the form c(k) = η/(k + 1), for some η > 0. We then restrict attention to step-size choices c(k) = η/√(k + 1), η > 0, and establish a convergence rate of O(ln(k)/√k) in objective value. Our algorithm extends currently available distributed subgradient/proximal methods by: (i) accounting for different constraint sets at each node, and (ii) enhancing the convergence speed thanks to a subgradient averaging step performed by the agents. A numerical example demonstrates the efficacy of the proposed algorithm.
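
The general pattern of subgradient steps with per-agent constraint sets can be sketched in a toy two-agent version. Everything below (the local functions, the interval constraints, the full averaging used in place of the paper's network-based averaging step) is illustrative, not the paper's algorithm; it only shows the shape of the iteration with the step-size c(k) = η/√(k + 1) from the abstract.

```python
import numpy as np

def solve(subgrads, projections, x0, eta=1.0, iters=2000):
    """Each agent i: average the iterates, take a local subgradient step,
    then project onto its OWN constraint set (constraints differ per agent)."""
    x = np.array(x0, dtype=float)
    for k in range(iters):
        c = eta / np.sqrt(k + 1)            # diminishing step-size
        avg = x.mean()                      # stand-in for network averaging
        for i in range(len(x)):
            g = subgrads[i](avg)            # local subgradient at the average
            x[i] = projections[i](avg - c * g)
    return x

# f1(x) = |x - 1| with set [0, 2]; f2(x) = |x + 1| with set [-2, 0.5].
subgrads = [lambda x: np.sign(x - 1.0), lambda x: np.sign(x + 1.0)]
projs = [lambda x: np.clip(x, 0.0, 2.0), lambda x: np.clip(x, -2.0, 0.5)]
xs = solve(subgrads, projs, [2.0, -2.0])    # both agents approach x = 0
```

Here x = 0 lies in both constraint sets and minimises f1 + f2 over their intersection, so the two iterates drift together as the step-size decays.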

    Distributed Aggregative Optimization over Multi-Agent Networks

    This paper proposes a new framework for distributed optimization, called distributed aggregative optimization, which allows local objective functions to depend not only on their own decision variables, but also on the average of summable functions of the decision variables of all other agents. To handle this problem, a distributed algorithm, called distributed gradient tracking (DGT), is proposed and analyzed, where the global objective function is strongly convex, and the communication graph is balanced and strongly connected. It is shown that the algorithm converges to the optimal solution at a linear rate. A numerical example is provided to corroborate the theoretical results.
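
DGT builds on the standard gradient-tracking template, which can be sketched for ordinary (non-aggregative) distributed minimization of a sum of quadratics over a balanced network; the problem data, mixing matrix, and step size below are illustrative, not from the paper.

```python
import numpy as np

# Minimize sum_i f_i(x) with f_i(x) = (x - a_i)^2; the optimum is mean(a).
a = np.array([1.0, 3.0, 5.0])
grad = lambda x: 2.0 * (x - a)          # stacked local gradients

# Doubly stochastic mixing matrix of a balanced, strongly connected network.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

alpha = 0.1                             # constant step size
x = np.zeros(3)                         # one scalar iterate per agent
y = grad(x)                             # tracker starts at the local gradients
for _ in range(200):
    x_new = W @ x - alpha * y           # consensus + step along tracked gradient
    y = W @ y + grad(x_new) - grad(x)   # dynamic average tracking of gradients
    x = x_new
# every x_i converges linearly to mean(a) = 3.0
```

The tracking update keeps the network average of y equal to the average of the current local gradients, which is what permits a constant step size and the linear rate; the aggregative variant in the paper additionally tracks the aggregate term entering every local objective.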

    Distributed Algorithms for Computing a Fixed Point of Multi-Agent Nonexpansive Operators

    This paper investigates the problem of finding a fixed point for a global nonexpansive operator under time-varying communication graphs in real Hilbert spaces, where the global operator is separable and composed of an aggregate sum of local nonexpansive operators. Each local operator is only privately accessible to each agent, and all agents constitute a network. To seek a fixed point of the global operator, it is indispensable for agents to exchange local information and update their solution cooperatively. To solve the problem, two algorithms are developed, called distributed Krasnosel'skii-Mann (D-KM) and distributed block-coordinate Krasnosel'skii-Mann (D-BKM) iterations, for which the D-BKM iteration is a block-coordinate version of the D-KM iteration in the sense of randomly choosing and computing only one block-coordinate of local operators at each time for each agent. It is shown that the proposed two algorithms can both converge weakly to a fixed point of the global operator. Meanwhile, the designed algorithms are applied to recover the classical distributed gradient descent (DGD) algorithm, devise a new block-coordinate DGD algorithm, handle a distributed shortest distance problem in the Hilbert space for the first time, and solve linear algebraic equations in a novel distributed approach. Finally, the theoretical results are corroborated by a few numerical examples.
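
The classical centralized Krasnosel'skii-Mann iteration underlying D-KM is x_{k+1} = (1 - λ)x_k + λT(x_k) for a nonexpansive operator T. The operator below (a plane rotation, whose only fixed point is the origin) is an illustrative choice of ours: plain repeated application of T would orbit forever, while the averaged KM iteration converges to the fixed point.

```python
import numpy as np

# T: rotation by 90 degrees in the plane. It is nonexpansive (an isometry)
# and Fix(T) = {0}, but x, T(x), T(T(x)), ... never converges on its own.
T = lambda x: np.array([-x[1], x[0]])

lam = 0.5                               # averaging parameter in (0, 1)
x = np.array([1.0, 1.0])
for _ in range(100):
    x = (1 - lam) * x + lam * T(x)      # Krasnosel'skii-Mann step
# x converges to the unique fixed point (0, 0)
```

Averaging turns the non-convergent isometry into a contraction-like map (here the update matrix has spectral radius 1/√2), which is the mechanism the distributed and block-coordinate versions inherit in the weak-convergence setting.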

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in survey form and in full detail, as well as information on the social program, the venue, special meetings, and more.