
    Distributed Aggregative Optimization over Multi-Agent Networks

    This paper proposes a new framework for distributed optimization, called distributed aggregative optimization, which allows local objective functions to depend not only on each agent's own decision variable but also on the average of summable functions of the decision variables of all other agents. To handle this problem, a distributed algorithm, called distributed gradient tracking (DGT), is proposed and analyzed under the assumptions that the global objective function is strongly convex and the communication graph is balanced and strongly connected. It is shown that the algorithm converges to the optimal solution at a linear rate. A numerical example is provided to corroborate the theoretical results.
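    To make the aggregative-tracking idea concrete, the following is a minimal, hypothetical sketch, not the paper's exact DGT recursion: each agent keeps its own decision variable, a running estimate of the aggregate, and a tracker of the aggregate's gradient, refreshing all three with one round of neighbor averaging per step. The quadratic objectives f_i(x_i, u) = (x_i - a_i)^2 + (u - b_i)^2, the identity aggregation map, and the ring network are illustrative assumptions.

```python
# Hypothetical sketch of an aggregative gradient-tracking update (the exact
# recursion in the paper may differ).  Assumed local objective:
#   f_i(x_i, u) = (x_i - a_i)^2 + (u - b_i)^2,  u = (1/n) * sum_j phi_j(x_j),
# with phi_i(x_i) = x_i, so grad phi_i = 1.
import numpy as np

n, T, alpha = 5, 500, 0.05
rng = np.random.default_rng(0)
a, b = rng.normal(size=n), rng.normal(size=n)

# Doubly stochastic mixing weights on a ring (stand-in for the balanced,
# strongly connected digraph assumed in the paper).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

x = np.zeros(n)          # local decision variables
u = x.copy()             # local estimates of the aggregate sigma(x) = mean(x)
y = 2 * (u - b)          # local trackers of (1/n) * sum_j d f_j / d u

for _ in range(T):
    grad = 2 * (x - a) + y              # d f_i / d x_i + grad phi_i * tracked term
    x_new = x - alpha * grad
    u_new = W @ u + (x_new - x)         # dynamic average consensus on the aggregate
    y_new = W @ y + 2 * (u_new - b) - 2 * (u - b)   # gradient-tracking update
    x, u, y = x_new, u_new, y_new

# Each agent keeps its own x_i; the u_i reach consensus on the aggregate mean(x).
print("x =", np.round(x, 4), " aggregate estimates =", np.round(u, 4))
```

    The two tracking variables are the design choice that matters here: u_i and y_i are updated by consensus plus the increment of the locally measurable quantity, which keeps their network averages equal to the true aggregate and the true aggregate gradient at every step.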

    Distributed Algorithms for Computing a Fixed Point of Multi-Agent Nonexpansive Operators

    This paper investigates the problem of finding a fixed point of a global nonexpansive operator over time-varying communication graphs in real Hilbert spaces, where the global operator is separable and composed of an aggregate sum of local nonexpansive operators. Each local operator is privately accessible only to its own agent, and all agents constitute a network. To seek a fixed point of the global operator, the agents must exchange local information and update their solutions cooperatively. To solve the problem, two algorithms are developed, called the distributed Krasnosel'skiĭ-Mann (D-KM) and distributed block-coordinate Krasnosel'skiĭ-Mann (D-BKM) iterations; the D-BKM iteration is a block-coordinate version of the D-KM iteration in the sense that each agent randomly chooses and computes only one block-coordinate of its local operator at each time. It is shown that both proposed algorithms converge weakly to a fixed point of the global operator. Meanwhile, the designed algorithms are applied to recover the classical distributed gradient descent (DGD) algorithm, devise a new block-coordinate DGD algorithm, handle a distributed shortest-distance problem in Hilbert spaces for the first time, and solve linear algebraic equations in a novel distributed manner. Finally, the theoretical results are corroborated by a few numerical examples.
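    As a rough illustration of the D-KM idea (a hypothetical sketch, not the paper's exact recursion), the agents below seek a common point of overlapping intervals: each local nonexpansive operator is the projection onto one interval, and every step first mixes with neighbors and then applies a relaxed Krasnosel'skiĭ-Mann local update. The complete-graph weights, the interval sets, and the constant relaxation parameter are assumptions made only for this demo.

```python
# Hypothetical sketch of a distributed Krasnosel'skii-Mann (KM) step (the D-KM
# recursion in the paper may differ).  Assumed setup: each agent i holds a
# nonexpansive local operator T_i = projection onto an interval [lo_i, hi_i];
# the network seeks a fixed point of the averaged operator (1/n) * sum_i T_i.
import numpy as np

n, steps = 4, 200
rng = np.random.default_rng(1)

# Overlapping intervals: any point in their intersection [0.5, 1.5] is fixed.
lo = np.array([0.0, 0.5, -1.0, 0.2])
hi = np.array([2.0, 3.0, 1.5, 2.5])

def T_local(i, z):
    """Projection onto [lo_i, hi_i] -- a nonexpansive local operator."""
    return np.clip(z, lo[i], hi[i])

# Doubly stochastic mixing matrix for a complete graph (stand-in for the
# time-varying graphs treated in the paper).
W = np.full((n, n), 1.0 / n)
alpha = 0.5                                  # KM relaxation parameter in (0, 1)

x = rng.normal(scale=5.0, size=n)            # each agent's local estimate
for _ in range(steps):
    y = W @ x                                # consensus (mixing) step
    x = (1 - alpha) * y + alpha * np.array([T_local(i, y[i]) for i in range(n)])

# All local estimates should settle at a point of the intersection [0.5, 1.5].
print("local estimates:", np.round(x, 4))
```

    The block-coordinate variant described in the abstract would differ only in the local step: instead of applying the full T_i, each agent would evaluate one randomly chosen block of it per iteration.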

    Distributed Stochastic Subgradient Optimization Algorithms Over Random and Noisy Networks

    We study distributed stochastic optimization by networked nodes that cooperatively minimize a sum of convex cost functions. The network is modeled by a sequence of time-varying random digraphs, with each node representing a local optimizer and each edge a communication link. We consider the distributed subgradient optimization algorithm with noisy measurements of the local cost functions' subgradients and with additive and multiplicative noises in the information exchanged between each pair of nodes. By the stochastic Lyapunov method, convex analysis, algebraic graph theory, and martingale convergence theory, it is proved that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditionally jointly connected, then the algorithm step sizes can be chosen so that all nodes' states converge to the global optimal solution almost surely.
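    A minimal, hypothetical sketch of the kind of update studied here is shown below, assuming scalar local costs f_i(x) = |x - c_i| (whose sum is minimized at the median of the c_i), a fixed ring in place of the random digraph sequence, and additive noise only; the paper also treats multiplicative link noise and random time-varying graphs. The consensus gain and the subgradient step size both decay, which damps the link noise while still driving the local states together.

```python
# Hypothetical sketch of a distributed stochastic subgradient step with noisy
# subgradient measurements and additive link noise (the paper's algorithm and
# noise model are more general).  Assumed local costs: f_i(x) = |x - c_i|,
# so sum_i f_i is minimized at median(c).
import numpy as np

n, T = 5, 20000
rng = np.random.default_rng(2)
c = np.array([-2.0, 0.0, 1.0, 3.0, 5.0])           # median(c) = 1.0

# Neighbor weights on an undirected ring (a simple stand-in for the
# conditionally balanced, jointly connected random digraphs in the paper).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[i, (i - 1) % n] = 0.5

x = rng.normal(size=n)                              # local states
for k in range(1, T + 1):
    gamma = k ** -0.6                               # consensus gain (damps link noise)
    a = k ** -0.9                                   # subgradient step size
    received = x + 0.1 * rng.normal(size=(n, n))    # each link adds its own noise
    consensus = (W * received).sum(axis=1) - x      # sum_j w_ij (x_j + noise_ij) - x_i
    subgrad = np.sign(x - c) + 0.1 * rng.normal(size=n)   # noisy local subgradients
    x = x + gamma * consensus - a * subgrad

print("local estimates:", np.round(x, 3), "(should be near median(c) = 1.0)")
```

    Letting the subgradient step decay faster than the consensus gain is the usual way to keep disagreement between nodes shrinking while the accumulated link noise stays summable.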