
    Distributed Online Optimization with Coupled Inequality Constraints over Unbalanced Directed Networks

    This paper studies a distributed online convex optimization problem, where agents in an unbalanced network cooperatively minimize the sum of their time-varying local cost functions subject to a coupled inequality constraint. To solve this problem, we propose a distributed dual subgradient tracking algorithm, called DUST, which optimizes a dual objective by tracking the primal constraint violations and integrating dual subgradient and push-sum techniques. Unlike most existing works, we allow the underlying network to be unbalanced, with a column-stochastic mixing matrix. We show that DUST achieves sublinear dynamic regret and constraint violations, provided that the accumulated variation of the optimal sequence grows sublinearly. If the standard Slater's condition is additionally imposed, DUST attains a smaller constraint violation bound than existing alternative methods applicable to unbalanced networks. Simulations on a plug-in electric vehicle charging problem demonstrate the superior convergence of DUST.
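    The combination of dual ascent and push-sum averaging can be illustrated with a small simulation. Below is a minimal sketch of that idea on a toy quadratic problem with a single coupled budget constraint; the problem data, step size, and the simplified dual update are illustrative assumptions, not the exact DUST recursion from the paper.

```python
import numpy as np

# Toy sketch (not the exact DUST recursion): four agents minimize
# sum_i (x_i - theta_i)^2 subject to the coupled budget sum_i x_i <= c,
# over a directed ring whose mixing matrix A is column stochastic.
n, T, c, step = 4, 300, 2.0, 0.05
theta = np.array([0.8, 1.1, 0.6, 1.3])           # local targets (assumed data)
A = np.array([[0.5, 0.0, 0.0, 0.5],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5]])             # columns sum to 1
y = np.zeros(n)                                   # push-sum numerators (dual)
w = np.ones(n)                                    # push-sum weights
lam = np.zeros(n)                                 # de-biased local dual estimates

for t in range(T):
    x = theta - lam / 2.0                         # local primal minimizer of f_i + lam*x_i
    g = x - c / n                                 # local share of the constraint violation
    y = A @ (y + step * g)                        # push: mix dual numerators after ascent
    w = A @ w                                     # push: mix the weights
    lam = np.maximum(y / w, 0.0)                  # de-bias and project onto lambda >= 0

print("x =", np.round(x, 3), " sum(x) =", round(x.sum(), 3), " budget c =", c)
```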

    Push-Pull Based Distributed Primal-Dual Algorithm for Coupled Constrained Convex Optimization in Multi-Agent Networks

    This paper focuses on a distributed coupled constrained convex optimization problem over directed, unbalanced, and time-varying multi-agent networks, where the global objective function is the sum of all agents' private local objective functions, and the decisions of all agents are subject to coupled equality and inequality constraints as well as a compact convex set. Each agent exchanges information only with its neighboring agents, and all agents eventually reach a consensus on decisions that minimize the global objective function under the given constraints. To protect the information privacy of each agent, we first establish the saddle point problem of the constrained convex optimization problem considered in this article, and then, based on the push-pull method, develop a distributed primal-dual algorithm to solve the dual problem. Under Slater's condition, we show that the sequence of points generated by the proposed algorithm converges to a saddle point of the Lagrange function. Moreover, we analyze the iteration complexity of the algorithm.
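    As a point of reference, the push-pull mechanism itself (a row-stochastic matrix mixes decisions, a column-stochastic matrix mixes gradient-tracking variables) can be sketched on an unconstrained toy problem. The saddle-point reformulation and constraint handling from the paper are omitted here, and all problem data, matrices, and the step size are illustrative assumptions.

```python
import numpy as np

# Push-pull building block on an unconstrained toy problem (the paper applies
# it to a saddle-point reformulation with coupled constraints, omitted here):
# minimize sum_i 0.5*(x - b_i)^2 over a directed ring, pulling decisions with a
# row-stochastic R and pushing gradient-tracking variables with a column-stochastic C.
n, T, alpha = 4, 400, 0.05
b = np.array([1.0, 2.0, 3.0, 4.0])               # local data; the optimum is mean(b)
R = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.0, 0.0, 0.5]])             # rows sum to 1
C = R.T                                           # columns sum to 1

grad = lambda z: z - b                            # stacked local gradients
x = np.zeros(n)
y = grad(x)                                       # gradient-tracking variable

for t in range(T):
    x_new = R @ (x - alpha * y)                   # pull: consensus + descent step
    y = C @ y + grad(x_new) - grad(x)             # push: track the average gradient
    x = x_new

print("x =", np.round(x, 3), " optimum =", b.mean())
```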

    Distributed Aggregative Optimization over Multi-Agent Networks

    This paper proposes a new framework for distributed optimization, called distributed aggregative optimization, which allows local objective functions to depend not only on the agents' own decision variables, but also on the average of summable functions of the decision variables of all other agents. To handle this problem, a distributed algorithm, called distributed gradient tracking (DGT), is proposed and analyzed under the assumptions that the global objective function is strongly convex and the communication graph is balanced and strongly connected. It is shown that the algorithm converges to the optimal variable at a linear rate. A numerical example is provided to corroborate the theoretical result.
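    A much-simplified version of gradient tracking for an aggregative cost can be written in a few lines. In the sketch below, the quadratic costs, the identity aggregation map phi_i(x_i) = x_i, the ring mixing matrix, and the step size are illustrative assumptions; the exact DGT recursion and its step-size conditions are in the paper.

```python
import numpy as np

# Simplified gradient-tracking sketch for an aggregative cost: agent i holds
# f_i(x_i, u) = (x_i - a_i)^2 + (u - d)^2 with aggregate u = mean(x), i.e.
# phi_i(x_i) = x_i, over a balanced ring with a doubly stochastic W.
n, T, alpha, d = 4, 400, 0.05, 1.0
a = np.array([0.0, 1.0, 2.0, 3.0])                # assumed local data
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0))

x = np.zeros(n)
u = x.copy()                                      # tracker for the aggregate mean(x)
g2 = 2 * (u - d)                                  # local partial gradient w.r.t. u
y = g2.copy()                                     # tracker for the average of those partials

for t in range(T):
    grad = 2 * (x - a) + y                        # local gradient (d phi_i / d x_i = 1)
    x_new = x - alpha * grad
    u = W @ u + x_new - x                         # dynamic average consensus on the aggregate
    g2_new = 2 * (u - d)
    y = W @ y + g2_new - g2                       # gradient-tracking update
    x, g2 = x_new, g2_new

print("x =", np.round(x, 3))                      # closed-form optimum here is a - 0.25
```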

    Implicit Tracking-based Distributed Constraint-coupled Optimization

    A class of distributed optimization problems with a globally coupled equality constraint and local constraint sets is studied in this paper. For the special case where local constraint sets are absent, an augmented primal-dual gradient dynamics is proposed and analyzed, but it cannot be implemented distributedly since it requires the violation of the coupled constraint. Building on a fresh understanding of a classical distributed unconstrained optimization algorithm, a novel implicit tracking approach is proposed to track the violation distributedly, which gives rise to the implicit tracking-based distributed augmented primal-dual gradient dynamics (IDEA). A projected variant of IDEA, Proj-IDEA, is further designed to handle the general case where local constraint sets exist. With the aid of Lyapunov stability theory, the convergence of IDEA and Proj-IDEA over undirected and directed graphs is analyzed, respectively. To the best of our knowledge, Proj-IDEA is the first constant step-size distributed algorithm that can solve the studied problem without requiring strict convexity of the local cost functions. Besides, if the local cost functions are strongly convex and smooth, IDEA achieves exponential convergence under a weaker condition on the coupled constraint. Finally, numerical experiments are conducted to corroborate our theoretical results. Comment: in IEEE Transactions on Control of Network Systems, 202
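    The augmented primal-dual gradient dynamics that the paper starts from can be illustrated by a simple Euler discretization on a toy resource-allocation problem. Note that this centralized version uses the global constraint violation directly, which is precisely what IDEA's implicit tracking avoids; the problem data, penalty parameter, and step size below are illustrative assumptions.

```python
import numpy as np

# Euler discretization of a centralized augmented primal-dual gradient flow on
# a toy problem: min sum_i 0.5*(x_i - a_i)^2 subject to sum_i x_i = b.
a = np.array([1.0, 2.0, 3.0, 4.0])
b, rho, h, T = 6.0, 1.0, 0.05, 2000
x = np.zeros_like(a)
lam = 0.0                                         # multiplier of sum_i x_i = b

for t in range(T):
    viol = x.sum() - b                            # global coupled-constraint violation
    x = x - h * ((x - a) + lam + rho * viol)      # augmented primal gradient descent
    lam = lam + h * viol                          # dual gradient ascent

print("x =", np.round(x, 3), " sum(x) =", round(x.sum(), 3), " target b =", b)
```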

    Distributed Online Convex Optimization with an Aggregative Variable

    This paper investigates distributed online convex optimization in the presence of an aggregative variable without any global/central coordinators over a multi-agent network, where each individual agent is only able to access partial information of time-varying global loss functions, thus requiring local information exchanges between neighboring agents. Motivated by many applications in reality, the considered local loss functions depend not only on their own decision variables, but also on an aggregative variable, such as the average of all decision variables. To handle this problem, an Online Distributed Gradient Tracking algorithm (O-DGT) is proposed with exact gradient information and it is shown that the dynamic regret is upper bounded by three terms: a sublinear term, a path variation term, and a gradient variation term. Meanwhile, the O-DGT algorithm is also analyzed with stochastic/noisy gradients, showing that the expected dynamic regret has the same upper bound as the exact gradient case. To our best knowledge, this paper is the first to study online convex optimization in the presence of an aggregative variable, which enjoys new characteristics in comparison with the conventional scenario without the aggregative variable. Finally, a numerical experiment is provided to corroborate the obtained theoretical results
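    An online variant of the aggregative gradient-tracking sketch given earlier in this list conveys the flavor of the setting: the local targets drift over time, so the per-round minimizer moves and the algorithm must chase it, which is what dynamic regret measures. The drifting quadratic losses, drift rate, mixing matrix, and step size below are illustrative assumptions, not the exact O-DGT recursion.

```python
import numpy as np

# Online aggregative gradient-tracking sketch: round-t loss of agent i is
# f_{i,t}(x_i, u) = (x_i - a_i(t))^2 + (u - d)^2 with aggregate u = mean(x),
# and the targets a_i(t) drift slowly over the horizon.
n, T, alpha, d = 4, 500, 0.1, 1.0
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0))
a = lambda t: np.array([0.0, 1.0, 2.0, 3.0]) + 0.002 * t   # slowly drifting targets

x = np.zeros(n)
u = x.copy()                                      # tracker for the aggregate mean(x)
g2 = 2 * (u - d)                                  # local partial gradient w.r.t. u
y = g2.copy()                                     # tracker for the average of those partials

for t in range(T):
    grad = 2 * (x - a(t)) + y                     # round-t local gradient
    x_new = x - alpha * grad
    u = W @ u + x_new - x                         # aggregate tracker update
    g2_new = 2 * (u - d)
    y = W @ y + g2_new - g2                       # gradient-tracking update
    x, g2 = x_new, g2_new

print("x(T) =", np.round(x, 3), " a(T) =", np.round(a(T - 1), 3))
```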