Distributed Online Optimization with Coupled Inequality Constraints over Unbalanced Directed Networks
This paper studies a distributed online convex optimization problem, where
agents in an unbalanced network cooperatively minimize the sum of their
time-varying local cost functions subject to a coupled inequality constraint.
To solve this problem, we propose a distributed dual subgradient tracking
algorithm, called DUST, which optimizes a dual objective by tracking the
primal constraint violations and integrating dual subgradient and push-sum
techniques. Unlike most existing works, we allow the
underlying network to be unbalanced with a column stochastic mixing matrix. We
show that DUST achieves sublinear dynamic regret and constraint violations,
provided that the accumulated variation of the optimal sequence grows
sublinearly. If Slater's condition is additionally imposed, DUST attains a
smaller constraint violation bound than existing methods applicable to
unbalanced networks. Simulations on a plug-in electric vehicle charging
problem demonstrate the superior convergence of DUST.
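The push-sum primitive that DUST integrates can be sketched in a few lines (a minimal illustration on a hypothetical 3-agent digraph, not the authors' code): each agent mixes its value and an auxiliary weight through a column-stochastic matrix, and the de-biased ratio converges to the network average even though the graph is unbalanced.

```python
# Minimal push-sum sketch on an unbalanced digraph (hypothetical example,
# not the DUST algorithm itself). W is column stochastic: column j splits
# agent j's message equally among its out-neighbors and itself.
import numpy as np

W = np.array([
    [1/3, 1/2, 1/2],   # agent 0 receives from agents 1 and 2
    [1/3, 1/2, 0.0],   # agent 1 receives from agent 0
    [1/3, 0.0, 1/2],   # agent 2 receives from agent 0
])
assert np.allclose(W.sum(axis=0), 1.0)      # column stochastic
assert not np.allclose(W.sum(axis=1), 1.0)  # rows do not sum to 1: unbalanced

x = np.array([1.0, 5.0, 9.0])   # local values; the network average is 5.0
w = np.ones(3)                  # auxiliary push-sum weights

for _ in range(200):
    x = W @ x                   # mix values (column stochasticity preserves their sum)
    w = W @ w                   # mix auxiliary weights
z = x / w                       # de-biased ratio estimates

print(z)                        # every entry is close to the average 5.0
```

DUST applies this de-biasing to dual variables while simultaneously tracking the primal constraint violations; the sketch above shows only the consensus building block.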
Distributed optimization for multi-agent system over unbalanced graphs with linear convergence rate
Distributed optimization over unbalanced graphs is an important problem in multi-agent systems. Most of the literature introduces auxiliary variables and uses the Push-Sum scheme to handle the widespread unbalanced graphs whose mixing matrices are only row or column stochastic, but the auxiliary dynamics add calculation and communication overhead. In this paper, based on the in-degree and out-degree information of each agent, we propose a distributed optimization algorithm that reduces the calculation and communication complexity of the conventional Push-Sum scheme. Furthermore, with the aid of small-gain theory, we prove the linear convergence rate of the proposed algorithm.
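A minimal sketch of the local degree information at play (graph and numbers hypothetical, not the paper's algorithm): knowing only its own out-degree, each agent can already scale its outgoing messages so that the resulting mixing matrix is column stochastic, without any global balancing step.

```python
# Hypothetical 4-agent digraph: A[i][j] = 1 if agent j sends to agent i.
import numpy as np

A = np.array([
    [0, 1, 0, 1],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 0],
])
n = A.shape[0]
out_deg = A.sum(axis=0)              # out-degree of each sender (column sums)

W = np.zeros((n, n))
for j in range(n):
    share = 1.0 / (out_deg[j] + 1)   # split among out-neighbors and itself
    W[:, j] = A[:, j] * share
    W[j, j] = share

print(W.sum(axis=0))                 # each column sums to 1: column stochastic
```

Column stochasticity alone still biases plain consensus on unbalanced graphs, which is why Push-Sum adds auxiliary variables; the paper's contribution is to cut that overhead by using the in- and out-degree information directly.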
Distributed robust optimization for multi-agent systems with guaranteed finite-time convergence
A novel distributed algorithm is proposed that converges in finite time to a
feasible consensus solution, globally optimal to a prescribed accuracy, of the
distributed robust convex optimization problem (DRCO) subject to bounded
uncertainty under a uniformly strongly connected network. Firstly, a
distributed lower bounding procedure is developed, which is based on an outer
iterative approximation of the DRCO through the discretization of the compact
uncertainty set into a finite number of points. Secondly, a distributed upper
bounding procedure is proposed, which iteratively approximates the DRCO by
restricting the constraints' right-hand sides with a suitable positive
parameter and enforcing them at finitely many points of the compact
uncertainty set.
The lower and upper bounds of the global optimal objective for the DRCO are
obtained from these two procedures. Thirdly, two distributed termination
methods are proposed to make all agents stop updating simultaneously by
checking whether the gap between the upper and lower bounds reaches a
prescribed accuracy. Fourthly, it is proved that all agents converge in
finite time to a feasible consensus solution that is globally optimal within
that accuracy. Finally, a numerical case study is included to
illustrate the effectiveness of the distributed algorithm.

Comment: Submitted for publication in Automatica.
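The lower/upper bounding idea can be sketched on a toy one-dimensional robust problem (my example, not the paper's case study): enforcing the constraint only at finitely many uncertainty points relaxes the problem and yields a lower bound, while tightening the right-hand side at those points and verifying feasibility over the whole uncertainty set yields an upper bound.

```python
# Toy 1-D illustration of the lower/upper bounding idea: minimize x
# subject to g(x, u) = 1 - u*x <= 0 for every u in [0.5, 1].
# The robust optimum is x* = 2, attained at the worst case u = 0.5.
import numpy as np

def lower_bound(grid):
    # Outer approximation: enforcing the constraint only at grid points
    # relaxes the robust problem, so this can only underestimate x*.
    return 1.0 / grid.min()

def upper_bound(grid, eps):
    # Inner approximation: tighten the right-hand side by eps at the grid
    # points; if the candidate is feasible for the whole uncertainty set,
    # it upper-bounds x*.
    x = (1.0 + eps) / grid.min()
    assert all(1.0 - u * x <= 0 for u in np.linspace(0.5, 1.0, 1001))
    return x

coarse = np.array([0.6, 0.8, 1.0])
print(lower_bound(coarse), upper_bound(coarse, eps=0.5))  # brackets x* = 2

fine = np.linspace(0.5, 1.0, 51)
lb, ub = lower_bound(fine), upper_bound(fine, eps=0.01)
print(ub - lb)   # the gap shrinks as the discretization is refined
```

In the paper these bounds are computed distributedly, and the agents terminate once the gap falls below the target accuracy; the sketch only shows why discretization yields valid two-sided bounds.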