Finite-Time Convergent Algorithms for Time-Varying Distributed Optimization
This paper focuses on finite-time (FT) convergent distributed algorithms for
solving time-varying distributed optimization (TVDO). The objective is to
minimize the sum of local time-varying cost functions subject to possibly
time-varying constraints through the coordination of multiple agents in finite time.
We first provide a unified approach for designing finite/fixed-time convergent
algorithms to solve centralized time-varying optimization, where an auxiliary
dynamics is introduced to achieve prescribed performance. Then, two classes of
TVDO are investigated, namely unconstrained distributed consensus optimization
and distributed optimal resource allocation problems (DORAP) with both
time-varying cost functions and coupled equality constraints. For the former,
based on nonsmooth analysis, a continuous-time distributed discontinuous
dynamics with FT convergence is proposed, built on an extended zero-gradient-sum
method with a local auxiliary subsystem. Different from the existing methods,
the proposed algorithm does not require the initial state of each agent to be
the optimizer of the local cost function. Moreover, the provided algorithm has
a simpler structure, requires no estimation of global information, and can be used
for TVDO with nonidentical Hessians. Then, an FT convergent distributed
dynamics is further obtained for time-varying DORAP by dual transformation.
In particular, the inverses of the Hessians are not required from the dual
perspective, which reduces the computational complexity significantly. Finally,
two numerical examples are conducted to verify the proposed algorithms.
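As a rough illustration of the centralized design step above (a finite-time feedback on the gradient plus compensation of the cost's explicit time variation), the following Python sketch tracks the minimizer of a simple time-varying cost. The cost f(x, t) = 0.5 (x - sin t)^2, the fractional-power feedback, and the forward-Euler discretization are illustrative assumptions, not the paper's exact dynamics.

    import numpy as np

    # Sketch of a finite-time tracking dynamics for centralized time-varying
    # optimization. Assumptions: scalar state, cost f(x, t) = 0.5 * (x - sin t)^2,
    # forward-Euler integration; the fractional-power correction term is one
    # common way to obtain finite-time convergence of the gradient and is not
    # taken from the paper.

    ALPHA = 0.5   # fractional power in (0, 1) giving finite-time convergence
    GAIN = 4.0    # correction gain
    DT = 1e-3     # integration step

    def grad(x, t):       # df/dx
        return x - np.sin(t)

    def hess(x, t):       # d^2 f / dx^2
        return 1.0

    def dgrad_dt(x, t):   # partial derivative of the gradient w.r.t. time
        return -np.cos(t)

    x, t = 3.0, 0.0
    for _ in range(10_000):
        g = grad(x, t)
        # correction (finite-time feedback on the gradient) plus prediction
        # (compensation of the explicit time variation of the cost)
        x_dot = -(GAIN * np.sign(g) * abs(g) ** ALPHA + dgrad_dt(x, t)) / hess(x, t)
        x += DT * x_dot
        t += DT

    print("x(t) =", x, " time-varying optimizer sin(t) =", np.sin(t))

Along this trajectory the gradient obeys d/dt grad f = -GAIN * sign(grad f) * |grad f|^ALPHA, so it reaches zero in finite time regardless of the initial condition.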
Finite-Time Distributed Optimization with Quantized Gradient Descent
In this paper, we consider the unconstrained distributed optimization
problem, in which the exchange of information in the network is captured by a
directed graph topology, and thus nodes can send information to their
out-neighbors only. Additionally, the communication channels among the nodes
have limited bandwidth; to alleviate this limitation, quantized messages are
exchanged among the nodes. To solve the distributed optimization problem,
we combine a distributed quantized consensus algorithm (which requires the
nodes to exchange quantized messages and converges in a finite number of steps)
with a gradient descent method. Specifically, at every optimization step, each
node performs a gradient descent step (i.e., subtracts the scaled gradient from
its current estimate), and then performs a finite-time calculation of the
quantized average of every node's estimate in the network. As a consequence,
this algorithm approximately mimics the centralized gradient descent algorithm.
The performance of the proposed algorithm is demonstrated via simple
illustrative examples.
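The optimization step described above (a local gradient step followed by a finite-time computation of the quantized average of all estimates) can be mimicked with the following minimal Python sketch. The quadratic local costs, the uniform quantizer, and the direct evaluation of the quantized average are illustrative assumptions; the paper obtains that average distributedly through finite-time quantized consensus over a directed graph.

    import numpy as np

    # Sketch of quantized gradient descent. Assumptions: scalar decision variable,
    # quadratic local costs f_i(x) = 0.5 * (x - a_i)^2, uniform quantizer with
    # step DELTA. The quantized average is computed directly here, whereas the
    # paper computes it in a finite number of steps by exchanging quantized
    # messages over the directed graph.

    DELTA = 0.05    # quantization step imposed by the limited bandwidth
    STEP = 0.5      # gradient step size
    N_NODES = 5
    rng = np.random.default_rng(0)

    a = rng.uniform(-5.0, 5.0, N_NODES)   # minimizers of the local costs
    x = np.zeros(N_NODES)                 # each node's current estimate

    def quantized_average(values, delta):
        """Stand-in for the finite-time quantized average consensus routine."""
        return delta * np.round(values.mean() / delta)

    for _ in range(50):
        # 1) every node performs a local gradient descent step
        x = x - STEP * (x - a)            # gradient of 0.5 * (x - a_i)^2 is x - a_i
        # 2) finite-time calculation of the quantized average of all estimates
        x[:] = quantized_average(x, DELTA)

    print("common estimate:", x[0], " centralized optimum:", a.mean())

Because every iteration ends with all nodes holding the same quantized average, the sketch approximately tracks centralized gradient descent on the average cost, up to the quantization error.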
A duality-based approach for distributed min-max optimization with application to demand side management
In this paper we consider a distributed optimization scenario in which a set
of processors aims at minimizing the maximum of a collection of "separable
convex functions" subject to local constraints. This set-up is motivated by
peak-demand minimization problems in smart grids. Here, the goal is to minimize
the peak value over a finite horizon with: (i) the demand at each time instant
being the sum of contributions from different devices, and (ii) the local
states at different time instants being coupled through local dynamics. The
min-max structure and the double coupling (through the devices and over the
time horizon) make this problem challenging in a distributed set-up (e.g.,
well-known distributed dual decomposition approaches cannot be applied). We
propose a distributed algorithm based on the combination of duality methods and
properties from min-max optimization. Specifically, we derive a series of
equivalent problems by introducing ad-hoc slack variables and by going back and
forth between primal and dual formulations. On the resulting problem we apply a
dual subgradient method, which turns out to be a distributed algorithm. We
prove the correctness of the proposed algorithm and show its effectiveness via
numerical computations.
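A minimal Python sketch of the epigraph reformulation combined with a dual subgradient step is given below. The quadratic local costs, the absence of local dynamics, and the centralized simulation of the multiplier update are illustrative assumptions; in the paper each processor keeps a local copy of the multipliers and the resulting method is fully distributed.

    import numpy as np

    # Sketch of the min-max problem  min_x max_t sum_i c_{i,t}(x_i)  via its
    # epigraph form  min P  s.t.  sum_i c_{i,t}(x_i) <= P  for all t, solved by
    # dual subgradient ascent over multipliers mu on the simplex.
    # Assumptions: c_{i,t}(x) = a_{i,t} * (x - b_{i,t})^2, no local dynamics.

    rng = np.random.default_rng(1)
    N, T = 4, 6                             # processors and time instants
    a = rng.uniform(0.5, 2.0, (N, T))
    b = rng.uniform(0.0, 1.0, (N, T))

    def local_costs(x):
        return a * (x[:, None] - b) ** 2    # entry (i, t) is c_{i,t}(x_i)

    def project_simplex(v):
        """Euclidean projection of v onto {mu >= 0, sum(mu) = 1}."""
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
        return np.maximum(v - (css[rho] - 1.0) / (rho + 1), 0.0)

    mu = np.full(T, 1.0 / T)                # dual variables on the simplex
    for k in range(2000):
        # each processor minimizes its mu-weighted local cost (closed form here)
        x = (a * b) @ mu / (a @ mu)
        # dual subgradient: the aggregate cost profile at the current primal point
        g = local_costs(x).sum(axis=0)
        mu = project_simplex(mu + g / (k + 1))

    print("peak value after dual ascent:", local_costs(x).sum(axis=0).max())

The simplex constraint on mu appears because the epigraph variable P drops out of the Lagrangian only when the multipliers sum to one; this is the duality step that makes the per-processor minimizations decouple.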
Differentially Private Distributed Stochastic Optimization with Time-Varying Sample Sizes
Differentially private distributed stochastic optimization has become a hot
topic due to the urgent need for privacy protection in distributed stochastic
optimization. In this paper, two-time-scale stochastic approximation-type
algorithms for differentially private distributed stochastic optimization with
time-varying sample sizes are proposed using gradient- and output-perturbation
methods. For both the gradient- and output-perturbation cases, convergence of
the algorithm and differential privacy with a finite cumulative privacy budget
over an infinite number of iterations are established simultaneously, which is
substantially different from existing works. The time-varying sample sizes
further enhance the privacy level. By properly choosing a
Lyapunov function, the algorithm achieves almost-sure and mean-square
convergence even when the added privacy noises have an increasing variance.
Furthermore, we rigorously provide the mean-square convergence rates of the
algorithm and show how the added privacy noise affects the convergence rate of
the algorithm. Finally, numerical examples, including distributed training on a
benchmark machine learning dataset, are presented to demonstrate the efficiency
and advantages of the algorithms.
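A minimal Python sketch of the gradient-perturbation variant with time-varying sample sizes is given below. The quadratic local losses, ring topology, and the step-size and noise schedules are illustrative assumptions and do not reproduce the paper's two-time-scale design or its privacy accounting.

    import numpy as np

    # Sketch of differentially private distributed stochastic gradient descent
    # with gradient perturbation and time-varying sample sizes. Assumptions:
    # scalar parameter, local losses 0.5 * E[(x_i - data_i)^2], ring graph with a
    # doubly stochastic mixing matrix, Gaussian privacy noise whose standard
    # deviation may grow with k while the step size decays.

    rng = np.random.default_rng(2)
    N = 5
    data_mean = rng.normal(0.0, 2.0, N)    # each agent's data distribution mean
    x = np.zeros(N)                        # local estimates

    # doubly stochastic mixing matrix on a ring graph
    W = np.zeros((N, N))
    for i in range(N):
        W[i, i], W[i, (i - 1) % N], W[i, (i + 1) % N] = 0.5, 0.25, 0.25

    for k in range(1, 3001):
        step = 1.0 / k                     # decaying step size
        n_k = 10 + k                       # time-varying (increasing) sample size
        sigma_k = 0.5 * k ** 0.1           # privacy-noise std, slowly increasing
        # stochastic gradient of the local loss from n_k fresh samples
        samples = data_mean[:, None] + rng.normal(0.0, 1.0, (N, n_k))
        grads = x - samples.mean(axis=1)
        # gradient perturbation: add Gaussian noise before the update
        noisy_grads = grads + rng.normal(0.0, sigma_k, N)
        x = W @ x - step * noisy_grads     # consensus mixing + noisy gradient step

    print("local estimates:", np.round(x, 3), " optimum:", round(data_mean.mean(), 3))

In this sketch the squared step size times the noise variance remains summable, which loosely mirrors how convergence can still hold while the added privacy noise has increasing variance.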