On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
In this paper we propose a distributed dual gradient algorithm for minimizing
linearly constrained separable convex problems and analyze its rate of
convergence. In particular, we prove that under the assumption of strong
convexity and Lipschitz continuity of the gradient of the primal objective
function, a global error bound type property holds for the dual problem. Using
this error bound property we devise a fully distributed dual gradient scheme,
i.e., a gradient scheme based on a weighted step size, for which we derive a
global linear rate of convergence for both dual and primal suboptimality and
for primal feasibility violation. Many real applications, e.g. distributed
model predictive control, network utility maximization or optimal power flow,
can be posed as linearly constrained separable convex problems for which dual
gradient-type methods from the literature have a sublinear convergence rate. In the
present paper we prove, for the first time, that a linear convergence rate can in
fact be achieved for such algorithms when they are used to solve these
applications. Numerical simulations are also provided to confirm our theory.
Comment: 14 pages, 4 figures, submitted to Automatica Journal, February 2014.
arXiv admin note: substantial text overlap with arXiv:1401.4398. We revised
the paper, adding more simulations and checking for typos.
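
To make the setting concrete, the following minimal Python sketch runs a dual gradient iteration for a linearly constrained separable problem with strongly convex quadratic local costs, where each agent's primal update depends only on its local data and the shared multiplier. The quadratic costs, problem dimensions, and the constant step size 1/L are illustrative assumptions; in particular, the sketch does not reproduce the paper's weighted step size, which is what underlies the linear rate discussed above.

import numpy as np

# Sketch (illustrative assumptions, not the paper's algorithm): dual gradient
# ascent for  min_x  sum_i 0.5*||x_i - c_i||^2  s.t.  sum_i A_i x_i = b,
# a linearly constrained separable problem with strongly convex local costs.
rng = np.random.default_rng(0)
n_agents, n_local, m = 4, 3, 2                  # agents, local variable size, coupling constraints
A = [rng.standard_normal((m, n_local)) for _ in range(n_agents)]
c = [rng.standard_normal(n_local) for _ in range(n_agents)]
b = rng.standard_normal(m)

lam = np.zeros(m)                               # multiplier of the coupling constraint
L = sum(np.linalg.norm(Ai, 2) ** 2 for Ai in A) # upper bound on the dual gradient's Lipschitz constant
alpha = 1.0 / L                                 # constant step, a stand-in for the weighted step size

for k in range(500):
    # local primal updates: x_i(lam) = argmin 0.5*||x_i - c_i||^2 + lam^T A_i x_i
    x = [ci - Ai.T @ lam for Ai, ci in zip(A, c)]
    # dual gradient = coupling-constraint residual, assembled from local contributions
    grad = sum(Ai @ xi for Ai, xi in zip(A, x)) - b
    lam = lam + alpha * grad                    # dual gradient ascent step

print("constraint residual after 500 iterations:", np.linalg.norm(grad))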
Rate analysis of inexact dual first order methods: Application to distributed MPC for network systems
In this paper we propose and analyze two dual methods based on inexact
gradient information and averaging that generate approximate primal solutions
for smooth convex optimization problems. The complicating constraints are moved
into the cost using the Lagrange multipliers. The dual problem is solved by
inexact first-order methods based on approximate gradients, and we prove a
sublinear rate of convergence for these methods. In particular, we provide, for
the first time, estimates on the primal feasibility violation and primal and
dual suboptimality of the generated approximate primal and dual solutions.
Moreover, we solve the inner problems approximately with a parallel coordinate
descent algorithm and show that it has a linear convergence rate. In our
analysis we rely on the Lipschitz property of the dual function and inexact
dual gradients. Further, we apply these methods to distributed model predictive
control for network systems. By tightening the complicating constraints we are
also able to ensure the primal feasibility of the approximate solutions
generated by the proposed algorithms. We obtain a distributed control strategy
that has the following features: state and input constraints are satisfied,
stability of the plant is guaranteed, whilst the number of iterations for the
suboptimal solution can be precisely determined.
Comment: 26 pages, 2 figures
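
As an illustration of the inexact-dual-gradient idea, the Python sketch below minimizes the inner Lagrangian only approximately with a few coordinate-descent sweeps, uses the resulting constraint residual as an approximate dual gradient, and averages the primal iterates to produce an approximate primal solution. The quadratic cost, equality-only coupling constraints, number of inner sweeps, and step size are assumptions made for the sketch; it is not the paper's exact scheme, constraint tightening, or MPC setup.

import numpy as np

# Sketch (illustrative assumptions): inexact dual gradient method with primal averaging
# for  min_x 0.5 x'Qx + q'x  s.t.  A x = b, with the inner problem solved approximately.
rng = np.random.default_rng(1)
n, m = 6, 3
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)                     # strongly convex, smooth quadratic cost
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))                 # complicating constraints A x = b
b = rng.standard_normal(m)

def inexact_inner(lam, x0, sweeps=3):
    # Approximately minimize 0.5 x'Qx + q'x + lam'(Ax - b) by coordinate descent.
    x = x0.copy()
    g_lin = q + A.T @ lam                       # linear part of the inner objective
    for _ in range(sweeps):
        for i in range(n):                      # exact minimization along coordinate i
            r = Q[i] @ x - Q[i, i] * x[i] + g_lin[i]
            x[i] = -r / Q[i, i]
    return x

lam = np.zeros(m)
x = np.zeros(n)
x_avg = np.zeros(n)
L_dual = np.linalg.norm(A @ np.linalg.inv(Q) @ A.T, 2)  # Lipschitz constant of the exact dual gradient
alpha = 1.0 / L_dual

for k in range(1, 301):
    x = inexact_inner(lam, x)                   # inexact primal minimizer -> approximate dual gradient
    lam = lam + alpha * (A @ x - b)             # dual first-order update with the approximate gradient
    x_avg += (x - x_avg) / k                    # averaging yields the approximate primal solution

print("feasibility violation of the averaged primal iterate:", np.linalg.norm(A @ x_avg - b))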