Rate analysis of inexact dual first order methods: Application to distributed MPC for network systems
In this paper we propose and analyze two dual methods based on inexact
gradient information and averaging that generate approximate primal solutions
for smooth convex optimization problems. The complicating constraints are moved
into the cost using the Lagrange multipliers. The dual problem is solved by
inexact first order methods based on approximate gradients and we prove
a sublinear rate of convergence for these methods. In particular, we provide, for
the first time, estimates on the primal feasibility violation and primal and
dual suboptimality of the generated approximate primal and dual solutions.
Moreover, we approximately solve the inner problems with a parallel coordinate
descent algorithm and show that it has a linear convergence rate. In our
analysis we rely on the Lipschitz property of the dual function and inexact
dual gradients. Further, we apply these methods to distributed model predictive
control for network systems. By tightening the complicating constraints we are
also able to ensure the primal feasibility of the approximate solutions
generated by the proposed algorithms. We obtain a distributed control strategy
that has the following features: state and input constraints are satisfied,
stability of the plant is guaranteed, whilst the number of iterations for the
suboptimal solution can be precisely determined.
Comment: 26 pages, 2 figures
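The dual approach described above can be illustrated on a toy problem. The sketch below is not the paper's method, only the underlying idea: the complicating constraint of a small quadratic program is moved into the cost with a Lagrange multiplier, the dual is maximized by gradient steps, and an approximate primal solution is recovered by averaging the primal iterates (the averaged point is only approximately feasible, which is the feasibility violation the abstract quantifies).

```python
# Illustrative sketch (toy problem, not the paper's exact scheme): dual
# gradient ascent with primal averaging for
#   min 0.5*(x1^2 + x2^2)  s.t.  x1 + x2 = 1.
# The coupling constraint is dualized with a multiplier lam; the averaged
# primal iterates form the approximate primal solution.
def dual_gradient_averaging(steps=2000, alpha=0.5):
    lam = 0.0
    avg = [0.0, 0.0]
    for k in range(1, steps + 1):
        x = [-lam, -lam]                      # inner minimizer in closed form
        lam += alpha * (x[0] + x[1] - 1.0)    # dual gradient step on the residual
        avg = [a + (xi - a) / k for a, xi in zip(avg, x)]  # running primal average
    return avg, lam
```

For this problem the optimum is x = (0.5, 0.5) with multiplier -0.5; the averaged iterates satisfy the constraint only up to a small, quantifiable violation.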
Inexact Bregman iteration with an application to Poisson data reconstruction
This work deals with the solution of image restoration problems by an
iterative regularization method based on the Bregman iteration. Each iteration of this
scheme requires the exact computation of the minimizer of a function. However, in some
image reconstruction applications, it is either impossible or extremely expensive to
obtain exact solutions of these subproblems. In this paper, we propose an inexact
version of the iterative procedure, where the inexactness in the inner subproblem
solution is controlled by a criterion that preserves the convergence of the Bregman
iteration and its features in image restoration problems. In particular, the method
allows one to obtain accurate reconstructions even when only an overestimate of the
regularization parameter is known. Introducing inexactness into the iterative
scheme makes it possible to address image reconstruction problems with data corrupted by
Poisson noise, exploiting recent advances in specialized algorithms for the
numerical minimization of the generalized Kullback–Leibler divergence combined with
a regularization term. The results of several numerical experiments enable an evaluation of the proposed approach.
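The structure of an inexact Bregman iteration can be sketched on a toy problem. The example below is not the paper's Poisson/KL setting: it runs the classical Bregman iteration for min ||u||_1 subject to observing the first two coordinates of u, and solves each inner subproblem only approximately with a fixed, small number of ISTA steps, mimicking the controlled inexactness discussed in the abstract.

```python
def soft(v, t):
    # soft-thresholding, the proximal operator of t*|.|
    return (abs(v) - t) * (1 if v > 0 else -1) if abs(v) > t else 0.0

# Illustrative sketch (toy problem, not the paper's setting): Bregman
# iteration for min ||u||_1 s.t. u[0] = b[0], u[1] = b[1], where each inner
# subproblem  argmin_u mu*||u||_1 + 0.5*||observed(u) - b_k||^2  is solved
# only inexactly by a few ISTA steps.
def inexact_bregman(b, mu=0.1, outer=20, inner=5):
    u = [0.0, 0.0, 0.0]
    b_k = list(b)
    for _ in range(outer):
        for _ in range(inner):                  # inexact inner solve (ISTA steps)
            for i in range(2):
                u[i] = soft(u[i] - (u[i] - b_k[i]), mu)
            u[2] = soft(u[2], mu)               # coordinate not observed
        for i in range(2):
            b_k[i] += b[i] - u[i]               # Bregman step: add back the residual
    return u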
A Family of Subgradient-Based Methods for Convex Optimization Problems in a Unifying Framework
We propose a new family of subgradient- and gradient-based methods which
converges with optimal complexity for convex optimization problems whose
feasible region is simple enough. This includes cases where the objective
function is non-smooth or smooth, has a composite/saddle structure, or is given
by an inexact oracle model. We unify the construction of the subproblems
that must be solved at each iteration of these methods. This permits
a unified convergence analysis, in contrast to previous results, which required
a different approach for each method/algorithm. Our contribution relies on two well-known methods in non-smooth
convex optimization: the mirror-descent method by Nemirovski-Yudin and the
dual-averaging method by Nesterov. Therefore, our family of methods includes
them and many other methods as particular cases. For instance, the proposed
family of classical gradient methods and its accelerations generalize Devolder
et al.'s, Nesterov's primal/dual gradient methods, and Tseng's accelerated
proximal gradient methods. Some members of our family are also special
cases of other universal methods. As an additional contribution,
the novel extended mirror-descent method removes the compactness assumption on
the feasible region and the need to fix the total number of iterations in
advance, both of which the original mirror-descent method requires to attain the optimal
complexity.
Comment: 31 pages. v3: Major revision. Research Report B-477, Department of
Mathematical and Computing Sciences, Tokyo Institute of Technology, February
201
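One of the two classical building blocks named above, the Nemirovski-Yudin mirror-descent method, can be sketched in a few lines. The example below is a generic illustration, not the paper's unified family: entropy mirror descent for minimizing a linear function over the probability simplex, where the entropy mirror map turns the update into a multiplicative step, followed by the ergodic averaging standard for subgradient methods.

```python
import math

# Illustrative sketch: entropy mirror descent (Nemirovski-Yudin style) for
#   min_x c^T x  over the probability simplex.
# The entropy mirror map makes the mirror step multiplicative; the Bregman
# projection back to the simplex is just a normalization.
def mirror_descent_simplex(c, steps=500, eta=0.1):
    n = len(c)
    x = [1.0 / n] * n                 # start at the center of the simplex
    avg = [0.0] * n
    for k in range(1, steps + 1):
        x = [xi * math.exp(-eta * ci) for xi, ci in zip(x, c)]  # mirror step
        s = sum(x)
        x = [xi / s for xi in x]      # projection back to the simplex
        avg = [a + (xi - a) / k for a, xi in zip(avg, x)]       # ergodic average
    return avg
```

For a linear objective the mass of the averaged iterate concentrates on the coordinate with the smallest cost, which is the simplex minimizer.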
A Distributed Newton Method for Network Utility Maximization
Most existing work uses dual decomposition and subgradient methods to solve
Network Utility Maximization (NUM) problems in a distributed manner, but these
methods suffer from slow convergence. This work develops an
alternative, fast-converging distributed Newton-type algorithm for solving
network utility maximization problems with self-concordant utility functions.
By using novel matrix splitting techniques, both primal and dual updates for
the Newton step can be computed using iterative schemes in a decentralized
manner with limited information exchange. Similarly, the stepsize can be
obtained via an iterative consensus-based averaging scheme. We show that even
when the Newton direction and the stepsize in our method are computed within
some error (due to finite truncation of the iterative schemes), the resulting
objective function value still converges superlinearly to an explicitly
characterized error neighborhood. Simulation results demonstrate significant
convergence rate improvement of our algorithm relative to the existing
subgradient methods based on dual decomposition.
Comment: 27 pages, 4 figures, LIDS report, submitted to CDC 201
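The key mechanism above, computing the Newton direction inexactly through an iterative splitting scheme, can be sketched on a toy problem. This is an illustration, not the paper's NUM algorithm: each 2x2 Newton system H d = -g is solved only approximately by a fixed number of Jacobi (matrix-splitting) iterations, and the resulting inexact direction is still applied as the Newton step.

```python
# Illustrative sketch (not the paper's NUM setup): Jacobi matrix splitting
# H = D + R, iterating d <- D^{-1}(rhs - R d) to approximate d = H^{-1} rhs.
def jacobi_solve(H, rhs, iters=20):
    d = [0.0, 0.0]
    for _ in range(iters):
        d = [(rhs[0] - H[0][1] * d[1]) / H[0][0],   # both rows use the old d
             (rhs[1] - H[1][0] * d[0]) / H[1][1]]
    return d

# Newton's method with an inexactly computed direction: the truncation of the
# inner iteration is the source of the error neighborhood the abstract mentions.
def inexact_newton(grad, hess, x0, steps=20, jacobi_iters=20):
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        d = jacobi_solve(hess(x), [-g[0], -g[1]], jacobi_iters)
        x = [x[0] + d[0], x[1] + d[1]]   # unit step; the paper uses a consensus-based stepsize
    return x
```

Jacobi converges here because the toy Hessian below is diagonally dominant; in a network setting the same splitting can be evaluated with only neighbor-to-neighbor information exchange.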
Cooperative Convex Optimization in Networked Systems: Augmented Lagrangian Algorithms with Directed Gossip Communication
We study distributed optimization in networked systems, where nodes cooperate
to find the optimal quantity of common interest, x=x^\star. The objective
function of the corresponding optimization problem is the sum of the nodes'
private convex objectives (each known only to its own node), and each node
imposes a private convex constraint on the allowed values of x. We solve this problem for generic
connected network topologies with asymmetric random link failures with a novel
distributed, decentralized algorithm. We refer to this algorithm as AL-G
(augmented Lagrangian gossiping), and to its variants as AL-MG (augmented
Lagrangian multi-neighbor gossiping) and AL-BG (augmented Lagrangian broadcast
gossiping). The AL-G algorithm is based on the augmented Lagrangian dual
function. Dual variables are updated by the standard method of multipliers, at
a slow time scale. To update the primal variables, we propose a novel,
Gauss-Seidel-type randomized algorithm at a fast time scale. AL-G uses
unidirectional gossip communication only between immediate neighbors in the
network and is resilient to random link failures. For networks with reliable
communication (i.e., no failures), the simplified AL-BG (augmented Lagrangian
broadcast gossiping) algorithm reduces communication, computation and data
storage cost. We prove convergence for all proposed algorithms and demonstrate
by simulations the effectiveness on two applications: l_1-regularized logistic
regression for classification and cooperative spectrum sensing for cognitive
radio networks.
Comment: 28 pages, journal; revised
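The two-time-scale structure described above can be sketched for two nodes with scalar private objectives. This is an illustration of the general pattern, not the AL-G algorithm itself: the method of multipliers updates the dual variable at a slow scale, while randomized Gauss-Seidel sweeps minimize the augmented Lagrangian at a fast scale.

```python
import random

# Illustrative sketch (two nodes, scalar x, not the paper's AL-G): minimize
# 0.5*(x1 - a1)^2 + 0.5*(x2 - a2)^2  s.t.  x1 = x2,
# via the augmented Lagrangian with penalty rho. Randomized per-node updates
# play the role of the fast-scale gossip; the multiplier update is the
# slow-scale method of multipliers.
def al_gossip(a1, a2, rho=1.0, outer=50, inner=30, seed=0):
    rng = random.Random(seed)
    x1 = x2 = lam = 0.0
    for _ in range(outer):
        for _ in range(inner):               # fast scale: randomized primal updates
            if rng.random() < 0.5:
                # argmin over x1 of the augmented Lagrangian, closed form
                x1 = (a1 - lam + rho * x2) / (1.0 + rho)
            else:
                x2 = (a2 + lam + rho * x1) / (1.0 + rho)
        lam += rho * (x1 - x2)               # slow scale: method of multipliers
    return x1, x2
```

At the optimum both nodes agree on the average of their private targets, which is the common quantity of interest x^\star.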
A Duality-Based Approach for Distributed Optimization with Coupling Constraints
In this paper we consider a distributed optimization scenario in which a set
of agents has to solve a convex optimization problem with separable cost
function, local constraint sets and a coupling inequality constraint. We
propose a novel distributed algorithm based on a relaxation of the primal
problem and an elegant exploration of duality theory. Despite its complex
derivation based on several duality steps, the distributed algorithm has a very
simple and intuitive structure. That is, each node solves a local version of
the original problem relaxation and updates suitable dual variables. We prove
the correctness of the algorithm and show its effectiveness via numerical
computations.
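The problem class above, separable costs with a coupling inequality constraint, admits a simple price-based sketch. The code below is plain dual decomposition, not the paper's relaxation-based scheme: each agent solves its local problem given a common price lam, and the price is updated by projected dual ascent on the coupling constraint.

```python
# Illustrative sketch (plain dual decomposition, not the paper's algorithm):
#   min sum_i 0.5*(x_i - a_i)^2   s.t.   sum_i x_i <= c.
# Each agent's local subproblem has the closed form x_i = a_i - lam; the
# price lam is driven by the constraint violation and kept nonnegative.
def dual_decomposition(a, c, steps=3000, alpha=0.01):
    lam = 0.0
    x = list(a)
    for _ in range(steps):
        x = [ai - lam for ai in a]                   # agents' local argmins
        lam = max(0.0, lam + alpha * (sum(x) - c))   # projected dual ascent
    return x, lam
```

With targets (1, 2, 3) and budget c = 3 the constraint is active, the price settles at lam = 1, and the agents split the budget as (0, 1, 2).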
Multi-Agent Distributed Optimization via Inexact Consensus ADMM
Multi-agent distributed consensus optimization problems arise in many signal
processing applications. Recently, the alternating direction method of
multipliers (ADMM) has been used for solving this family of problems. The
ADMM-based distributed optimization method has been shown to converge faster
than classic consensus-subgradient methods, but it can be
computationally expensive, especially for problems with complicated structures
or large dimensions. In this paper, we propose low-complexity algorithms that
can reduce the overall computational cost of consensus ADMM by an order of
magnitude for certain large-scale problems. Central to the proposed algorithms
is the use of an inexact step for each ADMM update, which enables the agents to
perform cheap computation at each iteration. Our convergence analyses show that
the proposed methods converge well under some convexity assumptions. Numerical
results show that the proposed algorithms offer considerably lower
computational complexity than the standard ADMM based distributed optimization
methods.
Comment: submitted to IEEE Trans. Signal Processing; Revised April 2014 and
August 201
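The central idea above, replacing the exact per-agent minimization with a cheap inexact step, can be sketched on a toy consensus problem. This is an illustration of the inexact (linearized) ADMM pattern, not the paper's exact algorithms: each agent performs a single gradient step on its augmented Lagrangian instead of solving its subproblem to optimality, while the consensus and dual updates stay unchanged.

```python
# Illustrative sketch of inexact (linearized) consensus ADMM for
#   min_x sum_i f_i(x),  with  f_i(x) = 0.5*(x - a_i)^2.
# Each agent holds a local copy x_i and a dual y_i; z is the consensus
# variable. The x-update is one gradient step (the "inexact step"), with
# stepsize tau <= 1/(1 + rho), the subproblem's gradient Lipschitz constant.
def inexact_consensus_admm(a, rho=1.0, tau=0.5, steps=300):
    n = len(a)
    x = [0.0] * n          # local copies
    y = [0.0] * n          # dual variables
    z = 0.0                # consensus variable
    for _ in range(steps):
        # inexact x-update: a single cheap gradient step per agent
        x = [xi - tau * ((xi - ai) + yi + rho * (xi - z))
             for xi, ai, yi in zip(x, a, y)]
        z = sum(xi + yi / rho for xi, yi in zip(x, y)) / n   # exact averaging step
        y = [yi + rho * (xi - z) for xi, yi in zip(x, y)]    # dual ascent update
    return x, z
```

All local copies converge to the consensus value z = mean(a), the minimizer of the sum; the saving is that each iteration costs one gradient evaluation per agent instead of a full inner minimization.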