On convergence properties of a subgradient method
In this article, we consider the convergence properties of the normalized subgradient method, which employs a stepsize rule based on a priori knowledge of the optimal value of the cost function. We show that normalized subgradients carry additional information about the problem under consideration, which can be used to improve convergence rates beyond those obtained from the usual subgradient properties. We also present several convergence results for inexact versions of the method.
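A stepsize rule that uses the known optimal value is commonly called the Polyak stepsize. A minimal sketch of such a subgradient method (the function names and the ℓ1 test problem below are illustrative, not taken from the paper):

```python
import numpy as np

def polyak_subgradient(f, subgrad, x0, f_star, n_iters=500):
    """Subgradient method with the Polyak stepsize, which requires
    a priori knowledge of the optimal value f_star."""
    x = x0.astype(float)
    best = f(x)
    for _ in range(n_iters):
        g = subgrad(x)
        norm2 = np.dot(g, g)
        if norm2 == 0.0:           # zero subgradient: x is optimal
            break
        # Polyak stepsize: t_k = (f(x_k) - f*) / ||g_k||^2
        x = x - (f(x) - f_star) / norm2 * g
        best = min(best, f(x))
    return x, best

# Example: minimize f(x) = ||x||_1, whose optimal value is 0.
f = lambda x: np.abs(x).sum()
subgrad = lambda x: np.sign(x)
x, best = polyak_subgradient(f, subgrad, np.array([3.0, -2.0]), f_star=0.0)
```

Normalizing the subgradient direction, as the paper does, yields the same iterate here: dividing the direction by its norm and the stepsize by the norm again is algebraically identical to the update above.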
A Distributed Newton Method for Network Utility Maximization
Most existing work uses dual decomposition and subgradient methods to solve Network Utility Maximization (NUM) problems in a distributed manner, but these approaches suffer from slow convergence. This work develops an alternative distributed Newton-type, fast-converging algorithm for solving network utility maximization problems with self-concordant utility functions. By using novel matrix splitting techniques, both primal and dual updates for the Newton step can be computed using iterative schemes in a decentralized manner with limited information exchange. Similarly, the stepsize can be obtained via an iterative consensus-based averaging scheme. We show that even when the Newton direction and the stepsize in our method are computed within some error (due to finite truncation of the iterative schemes), the resulting objective function value still converges superlinearly to an explicitly characterized error neighborhood. Simulation results demonstrate significant convergence rate improvement of our algorithm relative to existing subgradient methods based on dual decomposition.
Comment: 27 pages, 4 figures, LIDS report, submitted to CDC 201
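The dual-decomposition subgradient baseline that the abstract compares against can be sketched on a toy NUM instance with logarithmic utilities (the routing matrix, stepsize, and function names are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def num_dual_subgradient(R, c, alpha=0.05, n_iters=2000):
    """Dual-decomposition subgradient method for a toy NUM problem:
    maximize sum_s log(x_s) subject to R @ x <= c (link capacities).
    R is the 0/1 routing matrix (links x sources)."""
    lam = np.ones(R.shape[0])               # link prices (dual variables)
    for _ in range(n_iters):
        # Each source s maximizes log(x_s) - x_s * (price of its path),
        # which has the closed-form solution x_s = 1 / path_price.
        q = R.T @ lam
        x = 1.0 / np.maximum(q, 1e-12)
        # Price update: projected subgradient step on link congestion.
        lam = np.maximum(lam + alpha * (R @ x - c), 0.0)
    return x, lam

# Two sources sharing one unit-capacity link.
R = np.array([[1.0, 1.0]])
c = np.array([1.0])
x, lam = num_dual_subgradient(R, c)
```

On this instance the fair allocation splits the link evenly, so the rates approach 0.5 each; the slow, sublinear convergence of such price updates is exactly what motivates the Newton-type alternative above.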
Lagrangean decomposition for large-scale two-stage stochastic mixed 0-1 problems
In this paper we study solution methods for the dual problem arising from the Lagrangean decomposition of two-stage stochastic mixed 0-1 models. We represent the two-stage stochastic mixed 0-1 problem by a splitting-variable representation of the deterministic equivalent model, in which 0-1 and continuous variables may appear at any stage. Lagrangean decomposition is proposed for satisfying both the integrality constraints on the 0-1 variables and the non-anticipativity constraints. We compare the performance of four iterative algorithms based on dual Lagrangean decomposition schemes: the Subgradient method, the Volume algorithm, the Progressive Hedging algorithm, and the Dynamic Constrained Cutting Plane scheme. We test the convergence conditions and properties on medium- and large-scale stochastic problems. Computational results are reported.
Keywords: Progressive Hedging algorithm, Volume algorithm, Lagrangean decomposition, subgradient method
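The subgradient variant of this dual scheme can be illustrated on a deliberately tiny instance: one first-stage 0-1 variable, two scenarios with their own copies, and the non-anticipativity constraint relaxed with a single multiplier (the cost table and all names below are illustrative assumptions):

```python
def lagrangean_dual_subgradient(costs, alpha=0.5, n_iters=100):
    """Subgradient ascent on the Lagrangean-decomposition dual of a toy
    two-scenario problem with one first-stage 0-1 variable.  Each scenario
    keeps its own copy x_s in {0, 1}; the non-anticipativity constraint
    x_1 = x_2 is relaxed with multiplier mu.
    costs[s][x] is the total cost of scenario s when its copy takes value x."""
    mu = 0.0
    best_bound = float("-inf")
    for _ in range(n_iters):
        # The scenario subproblems decouple: each minimizes its own cost
        # plus the price +mu (scenario 1) or -mu (scenario 2) on its copy.
        x1 = min((0, 1), key=lambda x: costs[0][x] + mu * x)
        x2 = min((0, 1), key=lambda x: costs[1][x] - mu * x)
        dual_val = costs[0][x1] + mu * x1 + costs[1][x2] - mu * x2
        best_bound = max(best_bound, dual_val)      # dual lower bound
        # Subgradient of the dual function at mu is x1 - x2.
        mu += alpha * (x1 - x2)
    return best_bound, mu

# Scenario 0 prefers x = 1, scenario 1 prefers x = 0.
costs = [{0: 3.0, 1: 1.0}, {0: 1.0, 1: 4.0}]
best_bound, mu = lagrangean_dual_subgradient(costs)
```

Here the best non-anticipative primal choice is x = 0 with total cost 4, and the dual bound attains that value, so the duality gap happens to be zero for this instance; in general mixed 0-1 problems it need not be.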
A Study On Distributed Model Predictive Consensus
We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, in which agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition, as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics, using local predictive controllers. A stability analysis of a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.
Comment: 20 pages, 4 figures, longer version of paper presented at the 17th IFAC World Congress
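The incremental subgradient negotiation can be sketched on a toy consensus problem in which each agent penalizes the distance from the common point to its own preferred output (the quadratic objectives, ring order, and names below are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def incremental_subgradient_consensus(refs, n_cycles=200):
    """Incremental subgradient sketch for negotiating a consensus point:
    agents minimize sum_i ||theta - r_i||^2 by passing the current estimate
    around the ring, each taking a step on its own local term only."""
    theta = np.zeros_like(refs[0])
    for k in range(1, n_cycles + 1):
        step = 1.0 / k                  # diminishing stepsize per cycle
        for r in refs:                  # one pass = one negotiation round
            g = 2.0 * (theta - r)       # (sub)gradient of the local term
            theta = theta - step * g
    return theta

refs = [np.array([1.0]), np.array([2.0]), np.array([6.0])]
theta = incremental_subgradient_consensus(refs)
```

With a diminishing stepsize the iterates approach the minimizer of the sum, here the mean of the agents' preferences; truncating the negotiation early leaves the estimate in a neighborhood of that point, mirroring the partial-consensus situation analyzed in the paper.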