A trust region interior point algorithm for linearly constrained optimization
We present an extension, to nonlinear optimization under linear constraints, of a quadratic programming algorithm based on a trust region idea introduced by Ye and Tse and extended by Bonnans and Bouhtou. Because the cost is nonlinear, we use a linesearch to reduce the step when necessary. We prove that, under suitable hypotheses, the algorithm converges to a point satisfying the first-order optimality system, and we analyse under which conditions the unit stepsize is asymptotically accepted.
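To make the combination of a trust region model step with a backtracking linesearch concrete, here is a minimal sketch (hypothetical names; the model minimizer is reduced to a Cauchy-type step and the linear constraints are omitted, whereas the algorithm above solves the subproblem over the feasible set):

    import numpy as np

    def tr_step_with_linesearch(f, grad, x, B, delta, beta=0.5, sigma=1e-4):
        # One trust-region model step followed by Armijo backtracking.
        # Simplified: the model minimizer below is a Cauchy-type
        # steepest-descent step clipped to the radius delta.
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm == 0.0:
            return x                      # already first-order critical
        gBg = g @ B @ g
        t = (g @ g) / gBg if gBg > 0 else delta / gnorm
        d = -min(t, delta / gnorm) * g
        # The cost is nonlinear, so the full step may fail to decrease f:
        # reduce it until the Armijo sufficient-decrease condition holds.
        alpha = 1.0
        while f(x + alpha * d) > f(x) + sigma * alpha * (g @ d):
            alpha *= beta
        return x + alpha * d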
An Alternating Trust Region Algorithm for Distributed Linearly Constrained Nonlinear Programs, Application to the AC Optimal Power Flow
A novel trust region method for solving linearly constrained nonlinear
programs is presented. The proposed technique is amenable to a distributed
implementation, as its salient ingredient is an alternating projected gradient
sweep in place of the Cauchy point computation. It is proven that the algorithm
yields a sequence that globally converges to a critical point. As a result of
some changes to the standard trust region method, namely a proximal
regularisation of the trust region subproblem, it is shown that the local
convergence rate is linear with an arbitrarily small ratio. Thus, convergence
is locally almost superlinear, under standard regularity assumptions. The
proposed method is successfully applied to compute local solutions to
alternating current optimal power flow problems in transmission and
distribution networks. Moreover, the new mechanism for computing a Cauchy point compares favourably against the standard projected search in terms of its activity detection properties.
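As a rough sketch of the alternating projected-gradient sweep that replaces the Cauchy point computation (block structure and names are hypothetical, and the sweep is shown on f directly rather than on the proximally regularised trust region subproblem):

    import numpy as np

    def alternating_pg_sweep(grad, projections, x, blocks, step):
        # One block-wise (alternating) projected-gradient sweep: each
        # block of variables is updated in turn while the others stay
        # fixed, so the work distributes across agents owning the blocks.
        x = x.copy()
        for proj, idx in zip(projections, blocks):
            g = grad(x)                              # gradient at the current sweep iterate
            x[idx] = proj(x[idx] - step * g[idx])    # project onto block i's linear constraints
        return x

The point returned by such a sweep plays the role of the Cauchy point in the sufficient-decrease test of the trust region method.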
Hessian barrier algorithms for linearly constrained optimization problems
In this paper, we propose an interior-point method for linearly constrained
optimization problems (possibly nonconvex). The method - which we call the
Hessian barrier algorithm (HBA) - combines a forward Euler discretization of
Hessian Riemannian gradient flows with an Armijo backtracking step-size policy.
In this way, HBA can be seen as an alternative to mirror descent (MD), and
contains as special cases the affine scaling algorithm, regularized Newton
processes, and several other iterative solution methods. Our main result is
that, modulo a non-degeneracy condition, the algorithm converges to the
problem's set of critical points; hence, in the convex case, the algorithm
converges globally to the problem's minimum set. In the case of linearly
constrained quadratic programs (not necessarily convex), we also show that the
method's convergence rate is $\mathcal{O}(1/k^{\rho})$ for some exponent $\rho \in (0,1]$ that depends only on the choice of kernel function (i.e., not on the problem's
primitives). These theoretical results are validated by numerical experiments
on standard non-convex test functions and large-scale traffic assignment problems.
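As a concrete, heavily simplified instance of this scheme, consider minimization over the positive orthant with the log-barrier kernel $h(x) = -\sum_i \log x_i$, whose Hessian is $\mathrm{diag}(1/x_i^2)$; the forward Euler step of the corresponding Hessian Riemannian flow is then the affine scaling direction cited above (a sketch under these assumptions, not the general algorithm):

    import numpy as np

    def hba_step(f, grad, x, beta=0.5, sigma=1e-4):
        # Direction d = -H(x)^{-1} grad f(x) with H(x) = diag(1/x_i**2),
        # i.e. the affine scaling direction d_i = -x_i**2 * g_i.
        g = grad(x)
        d = -(x ** 2) * g
        # Armijo backtracking: shrink alpha until the step keeps the
        # iterate strictly feasible and achieves sufficient decrease.
        alpha = 1.0
        while np.any(x + alpha * d <= 0) or f(x + alpha * d) > f(x) + sigma * alpha * (g @ d):
            alpha *= beta
        return x + alpha * d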
Solving Mathematical Programs with Equilibrium Constraints as Nonlinear Programming: A New Framework
We present a new framework for the solution of mathematical programs with
equilibrium constraints (MPECs). In this algorithmic framework, an MPEC is viewed as a combination of an unconstrained optimization problem that minimizes the complementarity measure and a nonlinear program with general constraints. A
strategy generalizing ideas of Byrd-Omojokun's trust region method is used to
compute steps. By penalizing the tangential constraints into the objective
function, we circumvent the problem of not satisfying MFCQ. A trust-funnel-like
strategy is used to balance the improvements on feasibility and optimality. We
show that, under MPEC-MFCQ, if the algorithm does not terminate in finitely many steps, then at least one accumulation point of the iterate sequence is an S-stationary point.
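For reference, the Byrd-Omojokun idea being generalized splits each trial step into a normal step, which reduces infeasibility within a fraction of the trust radius, and a tangential step, which reduces the model in the nullspace of the constraint Jacobian (a generic statement of the decomposition in standard notation, not the paper's exact subproblems):

    \min_{v}\ \|c(x_k) + A_k v\|_2^2 \quad \text{s.t.}\ \|v\|_2 \le \zeta\,\Delta_k,
    \qquad
    \min_{u}\ \nabla f(x_k)^{\top}(v_k + u) + \tfrac{1}{2}(v_k + u)^{\top} B_k (v_k + u)
    \quad \text{s.t.}\ A_k u = 0,\ \|v_k + u\|_2 \le \Delta_k,

where $\zeta \in (0,1)$, $A_k$ is the Jacobian of the constraints $c$ at $x_k$, and $B_k$ approximates the Hessian of the Lagrangian.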
A Primal-Dual Augmented Lagrangian
Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we discuss the formulation of subproblems in which the objective is a primal-dual generalization of the Hestenes-Powell augmented Lagrangian function. This generalization has the crucial feature that it is minimized with respect to both the primal and the dual variables simultaneously. A benefit of this approach is that the quality of the dual variables is monitored explicitly during the solution of the subproblem. Moreover, each subproblem may be regularized by imposing explicit bounds on the dual variables. Two primal-dual variants of conventional primal methods are proposed: a primal-dual bound-constrained Lagrangian (pdBCL) method and a primal-dual $\ell_1$ linearly constrained Lagrangian (pd$\ell_1$-LCL) method.
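A common form of such a primal-dual generalization, for equality constraints $c(x) = 0$ with multiplier estimate $y_e$ and penalty parameter $\mu > 0$ (written here in the style of the Gill-Robinson function; the paper's exact scaling may differ):

    L_A(x, y;\, y_e, \mu) \;=\; f(x) \;-\; c(x)^{\top} y_e
    \;+\; \frac{1}{2\mu}\,\|c(x)\|_2^2
    \;+\; \frac{\nu}{2\mu}\,\bigl\|c(x) + \mu\,(y - y_e)\bigr\|_2^2 .

Minimizing the last term over $y$ drives $y$ toward the first-order multiplier estimate $y_e - c(x)/\mu$, which is what allows the quality of the dual variables to be monitored explicitly, while $\nu = 0$ recovers the classical Hestenes-Powell function in $x$ alone.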