On solving trust-region and other regularised subproblems in optimization
The solution of trust-region and regularisation subproblems which arise in unconstrained optimization is considered. Building on the pioneering work of Gay, Moré and Sorensen, methods which obtain the solution of a sequence of parametrized linear systems by factorization are used. Enhancements using high-order polynomial approximation and inverse iteration ensure that the resulting method is both globally and asymptotically at least superlinearly convergent in all cases, including in the notorious hard case. Numerical experiments validate the effectiveness of our approach. The resulting software is available as packages TRS and RQS as part of the GALAHAD optimization library, and is especially designed for large-scale problems.
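The factorization-based approach can be illustrated with a minimal sketch. This is illustrative only: it shows the standard Moré–Sorensen ingredients (a Cholesky factorization of the shifted system and a Newton step on the secular equation), not the TRS/RQS implementation itself, and it ignores the hard case entirely.

```python
import numpy as np

def trs_more_sorensen(H, g, delta, tol=1e-8, max_iter=50):
    """Minimal sketch of a Moré–Sorensen-style trust-region subproblem
    solver: minimize g^T s + 0.5 s^T H s subject to ||s|| <= delta.
    The hard case is not handled."""
    n = len(g)
    # Shift lambda so that H + lambda*I is positive definite.
    lam = max(0.0, -np.linalg.eigvalsh(H)[0] + 1e-12)
    s = np.zeros(n)
    for _ in range(max_iter):
        # Factorize the shifted matrix and solve (H + lam*I) s = -g.
        L = np.linalg.cholesky(H + lam * np.eye(n))
        s = np.linalg.solve(H + lam * np.eye(n), -g)
        ns = np.linalg.norm(s)
        if lam == 0.0 and ns <= delta:
            return s  # interior solution: the trust region is inactive
        if abs(ns - delta) <= tol * delta:
            return s  # boundary solution found to tolerance
        # Newton step on the secular equation 1/delta - 1/||s(lam)|| = 0,
        # reusing the Cholesky factor as in Moré & Sorensen.
        w = np.linalg.solve(L, s)
        lam = max(lam + (ns / delta - 1.0) * (ns ** 2 / np.dot(w, w)), 0.0)
    return s
```

Each iteration costs one factorization of a parametrized linear system, which is the sequence of systems the abstract refers to.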
An Alternating Trust Region Algorithm for Distributed Linearly Constrained Nonlinear Programs, Application to the AC Optimal Power Flow
A novel trust-region method for solving linearly constrained nonlinear programs is presented. The proposed technique is amenable to a distributed implementation, as its salient ingredient is an alternating projected-gradient sweep in place of the Cauchy point computation. It is proven that the algorithm yields a sequence that globally converges to a critical point. As a result of some changes to the standard trust-region method, namely a proximal regularisation of the trust-region subproblem, it is shown that the local convergence rate is linear with an arbitrarily small ratio; thus, under standard regularity assumptions, convergence is locally almost superlinear. The proposed method is successfully applied to compute local solutions to alternating-current optimal power flow problems in transmission and distribution networks. Moreover, the new mechanism for computing a Cauchy point compares favourably against the standard projected search in terms of its activity-detection properties.
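The projected-gradient Cauchy-point idea can be sketched for the simplest case of box constraints. This is a hypothetical illustration: the paper treats general linear constraints and an alternating sweep, neither of which is shown here, and the function and parameter names are invented for the sketch.

```python
import numpy as np

def projected_gradient_cauchy(x, grad, H, lo, hi, t0=1.0, beta=0.5,
                              max_backtracks=30):
    """Illustrative projected-gradient Cauchy point for the quadratic
    model m(s) = grad^T s + 0.5 s^T H s over the box lo <= x + s <= hi
    (the box stands in for the paper's general linear constraints)."""
    def proj(z):
        return np.clip(z, lo, hi)  # projection onto the feasible box
    t = t0
    for _ in range(max_backtracks):
        s = proj(x - t * grad) - x            # projected step from x
        model = grad @ s + 0.5 * (s @ H @ s)  # quadratic model value
        if model <= 0.1 * (grad @ s):         # sufficient-decrease test
            return x + s                      # accepted Cauchy point
        t *= beta                             # shrink the step length
    return x
```

Because the projection clips each coordinate independently, the step naturally reveals which bounds become active, which is the activity-detection property the abstract highlights.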
Modelling and solution methods for stochastic optimisation
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. In this thesis we consider two research problems, namely, (i) language constructs for modelling stochastic programming (SP) problems and (ii) solution methods for processing instances of different classes of SP problems. We first describe a new design of an SP modelling system which provides greater extensibility and reuse. We implement this enhanced system and develop solver connections. We also investigate in detail the following important classes of SP problems: single-stage SP with risk constraints, and two-stage linear and stochastic integer programming problems. We report improvements to solution methods for single-stage problems with second-order stochastic dominance constraints and for two-stage SP problems; in both cases we use the level method as a regularisation mechanism. We also develop novel heuristic methods for stochastic integer programming based on variable neighbourhood search. We describe an algorithmic framework for implementing decomposition methods such as the L-shaped method within our SP solver system. Based on this framework we implement a number of established solution algorithms as well as a new regularisation method for stochastic linear programming. We compare the performance of these methods and their scale-up properties on an extensive set of benchmark problems. We also implement several solution methods for stochastic integer programming and report a computational study comparing their performance. The three solution methods, (a) processing of a single-stage problem with second-order stochastic dominance constraints, (b) regularisation by the level method for two-stage SP and (c) a method for solving integer SP problems, are novel approaches, and each makes a contribution to knowledge. Financial support was obtained from OptiRisk Systems.
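The variable neighbourhood search heuristic mentioned above follows a well-known generic skeleton; the sketch below is that generic template, not the thesis's SP-specific implementation, and the `shake` and `local_search` callbacks are placeholders for problem-specific neighbourhood moves.

```python
import random

def vns(x0, objective, shake, local_search, k_max=3, max_iter=100):
    """Generic variable neighbourhood search skeleton (illustrative).
    shake(x, k) draws a random point from the k-th neighbourhood of x;
    local_search(x) returns a local optimum near x."""
    best = x0
    for _ in range(max_iter):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k))
            if objective(candidate) < objective(best):
                best, k = candidate, 1   # improvement: restart from k = 1
            else:
                k += 1                   # no improvement: widen neighbourhood
    return best
```

The key design choice is the systematic change of neighbourhood size: small perturbations are tried first, and only when they fail does the search escalate to larger, more disruptive moves.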
Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information
We consider variants of trust-region and cubic regularization methods for non-convex optimization in which the Hessian matrix is approximated. Under mild conditions on the inexact Hessian, and using approximate solutions of the corresponding sub-problems, we provide iteration complexity bounds for achieving ε-approximate second-order optimality, which are shown to be tight. Our Hessian approximation conditions constitute a major relaxation over the existing ones in the literature. Consequently, we are able to show that such mild conditions allow for the construction of the approximate Hessian through various random sampling methods. In this light, we consider the canonical problem of finite-sum minimization, provide appropriate uniform and non-uniform sub-sampling strategies to construct such Hessian approximations, and obtain optimal iteration complexity for the corresponding sub-sampled trust-region and cubic regularization methods.
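The uniform sub-sampling idea for finite-sum minimization can be sketched as follows. This is an illustrative toy, not the paper's exact scheme: `hess_i` is a hypothetical callback returning the Hessian of a single component f_i at the current point.

```python
import numpy as np

def subsampled_hessian(hess_i, n, sample_size, rng=None):
    """Approximate the Hessian of f(x) = (1/n) * sum_i f_i(x) by
    averaging component Hessians over a uniform random sample
    (drawn with replacement). Illustrative sketch only."""
    rng = rng or np.random.default_rng()
    idx = rng.integers(0, n, size=sample_size)  # uniform indices in [0, n)
    return sum(hess_i(i) for i in idx) / sample_size
```

The resulting matrix is an unbiased estimator of the full Hessian, and the sample size controls how tightly the approximation conditions of such methods are met.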
- …