10,280 research outputs found
An Alternating Trust Region Algorithm for Distributed Linearly Constrained Nonlinear Programs, Application to the AC Optimal Power Flow
A novel trust region method for solving linearly constrained nonlinear
programs is presented. The proposed technique is amenable to a distributed
implementation, as its salient ingredient is an alternating projected gradient
sweep in place of the Cauchy point computation. It is proven that the algorithm
yields a sequence that globally converges to a critical point. As a result of
some changes to the standard trust region method, namely a proximal
regularisation of the trust region subproblem, it is shown that the local
convergence rate is linear with an arbitrarily small ratio. Thus, convergence
is locally almost superlinear, under standard regularity assumptions. The
proposed method is successfully applied to compute local solutions to
alternating current optimal power flow problems in transmission and
distribution networks. Moreover, the new mechanism for computing a Cauchy point
compares favourably with the standard projected search in terms of its activity
detection properties.
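To make the key ingredient concrete, here is a minimal sketch (not the paper's exact algorithm) of an alternating projected gradient sweep on a proximally regularised trust-region model. It assumes simple box constraints in place of general linear constraints and an infinity-norm trust region, so both projections reduce to coordinate-wise clipping; all names, defaults, and the fixed step size are illustrative assumptions.

```python
import numpy as np

def projected_gradient_sweep(x, grad, lb, ub, delta,
                             rho=1e-3, n_sweeps=2, alpha=0.5):
    """Projected gradient sweep used in place of a Cauchy point (sketch)."""
    d = np.zeros_like(x)
    for _ in range(n_sweeps):
        # Gradient of the proximally regularised model  g.d + rho*||d||^2
        model_grad = grad + 2.0 * rho * d
        # Projected gradient step; projection onto the box is a clip
        d = np.clip(x + d - alpha * model_grad, lb, ub) - x
        # Projection onto the infinity-norm trust region ||d|| <= delta
        d = np.clip(d, -delta, delta)
    return d
```

Because each clip acts coordinate-wise, the sweep decomposes across variables, which hints at why a distributed implementation is natural.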
Numerical Analysis
Acknowledgements: This article will appear in the forthcoming Princeton Companion to Mathematics, edited by Timothy Gowers with June Barrow-Green, to be published by Princeton University Press.

In preparing this essay I have benefitted from the advice of many colleagues who corrected a number of errors of fact and emphasis. I have not always followed their advice, however, preferring, as one friend put it, to "put my head above the parapet". So I must take full responsibility for errors and omissions here.

With thanks to: Aurelio Arranz, Alexander Barnett, Carl de Boor, David Bindel, Jean-Marc Blanc, Mike Bochev, Folkmar Bornemann, Richard Brent, Martin Campbell-Kelly, Sam Clark, Tim Davis, Iain Duff, Stan Eisenstat, Don Estep, Janice Giudice, Gene Golub, Nick Gould, Tim Gowers, Anne Greenbaum, Leslie Greengard, Martin Gutknecht, Raphael Hauser, Des Higham, Nick Higham, Ilse Ipsen, Arieh Iserles, David Kincaid, Louis Komzsik, David Knezevic, Dirk Laurie, Randy LeVeque, Bill Morton, John C Nash, Michael Overton, Yoshio Oyanagi, Beresford Parlett, Linda Petzold, Bill Phillips, Mike Powell, Alex Prideaux, Siegfried Rump, Thomas Schmelzer, Thomas Sonar, Hans Stetter, Gil Strang, Endre SĂĽli, Defeng Sun, Mike Sussman, Daniel Szyld, Garry Tee, Dmitry Vasilyev, Andy Wathen, Margaret Wright and Steve Wright.
Bundle-based pruning in the max-plus curse of dimensionality free method
Recently, a new class of techniques, termed max-plus curse-of-dimensionality-free
methods, has been developed to solve nonlinear optimal control problems. In
these methods the discretization of the state space is avoided
by using a max-plus basis expansion of the value function. This requires
storing only the coefficients of the basis functions used for representation.
However, the number of basis functions grows exponentially with respect to the
number of time steps of propagation to the time horizon of the control problem.
This so-called "curse of complexity" can be managed by applying a pruning
procedure which selects the subset of basis functions that contribute most to
the approximation of the value function. The pruning procedures described thus
far in the literature rely on the solution of a sequence of high dimensional
optimization problems which can become computationally expensive.
In this paper we show that if the max-plus basis functions are linear and the
region of interest in state space is convex, the pruning problem can be
efficiently solved by the bundle method. This approach, combining the bundle
method with semidefinite formulations, is applied to the quantum gate synthesis
problem, in which the state space is the special unitary group (which is
non-convex). This is based on the observation that the convexification of the
unitary group leads to an exact relaxation. The results are studied and
validated via examples.
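As a rough illustration of the pruning idea (the bundle-method machinery itself is beyond a short sketch), the following assumes linear max-plus basis functions and keeps those that attain the maximum most often over sample points drawn from a convex region; this frequency heuristic, and all names, are illustrative stand-ins for the paper's exact pruning criterion.

```python
import numpy as np

def maxplus_eval(A, b, X):
    # V(x) = max_i (a_i . x + b_i), evaluated at each row of X
    return (X @ A.T + b).max(axis=1)

def prune_by_contribution(A, b, X, keep):
    # Keep the `keep` affine functions that attain the max most often
    winners = (X @ A.T + b).argmax(axis=1)
    counts = np.bincount(winners, minlength=len(b))
    idx = np.argsort(counts)[::-1][:keep]
    return A[idx], b[idx]

# Example: 200 random affine functions on the unit square, pruned to 20
rng = np.random.default_rng(0)
A, b = rng.normal(size=(200, 2)), rng.normal(size=200)
X = rng.uniform(0.0, 1.0, size=(5000, 2))
A_p, b_p = prune_by_contribution(A, b, X, keep=20)
max_err = np.abs(maxplus_eval(A, b, X) - maxplus_eval(A_p, b_p, X)).max()
```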
Sequential Convex Programming Methods for Solving Nonlinear Optimization Problems with DC Constraints
This paper investigates the relation between sequential convex programming
(SCP) as, e.g., defined in [24] and DC (difference of two convex functions)
programming. We first present an SCP algorithm for solving nonlinear
optimization problems with DC constraints and prove its convergence. Then we
combine the proposed algorithm with a relaxation technique to handle
inconsistent linearizations. Numerical tests are performed to investigate the
behaviour of the class of algorithms.
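For intuition, here is a hedged sketch of one flavour of SCP iteration on a toy DC-constrained problem (the instance and the solver choice are illustrative, not the paper's): the convex part u of the constraint u(x) - v(x) <= 0 is kept, and v is linearised at the current iterate, so every subproblem is convex.

```python
import numpy as np
from scipy.optimize import minimize

# Toy DC instance: min ||x - c||^2  s.t.  1 - ||x||^2 <= 0,
# i.e. u(x) = 1 (convex) and v(x) = ||x||^2 (convex).
c = np.array([0.3, 0.2])
x = np.array([1.0, 1.0])                 # feasible starting point
for _ in range(20):
    xk = x.copy()
    # Linearise v at xk: v(x) ~ ||xk||^2 + 2 xk.(x - xk); the
    # convexified constraint is linear (scipy wants fun(x) >= 0)
    lin = {"type": "ineq",
           "fun": lambda x, xk=xk: xk @ xk + 2.0 * xk @ (x - xk) - 1.0}
    x = minimize(lambda x: np.sum((x - c) ** 2), xk, constraints=[lin]).x
    if np.linalg.norm(x - xk) < 1e-8:
        break
# The iterates converge to the projection of c onto the unit sphere.
```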
A decomposition procedure based on approximate Newton directions
The efficient solution of large-scale linear and nonlinear optimization problems may require exploiting any special structure present in them. We describe and analyze some cases in which this special structure can be used at very little cost to obtain search directions from decomposed subproblems. We also study how to correct these directions using (decomposable) preconditioned conjugate gradient methods to ensure local convergence in all cases. The choice of appropriate preconditioners follows naturally from the structure of the problem. Finally, we conduct computational experiments to compare the resulting procedures with direct methods, as well as to study the impact of different preconditioner choices.
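The correction step described above can be pictured with a short sketch: conjugate gradients preconditioned by a block-Jacobi (block-diagonal) preconditioner whose blocks mirror the decomposition. The equal-size partitioning and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def block_jacobi_pcg(A, b, block_size, tol=1e-10, maxit=500):
    n = len(b)
    # Factor each diagonal block once; applying M^{-1} then decomposes
    starts = range(0, n, block_size)
    blocks = [np.linalg.inv(A[i:i + block_size, i:i + block_size])
              for i in starts]
    def apply_Minv(r):
        return np.concatenate([Bi @ r[i:i + Bi.shape[0]]
                               for Bi, i in zip(blocks, starts)])
    x = np.zeros(n)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x
```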
Large Scale Computational Problems in Numerical Optimization
Our work under this support broadly falls into five categories: automatic differentiation, sparsity, constraints, parallel computation, and applications.

Automatic Differentiation (AD): We developed strong practical methods for computing the sparse Jacobian and Hessian matrices which arise frequently in large scale optimization problems [10,35]. In addition, we developed a novel view of "structure" in applied problems, along with AD techniques that allowed for the efficient application of sparse AD techniques to dense, but structured, problems. Our AD work included the development of freely available MATLAB AD software.

Sparsity: We developed new, effective, and practical techniques for exploiting sparsity when solving a variety of optimization problems, including bound constrained problems, robust regression problems, the null space problem, and sparse orthogonal factorization. Our sparsity work included the development of freely available and published software [38,39].

Constraints: Effectively handling constraints in large scale optimization remains a challenge. We developed a number of new approaches to constrained problems, with emphasis on trust region methodologies.

Parallel Computation: Our work included the development of specifically parallel techniques for the linear algebra tasks underpinning optimization algorithms. We contributed to the nonlinear least-squares problem, nonlinear equations, triangular systems, orthogonalization, and linear programming.

Applications: Our optimization work is broadly applicable across numerous application domains. Nevertheless, we have worked in several specific application areas, including molecular conformation, molecular energy minimization, computational finance, and bone remodeling.
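As a generic illustration of forward-mode automatic differentiation (a textbook sketch, unrelated to the MATLAB software mentioned above), the following dual-number class extracts one Jacobian column per derivative seed; sparse variants amortise this by propagating several structurally independent seeds at once via coloring.

```python
class Dual:
    """Forward-mode AD value: f is the value, d the derivative seed."""
    def __init__(self, f, d=0.0):
        self.f, self.d = f, d
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.f + o.f, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.f * o.f, self.d * o.f + self.f * o.d)  # product rule
    __rmul__ = __mul__

def jacobian_column(F, x, j):
    # Seed a unit derivative on x_j to obtain column j of the Jacobian
    duals = [Dual(v, 1.0 if k == j else 0.0) for k, v in enumerate(x)]
    return [y.d for y in F(duals)]

# Example: F(x) = (x0*x1, x0 + 3*x1); dF/dx1 at (2, 5) is (2, 3)
col = jacobian_column(lambda x: [x[0] * x[1], x[0] + 3 * x[1]], [2.0, 5.0], 1)
```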