
    An investigation of new methods for estimating parameter sensitivities

    The proposed method for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.
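
    The abstract does not reproduce the differencing formula itself; purely as background, the sketch below shows the plain re-solve-and-difference route to optimum sensitivities that such methods aim to improve on. The example problem, the SciPy solver choice, and the perturbation size are illustrative assumptions, not taken from the paper.

        # Minimal sketch (assumed setup, not the paper's RQP-based method):
        # re-solve a small constrained problem at a perturbed parameter value and
        # approximate the optimum sensitivity dx*/dp by a forward difference.
        import numpy as np
        from scipy.optimize import minimize

        def solve(p):
            """Solve min ||x - (1, 2)||^2  s.t.  x[0] + x[1] >= p; return x*."""
            objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
            constraint = {"type": "ineq", "fun": lambda x: x[0] + x[1] - p}
            return minimize(objective, x0=np.zeros(2), method="SLSQP",
                            constraints=[constraint]).x

        p0, dp = 4.0, 1e-4                            # nominal parameter and perturbation
        x_nom = solve(p0)
        sensitivity = (solve(p0 + dp) - x_nom) / dp   # forward-difference estimate of dx*/dp
        print("x* at p0:", x_nom, " dx*/dp ~", sensitivity)

    For this toy problem the constraint is active at the optimum, so each component of dx*/dp comes out near 0.5; methods like the one in the abstract try to obtain the same information from quantities already available inside the optimizer instead of re-solving.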

    Adjoint-based predictor-corrector sequential convex programming for parametric nonlinear optimization

    This paper proposes an algorithmic framework for solving parametric optimization problems, which we call adjoint-based predictor-corrector sequential convex programming. After presenting the algorithm, we prove a contraction estimate that guarantees the tracking performance of the algorithm. Two variants of this algorithm are investigated. The first can be used to solve nonlinear programming problems, while the second is aimed at online parametric nonlinear programming problems. The local convergence of these variants is proved. An application to a large-scale benchmark problem that originates from nonlinear model predictive control of a hydro power plant is implemented to examine the performance of the algorithms.
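
    As a side note on the tracking idea, the toy sketch below (a generic predictor-corrector illustration, not the adjoint-based SCP scheme of the paper) applies a single Newton-type corrector step per parameter update to an unconstrained parametric problem, so the iterate follows x*(p) without ever being re-solved to convergence. The model problem and parameter trajectory are hypothetical.

        # Track x*(p) = argmin_x f(x, p) with f(x, p) = sum(exp(x)) - p @ x,
        # whose exact solution is x*(p) = log(p); one corrector step per update.
        import numpy as np

        def grad(x, p):                 # gradient of f(x, p)
            return np.exp(x) - p

        def hess(x):                    # Hessian of f (diagonal here)
            return np.diag(np.exp(x))

        x = np.zeros(2)                                      # iterate tracking x*(p)
        for t in range(10):
            p = (1.0 + 0.5 * np.sin(0.2 * t)) * np.ones(2)   # slowly varying parameter, > 0
            x = x - np.linalg.solve(hess(x), grad(x, p))     # single Newton corrector step
            print(f"t={t}: tracking error {np.linalg.norm(x - np.log(p)):.2e}")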

    Pulsar Algorithms: A Class of Coarse-Grain Parallel Nonlinear Optimization Algorithms

    Parallel architectures of modern computers, formed of processors with high computing power, motivate the search for new approaches to basic computational algorithms. Another motivating force for the parallelization of algorithms has been the need to solve very large-scale or complex problems. However, the complexity of a mathematical programming problem is not necessarily due to its scale or dimension; thus, we should also search for new parallel computation approaches to problems that might be of moderate size but are difficult for other reasons. One such approach is coarse-grained parallelization based on a parametric imbedding of an algorithm and on an allocation of the resulting algorithmic phases and variants to many processors, with suitable coordination of the data obtained in this way. Each processor then performs a phase of the algorithm -- a substantial computational task, which mitigates the problems related to data transmission and coordination. The paper presents a class of such coarse-grained parallel algorithms for unconstrained nonlinear optimization, called pulsar algorithms since the approximations of an optimal solution alternately increase and reduce their spread in subsequent iterations. The main algorithmic phase of an algorithm of this class might be either a directional search or a restricted step determination in a trust region method. This class is exemplified by a modified, parallel Newton-type algorithm and a parallel rank-one variable metric algorithm. In the latter case, a consistent approximation of the inverse of the Hessian matrix, based on data produced in parallel, is available at each iteration, while the known deficiencies of a rank-one variable metric are suppressed by the parallel implementation. Additionally, pulsar algorithms might use a parametric imbedding into a family of regularized problems in order to counteract possible effects of ill-conditioning. Such parallel algorithms result not only in increased speed of solving a problem but also in increased robustness with respect to various sources of problem complexity. Necessary theoretical foundations, outlines of various variants of the parallel algorithms, and the results of preliminary tests are presented.
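
    To make the coarse-grained idea concrete, the sketch below (a generic illustration under assumed choices, not the pulsar algorithms themselves) distributes a handful of trial step lengths along a descent direction to worker processes, each carrying out one substantial function evaluation, and keeps the best resulting point. The test function, step grid, and worker count are arbitrary.

        # Coarse-grained parallel line-search phase: every worker evaluates one
        # trial point x + alpha * d; the controller keeps the best candidate.
        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def f(x):                                   # Rosenbrock test function
            return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

        def trial(args):                            # one coarse task per worker
            x, d, alpha = args
            y = x + alpha * d
            return f(y), tuple(y)

        if __name__ == "__main__":
            x = np.array([-1.2, 1.0])
            with ProcessPoolExecutor(max_workers=4) as pool:
                for _ in range(30):
                    g = np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                                  200 * (x[1] - x[0] ** 2)])      # analytic gradient
                    d = -g / np.linalg.norm(g)                    # descent direction
                    alphas = [2.0 ** (-k) for k in range(8)]      # spread of trial steps
                    results = list(pool.map(trial, [(x, d, a) for a in alphas]))
                    results.append((f(x), tuple(x)))              # never accept a worse point
                    _, best = min(results, key=lambda r: r[0])
                    x = np.array(best)
            print("best point found:", x, "f =", f(x))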

    An Implicit-Function Theorem for B-Differentiable Functions

    A function from one normed linear space to another is said to be Bouligand differentiable (B-differentiable) at a point if it is directionally differentiable there in every direction, and if the directional derivative has a certain uniformity property. This is a weakening of the classical idea of Fréchet (F-) differentiability, and it is useful in dealing with optimization problems and in other situations in which F-differentiability may be too strong. In this paper we introduce a concept of strong B-derivative, and we employ this idea to prove an implicit-function theorem for B-differentiable functions. This theorem provides the same kinds of information as does the classical implicit-function theorem, but with B-differentiability in place of F-differentiability. Therefore it is applicable to a considerably wider class of functions than is the classical theorem.
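
    For readers unfamiliar with the notion, a common way of writing down B-differentiability (notation generic, not quoted from the paper) is the following, where the "uniformity property" mentioned in the abstract corresponds to the little-o condition:

        % A standard formulation of Bouligand (B-) differentiability,
        % stated here only as background.
        A function $f\colon X \to Y$ between normed linear spaces is B-differentiable
        at $x_0$ if there exists a positively homogeneous map $Bf(x_0)\colon X \to Y$
        such that
        \[
          f(x_0 + h) = f(x_0) + Bf(x_0)(h) + o(\lVert h \rVert)
          \qquad \text{as } h \to 0 .
        \]
        When $Bf(x_0)$ is additionally linear, this reduces to the classical
        Fr\'echet derivative.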