
    An Adaptive Method for Minimizing a Sum of Squares of Nonlinear Functions

    The Gauss-Newton and Levenberg-Marquardt algorithms for solving nonlinear least squares problems, minimize F(x) = sum_{i=1}^m f_i(x)^2 over x in R^n, are both based upon the premise that one term in the Hessian of F(x) dominates its other terms, and that the Hessian may be approximated by this dominant term J^T J, where J_ij = ∂f_i/∂x_j. We are motivated here by the need for an algorithm which works well when applied to problems for which this premise is substantially violated, and is yet able to take advantage of situations where the premise holds. We describe and justify a method for approximating the Hessian of F(x) which uses a convex combination of J^T J and a matrix obtained by making quasi-Newton updates. In order to evaluate the usefulness of this idea, we construct a nonlinear least squares algorithm which uses this Hessian approximation, and report test results obtained by applying it to a set of test problems. A merit of our approach is that it demonstrates how a single adaptive algorithm can be used to efficiently solve unconstrained nonlinear optimization problems (whose Hessians have no particular structure), as well as both small-residual and large-residual nonlinear least squares problems. Our paper can also be looked upon as an investigation, for one problem area, of the following more general question: how can one combine two different Hessian approximations (or model functions) which are simultaneously available? The technique suggested here may thus be more widely applicable and may be of use, for example, when minimizing functions which are only partly composed of sums of squares, such as those arising in penalty function methods.
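The central idea of the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' algorithm: the test problem (the Rosenbrock function written as a sum of squares), the fixed damping, and the way the weight phi is supplied are all assumptions; the paper's adaptive rule for choosing phi and updating the quasi-Newton matrix B is not reproduced here.

```python
import numpy as np

# Sketch: approximate the Hessian of F(x) = sum_i f_i(x)^2 by a convex
# combination phi * J^T J + (1 - phi) * B, where B is a quasi-Newton
# approximation to the second-order term. phi = 1 recovers Gauss-Newton.

def residuals(x):
    # Illustrative small least-squares test problem (Rosenbrock form).
    return np.array([x[0] - 1.0, 10.0 * (x[1] - x[0] ** 2)])

def jacobian(x):
    # J_ij = d f_i / d x_j for the residuals above.
    return np.array([[1.0, 0.0],
                     [-20.0 * x[0], 10.0]])

def grad_F(x):
    # Gradient of F(x) = sum_i f_i(x)^2 is 2 J^T f.
    return 2.0 * jacobian(x).T @ residuals(x)

def adaptive_step(x, B, phi):
    """One Newton-type step using H = phi * J^T J + (1 - phi) * B."""
    J = jacobian(x)
    H = phi * (J.T @ J) + (1.0 - phi) * B
    H += 1e-8 * np.eye(len(x))          # mild damping keeps H invertible
    return x - np.linalg.solve(2.0 * H, grad_F(x))
```

With phi = 1.0 and any positive definite B, iterating `adaptive_step` from a standard starting point drives this small-residual problem to its minimizer (1, 1); a value of phi closer to 0 would lean on the quasi-Newton matrix instead, which is the large-residual regime the abstract targets.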

    Analogues of Dixon's and Powell's Theorems for Unconstrained Minimization with Inexact Line Searches

    By modifying the way in which search directions are defined, we show how to relax the restrictive assumption in the theorems of Dixon and Powell that line searches must be exact. We show also that the BFGS algorithm modified in this way is equivalent to the three-term-recurrence (TTR) method for quadratic functions.
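The quadratic-function setting the abstract refers to can be illustrated numerically. The sketch below is an assumption-laden illustration, not the paper's construction: it runs standard BFGS with *exact* line searches on a strictly convex quadratic, where the method behaves like a conjugate-direction (three-term-recurrence) scheme and terminates in at most n iterations. The test matrix A and right-hand side b are invented data.

```python
import numpy as np

# BFGS with exact line searches on 0.5 x^T A x - b^T x. On a quadratic,
# the exact step length has the closed form alpha = -(g^T d)/(d^T A d),
# and the iteration terminates in at most n = len(b) steps.

def bfgs_quadratic(A, b, x0, tol=1e-8, max_iter=50):
    n = len(b)
    H = np.eye(n)                        # inverse-Hessian approximation
    x = x0.astype(float).copy()
    g = A @ x - b                        # gradient of the quadratic
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k                  # k = number of steps taken
        d = -H @ g                       # quasi-Newton search direction
        alpha = -(g @ d) / (d @ A @ d)   # exact line search step
        s = alpha * d
        x_new = x + s
        g_new = A @ x_new - b
        y = g_new - g
        rho = 1.0 / (y @ s)
        I = np.eye(n)
        # Standard BFGS update of the inverse-Hessian approximation.
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x, max_iter
```

On a 3x3 symmetric positive definite system this reaches the minimizer (the solution of A x = b) in about n steps, which is the quadratic-termination behaviour the equivalence with the TTR method rests on.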

    Design and Implementation of a Stochastic Programming Optimizer with Recourse and Tenders

    This paper serves two purposes, to which we give equal emphasis. First, it describes an optimization system for solving large-scale stochastic linear programs with simple (i.e. decision-free in the second stage) recourse and stochastic right-hand-side elements. Second, it is a study of the means whereby large-scale Mathematical Programming Systems may be readily extended to handle certain forms of uncertainty, through post-optimal options akin to sensitivity or parametric analysis, which we term "recourse analysis". This latter theme (implicit throughout the paper) is explored in a proselytizing manner in the concluding section.

    Implementation Aids for Optimization Algorithms that Solve Sequences of Linear Programs by the Revised Simplex Method

    We describe a collection of subroutines designed (a) to facilitate the implementation of algorithms that are based upon linear programming, and (b) to serve as a tutorial on the development of such implementations. We make this collection the basis for a discussion of some of the broader issues of software development.

    Variants on Dantzig-Wolfe Decomposition with Applications to Multistage Problems

    The initial representation of an LP problem to which the Dantzig-Wolfe decomposition procedure is applied is of the essence. We study this here and, in particular, consider two transformations of the problem, obtained by introducing suitable linking rows and variables. We study the application of the Dantzig-Wolfe procedure to these new representations of the original problem and the relationship to previously proposed algorithms. Advantages and disadvantages from a computational viewpoint are discussed. Finally, we develop a decomposition algorithm based upon these ideas for solving multistage staircase-structured LP problems.

    Algorithms Based upon Generalized Linear Programming for Stochastic Programs with Recourse

    In this paper, the author discusses solution algorithms for a particular form of two-stage stochastic linear programs with recourse. The algorithms considered are based upon the generalized linear programming method of Wolfe. The author first gives an alternative formulation of the original problem and uses this to examine the relation between tenders and certainty equivalents. He then considers problems with simple recourse, discussing algorithms for two cases: (a) when the distribution is discrete and probabilities are known explicitly; (b) when the probability distribution is other than discrete, or when it is known only implicitly through some simulation model. The latter case is especially useful because it makes possible the transition to general recourse. Some possible solution strategies based upon generalized programming for general recourse problems are then discussed. This paper is a product of the Adaptation and Optimization Project within the System and Decision Sciences Program.
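The simple-recourse objective for case (a), a discrete distribution with explicitly known probabilities, can be written down directly. The sketch below uses our own notation, not the paper's: the tender chi, the shortage and surplus cost rates q_plus and q_minus, and the scenario data are all illustrative assumptions.

```python
# Expected second-stage (recourse) cost of a tender chi when demand xi
# takes value xi_k with probability p_k: shortage is penalized at q_plus
# per unit, surplus at q_minus per unit.

def expected_recourse(chi, scenarios, q_plus=2.0, q_minus=1.0):
    """psi(chi) = sum_k p_k * (q_plus * max(xi_k - chi, 0)
                               + q_minus * max(chi - xi_k, 0))."""
    return sum(p * (q_plus * max(xi - chi, 0.0) + q_minus * max(chi - xi, 0.0))
               for xi, p in scenarios)

# Example: demand is 4, 6 or 10 with probabilities 0.5, 0.3, 0.2.
scenarios = [(4.0, 0.5), (6.0, 0.3), (10.0, 0.2)]
```

Since psi is piecewise linear and convex in chi for a discrete distribution, it slots naturally into the generalized (column-generation) linear programming framework the abstract describes.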

    Combining Generalized Programming and Sampling Techniques for Stochastic Programs with Recourse

    This paper deals with an application of generalized linear programming techniques to stochastic programming problems, particularly those with recourse. The major points requiring clarification were whether one can use estimates of the objective function instead of exact values, and approximate solutions of the dual subproblem instead of exact ones. Conditions are presented under which estimates and approximate solutions can be used while convergence is still maintained. The paper is a part of the effort on the development of stochastic optimization techniques at the Adaptation and Optimization Project of the System and Decision Sciences Program.
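The "estimates instead of exact values" idea amounts to replacing an expectation by a sample average. The sketch below is a minimal illustration under our own assumptions (the function names, the shortage cost q, and the exponential demand model are invented); it is not the paper's estimator or its convergence conditions.

```python
import random

# Sample-average estimate of the expected recourse cost E[q * max(xi - chi, 0)],
# used in place of the exact expectation when the distribution is known only
# through a sampling (simulation) model.

def sampled_recourse(chi, sample_xi, n_samples, q=2.0, seed=0):
    rng = random.Random(seed)           # fixed seed for reproducibility
    draws = [sample_xi(rng) for _ in range(n_samples)]
    return sum(q * max(xi - chi, 0.0) for xi in draws) / n_samples
```

The estimate's standard error shrinks like 1/sqrt(n_samples), which is what makes it plausible that a generalized-programming scheme can tolerate such estimates, provided conditions like those the paper establishes hold.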

    Nonlinear Programming Techniques Applied to Stochastic Programs with Recourse

    Stochastic convex programs with recourse can equivalently be formulated as nonlinear convex programming problems. These possess some rather marked characteristics. Firstly, the proportion of linear to nonlinear variables is often large and leads to a natural partition of the constraints and objective. Secondly, the objective function corresponding to the nonlinear variables can vary over a wide range of possibilities; under appropriate assumptions about the underlying stochastic program it could be, for example, a smooth function, a separable polyhedral function, or a nonsmooth function whose values and gradients are very expensive to compute. Thirdly, the problems are often large-scale and linearly constrained, with special structure in the constraints. This paper is a comprehensive study of solution methods for stochastic programs with recourse viewed from the above standpoint. We describe a number of promising algorithmic approaches that are derived from methods of nonlinear programming. The discussion is a fairly general one, but the solution of two classes of stochastic programs with recourse is of particular interest. The first corresponds to stochastic linear programs with simple recourse and stochastic right-hand-side elements with a given discrete probability distribution. The second corresponds to stochastic linear programs with complete recourse and stochastic right-hand-side vectors defined by a limited number of scenarios, each with given probability. A repeated theme is the use of the MINOS code of Murtagh and Saunders as a basis for developing suitable implementations.

    Algorithms for Stochastic Programs: The Case for Nonstochastic Tenders

    We consider solution strategies for stochastic programs whose deterministic equivalent programs take on one specific form. We suggest algorithms based upon (i) extensions of the revised simplex method, (ii) inner approximations (generalized programming techniques), and (iii) outer approximations (min-max strategies). We briefly discuss implementation and associated software considerations.

    An Alternative Variational Principle for Variable Metric Updating

    We describe a variational principle based upon minimizing the extent to which the inverse Hessian approximation, say H, violates the quasi-Newton relation on the step immediately prior to the step used to construct H. It suggests use of the BFGS update.
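The quasi-Newton relation the abstract refers to is the secant condition H_new y = s, where s is the most recent step and y the corresponding change in gradient. The check below is an illustration of the standard BFGS inverse update (which the abstract says the variational principle singles out), not of the paper's derivation; the matrices and vectors are invented test data.

```python
import numpy as np

# BFGS update of the inverse-Hessian approximation H. By construction,
# the updated matrix satisfies the quasi-Newton (secant) relation
# H_new @ y == s on the most recent (s, y) pair, and stays symmetric.

def bfgs_inverse_update(H, s, y):
    rho = 1.0 / (y @ s)                 # requires the curvature condition y^T s > 0
    I = np.eye(len(s))
    return (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
           + rho * np.outer(s, s)
```

Verifying H_new @ y = s for arbitrary admissible data is a quick way to see that the update enforces the relation exactly on the current step, while the paper's principle concerns how badly the *previous* step's relation is violated.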