
    On affine scaling inexact dogleg methods for bound-constrained nonlinear systems

    Within the framework of affine scaling trust-region methods for bound-constrained problems, we discuss the use of an inexact dogleg method as a tool for simultaneously handling the trust region and the bound constraints while seeking an approximate minimizer of the model. Focusing on bound-constrained systems of nonlinear equations, we describe an inexact affine scaling method for large-scale problems that employs the inexact dogleg procedure. Global convergence results are established without any Lipschitz assumption on the Jacobian matrix, and locally fast convergence is shown under standard assumptions. The convergence analysis does not specify the scaling matrix used to handle the bounds, so a rather general class of scaling matrices is allowed in actual algorithms. Numerical results showing the performance of the method are also given.
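
    To make the dogleg idea concrete, here is a minimal sketch of the classical (unconstrained) dogleg step that the paper's inexact, affine-scaled variant builds on. The bound handling, scaling matrices, and inexactness control described in the abstract are omitted, and all names are illustrative, not the authors' code.

    import numpy as np

    def dogleg_step(J, F, delta):
        # Classical dogleg step for min 0.5*||F + J p||^2 subject to ||p|| <= delta.
        # The paper's method additionally applies affine scaling for the bounds
        # and allows inexact subproblem solves; none of that is reproduced here.
        g = J.T @ F                                   # model gradient at p = 0
        if np.linalg.norm(g) == 0.0:
            return np.zeros_like(g)                   # already stationary
        p_gn = np.linalg.lstsq(J, -F, rcond=None)[0]  # (Gauss-)Newton step
        if np.linalg.norm(p_gn) <= delta:
            return p_gn                               # full step fits in the region
        t = (g @ g) / np.linalg.norm(J @ g) ** 2      # exact line search along -g
        p_sd = -t * g                                 # Cauchy (steepest-descent) step
        if np.linalg.norm(p_sd) >= delta:
            return -(delta / np.linalg.norm(g)) * g   # truncated steepest descent
        d = p_gn - p_sd                               # leg toward the Newton step
        a, b, c = d @ d, 2.0 * (p_sd @ d), p_sd @ p_sd - delta**2
        tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        return p_sd + tau * d                         # point where the leg exits the region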

    Constrained dogleg methods for nonlinear systems with simple bounds

    We focus on the numerical solution of medium-scale bound-constrained systems of nonlinear equations. In this context, we consider an affine-scaling trust-region approach that allows great flexibility in choosing the scaling matrix used to handle the bounds. The method is based on a dogleg procedure tailored to constrained problems and is therefore named the Constrained Dogleg method. It generates only strictly feasible iterates. Global and locally fast convergence is ensured under standard assumptions. The method has been implemented in the Matlab solver CoDoSol, which supports several diagonal scalings in both spherical and elliptical trust-region frameworks. We give a brief account of CoDoSol and report on the computational experience gained on a number of representative test problems.
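
    CoDoSol itself is a Matlab solver and its interface is not reproduced here; the sketch below only illustrates, in Python, the strict-feasibility safeguard the abstract mentions: a trial step is damped so that the next iterate stays strictly inside the bounds. The function name and the damping factor theta are illustrative assumptions.

    import numpy as np

    def clip_to_interior(x, p, lb, ub, theta=0.995):
        # Damp a trial step p so that x + alpha*p stays strictly inside [lb, ub],
        # mirroring the strictly feasible iterates the abstract describes.
        # theta < 1 is the usual fraction-to-the-boundary factor (an assumption here).
        alpha = 1.0
        for i in range(len(x)):
            if p[i] > 0 and np.isfinite(ub[i]):
                alpha = min(alpha, theta * (ub[i] - x[i]) / p[i])
            elif p[i] < 0 and np.isfinite(lb[i]):
                alpha = min(alpha, theta * (lb[i] - x[i]) / p[i])
        return x + alpha * p

    # Example: a step aimed past the upper bound is shortened so the
    # iterate stays strictly interior (result here is 0.9975 < 1).
    x_new = clip_to_interior(np.array([0.5]), np.array([2.0]),
                             lb=np.array([0.0]), ub=np.array([1.0]))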

    Selected Topics in Continuous Optimization and Nonsmooth Systems (Morceaux Choisis en Optimisation Continue et sur les Systèmes non Lisses)

    This Master's-level course starts with the presentation of the optimality conditions of an optimization problem, described in a rather abstract manner so that they can be useful for dealing with a large variety of problems. Next, the course describes and analyzes various advanced algorithms for solving optimization problems (nonsmooth methods, linearization methods, proximal and augmented Lagrangian methods, interior point methods) and shows how they can be used to solve a few classical optimization problems (linear optimization, convex quadratic optimization, semidefinite optimization (SDO), nonlinear optimization). Along the way, various tools from convex and nonsmooth analysis are presented. Everything is set in finite dimension. The goal of the lectures is therefore to consolidate basic knowledge in optimization, on both theoretical and algorithmic aspects.

    Primal-Dual Active-Set Methods for Convex Quadratic Optimization with Applications

    Primal-dual active-set (PDAS) methods are developed for solving quadratic optimization problems (QPs). Such problems arise in their own right in optimal control and statistics, the two applications of interest considered in this dissertation, and as subproblems when solving nonlinear optimization problems. PDAS methods are promising as they possess the same favorable properties as other active-set methods, such as their ability to be warm-started and to obtain highly accurate solutions by explicitly identifying sets of constraints that are active at an optimal solution. However, unlike traditional active-set methods, PDAS methods have convergence guarantees despite making rapid changes in active-set estimates, making them well suited for solving large-scale problems.

    Two PDAS variants are proposed for efficiently solving generally-constrained convex QPs. Both variants ensure global convergence of the iterates by enforcing monotonicity in a measure of progress. In addition to the active-set estimate, a novel uncertain set is introduced into the framework in order to house indices of variables that have been identified as being susceptible to cycling. The introduction of the uncertain set guarantees convergence of the algorithm, and with techniques proposed to keep the set from expanding quickly, the practical performance of the algorithm is shown to be very efficient. Another PDAS variant is proposed for solving certain convex QPs that commonly arise when discretizing optimal control problems. The proposed framework allows inexactness in the subproblem solutions, which can significantly reduce computational cost in large-scale settings. By controlling the level of inexactness, either by exploiting knowledge of an upper bound on the norm of a matrix inverse or by estimating such a value dynamically, the method achieves convergence guarantees and is shown to outperform a method that employs exact solutions computed by direct factorization techniques.

    Finally, turning to applications in statistics, PDAS variants are proposed for solving isotonic regression (IR) and trend filtering (TF) problems. It is shown that PDAS can solve an IR problem with n data points using only O(n) arithmetic operations. Moreover, the method is shown to outperform the state-of-the-art method for solving IR problems, especially when warm-starting is considered. Enhancements to the method are proposed for solving general TF problems, and numerical results are presented to show that PDAS methods are viable for a broad class of such problems.
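
    As a rough illustration of the PDAS idea the dissertation develops, the following sketch runs a standard primal-dual active-set loop on a nonnegativity-constrained convex QP. It omits the dissertation's contributions (the uncertain set, inexact subproblem solves), so it can cycle on hard instances; the function name and update rule are a textbook baseline, not the proposed variants.

    import numpy as np

    def pdas_nonneg_qp(H, c, max_iter=50):
        # Basic primal-dual active-set loop for
        #     min 0.5*x'Hx + c'x   subject to   x >= 0,
        # with H symmetric positive definite. No cycling safeguard is included,
        # so unlike the dissertation's variants this baseline is only
        # guaranteed to converge on well-behaved problems (e.g. H an M-matrix).
        n = len(c)
        active = np.ones(n, dtype=bool)               # start with all bounds active
        x = np.zeros(n)
        z = np.zeros(n)
        for _ in range(max_iter):
            free = ~active
            x = np.zeros(n)                           # active variables sit at the bound
            if free.any():
                x[free] = np.linalg.solve(H[np.ix_(free, free)], -c[free])
            z = np.where(active, H @ x + c, 0.0)      # multipliers for active bounds
            new_active = (x - z) < 0.0                # primal/dual sign violations
            if np.array_equal(new_active, active):
                return x, z                           # KKT conditions hold
            active = new_active
        return x, z                                   # not converged within max_iter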