
    A second derivative SQP method: local convergence

    In [19], we gave global convergence results for a second-derivative SQP method for minimizing the exact ℓ1-merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps intended to improve the efficiency of the algorithm.

    Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix Bk: a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1-penalty function over a sequence of increasing values of the penalty parameter.

    Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, they must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and therefore achieves superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
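    To make the quasi-Newton ingredient concrete, below is a minimal Python/NumPy sketch of the standard limited-memory BFGS two-loop recursion, which applies the inverse of the limited-memory approximation to a gradient. The function name and the convention of discarding non-positive-curvature pairs are assumptions for illustration; the paper defines Bk itself for use in the predictor step, which this sketch does not reproduce.

        import numpy as np

        def lbfgs_direction(grad, s_list, y_list, gamma=1.0):
            # Illustrative sketch, not the authors' implementation.
            # s_list/y_list hold the stored step and gradient-difference
            # pairs, ordered oldest to newest; pairs with s'y <= 0 are
            # assumed discarded at storage time so the implicit
            # approximation stays positive definite.
            rhos = [1.0 / (s @ y) for s, y in zip(s_list, y_list)]
            q = grad.copy()
            alphas = []
            for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
                a = rho * (s @ q)
                alphas.append(a)
                q = q - a * y
            r = gamma * q               # initial inverse Hessian H0 = gamma * I
            for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
                beta = rho * (y @ r)
                r = r + (a - beta) * s
            return -r                   # quasi-Newton descent direction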

    On a scalable nonparametric denoising of time series signals

    Denoising and filtering of time series signals is a problem arising in many areas of computational science. Here we demonstrate how the nonparametric computational methodology of the finite element method of time series analysis with H1 regularization can be extended to the denoising of very long and noisy time series signals. The main computational bottleneck is the inner quadratic programming problem. Analyzing its solvability and exploiting the problem structure, we suggest an adapted version of the spectral projected gradient method (SPG-QP) to solve it. This approach increases the granularity of parallelization, making the proposed methodology highly suitable for graphics processing unit (GPU) computing. We demonstrate the scalability of our open-source implementation, based on PETSc, on the Piz Daint supercomputer of the Swiss National Supercomputing Centre (CSCS) by solving large-scale data denoising problems and comparing its computational scaling and performance to those of standard denoising methods.
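    To sketch the inner solver, the following Python/NumPy fragment shows a spectral projected gradient iteration for a QP of the form min 0.5 x'Ax - b'x over a convex set with an inexpensive Euclidean projection. The monotone Armijo backtracking and the tolerances are simplifying assumptions; the SPG-QP variant in the paper adapts the method to the specific structure of the denoising problem.

        import numpy as np

        def spg_qp(A, b, x0, project, max_iter=500, tol=1e-6):
            # Illustrative sketch: min 0.5 x'Ax - b'x over the set whose
            # Euclidean projection is given by `project`.
            x = project(x0)
            g = A @ x - b
            alpha = 1.0                                  # initial steplength
            for _ in range(max_iter):
                d = project(x - alpha * g) - x           # projected-gradient step
                if np.linalg.norm(d) < tol:
                    break
                f = 0.5 * (x @ (A @ x)) - b @ x
                t = 1.0
                while True:                              # Armijo backtracking
                    x_new = x + t * d
                    f_new = 0.5 * (x_new @ (A @ x_new)) - b @ x_new
                    if f_new <= f + 1e-4 * t * (g @ d) or t < 1e-10:
                        break
                    t *= 0.5
                g_new = A @ x_new - b
                s, y = x_new - x, g_new - g
                sy = s @ y
                alpha = (s @ s) / sy if sy > 1e-12 else 1.0  # Barzilai-Borwein
                x, g = x_new, g_new
            return x

    For instance, simple box constraints could be handled with project = lambda z: np.clip(z, 0.0, 1.0).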

    A Filter Algorithm with Inexact Line Search

    A filter algorithm with inexact line search is proposed for solving nonlinear programming problems. The filter is constructed by pairing the norm of the gradient of the Lagrangian function with the infeasibility measure. Transition to superlinear local convergence is shown for the proposed filter algorithm without second-order correction. Under mild conditions, global convergence can also be derived. Numerical experiments show the efficiency of the algorithm.
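    For readers unfamiliar with filter mechanics, here is a minimal Python sketch of a typical acceptance test and filter update over pairs (h, w), where h measures infeasibility and w is the optimality measure used here (the norm of the gradient of the Lagrangian). The margin parameter and envelope form are standard textbook choices assumed for illustration, not necessarily this paper's exact rule.

        def acceptable(h, w, filter_pairs, gamma=1e-5):
            # Illustrative sketch: a trial point is acceptable if it
            # sufficiently improves on every filter entry in at least
            # one of the two measures.
            return all(h <= (1.0 - gamma) * h_j or w <= w_j - gamma * h_j
                       for h_j, w_j in filter_pairs)

        def add_to_filter(h, w, filter_pairs):
            # Insert (h, w) and discard entries it dominates.
            filter_pairs[:] = [(h_j, w_j) for h_j, w_j in filter_pairs
                               if not (h <= h_j and w <= w_j)]
            filter_pairs.append((h, w))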

    Primal-Dual Active-Set Methods for Convex Quadratic Optimization with Applications

    Primal-dual active-set (PDAS) methods are developed for solving quadratic optimization problems (QPs). Such problems arise in their own right in optimal control and statistics, two applications of interest considered in this dissertation, and as subproblems when solving nonlinear optimization problems. PDAS methods are promising as they possess the same favorable properties as other active-set methods, such as their ability to be warm-started and to obtain highly accurate solutions by explicitly identifying sets of constraints that are active at an optimal solution. However, unlike traditional active-set methods, PDAS methods have convergence guarantees despite making rapid changes in active-set estimates, making them well suited for solving large-scale problems.

    Two PDAS variants are proposed for efficiently solving generally-constrained convex QPs. Both variants ensure global convergence of the iterates by enforcing monotonicity in a measure of progress. Besides maintaining an active-set estimate, a novel uncertain set is introduced into the framework in order to house indices of variables that have been identified as being susceptible to cycling. The introduction of the uncertain set guarantees convergence of the algorithm, and with techniques proposed to keep the set from expanding quickly, the practical performance of the algorithm is shown to be very efficient. Another PDAS variant is proposed for solving certain convex QPs that commonly arise when discretizing optimal control problems. The proposed framework allows inexactness in the subproblem solutions, which can significantly reduce computational cost in large-scale settings. By controlling the level of inexactness, either by exploiting knowledge of an upper bound involving a matrix inverse or by dynamically estimating such a value, the method achieves convergence guarantees and is shown to outperform a method that employs exact solutions computed by direct factorization techniques.

    Finally, turning to applications in statistics, PDAS variants are proposed for solving isotonic regression (IR) and trend filtering (TF) problems. It is shown that PDAS can solve an IR problem with n data points using only O(n) arithmetic operations. Moreover, the method is shown to outperform the state-of-the-art method for solving IR problems, especially when warm-starting is considered. Enhancements to the method are proposed for solving general TF problems, and numerical results are presented to show that PDAS methods are viable for a broad class of such problems.
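    As background for the basic mechanism, below is a minimal Python/NumPy sketch of a textbook primal-dual active-set iteration for the bound-constrained convex QP min 0.5 x'Qx - c'x subject to x >= 0, with Q symmetric positive definite. The dissertation's variants add a monotone progress measure, the uncertain set, general constraints, and inexact subproblem solves, none of which appear in this sketch.

        import numpy as np

        def pdas_bound_qp(Q, c, max_iter=50):
            # Illustrative sketch.  KKT conditions:
            #   Qx - c - z = 0,  x >= 0,  z >= 0,  x_i * z_i = 0.
            n = len(c)
            x, z = np.zeros(n), np.zeros(n)
            active = None
            for _ in range(max_iter):
                new_active = (z - x) > 0        # estimate of {i : x_i = 0}
                if active is not None and np.array_equal(new_active, active):
                    break                       # active set settled: KKT holds
                active, inactive = new_active, ~new_active
                x = np.zeros(n)
                if inactive.any():              # reduced system on free variables
                    Qii = Q[np.ix_(inactive, inactive)]
                    x[inactive] = np.linalg.solve(Qii, c[inactive])
                z = np.zeros(n)
                z[active] = (Q @ x - c)[active] # multipliers for active bounds
            return x, z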