
    Quasi Newton methods for bound constrained problems

    In this thesis, we are concerned with methods for solving large-scale bound constrained optimization problems. Problems of this kind appear in a wide range of applications and play a crucial role in some methods for solving general constrained optimization problems, variational inequalities, and complementarity problems. In the first part, we provide the general mathematical background of optimization theory for bound constrained problems. Then the most useful methods for solving these problems based on the active set strategy are discussed.
    In the second part of this thesis, we introduce a new limited memory quasi-Newton method for bound constrained problems. The new algorithm uses a combination of steepest descent directions and quasi-Newton directions to identify the optimal set of active bound constraints. The quasi-Newton directions are computed using limited memory SR1 matrices and, if needed, by applying regularization. Finally, we present results of numerical experiments showing the relative performance of our algorithm under different parameter settings and in comparison with another algorithm.
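    The projection/steepest-descent phase described above can be sketched in a few lines. The snippet below is not the thesis algorithm (which combines this phase with limited memory SR1 quasi-Newton steps and regularization); it only illustrates how repeated projected steepest-descent steps identify the active bounds. The function name, step size, and toy quadratic objective are all choices made here for illustration.

```python
import numpy as np

def projected_gradient_bounds(grad, x0, lo, hi, step=0.1, iters=200):
    """Minimal projected-gradient sketch for min f(x) s.t. lo <= x <= hi.

    `grad` returns the gradient of f. This shows only the steepest-descent
    phase used to guess the active bounds; a full method would switch to
    quasi-Newton steps on the free variables once the guess stabilizes.
    """
    x = np.clip(x0, lo, hi)
    for _ in range(iters):
        # Gradient step followed by projection back onto the box.
        x = np.clip(x - step * grad(x), lo, hi)
    # Bounds that the iterate sits on form the estimated active set.
    active = (x <= lo) | (x >= hi)
    return x, active

# Toy example: min 0.5*||x - c||^2 with c partly outside the box [0, 1]^3,
# so the first and last bounds end up active.
c = np.array([-0.5, 0.4, 1.7])
x, active = projected_gradient_bounds(lambda x: x - c, np.zeros(3), 0.0, 1.0)
```

Here the minimizer is simply `c` clipped to the box, and the active-set guess flags the two coordinates pinned at their bounds.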

    Nonlinear system modeling based on constrained Volterra series estimates

    A simple nonlinear system modeling algorithm, designed to work with limited \emph{a priori} knowledge and short data records, is examined. It creates an empirical Volterra series-based model of a system using an $l_q$-constrained least squares algorithm with $q \geq 1$. If the system $m(\cdot)$ is a continuous and bounded map with a finite memory no longer than some known $\tau$, then (for a $D$-parameter model and for a number of measurements $N$) the difference between the resulting model of the system and the best possible theoretical one is guaranteed to be of order $\sqrt{N^{-1}\ln D}$, even for $D \geq N$. The performance of models obtained for $q = 1, 1.5$ and $2$ is tested on the Wiener-Hammerstein benchmark system. The results suggest that the models obtained for $q > 1$ are better suited to characterize the nature of the system, while the sparse solutions obtained for $q = 1$ yield smaller error values in terms of input-output behavior.
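    As a rough sketch of the setup, the snippet below builds second-order Volterra regressors and fits them in the $q = 2$ case, written in its Lagrangian (ridge) form; the constrained $q = 1$ and $q = 1.5$ fits of the paper would need an iterative solver instead. The toy system, memory length, and penalty weight are assumptions made here, not taken from the paper.

```python
import numpy as np

def volterra_features(u, memory):
    """Second-order Volterra regressors: row t is
    [1, u_t, ..., u_{t-m+1}, all products u_{t-i} u_{t-j} with i <= j]."""
    rows = []
    for t in range(memory - 1, len(u)):
        lag = u[t - memory + 1 : t + 1][::-1]          # u_t, u_{t-1}, ...
        quad = np.outer(lag, lag)[np.triu_indices(memory)]
        rows.append(np.concatenate(([1.0], lag, quad)))
    return np.asarray(rows)

def fit_q2(Phi, y, lam=1e-3):
    """q = 2 case of the l_q-constrained least squares fit, in its
    Lagrangian (ridge) form: solve (Phi^T Phi + lam I) theta = Phi^T y."""
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ y)

rng = np.random.default_rng(0)
u = rng.standard_normal(400)
y = u[1:] - 0.5 * u[:-1] ** 2          # toy system: y_t = u_t - 0.5 u_{t-1}^2
Phi = volterra_features(u, memory=2)   # assumed known memory bound tau = 2
theta = fit_q2(Phi, y)                 # recovers the two nonzero kernels
```

Since the toy system lies exactly in the second-order model class, the fitted coefficients essentially reproduce the true linear and quadratic kernels.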

    A Simple and Efficient Algorithm for Nonlinear Model Predictive Control

    We present PANOC, a new algorithm for solving optimal control problems arising in nonlinear model predictive control (NMPC). A common approach to problems of this type is sequential quadratic programming (SQP), which requires the solution of a quadratic program at every iteration and, consequently, inner iterative procedures. As a result, when the problem is ill-conditioned or the prediction horizon is large, each outer iteration becomes computationally very expensive. We propose a line-search algorithm that combines forward-backward iterations (FB) and Newton-type steps over the recently introduced forward-backward envelope (FBE), a continuous, real-valued, exact merit function for the original problem. The curvature information of Newton-type methods enables asymptotic superlinear rates under mild assumptions at the limit point, and the proposed algorithm is based on very simple operations: access to first-order information of the cost and dynamics, and low-cost direct linear algebra. No inner iterative procedure nor Hessian evaluation is required, making our approach computationally simpler than SQP methods. The low memory requirements and simple implementation make our method particularly suited for embedded NMPC applications.
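    The two building blocks named above, forward-backward iterations and the FBE merit function, can be sketched on a toy composite problem. This is not PANOC itself (no Newton-type directions or line search, and an $\ell_1$ term stands in for the control problem's structure); the data and step size are made up, with $\gamma < 1/L$ so that FB steps decrease the FBE.

```python
import numpy as np

lam, gamma = 0.1, 0.4                  # gamma < 1/L, L = ||A|| ~ 2.04
A = np.array([[1.0, 0.2], [0.2, 2.0]])
b = np.array([1.0, -1.0])

f = lambda x: 0.5 * x @ A @ x - b @ x           # smooth cost
grad = lambda x: A @ x - b
g = lambda x: lam * np.abs(x).sum()             # nonsmooth term
prox = lambda v: np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)

def fb_step(x):
    """One forward-backward step: prox_{gamma*g}(x - gamma*grad f(x))."""
    return prox(x - gamma * grad(x))

def fbe(x):
    """Forward-backward envelope: a continuous, real-valued, exact merit
    function for f + g; FB steps decrease it when gamma < 1/L."""
    xb = fb_step(x)
    d = xb - x
    return f(x) + grad(x) @ d + d @ d / (2 * gamma) + g(xb)

x = np.zeros(2)
values = [fbe(x)]
for _ in range(50):                    # plain FB; PANOC would mix in
    x = fb_step(x)                     # Newton-type steps on the FBE
    values.append(fbe(x))
```

Running this, the FBE decreases monotonically along the FB sequence and the iterate approaches a fixed point of the FB map, which is what makes the FBE usable as a merit function for line searches.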

    On Quasi-Newton Forward--Backward Splitting: Proximal Calculus and Convergence

    We introduce a framework for quasi-Newton forward--backward splitting algorithms (proximal quasi-Newton methods) with a metric induced by diagonal $\pm$ rank-$r$ symmetric positive definite matrices. This special type of metric allows for a highly efficient evaluation of the proximal mapping. The key to this efficiency is a general proximal calculus in the new metric. By using duality, formulas are derived that relate the proximal mapping in a rank-$r$ modified metric to the original metric. We also describe efficient implementations of the proximity calculation for a large class of functions; the implementations exploit the piecewise linear nature of the dual problem. Then, we apply these results to the acceleration of composite convex minimization problems, which leads to elegant quasi-Newton methods for which we prove convergence. The algorithm is tested on several numerical examples and compared to a comprehensive list of alternatives in the literature. Our quasi-Newton splitting algorithm with the prescribed metric compares favorably against the state of the art. The algorithm has extensive applications including signal processing, sparse recovery, machine learning, and classification, to name a few.
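    To see why such metrics keep the prox cheap, consider the purely diagonal case for the $\ell_1$ norm: the scaled problem separates per coordinate, so the prox remains a soft-threshold with coordinate-wise thresholds. This is only the easy half of the paper's result; the diagonal $\pm$ rank-$r$ correction is resolved through the piecewise-linear dual problem described in the abstract, which is not reproduced here. All numbers below are illustrative.

```python
import numpy as np

def prox_l1_diag(x, d, lam):
    """Prox of lam*||.||_1 in the metric induced by V = diag(d):
        argmin_z  lam*||z||_1 + 0.5*(z - x)^T V (z - x).
    The objective separates per coordinate, so z_i is x_i
    soft-thresholded at lam / d_i."""
    return np.sign(x) * np.maximum(np.abs(x) - lam / d, 0.0)

x = np.array([0.3, -2.0, 0.05])
d = np.array([1.0, 4.0, 0.5])          # diagonal metric weights
z = prox_l1_diag(x, d, lam=0.2)        # per-coordinate thresholds 0.2, 0.05, 0.4
```

Coordinates with a large metric weight are thresholded less aggressively, which is exactly how a variable metric injects second-order information into the proximal step.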

    A quasi-Newton proximal splitting method

    A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piecewise linear nature of the dual problem. The second part of the paper applies the previous result to the acceleration of convex minimization problems and leads to an elegant quasi-Newton method. The optimization method compares favorably against state-of-the-art alternatives. The algorithm has extensive applications including signal processing, sparse recovery, machine learning, and classification.
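    As a rough sketch of the acceleration idea in its simplest form, the loop below solves a lasso problem by proximal gradient with a spectral (Barzilai-Borwein) scalar metric and a monotone fallback to a safe step. This is a zero-memory stand-in, not the paper's method: a scalar multiple of the identity keeps the scaled $\ell_1$-prox a plain soft-threshold, whereas the paper handles genuinely non-diagonal (rank-modified) metrics. Problem data, iteration count, and safeguard are assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
b = rng.standard_normal(40)
lam = 0.5

F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum()
grad = lambda x: A.T @ (A @ x - b)
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def spectral_prox_grad(iters=500):
    """Proximal-gradient lasso solver with a Barzilai-Borwein scalar
    metric and a monotone fallback to the safe step 1/L."""
    t0 = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step: 1 / ||A||^2
    x, t = np.zeros(A.shape[1]), t0
    g = grad(x)
    for _ in range(iters):
        cand = soft(x - t * g, t * lam)
        if F(cand) > F(x):                 # BB step overshot: fall back
            cand = soft(x - t0 * g, t0 * lam)
        g_new = grad(cand)
        s, y = cand - x, g_new - g
        if s @ y > 1e-12:
            t = (s @ s) / (s @ y)          # BB1 spectral step
        x, g = cand, g_new
    return x

x = spectral_prox_grad()
```

The spectral step adapts the metric to local curvature at essentially zero cost; the paper's rank-modified metrics refine this further while keeping the prox evaluation efficient.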