
    Gauss-Newton Runge-Kutta Integration for Efficient Discretization of Optimal Control Problems with Long Horizons and Least-Squares Costs

    This work proposes an efficient treatment of continuous-time optimal control problems (OCPs) with long horizons and nonlinear least-squares costs. The Gauss-Newton Runge-Kutta (GNRK) integrator is presented, which provides high-order integration of the cost. Crucially, the Hessian of the cost terms required within an SQP-type algorithm is approximated by a Gauss-Newton Hessian. Moreover, L2 penalty formulations for constraints are shown to be particularly effective for optimization with GNRK. An efficient implementation of GNRK is provided in the open-source software framework acados. We demonstrate the effectiveness of the proposed approach and its implementation on an illustrative example, showing a reduction of relative suboptimality by a factor greater than 10 while increasing the runtime by only 10%.
    Comment: 7 pages, 3 figures, submitted to ECC 202
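
    The core numerical idea, stripped of the acados implementation details, can be sketched in a few lines of numpy. The following is a minimal illustration, not the paper's GNRK integrator: it only shows how the Gauss-Newton approximation J^T J replaces the exact Hessian of a nonlinear least-squares cost; the exponential-fit residual and all dimensions are illustrative assumptions.

```python
import numpy as np

def gauss_newton(residual, jacobian, w0, tol=1e-10, max_iter=50):
    """Minimize 0.5 * ||r(w)||^2 with Gauss-Newton iterations.

    The exact Hessian, J^T J + sum_i r_i * d2r_i/dw2, is approximated
    by J^T J alone, which needs only first-order derivatives of r.
    """
    w = w0.astype(float)
    for _ in range(max_iter):
        r = residual(w)
        J = jacobian(w)
        grad = J.T @ r              # gradient of 0.5 * ||r||^2
        H = J.T @ J                 # Gauss-Newton Hessian approximation
        step = np.linalg.solve(H, -grad)
        w = w + step
        if np.linalg.norm(step) < tol:
            break
    return w

# Illustrative residual: fit y ~ a * exp(b * t) with unknowns (a, b).
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)

def residual(w):
    a, b = w
    return a * np.exp(b * t) - y

def jacobian(w):
    a, b = w
    return np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])

print(gauss_newton(residual, jacobian, np.array([1.0, 0.0])))  # -> approx [2.0, -1.5]
```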

    A dual-control effect preserving formulation for nonlinear output-feedback stochastic model predictive control with constraints

    In this paper, we propose a formulation for approximate constrained nonlinear output-feedback stochastic model predictive control. Starting from the ideal but intractable stochastic optimal control problem (OCP), which involves optimization over output-dependent policies, we use linearization with respect to the uncertainty to derive a tractable approximation that includes knowledge of the output model. This allows us to compute the expected value of the outer functions of the OCP exactly. Crucially, the dual control effect is preserved by this approximation. Consequently, the resulting controller is aware of how the choice of inputs affects the information available in the future, which in turn influences subsequent controls. Thus, it can be classified as a form of implicit dual control.
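
    To make the preserved dual control effect concrete, here is a minimal sketch (our own construction, not the paper's formulation): an EKF-style covariance recursion through a linearized output model for a toy scalar system whose measurement sensitivity depends on the state. The dynamics, noise levels, and candidate input sequences are all illustrative assumptions.

```python
import numpy as np

# Scalar system x+ = x + u + w, measurement y = x**2 + v.
# The output Jacobian C = 2*x depends on the state, so steering the
# state away from zero makes the measurement more informative -- the
# mechanism an implicit dual controller can exploit.
Q, R = 0.01, 0.1   # process / measurement noise variances (illustrative)

def predicted_variance(u_seq, x0=0.0, P0=1.0):
    """EKF-style covariance recursion along a candidate input sequence."""
    x, P = x0, P0
    for u in u_seq:
        x = x + u                 # nominal prediction (A = 1)
        P = P + Q                 # prior covariance
        C = 2.0 * x               # linearized output model at the prediction
        S = C * P * C + R         # innovation variance
        K = P * C / S             # Kalman gain
        P = (1.0 - K * C) * P     # posterior covariance
    return P

print(predicted_variance([0.0, 0.0, 0.0]))  # cautious inputs: uncertainty grows
print(predicted_variance([1.0, 0.0, 0.0]))  # exciting input: uncertainty shrinks
```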

    Survey of sequential convex programming and generalized Gauss-Newton methods

    We provide an overview of a class of iterative convex approximation methods for nonlinear optimization problems with convex-over-nonlinear substructure. These problems are characterized by outer convexities on the one hand, and nonlinear, generally nonconvex, but differentiable functions on the other hand. All methods from this class use only first-order derivatives of the nonlinear functions and sequentially solve convex optimization problems. All of them are different generalizations of the classical Gauss-Newton (GN) method. We focus on the smooth constrained case and on three methods to address it: Sequential Convex Programming (SCP), Sequential Convex Quadratic Programming (SCQP), and Sequential Quadratically Constrained Quadratic Programming (SQCQP). While the first two methods were previously known, the last is newly proposed and investigated in this paper. We show under mild assumptions that SCP, SCQP and SQCQP have exactly the same local linear convergence (or divergence) rate. We then discuss the special case in which the solution is fully determined by the active constraints, and show that for this case the KKT conditions are sufficient for local optimality and that SCP, SCQP and SQCQP even converge quadratically. In the context of parameter estimation with symmetric convex loss functions, the possible divergence of the methods can in fact be an advantage that helps them avoid some undesirable local minima: generalizing existing results, we show that the presented methods converge to a local minimum if and only if this local minimum is stable against a mirroring operation applied to the measurement data of the estimation problem. All results are illustrated by numerical experiments on a tutorial example.
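
    As an illustration of the convex-over-nonlinear structure, the sketch below implements one plain SCP-style scheme for an L1 outer convexity: each iteration linearizes the inner nonlinear function and solves the resulting convex subproblem (here a linear program) exactly. This is a toy implementation, not code from the paper; it assumes scipy is available, and the trust-region radius and example residuals are arbitrary choices.

```python
import numpy as np
from scipy.optimize import linprog

def scp_l1(residual, jacobian, x0, delta=0.5, max_iter=30, tol=1e-8):
    """Sequential convex programming for min_x ||F(x)||_1.

    Each iteration linearizes F and solves the convex subproblem
    min_dx ||F(x) + J dx||_1 s.t. |dx| <= delta as a linear program,
    using only first-order derivatives of F (generalized Gauss-Newton).
    """
    x = np.asarray(x0, dtype=float)
    p = x.size
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        m = r.size
        # LP in (dx, s): min sum(s)  s.t.  -s <= r + J dx <= s
        c = np.concatenate([np.zeros(p), np.ones(m)])
        A_ub = np.block([[J, -np.eye(m)], [-J, -np.eye(m)]])
        b_ub = np.concatenate([-r, r])
        bounds = [(-delta, delta)] * p + [(0, None)] * m
        sol = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        dx = sol.x[:p]
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Illustrative use: L1 fit of y ~ a * exp(b * t) with one gross outlier,
# which the L1 outer convexity largely ignores.
t = np.linspace(0.0, 1.0, 15)
y = np.exp(-2.0 * t)
y[7] += 1.0
F = lambda w: w[0] * np.exp(w[1] * t) - y
Jf = lambda w: np.column_stack([np.exp(w[1] * t), w[0] * t * np.exp(w[1] * t)])
print(scp_l1(F, Jf, np.array([1.0, 0.0])))  # close to [1.0, -2.0]
```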

    Genetic aberrations in enteropathy-type T-cell lymphoma

    To characterize the genetic aberrations that play a role in the pathogenesis of enteropathy-type T-cell lymphoma (ETL), we examined 26 such tumours using a panel of 47 microsatellite markers. The most frequent aberration, seen in 40% of informative genotypes, was amplification of 9q34, which encompasses the c-abl and Notch-1 gene loci. Other frequent aberrations were detected at 5q33-34, 7q31, 6p24, 7p21 and 17q23-25. Analysis of the distribution pattern of these aberrations revealed the existence of two ETL subgroups.

    Stability Analysis of Nonlinear Model Predictive Control With Progressive Tightening of Stage Costs and Constraints

    We consider a stage-varying nonlinear model predictive control (NMPC) formulation and provide a stability result for the corresponding closed-loop system under the assumption that costs and constraints are progressively tightening. We illustrate the generality of the stage-varying formulation by pointing out various approaches proposed in the literature that can be cast as stage-varying and progressively tightening optimal control problems.
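
    A minimal sketch of what "progressively tightening" can mean in practice, under the assumption (ours, consistent with the abstract but not taken from the paper) that constraint sets shrink and stage-cost weights grow monotonically along the horizon; all numbers are illustrative.

```python
import numpy as np

# Stage-varying data for a progressively tightening OCP: every later
# stage is at least as restrictive and as expensive as the one before.
N = 20                                    # horizon length (illustrative)
x_max = 1.0 - 0.02 * np.arange(N + 1)     # nonincreasing state bounds
weights = 1.0 + 0.05 * np.arange(N + 1)   # nondecreasing stage-cost weights

for k in (0, N // 2, N):
    print(f"stage {k:2d}:  |x| <= {x_max[k]:.2f},  cost weight {weights[k]:.2f}")
```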
