
    A Parametric Non-Convex Decomposition Algorithm for Real-Time and Distributed NMPC

    A novel decomposition scheme for solving parametric non-convex programs as they arise in Nonlinear Model Predictive Control (NMPC) is presented. It consists of a fixed number of alternating proximal gradient steps and a dual update per time step, which makes the proposed approach attractive in a real-time, distributed context. Assuming that the Nonlinear Program (NLP) is semi-algebraic and that its critical points are strongly regular, contraction of the sequence of primal-dual iterates is proven under mild assumptions, implying stability of the sub-optimality error. Moreover, it is shown that the performance of the optimality-tracking scheme can be enhanced via a continuation technique. The efficacy of the proposed decomposition method is demonstrated by solving a centralised NMPC problem to control a DC motor and a distributed NMPC program for collaborative tracking of unicycles, both within a real-time framework. Furthermore, an analysis of the sub-optimality error as a function of the sampling period is proposed, given fixed computational power. Comment: 16 pages, 9 figures
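
    The iteration structure described above can be sketched, purely for illustration, as a fixed number of alternating proximal gradient steps on an augmented Lagrangian followed by a single multiplier update per sampling instant. The functions grad_f and grad_g, the coupling data A, B, c, the box bounds, and all step sizes below are hypothetical placeholders, not the paper's formulation.

        import numpy as np

        def prox_box(v, lo, hi):
            # Proximal operator of the indicator of the box [lo, hi],
            # i.e. a Euclidean projection.
            return np.clip(v, lo, hi)

        def real_time_step(x, z, lam, grad_f, grad_g, A, B, c,
                           lo, hi, n_inner=3, alpha=1e-2, rho=1.0):
            # One sampling instant: a fixed number of alternating proximal
            # gradient steps on the augmented Lagrangian of
            #   min f(x) + g(z)  s.t.  A x + B z = c,  lo <= x <= hi,
            # followed by a single dual (multiplier) update.
            for _ in range(n_inner):
                r = A @ x + B @ z - c                              # coupling residual
                x = prox_box(x - alpha * (grad_f(x) + A.T @ (lam + rho * r)), lo, hi)
                r = A @ x + B @ z - c
                z = z - alpha * (grad_g(z) + B.T @ (lam + rho * r))
            lam = lam + rho * (A @ x + B @ z - c)                  # dual ascent step
            return x, z, lam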

    Differential-Algebraic Equations and Beyond: From Smooth to Nonsmooth Constrained Dynamical Systems

    This article presents a summarizing view of differential-algebraic equations (DAEs) and analyzes how new application fields and the corresponding mathematical models lead to innovations both in theory and in numerical analysis for this problem class. Recent numerical methods for nonsmooth dynamical systems subject to unilateral contact and friction illustrate the topicality of this development. Comment: Preprint of Book Chapter
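
    As a toy illustration of the nonsmooth (unilateral-contact) dynamics mentioned above, and not of any particular method surveyed in the article, a simple time-stepping scheme for a ball subject to the constraint q >= 0 with a Newton restitution law might look as follows; all parameter values are arbitrary.

        import numpy as np

        def bouncing_ball(q0=1.0, v0=0.0, g=9.81, e=0.8, h=1e-3, t_end=3.0):
            # Semi-implicit Euler with an impact law: the unilateral
            # constraint q >= 0 triggers a velocity jump v -> -e * v.
            q, v = q0, v0
            traj = []
            for _ in range(int(t_end / h)):
                v = v - h * g          # update velocity first
                q = q + h * v          # then position (semi-implicit Euler)
                if q < 0.0:            # contact: enforce the unilateral constraint
                    q = 0.0
                    v = -e * v         # Newton restitution (nonsmooth velocity jump)
                traj.append((q, v))
            return np.array(traj)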

    Semi-Global Exponential Stability of Augmented Primal-Dual Gradient Dynamics for Constrained Convex Optimization

    Primal-dual gradient dynamics that find saddle points of a Lagrangian have been widely employed for handling constrained optimization problems. Building on existing methods, we extend the augmented primal-dual gradient dynamics (Aug-PDGD) to incorporate general convex and nonlinear inequality constraints, and we establish its semi-global exponential stability when the objective function is strongly convex. We also provide an example of a strongly convex quadratic program for which the Aug-PDGD fails to achieve global exponential stability. Numerical simulations also suggest that the exponential convergence rate could depend on the initial distance to the KKT point.
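
    A minimal sketch of (non-augmented) primal-dual gradient dynamics, discretized with an explicit Euler step, conveys the basic mechanism; the augmentation terms that define Aug-PDGD are not reproduced here, and the problem data in the usage lines are made up for illustration.

        import numpy as np

        def pdgd(grad_f, g, jac_g, x0, lam0, eta=1e-2, iters=5000):
            # Euler-discretized primal-dual gradient dynamics for
            #   min f(x)  s.t.  g(x) <= 0,
            # with the dual variable kept nonnegative by projection.
            x, lam = x0.copy(), lam0.copy()
            for _ in range(iters):
                x = x - eta * (grad_f(x) + jac_g(x).T @ lam)   # primal descent
                lam = np.maximum(lam + eta * g(x), 0.0)        # projected dual ascent
            return x, lam

        # Example: min ||x - b||^2  s.t.  x <= 1 elementwise (hypothetical data).
        b = np.array([2.0, -0.5])
        x_opt, lam_opt = pdgd(lambda x: 2.0 * (x - b),
                              lambda x: x - 1.0,
                              lambda x: np.eye(2),
                              x0=np.zeros(2), lam0=np.zeros(2))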

    Discrete mechanics and optimal control: An analysis

    The optimal control of a mechanical system is of crucial importance in many application areas. Typical examples are the determination of a time-minimal path in vehicle dynamics, a minimal-energy trajectory in space mission design, or optimal motion sequences in robotics and biomechanics. In most cases, some sort of discretization of the original, infinite-dimensional optimization problem has to be performed in order to make the problem amenable to computation. The approach proposed in this paper is to directly discretize the variational description of the system's motion. The resulting optimization algorithm lets the discrete solution directly inherit characteristic structural properties from the continuous one, such as symmetries and integrals of the motion. We show that the DMOC (Discrete Mechanics and Optimal Control) approach is equivalent to a finite-difference discretization of Hamilton's equations by a symplectic partitioned Runge-Kutta scheme and use this fact to give a proof of convergence. The numerical performance of DMOC and its relationship to other existing optimal control methods are investigated.
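
    For orientation only, the simplest symplectic partitioned Runge-Kutta scheme of the kind referred to above is the Stormer-Verlet step for a separable Hamiltonian H(q, p) = p^2/(2m) + V(q); the sketch below is not the DMOC discretization itself, and grad_V is a placeholder.

        import numpy as np

        def verlet_step(q, p, grad_V, h, m=1.0):
            # One Stormer-Verlet step for H(q, p) = p^2 / (2m) + V(q):
            # a symplectic partitioned Runge-Kutta (Lobatto IIIA-IIIB) pair.
            p_half = p - 0.5 * h * grad_V(q)              # half kick
            q_new = q + h * p_half / m                    # drift
            p_new = p_half - 0.5 * h * grad_V(q_new)      # half kick
            return q_new, p_new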

    Optimization Methods for Inverse Problems

    Optimization plays an important role in solving many inverse problems. Indeed, the task of inversion often either involves or is fully cast as the solution of an optimization problem. In this light, the non-linear, non-convex, and large-scale nature of many of these inversions alone gives rise to some very challenging optimization problems. The inverse problem community has long been developing various techniques for solving such optimization tasks. However, other, seemingly disjoint communities, such as that of machine learning, have developed, almost in parallel, interesting alternative methods which might have stayed under the radar of the inverse problem community. In this survey, we aim to change that. In doing so, we first discuss current state-of-the-art optimization methods widely used in inverse problems. We then survey recent related advances in addressing similar challenges in problems faced by the machine learning community, and discuss their potential advantages for solving inverse problems. By highlighting the similarities among the optimization challenges faced by the inverse problem and machine learning communities, we hope that this survey can serve as a bridge in bringing together these two communities and encourage cross-fertilization of ideas. Comment: 13 pages
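
    As a generic illustration of casting inversion as optimization (not a method advocated in the survey), a linear inverse problem y = A x can be solved by gradient descent on a Tikhonov-regularized least-squares objective; A, y, and the weight mu below are placeholders.

        import numpy as np

        def tikhonov_gd(A, y, mu=1e-2, iters=2000):
            # Gradient descent on 0.5*||A x - y||^2 + 0.5*mu*||x||^2,
            # with a step size set from the gradient's Lipschitz constant.
            x = np.zeros(A.shape[1])
            eta = 1.0 / (np.linalg.norm(A, 2) ** 2 + mu)
            for _ in range(iters):
                grad = A.T @ (A @ x - y) + mu * x
                x = x - eta * grad
            return x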