
    NLP Solutions as Asymptotic Values of ODE Trajectories

    In this paper, it is shown that the solutions of general differentiable constrained optimization problems can be viewed as asymptotic solutions of sets of Ordinary Differential Equations (ODEs). The construction of the ODE associated with the optimization problem is based on an exact penalty formulation in which the dynamics of the weighting parameter is coordinated with that of the decision variable, so there is no need to solve a sequence of optimization problems; instead, a single ODE is solved using available efficient methods. Examples are given to illustrate the results, including a novel systematic approach to solving combinatorial optimization problems and the computation of a class of optimization problems using analog circuits, leading to fast, parallel and highly scalable solutions.
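
    As a rough illustration of the ODE viewpoint (a minimal sketch, not the paper's coordinated exact-penalty construction), the toy problem below, a quadratic objective with one linear equality constraint, is handled by integrating a primal-dual gradient flow with SciPy; the asymptotic value of the trajectory is the constrained minimizer (1, 0). The problem data and the choice of flow are illustrative assumptions.

```python
# Minimal sketch: the constrained minimizer of
#   min f(x, y) = (x - 2)^2 + (y - 1)^2   s.t.  g(x, y) = x + y - 1 = 0
# appears as the asymptotic value of a single ODE trajectory.  A primal-dual
# (saddle-point) gradient flow is used here instead of the paper's coordinated
# exact-penalty dynamics; both replace a sequence of optimization problems by
# one ODE solve.
from scipy.integrate import solve_ivp

def rhs(t, z):
    x, y, lam = z
    dx = -(2.0 * (x - 2.0) + lam)   # primal descent on f + lam * g
    dy = -(2.0 * (y - 1.0) + lam)
    dlam = x + y - 1.0              # dual ascent along the constraint violation
    return [dx, dy, dlam]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0, 0.0], rtol=1e-9, atol=1e-9)
print("asymptotic state:", sol.y[:, -1])   # approx [1.0, 0.0, 2.0]
print("constrained optimum is (1, 0) with multiplier 2")
```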

    A Convex Feasibility Approach to Anytime Model Predictive Control

    This paper proposes to decouple performance optimization from the enforcement of asymptotic convergence in Model Predictive Control (MPC), so that convergence to a given terminal set is achieved independently of how much performance is optimized at each sampling step. An explicit decreasing condition is embedded in the MPC constraints, and a novel, easy-to-implement convex feasibility solver is proposed. On top of this feasibility solver, an outer performance optimization algorithm can be run for an amount of time that depends on the CPU resources available within the current sampling step (possibly going open-loop at a given sampling step in the extreme case where no resources are available), while still guaranteeing convergence to the terminal set. Although the MPC setup and the solver proposed in the paper can handle quite general classes of functions, we highlight the synthesis method and show numerical results for linear MPC with ellipsoidal and polyhedral terminal sets.
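
    A minimal sketch of the anytime idea, with cvxpy standing in for the paper's dedicated convex feasibility solver; the double-integrator model, horizon, bounds and decrease factor are illustrative assumptions. A feasibility phase first produces an input sequence satisfying all constraints, including the explicit decreasing condition, so convergence does not depend on how far the subsequent performance phase gets within the CPU budget.

```python
# One "anytime" MPC step: feasibility first (certifies convergence),
# performance improvement only with the time that remains in the budget.
import time
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])     # discrete-time double integrator
B = np.array([[0.005], [0.1]])
N, u_max, gamma = 10, 1.0, 0.9             # horizon, input bound, decrease factor
x0 = np.array([1.0, 0.0])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
constr = [x[:, 0] == x0, cp.abs(u) <= u_max]
constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k] for k in range(N)]
# explicit decreasing condition: the terminal state must shrink by a fixed factor
constr += [cp.sum_squares(x[:, N]) <= gamma * float(x0 @ x0)]

budget_s = 0.5                             # CPU budget for this sampling step
t0 = time.perf_counter()

# Phase 1: pure feasibility (zero objective); this alone certifies convergence.
feas = cp.Problem(cp.Minimize(0), constr)
feas.solve()
u_safe = u.value.copy()

# Phase 2: spend whatever budget remains on improving performance.
if time.perf_counter() - t0 < budget_s:
    perf = cp.Problem(cp.Minimize(cp.sum_squares(x) + cp.sum_squares(u)), constr)
    perf.solve(warm_start=True)
    u_safe = u.value.copy()

print("first input to apply:", u_safe[:, 0])
```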

    Enlarging the domain of attraction of MPC controllers

    This paper presents a method for enlarging the domain of attraction of nonlinear model predictive control (MPC). The usual way of guaranteeing stability of nonlinear MPC is to add a terminal constraint and a terminal cost to the optimization problem, such that the terminal region is a positively invariant set for the system and the terminal cost is an associated Lyapunov function. The domain of attraction of the controller depends on the size of the terminal region and on the control horizon. Increasing the control horizon enlarges the domain of attraction at the expense of a greater computational burden, whereas enlarging the terminal region produces an enlargement without extra cost. In this paper, the MPC formulation with terminal cost and constraint is modified by replacing the terminal constraint with a contractive terminal constraint. This constraint is given by a sequence of sets, computed off-line, that is based on the positively invariant set. Each set in this sequence does not need to be invariant and can be computed by a procedure that provides an inner approximation to the one-step set. This property allows the use of one-step approximations with a trade-off between accuracy and computational burden in the computation of the sequence. The strategy guarantees closed-loop stability, ensures the enlargement of the domain of attraction, and preserves the local optimality of the controller. Moreover, the idea translates directly to robust MPC. (Ministerio de Ciencia y Tecnología DPI2002-04375-c03-0)
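
    A minimal one-dimensional sketch of the set-sequence idea (illustrative assumptions throughout: scalar unstable dynamics, interval sets, and a fixed shrink factor playing the role of the inner approximation; the paper's construction is far more general): starting from a small positively invariant terminal interval, iterating an inner approximation of the one-step set yields a growing sequence of sets that can serve as contractive terminal constraints.

```python
# Grow a sequence of intervals by (inner approximations of) the one-step set
#   Q(Omega) = { x : |x| <= x_max and exists |u| <= u_max with a*x + b*u in Omega }.
# Using the k-th interval as a terminal constraint enlarges the region from
# which the MPC problem is feasible without lengthening the horizon.
a, b = 1.2, 1.0            # unstable scalar dynamics  x+ = a*x + b*u
u_max, x_max = 1.0, 10.0   # input and state bounds
r = 1.4                    # terminal interval [-r, r], invariant under u = -0.7*x
shrink = 0.99              # inner-approximation margin (accuracy vs. size trade-off)

radii = [r]
for _ in range(15):
    one_step = min((r + b * u_max) / abs(a), x_max)   # exact one-step interval radius
    r = shrink * one_step                             # inner approximation
    radii.append(r)

print([round(s, 3) for s in radii])
# radii grow monotonically from 1.4 toward a fixed point just below 5,
# i.e. the usable terminal region (and the domain of attraction) is enlarged.
```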

    Asymptotic Stability of POD based Model Predictive Control for a semilinear parabolic PDE

    In this article, a stabilizing feedback control is computed for a semilinear parabolic partial differential equation using a nonlinear model predictive control (NMPC) method. At each level of the NMPC algorithm, the finite-time-horizon open-loop problem is solved by a reduced-order strategy based on proper orthogonal decomposition (POD). A stability analysis is derived for the combined POD-NMPC algorithm so that the lengths of the finite time horizons can be chosen to ensure the asymptotic stability of the computed feedback controls. The proposed method is successfully tested on numerical examples.
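
    A minimal sketch of the POD step that underlies the reduced-order solves (a linear 1-D heat equation stands in for the paper's semilinear parabolic PDE; grid size, diffusivity and the number of modes are illustrative assumptions): state snapshots are compressed by an SVD and the dynamics are Galerkin-projected onto the leading modes, so the finite-horizon open-loop problem can be solved in a few dimensions.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 200                                    # "full" spatial dimension
x_grid = np.linspace(0.0, 1.0, n)
dx = x_grid[1] - x_grid[0]
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx**2          # 1-D Laplacian (Dirichlet)

def full_rhs(t, y):
    return 0.01 * lap @ y                  # linear heat equation as a stand-in

y0 = np.exp(-100.0 * (x_grid - 0.5) ** 2)
snap_t = np.linspace(0.0, 1.0, 50)
snaps = solve_ivp(full_rhs, (0.0, 1.0), y0, t_eval=snap_t).y    # n x 50 snapshots

U, s, _ = np.linalg.svd(snaps, full_matrices=False)
ell = 5
Phi = U[:, :ell]                           # POD basis: first ell left singular vectors
A_red = 0.01 * Phi.T @ lap @ Phi           # Galerkin-projected dynamics (ell x ell)

red = solve_ivp(lambda t, a: A_red @ a, (0.0, 1.0), Phi.T @ y0, t_eval=snap_t)
err = np.linalg.norm(Phi @ red.y - snaps) / np.linalg.norm(snaps)
print(f"relative error of the {ell}-mode POD model: {err:.2e}")
```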

    Efficient Optimization of Loops and Limits with Randomized Telescoping Sums

    Full text link
    We consider optimization problems in which the objective requires an inner loop with many steps or is the limit of a sequence of increasingly costly approximations. Meta-learning, training recurrent neural networks, and optimization of the solutions to differential equations are all examples of optimization problems with this character. In such problems, it can be expensive to compute the objective function value and its gradient, but truncating the loop or using less accurate approximations can induce biases that damage the overall solution. We propose randomized telescope (RT) gradient estimators, which represent the objective as the sum of a telescoping series and sample linear combinations of terms to provide cheap unbiased gradient estimates. We identify conditions under which RT estimators achieve optimization convergence rates independent of the length of the loop or the required accuracy of the approximation. We also derive a method for tuning RT estimators online to maximize a lower bound on the expected decrease in loss per unit of computation. We evaluate our adaptive RT estimators on a range of applications including meta-optimization of learning rates, variational inference of ODE parameters, and training an LSTM to model long sequences.
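
    A minimal sketch of a single-sample randomized-telescope estimator (the objective, the geometric sampling distribution and the sample count are illustrative assumptions; the paper's adaptive online tuning is not reproduced): the gradient of an infinite series is written as a telescoping sum, one term is sampled and importance-weighted, and the empirical mean matches the closed-form gradient.

```python
# The objective is the limit of partial sums f_n(theta) = sum_{k<=n} theta^k / k^2,
# so its gradient telescopes as sum_n Delta_n with Delta_n = theta^(n-1) / n.
# Sampling one index n and reweighting Delta_n by 1 / p_n gives an unbiased
# estimate of the full (infinite-horizon) gradient from a single cheap term.
import numpy as np

rng = np.random.default_rng(0)
theta = 0.5
true_grad = -np.log(1.0 - theta) / theta          # closed form: sum theta^(n-1) / n

def rt_gradient_sample(p_continue=0.6):
    # geometric sampling distribution: p_n = (1 - p_continue) * p_continue**(n - 1)
    n = rng.geometric(1.0 - p_continue)
    p_n = (1.0 - p_continue) * p_continue ** (n - 1)
    delta_n = theta ** (n - 1) / n                # n-th telescoping difference
    return delta_n / p_n                          # importance-weighted, unbiased

samples = np.array([rt_gradient_sample() for _ in range(200_000)])
print(f"true gradient      : {true_grad:.4f}")
print(f"RT estimate (mean) : {samples.mean():.4f} "
      f"+/- {samples.std() / np.sqrt(len(samples)):.4f}")
```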