
    Robust Stability of Suboptimal Moving Horizon Estimation using an Observer-Based Candidate Solution

    In this paper, we propose a suboptimal moving horizon estimator for nonlinear systems. For the stability analysis, we transfer the "feasibility-implies-stability/robustness" paradigm from model predictive control to the context of moving horizon estimation in the following sense: using a suitably defined, feasible candidate solution based on the trajectory of an auxiliary observer, robust stability of the proposed suboptimal estimator is inherited independently of the horizon length, and even if no optimization is performed. Comment: This work has been submitted to IFAC for possible publication.
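
    A minimal Python sketch of the "observer-based candidate" idea summarised above, under generic assumptions: `f` and `h` are the model and measurement maps, the auxiliary observer is written as a simple output-injection correction with gain `L` (purely illustrative), and `mhe_cost` / `try_optimize` are user-supplied. The optimizer's output is accepted only if it is feasible and no worse than the candidate, so the estimate inherits the candidate's properties even when no optimization is performed:

```python
import numpy as np

def observer_step(x_hat, u, y, f, h, L):
    """One step of the auxiliary observer (illustrative output-injection form)."""
    return f(x_hat, u) + L @ (y - h(x_hat))

def suboptimal_mhe_step(window_u, window_y, x_prior, f, h, L, mhe_cost,
                        try_optimize=None):
    """Return a state trajectory estimate over the current estimation window.

    The candidate solution is the auxiliary observer trajectory started at the
    prior estimate; an optimizer result replaces it only when it is feasible
    and achieves a lower cost, so the candidate's guarantees are never lost.
    """
    # Candidate: propagate the observer over the window of past inputs/outputs.
    cand = [np.asarray(x_prior, dtype=float)]
    for u, y in zip(window_u, window_y):
        cand.append(observer_step(cand[-1], u, y, f, h, L))
    cand = np.array(cand)
    cand_cost = mhe_cost(cand, window_u, window_y)

    # Optional, possibly interrupted, optimization attempt.
    if try_optimize is not None:
        sol, cost, feasible = try_optimize(cand, window_u, window_y)
        if feasible and cost <= cand_cost:
            return sol
    return cand
```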

    Towards parallelizable sampling-based Nonlinear Model Predictive Control

    This paper proposes a new sampling-based nonlinear model predictive control (MPC) algorithm with a complexity bound that is quadratic in the prediction horizon N and linear in the number of samples. The idea of the proposed algorithm is to use the sequence of predicted inputs from the previous time step as a warm start, and to iteratively update this sequence by changing its elements one by one, starting from the last predicted input and ending with the first. This strategy, which resembles the dynamic programming principle, allows for parallelization up to a certain level and yields a suboptimal nonlinear MPC algorithm with guaranteed recursive feasibility, stability and an improved cost function at every iteration, making it suitable for real-time implementation. The complexity of the algorithm per time step in the prediction horizon depends only on the horizon, the number of samples and parallel threads, and is independent of the measured system state. Comparisons with the fmincon nonlinear optimization solver on benchmark examples indicate that, as the simulation time progresses, the proposed algorithm converges rapidly to the "optimal" solution, even when using a small number of samples. Comment: 9 pages, 9 pictures, submitted to IFAC World Congress 201
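
    A hypothetical Python sketch of the backward, one-input-at-a-time update described above, assuming a generic dynamics function `f`, a stage cost, and Gaussian perturbation sampling (the sampling distribution and step size are assumptions, not the paper's choices); since a revision is kept only if it lowers the predicted cost, the cost is non-increasing over the sweep:

```python
import numpy as np

def sampling_nmpc_step(x0, u_warm, f, stage_cost, n_samples=20, rng=None):
    """One backward sweep over the warm-started input sequence.

    Inputs are revised one at a time, from the last predicted input to the
    first; a revision is kept only if the total predicted cost improves
    (illustrative sketch, not the authors' exact scheme).
    """
    rng = np.random.default_rng() if rng is None else rng
    u = [np.asarray(ui, dtype=float) for ui in u_warm]

    def rollout_cost(u_seq):
        x, cost = np.asarray(x0, dtype=float), 0.0
        for uk in u_seq:
            cost += stage_cost(x, uk)
            x = f(x, uk)
        return cost

    best = rollout_cost(u)
    for k in reversed(range(len(u))):          # last predicted input first
        for _ in range(n_samples):             # sample candidate replacements
            cand = u.copy()
            cand[k] = u[k] + rng.normal(scale=0.1, size=u[k].shape)
            c = rollout_cost(cand)
            if c < best:                       # keep only improvements
                u, best = cand, c
    return u, best
```

    Each candidate requires a fresh rollout of length N, and there are N positions with a fixed number of samples each, which is consistent with the quadratic-in-horizon, linear-in-samples complexity mentioned in the abstract.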

    Stochastic Model Predictive Control with Discounted Probabilistic Constraints

    This paper considers linear discrete-time systems with additive disturbances and designs a Model Predictive Control (MPC) law to minimise a quadratic cost function subject to a chance constraint. The chance constraint is defined as a discounted sum of violation probabilities over an infinite horizon. By penalising violation probabilities close to the initial time and ignoring violation probabilities in the far future, this form of constraint enables feasibility of the online optimisation to be guaranteed without assuming that the disturbance is bounded. A computationally convenient MPC optimisation problem is formulated using Chebyshev's inequality, and we introduce an online constraint-tightening technique to ensure recursive feasibility based on knowledge of a suboptimal solution. The closed-loop system is guaranteed to satisfy the chance constraint and a quadratic stability condition. Comment: 6 pages, Conference Proceeding
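
    A plausible rendering of the constraint structure described above (the discount factor $\gamma$, budget $\epsilon$, constraint direction $c$, and threshold $a$ are illustrative symbols, not taken from the paper):

```latex
% Discounted sum of violation probabilities over an infinite horizon,
% with discount factor gamma in (0,1) and violation budget epsilon:
\sum_{k=0}^{\infty} \gamma^{k}\,\Pr\bigl[x_k \notin \mathcal{X}\bigr] \;\le\; \epsilon

% Chebyshev's inequality bounds each violation probability using only the
% first two moments of the predicted state, which linear dynamics with
% additive disturbances propagate exactly:
\Pr\bigl[\,\lvert c^{\top}x_k - \mathbb{E}[c^{\top}x_k]\rvert \ge a\,\bigr]
  \;\le\; \frac{\operatorname{Var}(c^{\top}x_k)}{a^{2}}
```

    Early terms carry the largest weight $\gamma^{k}$, so violations near the current time are penalised most, while the geometric tail keeps the infinite sum finite without requiring any bound on the disturbance.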

    Rate analysis of inexact dual first order methods: Application to distributed MPC for network systems

    In this paper we propose and analyze two dual methods, based on inexact gradient information and averaging, that generate approximate primal solutions for smooth convex optimization problems. The complicating constraints are moved into the cost using Lagrange multipliers. The dual problem is solved by inexact first-order methods based on approximate gradients, for which we prove a sublinear rate of convergence. In particular, we provide, for the first time, estimates of the primal feasibility violation and of the primal and dual suboptimality of the generated approximate primal and dual solutions. Moreover, we approximately solve the inner problems with a parallel coordinate descent algorithm and show that it has a linear convergence rate. Our analysis relies on the Lipschitz property of the dual function and on inexact dual gradients. Further, we apply these methods to distributed model predictive control for network systems. By tightening the complicating constraints, we are also able to ensure primal feasibility of the approximate solutions generated by the proposed algorithms. We obtain a distributed control strategy with the following features: state and input constraints are satisfied, stability of the plant is guaranteed, and the number of iterations needed to reach a suboptimal solution can be determined precisely. Comment: 26 pages, 2 figures
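
    A toy Python sketch of a dual first-order method with primal averaging, as referenced above, assuming an affine coupling constraint A x = b and a user-supplied inexact inner solver; the step size, iteration count, and constraint form are assumptions rather than the paper's setup:

```python
import numpy as np

def inexact_dual_gradient(inner_solve, A, b, steps=200, alpha=0.01):
    """Dual gradient ascent with primal averaging (illustrative sketch).

    `inner_solve(lmbd)` is assumed to return an *approximate* minimizer of the
    Lagrangian for multiplier `lmbd`; the (inexact) dual gradient is then the
    coupling-constraint residual A @ x - b.  The running average of the primal
    iterates is returned, which is the quantity averaging schemes certify.
    """
    lmbd = np.zeros(A.shape[0])
    x_avg = None
    for t in range(1, steps + 1):
        x = inner_solve(lmbd)                     # inexact inner minimization
        g = A @ x - b                             # inexact dual gradient
        lmbd = lmbd + alpha * g                   # dual ascent step
        x_avg = x if x_avg is None else x_avg + (x - x_avg) / t  # running average
    return x_avg, lmbd
```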

    Cooperative distributed MPC for tracking

    This paper proposes a cooperative distributed linear model predictive control (MPC) strategy for tracking changing setpoints, applicable to any finite number of subsystems. The proposed controller is able to drive the whole system to any admissible setpoint in an admissible way, ensuring feasibility under any change of setpoint. It also provides a larger domain of attraction than standard distributed MPC for regulation, owing to the particular terminal constraint. Moreover, the controller ensures convergence to the centralized optimum, even in the case of coupled constraints. This is possible thanks to the warm start used to initialize the optimization algorithm and to the design of the cost function, which integrates a Steady-State Target Optimizer (SSTO). The controller is applied to a real four-tank plant.
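
    A common form of tracking cost with an artificial steady state, shown here only to illustrate how a steady-state target optimizer can be folded into the MPC cost; the weights $Q$, $R$, $P$ and the offset cost $V_O$ are generic placeholders, not the paper's exact design:

```latex
% Artificial steady state (x_s(theta), u_s(theta)) parameterized by theta;
% the offset term V_O penalises the distance between the artificial target
% y_s(theta) and the true setpoint y_t, playing the role of the SSTO:
V_N(x, y_t; \mathbf{u}, \theta) =
  \sum_{k=0}^{N-1} \Bigl( \lVert x_k - x_s(\theta) \rVert_Q^2
                        + \lVert u_k - u_s(\theta) \rVert_R^2 \Bigr)
  + \lVert x_N - x_s(\theta) \rVert_P^2
  + V_O\bigl( y_s(\theta) - y_t \bigr)
```

    Because $\theta$ is a decision variable, a setpoint change need not cause infeasibility: the artificial target can always be kept where it was at the previous step, which is also what makes the shifted warm start admissible.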

    A Parallel Dual Fast Gradient Method for MPC Applications

    We propose a parallel adaptive constraint-tightening approach to solve a linear model predictive control problem for discrete-time systems, based on inexact numerical optimization algorithms and operator splitting methods. The underlying algorithm first splits the original problem into as many independent subproblems as the length of the prediction horizon. Then, our algorithm solves these subproblems in parallel, exploiting auxiliary tightened subproblems in order to certify the control law in terms of suboptimality and recursive feasibility, along with closed-loop stability of the controlled system. Compared to prior approaches based on constraint tightening, our algorithm computes the tightening parameter for each subproblem so as to handle the propagation of errors introduced by the parallelization of the original problem. Our simulations show the computational benefits of the parallelization, with positive impacts on performance and numerical conditioning when compared with a recent nonparallel adaptive tightening scheme. Comment: This technical report is an extended version of the paper "A Parallel Dual Fast Gradient Method for MPC Applications" by the same authors, submitted to the 54th IEEE Conference on Decision and Control
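
    A structural Python sketch of the horizon-wise splitting described above: the dynamics coupling is dualized so that each prediction stage becomes an independent subproblem, and a plain dual gradient step stands in for the accelerated ("fast") update to keep the sketch short. The interface `stage_argmin(k, lam_prev, lam_next)` and the step size are assumptions; constraint tightening is assumed to live inside the user-supplied stage solvers:

```python
import numpy as np

def split_horizon_dual_step(stage_argmin, A, B, N, iters=200, alpha=0.02):
    """Horizon-wise splitting with dualized dynamics (illustrative sketch).

    The couplings x_{k+1} = A x_k + B u_k are relaxed with multipliers lam[k],
    so stage k only sees its own (tightened) subproblem through
    stage_argmin(k, lam_prev, lam_next) -> (x_k, u_k), with u_N unused.
    The stage loop has no data dependencies, hence it can run in parallel.
    """
    n = A.shape[0]
    lam = [np.zeros(n) for _ in range(N)]          # one multiplier per coupling
    sols = None
    for _ in range(iters):
        # --- parallelizable block: N+1 independent stage subproblems --------
        sols = [stage_argmin(k,
                             lam[k - 1] if k > 0 else None,
                             lam[k] if k < N else None)
                for k in range(N + 1)]
        # --- dual gradient step on the dynamics residuals -------------------
        for k in range(N):
            x_k, u_k = sols[k]
            x_next = sols[k + 1][0]
            lam[k] = lam[k] + alpha * (A @ x_k + B @ u_k - x_next)
    return sols, lam
```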