41,053 research outputs found

    Robust MPC for actuator-fault tolerance using set-based passive fault detection and active fault isolation

    In this paper, an actuator fault-tolerant control (FTC) scheme is proposed, based on tube-based model predictive control (MPC) and set-theoretic fault detection and isolation (FDI). As a robust MPC technique, tube-based MPC can effectively deal with system constraints and uncertainties at relatively low computational complexity. Set-based FDI can robustly detect and isolate actuator faults: fault detection (FD) is passive, using invariant sets, while fault isolation (FI) is active, using tubes. Exploiting the constraint-handling ability of MPC controllers, an active FI approach is implemented. A numerical example illustrates the effectiveness of the proposed approach.
    Postprint (author’s final draft)
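
    As a rough illustration of the passive, set-based fault detection idea (not the sets, model, or observer from the paper), the sketch below flags an actuator fault when an observer residual leaves a precomputed "healthy" box; the matrices, observer gain, box half-widths, and fault scenario are all hypothetical placeholders, and the box is only a stand-in for the invariant sets constructed in the paper.

```python
import numpy as np

# Nominal double-integrator model and a simple observer (all values are placeholders).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.eye(2)
L = 0.2 * np.eye(2)                      # observer gain

healthy_box = np.array([0.03, 0.03])     # half-widths of the assumed "healthy" residual set

def fault_detected(residual, box=healthy_box):
    """Passive detection: flag a fault when the residual leaves the healthy box."""
    return bool(np.any(np.abs(residual) > box))

rng = np.random.default_rng(0)
x = np.zeros(2)                          # plant state
x_hat = np.zeros(2)                      # observer estimate
for k in range(60):
    u = np.array([1.0])
    eff = 0.5 if k >= 30 else 1.0        # actuator fault: 50% loss of effectiveness at k = 30
    w = 0.002 * rng.standard_normal(2)   # small process noise
    x = A @ x + B @ (eff * u) + w        # faulty plant
    x_pred = A @ x_hat + B @ u           # observer prediction with the healthy model
    r = C @ x - C @ x_pred               # output residual
    x_hat = x_pred + L @ r
    if fault_detected(r):
        print(f"fault flagged at step {k}")
        break
```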

    Robust receding horizon control for convex dynamics and bounded disturbances

    A novel robust nonlinear model predictive control strategy is proposed for systems with convex dynamics and convex constraints. Using a sequential convex approximation approach, the scheme constructs tubes that contain the predicted trajectories, accounting for approximation errors and disturbances and guaranteeing constraint satisfaction. The optimal control problem is solved as a sequence of convex programs, without the need for pre-computed error bounds. We develop the scheme initially in the absence of external disturbances and show that the proposed nominal approach is non-conservative, with the solutions of successive convex programs converging to a locally optimal solution of the original optimal control problem. We then extend the approach to the case of additive disturbances using a novel strategy for selecting linearization points and seed trajectories. As a result, we formulate a robust receding horizon strategy with guarantees of recursive feasibility and stability of the closed-loop system.
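
    The sequential-convex-approximation idea can be pictured with a minimal sketch (the tube construction, error bounding, and disturbance terms from the paper are omitted): the nonlinear dynamics are linearized around the previous predicted trajectory, a convex program is solved, and the process repeats until the trajectory stops changing. The scalar toy system, horizon, cost, and bounds below are assumptions for illustration only.

```python
import numpy as np
import cvxpy as cp

# Toy dynamics: x_{k+1} = x_k + dt * (x_k**2 + u_k); the x**2 term is re-linearized each pass.
N, dt, x0 = 20, 0.1, 2.0                 # horizon, step size, initial state (placeholders)

x_bar = np.full(N + 1, x0)               # seed trajectory: hold the initial state
for it in range(15):
    x = cp.Variable(N + 1)
    u = cp.Variable(N)
    cons = [x[0] == x0, cp.abs(u) <= 5]
    for k in range(N):
        # first-order approximation of x_k**2 about the previous iterate x_bar[k]
        lin = x_bar[k] ** 2 + 2 * x_bar[k] * (x[k] - x_bar[k])
        cons.append(x[k + 1] == x[k] + dt * (lin + u[k]))
    cost = cp.sum_squares(x) + 0.1 * cp.sum_squares(u)
    cp.Problem(cp.Minimize(cost), cons).solve()
    if np.max(np.abs(x.value - x_bar)) < 1e-4:
        break                            # trajectory has stopped changing
    x_bar = x.value                      # re-linearize around the new trajectory

print(f"converged after {it + 1} convex programs, terminal state {x.value[-1]:.4f}")
```

    At a fixed point of this iteration the linearization is exact along the returned trajectory, which mirrors the paper's claim that the successive convex programs converge to a solution of the original problem.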

    An Improved Constraint-Tightening Approach for Stochastic MPC

    The problem of achieving a good trade-off in Stochastic Model Predictive Control between the competing goals of improving average performance and reducing conservativeness, while still guaranteeing recursive feasibility and low computational complexity, is addressed. We propose a novel, less restrictive scheme based on considering stability and recursive feasibility separately. Recursive feasibility is guaranteed through an explicit first-step constraint: we guarantee the existence of a feasible input trajectory at each time instant, but only require that the input sequence computed at time k remains feasible at time k+1 for most disturbances, not necessarily for all, which suffices for stability. To overcome the computational complexity of probabilistic constraints, we propose an offline constraint-tightening procedure, which can be efficiently solved via a sampling approach to the desired accuracy. The online computational complexity of the resulting Model Predictive Control (MPC) algorithm is similar to that of a nominal MPC with terminal region. A numerical example, which provides a comparison with classical, recursively feasible Stochastic MPC and Robust MPC, shows the efficacy of the proposed approach.
    Comment: Paper has been submitted to ACC 201
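
    A minimal sketch of an offline, sampling-based constraint tightening of this flavor is given below; it is not the paper's exact procedure and carries none of its guarantees. For a half-space constraint h'x <= 1 required to hold with probability 1 - eps, each predicted step j is tightened by an empirical (1 - eps)-quantile of the disturbance effect h'e_j on prestabilized error dynamics. The matrices, feedback gain, horizon, and sample size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[-0.6, -1.0]])              # prestabilizing feedback (placeholder)
A_K = A + B @ K                           # closed-loop prediction matrix
h = np.array([1.0, 0.0])                  # constraint h'x <= 1
eps, N, Ns = 0.1, 10, 2000                # violation level, horizon, number of samples

tightening = np.zeros(N + 1)
for j in range(1, N + 1):
    samples = np.empty(Ns)
    for s in range(Ns):
        w = rng.uniform(-0.1, 0.1, size=(j, 2))   # one sampled disturbance sequence
        e = np.zeros(2)
        for i in range(j):                        # error between true and nominal prediction
            e = A_K @ e + w[i]
        samples[s] = h @ e
    tightening[j] = np.quantile(samples, 1 - eps) # empirical (1 - eps)-quantile

print("per-step tightening of h'z <= 1:", np.round(tightening, 4))
# Online, the nominal MPC would then enforce h'z_j <= 1 - tightening[j] on the nominal state z_j.
```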

    Distributed Model Predictive Control Using a Chain of Tubes

    A new distributed MPC algorithm for the regulation of dynamically coupled subsystems is presented in this paper. The current control action is computed via two robust controllers working in a nested fashion. The inner controller builds a nominal reference trajectory from a decentralized perspective; the outer controller uses this information to take the effects of the coupling into account and generate a distributed control action. The tube-based approach to robustness is employed, and a supplementary constraint is included in the outer optimization problem to provide recursive feasibility of the overall controller.
    Comment: Accepted for presentation at the UKACC CONTROL 2016 conference (Belfast, UK)
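
    A very rough sketch of the nested two-layer structure for one subsystem is shown below (the tube-based tightening and the supplementary recursive-feasibility constraint are omitted): an inner decentralized nominal MPC ignores the coupling, and an outer MPC re-optimizes the local input while treating the neighbour's exchanged nominal trajectory as a known coupling signal. Dynamics, horizon, costs, and bounds are hypothetical placeholders.

```python
import numpy as np
import cvxpy as cp

N = 10
a, b, c = 0.9, 1.0, 0.2                  # local dynamics and coupling strength (placeholders)
x1_0, x2_0 = 1.0, -1.0

def nominal_plan(x0):
    """Inner controller: decentralized nominal MPC, coupling ignored."""
    x, u = cp.Variable(N + 1), cp.Variable(N)
    cons = [x[0] == x0, cp.abs(u) <= 1]
    cons += [x[k + 1] == a * x[k] + b * u[k] for k in range(N)]
    cp.Problem(cp.Minimize(cp.sum_squares(x) + 0.1 * cp.sum_squares(u)), cons).solve()
    return x.value

def outer_plan(x0, neighbour_traj):
    """Outer controller: accounts for the neighbour's nominal trajectory as a known input."""
    x, u = cp.Variable(N + 1), cp.Variable(N)
    cons = [x[0] == x0, cp.abs(u) <= 1]
    cons += [x[k + 1] == a * x[k] + b * u[k] + c * neighbour_traj[k] for k in range(N)]
    cp.Problem(cp.Minimize(cp.sum_squares(x) + 0.1 * cp.sum_squares(u)), cons).solve()
    return u.value

z2 = nominal_plan(x2_0)                  # subsystem 2 broadcasts its nominal plan
u1 = outer_plan(x1_0, z2)                # subsystem 1 plans against that coupling
print("first control move of subsystem 1:", round(float(u1[0]), 4))
```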

    Dynamic Tube MPC for Nonlinear Systems

    Modeling error or external disturbances can severely degrade the performance of Model Predictive Control (MPC) in real-world scenarios. Robust MPC (RMPC) addresses this limitation by optimizing over feedback policies, but at the expense of increased computational complexity. Tube MPC is an approximate solution strategy in which a robust controller, designed offline, keeps the system in an invariant tube around a desired nominal trajectory, generated online. Naturally, this decomposition is suboptimal, especially for systems with changing objectives or operating conditions. In addition, many tube MPC approaches are unable to capture state-dependent uncertainty due to the complexity of calculating invariant tubes, resulting in overly conservative approximations. This work presents the Dynamic Tube MPC (DTMPC) framework for nonlinear systems, in which both the tube geometry and the open-loop trajectory are optimized simultaneously. By using boundary layer sliding control, the tube geometry can be expressed as a simple relation between control parameters and uncertainty bound, enabling the tube geometry dynamics to be added to the nominal MPC optimization with minimal increase in computational complexity. In addition, DTMPC is able to leverage state-dependent uncertainty to reduce conservativeness and improve optimization feasibility. DTMPC is demonstrated to robustly perform obstacle avoidance and modify the tube geometry in response to obstacle proximity.
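
    The kind of relation exploited above can be illustrated numerically (this is not the paper's exact formulation): with boundary layer sliding control, a boundary-layer thickness Phi can be driven by first-order dynamics in the uncertainty bound, and the tube half-width follows as Phi divided by the sliding gain, so a state-dependent uncertainty bound directly shrinks or grows the tube. The gains, uncertainty profile, and time grid below are assumptions.

```python
import numpy as np

lam, dt, T = 2.0, 0.01, 6.0
t = np.arange(0.0, T, dt)
D = np.where(t < 3.0, 1.0, 0.2)          # uncertainty bound: large early, small later (assumed)

Phi = np.empty_like(t)
Phi[0] = D[0] / lam                      # start at the steady-state thickness
for k in range(len(t) - 1):
    # illustrative boundary-layer dynamics: Phi_dot = -lam * Phi + D
    Phi[k + 1] = Phi[k] + dt * (-lam * Phi[k] + D[k])

tube_half_width = Phi / lam              # tube on the tracking error
print("tube half-width early / late:",
      round(float(tube_half_width[100]), 3), "/", round(float(tube_half_width[-1]), 3))
```

    Because the tube width obeys a simple differential equation in the control parameters and uncertainty bound, it can be appended to the nominal MPC model as extra states at little computational cost, which is the point the abstract makes.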

    Robust Model Predictive Control via Scenario Optimization

    This paper discusses a novel probabilistic approach for the design of robust model predictive control (MPC) laws for discrete-time linear systems affected by parametric uncertainty and additive disturbances. The proposed technique is based on the iterated solution, at each step, of a finite-horizon optimal control problem (FHOCP) that takes into account a suitable number of randomly extracted scenarios of uncertainty and disturbances, followed by a specific command selection rule implemented in a receding horizon fashion. The scenario FHOCP is always convex, even when the uncertain parameters and disturbance belong to non-convex sets and irrespective of how the model uncertainty influences the system's matrices. Moreover, the computational complexity of the proposed approach does not depend on the uncertainty/disturbance dimensions, and scales quadratically with the control horizon. The main result in this paper is the analysis of the closed-loop system under receding-horizon implementation of the scenario FHOCP; it essentially states that the devised control law guarantees constraint satisfaction at each step with some a priori assigned probability p, while the system's state reaches the target set either asymptotically, or in finite time with probability at least p. The proposed method may be a valid alternative when other existing techniques, either deterministic or stochastic, are not directly usable due to excessive conservatism or to numerical intractability caused by lack of convexity of the robust or chance-constrained optimization problem.
    Comment: This manuscript is a preprint of a paper accepted for publication in the IEEE Transactions on Automatic Control, with DOI: 10.1109/TAC.2012.2203054, and is subject to IEEE copyright. The copy of record will be available at http://ieeexplore.ieee.org
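
    A simplified scenario FHOCP of the kind described above can be sketched as follows (the paper's command selection rule and probabilistic guarantees are not reproduced): a single input sequence is optimized against M sampled realizations of the uncertain dynamics and disturbance sequences, and the problem remains convex because each sampled model enters as fixed data. All matrices, bounds, and sample counts below are placeholders.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
N, M = 10, 50                            # horizon, number of sampled scenarios
x0 = np.array([2.0, 0.0])
B = np.array([[0.0], [1.0]])

u = cp.Variable((1, N))                  # one input sequence shared by all scenarios
cost, cons = 0, [cp.abs(u) <= 2]
for s in range(M):
    # one sampled realization of the uncertain model and disturbance sequence
    a = 0.1 * (1 + 0.2 * rng.uniform(-1, 1))       # uncertain parameter
    A = np.array([[1.0, a], [0.0, 1.0]])
    w = 0.05 * rng.uniform(-1, 1, size=(2, N))
    x = x0
    for k in range(N):
        x = A @ x + B @ u[:, k] + w[:, k]          # scenario state (affine in u)
        cons.append(cp.abs(x[0]) <= 3)             # state constraint enforced per scenario
    cost += cp.sum_squares(x) / M                  # average terminal cost over scenarios

cp.Problem(cp.Minimize(cost + 0.01 * cp.sum_squares(u)), cons).solve()
print("first input to apply in receding horizon:", round(float(u.value[0, 0]), 4))
```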