
    Robust Model Predictive Control via Scenario Optimization

    This paper discusses a novel probabilistic approach for the design of robust model predictive control (MPC) laws for discrete-time linear systems affected by parametric uncertainty and additive disturbances. The proposed technique is based on the iterated solution, at each step, of a finite-horizon optimal control problem (FHOCP) that takes into account a suitable number of randomly extracted scenarios of uncertainty and disturbances, followed by a specific command selection rule implemented in a receding horizon fashion. The scenario FHOCP is always convex, even when the uncertain parameters and disturbances belong to non-convex sets, and irrespective of how the model uncertainty influences the system's matrices. Moreover, the computational complexity of the proposed approach does not depend on the uncertainty/disturbance dimensions, and scales quadratically with the control horizon. The main result of the paper concerns the analysis of the closed-loop system under receding-horizon implementation of the scenario FHOCP: the devised control law guarantees constraint satisfaction at each step with some a priori assigned probability p, while the system's state reaches the target set either asymptotically, or in finite time with probability at least p. The proposed method may be a valid alternative when other existing techniques, either deterministic or stochastic, are not directly usable due to excessive conservatism or to numerical intractability caused by lack of convexity of the robust or chance-constrained optimization problem.
    Comment: This manuscript is a preprint of a paper accepted for publication in the IEEE Transactions on Automatic Control, with DOI 10.1109/TAC.2012.2203054, and is subject to IEEE copyright. The copy of record will be available at http://ieeexplore.ieee.or
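    The scenario FHOCP described above can be illustrated with a small convex program: sample a finite set of (A, B, disturbance) realizations, impose the state constraints for every sampled scenario, and apply only the first optimal input in receding-horizon fashion. The sketch below is not the paper's algorithm; the dynamics, constraint bounds, scenario count, and the use of cvxpy are illustrative assumptions.

```python
# Minimal scenario finite-horizon optimal control sketch (illustrative values only).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
T, N = 10, 20                      # control horizon, number of sampled scenarios
x0 = np.array([1.0, 0.0])

def sample_scenario():
    """One random realization of (A, B, disturbance sequence)."""
    theta = rng.uniform(-0.1, 0.1)                  # parametric uncertainty
    A = np.array([[1.0 + theta, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    w = 0.01 * rng.standard_normal((T, 2))          # additive disturbance
    return A, B, w

u = cp.Variable((T, 1))
cost, constraints = 0, [cp.abs(u) <= 1.0]           # input constraint
for _ in range(N):
    A, B, w = sample_scenario()
    x = x0
    for t in range(T):
        x = A @ x + B @ u[t] + w[t]                 # affine in u, hence convex
        cost += cp.sum_squares(x) + cp.sum_squares(u[t])
        constraints += [cp.abs(x[0]) <= 2.0]        # state constraint per scenario
prob = cp.Problem(cp.Minimize(cost / N), constraints)
prob.solve()
print("first input to apply (receding horizon):", u.value[0])
```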

    On control of discrete-time state-dependent jump linear systems with probabilistic constraints: A receding horizon approach

    In this article, we consider receding horizon control of discrete-time state-dependent jump linear systems, a particular kind of stochastic switching system, subject to possibly unbounded random disturbances and probabilistic state constraints. Due to the nature of the dynamical system and the constraints, we consider a one-step receding horizon. Using the inverse cumulative distribution function, we convert the probabilistic state constraints into deterministic constraints and obtain a tractable deterministic receding horizon control problem. The receding horizon control law consists of a linear state feedback and an admissible offset term. We ensure mean-square boundedness of the state by solving linear matrix inequalities off-line, and solve the receding horizon control problem with the offset terms on-line. We illustrate the overall approach on a macroeconomic system.
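    The inverse-CDF conversion mentioned above has a simple closed form when the disturbance is Gaussian. The snippet below is a minimal sketch of that single step (not the paper's jump-linear setting); the scalar model, bounds, and SciPy usage are illustrative assumptions.

```python
# Convert P(x_next <= x_max) >= 1 - alpha into a deterministic, tightened constraint.
import numpy as np
from scipy.stats import norm

a, b, sigma = 0.9, 0.5, 0.2        # x_next = a*x + b*u + w,  w ~ N(0, sigma^2)
x, x_max, alpha = 1.0, 1.5, 0.05

# P(a*x + b*u + w <= x_max) >= 1 - alpha
#   <=>  a*x + b*u <= x_max - sigma * Phi^{-1}(1 - alpha)
tightened_bound = x_max - sigma * norm.ppf(1.0 - alpha)
u_max = (tightened_bound - a * x) / b   # largest admissible input at this state
print("tightened deterministic bound:", tightened_bound, "-> u <=", u_max)
```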

    Stability for Receding-horizon Stochastic Model Predictive Control

    A stochastic model predictive control (SMPC) approach is presented for discrete-time linear systems with arbitrary time-invariant probabilistic uncertainties and additive Gaussian process noise. Closed-loop stability of the SMPC approach is established by appropriate selection of the cost function. Polynomial chaos is used for uncertainty propagation through the system dynamics. The performance of the SMPC approach is demonstrated using the Van de Vusse reactions.
    Comment: American Control Conference (ACC) 201
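    The polynomial chaos propagation step can be sketched for a scalar system with one uncertain, time-invariant parameter: expand the state in probabilists' Hermite polynomials of a standard normal germ and recover the expansion coefficients by Gauss-Hermite quadrature. This is a generic, non-intrusive illustration rather than the paper's implementation; all values are placeholders.

```python
# Polynomial-chaos propagation for x_{k+1} = a(xi) * x_k, a = a_bar + sigma*xi, xi ~ N(0,1).
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as H

a_bar, sigma, x0, steps, order = 0.7, 0.05, 1.0, 20, 4
nodes, weights = H.hermegauss(30)            # quadrature for weight exp(-xi^2 / 2)
weights = weights / np.sqrt(2.0 * np.pi)     # normalize to the standard normal density

def propagate(xi):
    a = a_bar + sigma * xi
    x = x0
    for _ in range(steps):
        x = a * x
    return x

samples = propagate(nodes)                   # terminal state at each quadrature node
factorials = np.array([factorial(k) for k in range(order + 1)], dtype=float)
coeffs = np.array([
    np.sum(weights * samples * H.hermeval(nodes, np.eye(order + 1)[k])) / factorials[k]
    for k in range(order + 1)
])                                           # projection onto He_k (E[He_k^2] = k!)
mean = coeffs[0]
var = np.sum(coeffs[1:] ** 2 * factorials[1:])   # Var = sum_{k>0} c_k^2 * k!
print("PCE mean:", mean, "PCE std:", np.sqrt(var))
```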

    GPU Based Path Integral Control with Learned Dynamics

    We present an algorithm that combines recent advances in model-based path integral control with machine learning approaches for learning forward dynamics models. We take advantage of the parallel computing power of a GPU to quickly draw a massive number of samples from a learned probabilistic dynamics model, which we use to approximate the path integral form of the optimal control. The resulting algorithm runs in a receding-horizon fashion in real time, and is subject to no restrictive assumptions about costs, constraints, or dynamics. A simple change to the path integral control formulation allows the algorithm to take model uncertainty into account during planning, and we demonstrate its performance on a quadrotor navigation task. In addition to this novel adaptation of path integral control, this is the first time that a receding-horizon implementation of iterative path integral control has been run on a real system.
    Comment: 6 pages, NIPS 2014 - Autonomously Learning Robots Worksho
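    The sampling-based update at the heart of such path integral controllers can be sketched as follows: perturb a nominal control sequence, roll each perturbation through the dynamics model, weight the rollouts by exponentiated cost, and average. The sketch uses plain NumPy on the CPU with a hand-written toy model standing in for the learned, GPU-parallel dynamics; names and values are illustrative assumptions.

```python
# Receding-horizon path-integral-style control update (toy, CPU-only illustration).
import numpy as np

rng = np.random.default_rng(1)
T, K, lam, noise_std = 30, 1000, 1.0, 0.5    # horizon, rollouts, temperature, exploration noise

def dynamics(x, u):
    pos, vel = x                              # toy point mass in place of a learned model
    return np.array([pos + 0.05 * vel, vel + 0.05 * u])

def cost(x, u):
    pos, vel = x
    return (pos - 1.0) ** 2 + 0.1 * vel ** 2 + 0.01 * u ** 2   # reach pos = 1

def path_integral_step(x0, u_nom):
    eps = noise_std * rng.standard_normal((K, T))   # control perturbations
    S = np.zeros(K)                                 # rollout costs
    for k in range(K):                              # on a GPU these rollouts run in parallel
        x = x0
        for t in range(T):
            u = u_nom[t] + eps[k, t]
            x = dynamics(x, u)
            S[k] += cost(x, u)
    w = np.exp(-(S - S.min()) / lam)
    w /= w.sum()
    return u_nom + w @ eps                          # importance-weighted control update

x, u_nom = np.array([0.0, 0.0]), np.zeros(T)
for _ in range(50):                                 # receding-horizon loop
    u_nom = path_integral_step(x, u_nom)
    x = dynamics(x, u_nom[0])                       # apply first input
    u_nom = np.append(u_nom[1:], 0.0)               # shift the plan
print("final state:", x)
```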

    Control with Probabilistic Signal Temporal Logic

    Autonomous agents often operate in uncertain environments where their decisions are made based on beliefs over the states of targets. We are interested in controller synthesis for complex tasks defined over belief spaces. Designing such controllers is challenging due to computational complexity and the limited expressivity of existing specification languages. In this paper, we propose a probabilistic extension to signal temporal logic (STL) that expresses tasks over continuous belief spaces. We present an efficient synthesis algorithm to find a control input that maximises the probability of satisfying a given task. We validate our algorithm through simulations of an unmanned aerial vehicle deployed for surveillance and search missions.
    Comment: 7 pages, submitted to the 2016 American Control Conference (ACC 2016) on September 30, 2015 (under review
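    For a Gaussian belief over the state, the probability that a linear predicate holds (the basic quantity such probabilistic specifications are built from) has a closed form via the Gaussian CDF, so a controller can search over inputs to maximize it. The sketch below illustrates only that idea, not the paper's synthesis algorithm; the model, predicate, and input grid are assumptions.

```python
# Probability of a linear predicate a^T x <= b under a propagated Gaussian belief.
import numpy as np
from scipy.stats import norm

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
W = 0.01 * np.eye(2)                                # process-noise covariance
mu, Sigma = np.array([0.0, 1.0]), 0.1 * np.eye(2)   # current belief over the state
a, b = np.array([1.0, 0.0]), 0.5                    # predicate: a^T x <= b

def predicate_probability(u):
    mu_next = A @ mu + (B @ np.array([u])).ravel()
    Sigma_next = A @ Sigma @ A.T + W
    mean, std = a @ mu_next, np.sqrt(a @ Sigma_next @ a)
    return norm.cdf((b - mean) / std)               # P(a^T x_next <= b)

candidates = np.linspace(-1.0, 1.0, 41)             # crude search over admissible inputs
best = max(candidates, key=predicate_probability)
print("best input:", best, "satisfaction probability:", predicate_probability(best))
```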

    Stochastic model predictive control of LPV systems via scenario optimization

    A stochastic receding-horizon control approach for constrained linear parameter-varying (LPV) discrete-time systems is proposed in this paper. It is assumed that the time-varying parameters are stochastic in nature and that the system's matrices are bounded but otherwise arbitrary nonlinear functions of these parameters. No specific assumption on the statistics of the parameters is required. By using a randomization approach, a scenario-based finite-horizon optimal control problem is formulated, where only a finite number M of sampled predicted parameter trajectories ('scenarios') are considered. This problem is convex and its solution is a priori guaranteed to be probabilistically robust, up to a user-defined probability level p. The level p is linked to M by an analytic relationship, which establishes a tradeoff between computational complexity and robustness of the solution. Then, a receding horizon strategy is presented, involving the iterated solution of a scenario-based finite-horizon control problem at each time step. Our key result is to show that the state trajectories of the controlled system reach a terminal positively invariant set in finite time, either deterministically, or with probability no smaller than p. The features of the approach are illustrated by a numerical example.
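    For intuition about how a probability level can be linked to the number of sampled scenarios, a commonly used scenario-optimization bound of the form M >= (2/eps)(ln(1/beta) + d) can be evaluated as below. This generic bound is an assumption for illustration only and is not necessarily the exact relationship derived in the paper.

```python
# Generic scenario-count bound (illustrative, not the paper's exact relationship).
import math

def scenarios_needed(eps, beta, d):
    """Scenarios for violation level eps, confidence 1 - beta, d decision variables."""
    return math.ceil(2.0 / eps * (math.log(1.0 / beta) + d))

# e.g. a horizon-10, single-input problem (d = 10), 5% violation level (p = 0.95),
# confidence 1 - 1e-6
print(scenarios_needed(eps=0.05, beta=1e-6, d=10))
```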