
    Robust Temporal Logic Model Predictive Control

    Control synthesis from temporal logic specifications has gained popularity in recent years. In this paper, we use a model predictive approach to control discrete-time linear systems with additive bounded disturbances, subject to constraints given as formulas of signal temporal logic (STL). We introduce a (conservative) computationally efficient framework to synthesize control strategies based on mixed-integer programs. The designed controllers satisfy the temporal logic requirements, are robust to all possible realizations of the disturbances, and are optimal with respect to a cost function. If the temporal logic constraint is infeasible, the controller satisfies a relaxed, minimally violating constraint. An illustrative case study is included. Comment: This work has been accepted to appear in the proceedings of the 53rd Annual Allerton Conference on Communication, Control and Computing, Urbana-Champaign, IL (2015).
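
    The mixed-integer encoding at the heart of such approaches can be sketched on a toy problem. The snippet below is a minimal illustration, not the authors' implementation: it encodes a single STL "eventually" predicate over the horizon with binary variables and big-M constraints, and robustifies the predicate crudely by tightening it with a worst-case disturbance margin. The double-integrator model, bounds, and solver choice are all assumptions made for illustration.

        import cvxpy as cp
        import numpy as np

        # Toy double integrator x = (position, velocity); all numbers are illustrative assumptions
        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.005], [0.1]])
        N, u_max, w_max, goal, bigM = 20, 2.0, 0.05, 1.0, 100.0

        x = cp.Variable((2, N + 1))
        u = cp.Variable((1, N))
        z = cp.Variable(N, boolean=True)   # z[t] = 1 marks a time at which the predicate must hold

        margin = w_max * N                 # crude tightening against the bounded disturbance
        cons = [x[:, 0] == np.zeros(2), cp.abs(u) <= u_max]
        for t in range(N):
            cons += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t]]
            # Big-M encoding: if z[t] == 1, the position at t+1 must exceed the (tightened) threshold
            cons += [x[0, t + 1] >= goal + margin - bigM * (1 - z[t])]
        cons += [cp.sum(z) >= 1]           # STL "eventually": the predicate holds at least once

        prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(u))), cons)
        prob.solve(solver=cp.GLPK_MI)      # any mixed-integer LP solver available to cvxpy would do
        print(prob.status, prob.value)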

    Bayesian model predictive control: Efficient model exploration and regret bounds using posterior sampling

    Tight performance specifications in combination with operational constraints make model predictive control (MPC) the method of choice in various industries. As the performance of an MPC controller depends on a sufficiently accurate objective and prediction model of the process, a significant effort in the MPC design procedure is dedicated to modeling and identification. Driven by the increasing amount of available system data and advances in the field of machine learning, data-driven MPC techniques have been developed to facilitate the MPC controller design. While these methods are able to leverage available data, they typically do not provide principled mechanisms to automatically trade off exploitation of available data and exploration to improve and update the objective and prediction model. To address this, we present a learning-based MPC formulation using posterior sampling techniques, which provides finite-time regret bounds on the learning performance while being simple to implement using off-the-shelf MPC software and algorithms. The performance analysis of the method is based on posterior sampling theory, and its practical efficiency is illustrated using a numerical example of a highly nonlinear dynamical car-trailer system.
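
    The posterior-sampling idea can be illustrated with a minimal Thompson-sampling control loop. The sketch below is an assumption-laden toy (scalar linear system, conjugate Gaussian posterior, and a one-step certainty-equivalent controller standing in for a full MPC solve), not the formulation of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Unknown scalar dynamics x+ = a*x + b*u + w, w ~ N(0, sigma^2); (a, b) hidden from the controller
        a_true, b_true, sigma, u_max = 0.9, 0.5, 0.1, 1.0

        # Conjugate Gaussian prior over theta = (a, b), noise variance assumed known
        mu, Lam = np.zeros(2), np.eye(2)   # posterior mean and precision
        episodes, T = 10, 30

        x, Phi, Y = 1.0, [], []
        for ep in range(episodes):
            # Posterior sampling: draw one model from the current posterior and keep it for the episode
            a_s, b_s = rng.multivariate_normal(mu, np.linalg.inv(Lam))
            for t in range(T):
                # One-step certainty-equivalent control for the sampled model (a stand-in for the MPC solve)
                u = 0.0 if abs(b_s) < 1e-3 else np.clip(-a_s * x / b_s, -u_max, u_max)
                x_next = a_true * x + b_true * u + sigma * rng.standard_normal()
                Phi.append([x, u]); Y.append(x_next)
                x = x_next
            # Bayesian linear-regression update of the posterior over (a, b) using all data so far
            P, y = np.array(Phi), np.array(Y)
            Lam = np.eye(2) + P.T @ P / sigma ** 2
            mu = np.linalg.solve(Lam, P.T @ y / sigma ** 2)
            print(f"episode {ep}: posterior mean a={mu[0]:.3f}, b={mu[1]:.3f}")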

    Intermittent predictive control of an inverted pendulum

    Intermittent predictive pole-placement control is successfully applied to the constrained-state control of a prestabilised experimental inverted pendulum.

    Robust Model Predictive Control via Scenario Optimization

    This paper discusses a novel probabilistic approach to the design of robust model predictive control (MPC) laws for discrete-time linear systems affected by parametric uncertainty and additive disturbances. The proposed technique is based on the iterated solution, at each step, of a finite-horizon optimal control problem (FHOCP) that takes into account a suitable number of randomly extracted scenarios of uncertainty and disturbances, followed by a specific command selection rule implemented in a receding-horizon fashion. The scenario FHOCP is always convex, even when the uncertain parameters and disturbance belong to non-convex sets, and irrespective of how the model uncertainty influences the system's matrices. Moreover, the computational complexity of the proposed approach does not depend on the uncertainty/disturbance dimensions, and scales quadratically with the control horizon. The main result in this paper concerns the analysis of the closed-loop system under receding-horizon implementation of the scenario FHOCP, and essentially states that the devised control law guarantees constraint satisfaction at each step with some a priori assigned probability p, while the system's state reaches the target set either asymptotically, or in finite time with probability at least p. The proposed method may be a valid alternative when other existing techniques, either deterministic or stochastic, are not directly usable due to excessive conservatism or to numerical intractability caused by lack of convexity of the robust or chance-constrained optimization problem. Comment: This manuscript is a preprint of a paper accepted for publication in the IEEE Transactions on Automatic Control, with DOI: 10.1109/TAC.2012.2203054, and is subject to IEEE copyright. The copy of record will be available at http://ieeexplore.ieee.org.
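
    The structure of a scenario FHOCP is easy to sketch: sample a finite number of (parameter, disturbance) scenarios, enforce the constraints along the predicted trajectory of every scenario, and optimize one shared input sequence, applying only its first element. The sketch below uses an open-loop input parameterization and illustrative numbers, so it is a simplified stand-in rather than the paper's exact program.

        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(1)

        # Nominal model and simple box constraints (all values are illustrative assumptions)
        A0 = np.array([[1.0, 0.1], [0.0, 1.0]])
        B0 = np.array([[0.0], [0.1]])
        N, K, u_max, x_max = 15, 50, 1.0, 2.0
        x0 = np.array([1.5, 0.0])

        u = cp.Variable((1, N))                   # one input sequence shared by all scenarios
        cons, cost = [cp.abs(u) <= u_max], 0
        for k in range(K):
            # Each scenario draws its own parametric perturbation and disturbance sequence
            Ak = A0 + 0.02 * rng.standard_normal((2, 2))
            Bk = B0 + 0.01 * rng.standard_normal((2, 1))
            W = 0.02 * rng.standard_normal((2, N))
            xk = cp.Variable((2, N + 1))
            cons += [xk[:, 0] == x0]
            for t in range(N):
                cons += [xk[:, t + 1] == Ak @ xk[:, t] + Bk @ u[:, t] + W[:, t],
                         cp.abs(xk[:, t + 1]) <= x_max]   # constraints hold in every sampled scenario
            cost += cp.sum_squares(xk) / K

        prob = cp.Problem(cp.Minimize(cost + cp.sum_squares(u)), cons)
        prob.solve()                              # convex (a QP) regardless of how Ak, Bk were drawn
        print(prob.status, u.value[:, 0])         # receding horizon: apply the first input, then re-solve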

    Real-Time Motion Planning of Legged Robots: A Model Predictive Control Approach

    We introduce a real-time, constrained, nonlinear Model Predictive Control approach for the motion planning of legged robots. The proposed approach uses a constrained optimal control algorithm known as SLQ. We improve the efficiency of this algorithm by introducing a multi-processing scheme for estimating the value function in its backward pass, which has often been computed as a single process. This parallel SLQ algorithm can optimize longer time horizons without a proportional increase in computation time. Thus, our MPC algorithm can generate optimized trajectories for the next few phases of the motion within only a few milliseconds, outperforming the state of the art by at least one order of magnitude. The performance of the approach is validated on a quadruped robot for generating dynamic gaits such as trotting. Comment: 8 pages.
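
    In the linear-quadratic special case, the backward pass that the paper parallelizes reduces to the familiar Riccati recursion for the quadratic value function. The serial version of that recursion is sketched below with placeholder matrices; it is meant only to show which computation is being distributed, not the authors' parallel scheme.

        import numpy as np

        # Generic LQ data standing in for the local approximation built along an SLQ iterate
        n, m, N = 12, 3, 200
        rng = np.random.default_rng(2)
        A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
        B = 0.1 * rng.standard_normal((n, m))
        Q, R, Qf = np.eye(n), np.eye(m), 10.0 * np.eye(n)

        def backward_pass(A, B, Q, R, Qf, N):
            # Serial Riccati recursion: value function x' P[t] x and feedback gains K[t]
            P, K = [None] * (N + 1), [None] * N
            P[N] = Qf
            for t in range(N - 1, -1, -1):
                H = R + B.T @ P[t + 1] @ B
                K[t] = np.linalg.solve(H, B.T @ P[t + 1] @ A)
                Acl = A - B @ K[t]
                P[t] = Q + K[t].T @ R @ K[t] + Acl.T @ P[t + 1] @ Acl
            return P, K

        P, K = backward_pass(A, B, Q, R, Qf, N)
        print(K[0].shape)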

    On Stochastic Model Predictive Control with Bounded Control Inputs

    This paper is concerned with the problem of Model Predictive Control and Rolling Horizon Control of discrete-time systems subject to possibly unbounded random noise inputs, while satisfying hard bounds on the control inputs. We use a nonlinear feedback policy with respect to noise measurements and show that the resulting mathematical program has a tractable convex solution in both cases. Moreover, under the assumption that the zero-input and zero-noise system is asymptotically stable, we show that the variance of the state, under the resulting Model Predictive Control and Rolling Horizon Control policies, is bounded. Finally, we provide some numerical examples showing how certain matrices in the underlying mathematical program can be computed off-line. Comment: 8 pages.
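
    One way to see how hard input bounds can coexist with unbounded noise is to feed back a saturated version of the measured noise, so that the worst case over all realizations is finite and convexly constrained. The sketch below does this for a scalar system with illustrative numbers; it parameterizes the input as an affine function of saturated past noise and imposes |eta[t]| + sum_i |M[t, i]| <= u_max, and is a simplified stand-in for the program in the paper.

        import cvxpy as cp
        import numpy as np

        # Scalar system x+ = a x + b u + w, with possibly unbounded noise w; values are illustrative
        a, b, N, u_max, x0 = 0.9, 1.0, 10, 1.0, 2.0

        eta = cp.Variable(N)          # open-loop part of the input
        M = cp.Variable((N, N))       # gains on *saturated* past noise, sat(w) in [-1, 1]

        cons = []
        for t in range(N):
            cons += [M[t, t:] == 0]   # causality: u[t] may only use noise measured before time t
            # Hard bound for every noise realization: |u[t]| <= |eta[t]| + sum_i |M[t, i]| * 1
            cons += [cp.abs(eta[t]) + cp.sum(cp.abs(M[t, :])) <= u_max]

        # The zero-noise trajectory is driven by eta alone; penalize it together with the policy effort
        cost, x = 0, x0
        for t in range(N):
            x = a * x + b * eta[t]
            cost += cp.square(x)
        cost += cp.sum_squares(eta) + cp.sum_squares(M)

        prob = cp.Problem(cp.Minimize(cost), cons)
        prob.solve()
        print(prob.status, np.round(eta.value, 3))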

    Stochastic MPC Design for a Two-Component Granulation Process

    We address the issue of controlling a stochastic two-component granulation process in pharmaceutical applications using Stochastic Model Predictive Control (SMPC) and model reduction to obtain the desired particle distribution. We first use the method of moments to reduce the governing integro-differential equation to a nonlinear ordinary differential equation (ODE). This reduced-order model is employed in the SMPC formulation. The probabilistic constraints in this formulation keep the variance of the particles' drug concentration in an admissible range. To solve the resulting stochastic optimization problem, we first employ polynomial chaos expansion to obtain the probability distribution function (PDF) of the future state variables from the uncertain variables' distributions. As a result, the original stochastic optimization problem for a particulate system is converted into a deterministic dynamic optimization. This approximation lessens the computational burden of the controller and makes its real-time application possible. Comment: American Control Conference, May, 201
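
    A minimal non-intrusive polynomial chaos computation of this kind is sketched below, with a toy scalar moment ODE standing in for the reduced-order granulation model and a single Gaussian uncertain parameter. The mean and variance recovered from the expansion then replace the probabilistic constraint with a deterministic back-off of the form mean + kappa*std <= limit; the ODE, kappa, and all numerical values are assumptions made for illustration.

        import math
        import numpy as np
        from numpy.polynomial import hermite_e as He

        # Uncertain rate parameter theta = mu + sigma * xi with xi ~ N(0, 1)
        mu, sigma = 1.0, 0.2

        def terminal_moment(theta, m0=1.0, dt=0.01, T=1.0):
            # Toy reduced-order moment ODE dm/dt = -theta * m**2, integrated with forward Euler
            m = m0
            for _ in range(int(T / dt)):
                m += dt * (-theta * m ** 2)
            return m

        # Non-intrusive PCE in one Gaussian germ via probabilists' Gauss-Hermite quadrature
        order, nquad = 4, 12
        nodes, weights = He.hermegauss(nquad)       # quadrature for the weight exp(-xi**2 / 2)
        weights = weights / np.sqrt(2.0 * np.pi)    # normalize to expectations under N(0, 1)
        y = np.array([terminal_moment(mu + sigma * xi) for xi in nodes])

        coeffs = []
        for k in range(order + 1):
            ek = np.zeros(k + 1); ek[k] = 1.0       # coefficient vector selecting He_k
            Hk = He.hermeval(nodes, ek)
            coeffs.append(np.sum(weights * y * Hk) / math.factorial(k))   # <y He_k> / <He_k**2>

        mean = coeffs[0]
        var = sum(coeffs[k] ** 2 * math.factorial(k) for k in range(1, order + 1))

        # Deterministic surrogate of the chance constraint P(y <= y_max) >= 1 - eps
        kappa, y_max = 2.0, 0.8
        print("mean", mean, "std", var ** 0.5, "ok:", mean + kappa * var ** 0.5 <= y_max)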