Model predictive control techniques for hybrid systems
This paper describes the main issues encountered when applying model predictive control to hybrid processes. Hybrid model predictive control (HMPC) is a research field that is not yet fully developed, with many open challenges. The paper describes some of the techniques proposed by the research community to overcome the main problems encountered. Issues related to stability and to the solution of the optimization problem are also discussed. The paper ends by describing the results of a benchmark exercise in which several HMPC schemes were applied to a solar air-conditioning plant.
Ministerio de Educación y Ciencia DPI2007-66718-C04-01; Ministerio de Educación y Ciencia DPI2008-0581
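A core difficulty the abstract alludes to is that hybrid MPC couples discrete mode decisions with continuous inputs, so the optimization becomes combinatorial. The following is a minimal sketch of that structure for a hypothetical scalar switched-affine system (all dynamics, costs, and the gridded input search are illustrative assumptions, not the paper's benchmark): mode sequences are enumerated explicitly and a cheap greedy surrogate replaces the per-sequence continuous subproblem that a real HMPC scheme would solve exactly.

```python
import itertools
import numpy as np

# Toy hybrid (switched-affine) system: x+ = a[m]*x + b[m]*u, mode m in {0, 1}.
a = [0.9, 1.2]
b = [1.0, 0.5]
N = 4            # prediction horizon
u_max = 1.0

def hmpc_enumerate(x0):
    """Brute-force hybrid MPC sketch: enumerate all 2**N mode sequences,
    then pick inputs by gridding (a surrogate for the exact subproblem)."""
    u_grid = np.linspace(-u_max, u_max, 21)
    best = (np.inf, None, None)
    for modes in itertools.product([0, 1], repeat=N):
        x, cost, us = x0, 0.0, []
        for m in modes:
            # Greedy one-step-ahead input choice for this stage.
            u = min(u_grid, key=lambda v: (a[m]*x + b[m]*v)**2 + 0.1*v**2)
            cost += x**2 + 0.1*u**2
            x = a[m]*x + b[m]*u
            us.append(u)
        cost += x**2                      # terminal penalty
        if cost < best[0]:
            best = (cost, modes, us)
    return best

cost, modes, us = hmpc_enumerate(2.0)
print(cost, modes)
```

The exponential growth of the mode enumeration (2^N sequences here) is exactly why the surveyed techniques reformulate the problem, e.g. as a mixed-integer program handled by branch-and-bound rather than exhaustive search.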
Integral MRAC with Minimal Controller Synthesis and bounded adaptive gains: The continuous-time case
Model reference adaptive controllers designed via the Minimal Control Synthesis (MCS) approach are a viable solution for controlling plants affected by parameter uncertainty, unmodelled dynamics, and disturbances. Despite their effectiveness in imposing the required reference dynamics, external disturbances may occasionally induce a drift of the adaptive gains, which can eventually lead to closed-loop instability or degrade tracking performance. This problem has recently been addressed for this class of adaptive algorithms in the discrete-time case, for square-integrable perturbations, by using a parameter projection strategy [1]. In this paper we systematically tackle this issue for continuous-time MCS adaptive systems with integral action by enhancing the adaptive mechanism not only with a parameter projection method, but also by embedding a σ-modification strategy. The former is used to preserve convergence to zero of the tracking error when the disturbance is bounded and L2, while the latter guarantees global uniform ultimate boundedness under continuous L∞ disturbances. In both cases, the proposed control schemes ensure boundedness of all closed-loop signals. The strategies are numerically validated on systems subject to different kinds of disturbances. In addition, an electrical power circuit is used to show the applicability of the algorithms to engineering problems requiring precise tracking of a reference profile over a long time range despite disturbances, unmodelled dynamics, and parameter uncertainty.
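The two anti-drift mechanisms named above, parameter projection and σ-modification, can be sketched for a scalar MRAC loop. All plant values, gains, and signals below are illustrative assumptions (this is not the MCS algorithm of the paper): the projection is a simple box clip on the gains, and the σ term adds leakage to the adaptive law so a persistent bounded disturbance cannot drive the gains off to infinity.

```python
import numpy as np

# Scalar MRAC sketch with sigma-modification and parameter projection.
a_p, b_p = 1.0, 2.0            # unstable plant: xdot = a_p*x + b_p*(u + d)
a_m, b_m = -2.0, 2.0           # stable reference model: xmdot = a_m*xm + b_m*r
gamma, sigma, K_max = 5.0, 0.1, 10.0
dt, T = 1e-3, 20.0

x, xm = 0.0, 0.0
Kx, Kr = 0.0, 0.0
errs = []
for k in range(int(T/dt)):
    t = k*dt
    r = np.sin(t)                        # reference signal
    d = 0.2*np.sign(np.sin(3*t))         # bounded (L-infinity) disturbance
    e = x - xm                           # tracking error
    u = Kx*x + Kr*r
    # Adaptive laws: gradient term plus sigma-modification leakage ...
    Kx += dt*(-gamma*e*x - sigma*Kx)
    Kr += dt*(-gamma*e*r - sigma*Kr)
    # ... followed by parameter projection onto a box of admissible gains.
    Kx = np.clip(Kx, -K_max, K_max)
    Kr = np.clip(Kr, -K_max, K_max)
    # Euler integration of plant and reference model.
    x  += dt*(a_p*x + b_p*(u + d))
    xm += dt*(a_m*xm + b_m*r)
    errs.append(abs(e))

print(max(errs[-1000:]))   # tracking error remains bounded despite d
```

Without the leakage and clipping, the square-wave disturbance here would cause exactly the kind of slow gain drift the abstract describes; with them, the gains and the tracking error stay within a fixed bound.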
Formal Synthesis of Control Strategies for Positive Monotone Systems
We design controllers from formal specifications for positive discrete-time
monotone systems that are subject to bounded disturbances. Such systems are
widely used to model the dynamics of transportation and biological networks.
The specifications are described using signal temporal logic (STL), which can
express a broad range of temporal properties. We formulate the problem as a
mixed-integer linear program (MILP) and show that under the assumptions made in
this paper, which are not restrictive for traffic applications, the existence
of open-loop control policies is sufficient and almost necessary to ensure the
satisfaction of STL formulas. We establish a relation between satisfaction of
STL formulas in infinite time and set-invariance theories and provide an
efficient method to compute robust control invariant sets in high dimensions.
We also develop a robust model predictive framework to plan controls optimally
while ensuring the satisfaction of the specification. Illustrative examples and
a traffic management case study are included.
Comment: To appear in IEEE Transactions on Automatic Control (TAC), 2018; 16 pages, double column.
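The MILP encodings mentioned in the abstract are built on the quantitative (robustness) semantics of STL, which for basic temporal operators reduces to windowed minima and maxima over the robustness of predicates. A minimal sketch for two operators over a sampled signal, with an illustrative predicate and signal of my own choosing:

```python
import numpy as np

# Quantitative STL semantics over a sampled signal x[0..T-1]:
# "always" (G) is a windowed min of predicate robustness, "eventually"
# (F) a windowed max; positive robustness means the formula holds.

def rho_pred(x, c):
    """Robustness of the predicate x >= c at every time step."""
    return x - c

def rho_always(rho, a, b):
    """rho(G_[a,b] phi, t) = min over t' in [t+a, t+b] of rho(phi, t')."""
    T = len(rho)
    return np.array([rho[t+a:min(t+b, T-1)+1].min() for t in range(T - b)])

def rho_eventually(rho, a, b):
    """rho(F_[a,b] phi, t) = max over t' in [t+a, t+b] of rho(phi, t')."""
    T = len(rho)
    return np.array([rho[t+a:min(t+b, T-1)+1].max() for t in range(T - b)])

x = np.array([0.5, 1.2, 1.1, 0.9, 1.4, 1.3, 0.8, 1.0])
r = rho_pred(x, 1.0)            # predicate: x >= 1.0
print(rho_always(r, 0, 2))      # negative where x dips below 1 in the window
print(rho_eventually(r, 0, 2))  # positive where some sample exceeds 1
```

In a MILP formulation, these min/max recursions become linear constraints with binary selector variables, which is what makes STL satisfaction amenable to mixed-integer solvers.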
On the convergence of stochastic MPC to terminal modes of operation
The stability of stochastic Model Predictive Control (MPC) subject to
additive disturbances is often demonstrated in the literature by constructing
Lyapunov-like inequalities that guarantee closed-loop performance bounds and
boundedness of the state, but convergence to a terminal control law is
typically not shown. In this work we use results on general state space Markov
chains to find conditions that guarantee convergence of disturbed nonlinear
systems to terminal modes of operation, so that they converge in probability to
a priori known terminal linear feedback laws and achieve time-average
performance equal to that of the terminal control law. We discuss implications
for the convergence of control laws in stochastic MPC formulations, in
particular we prove convergence for two formulations of stochastic MPC.
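The claimed behaviour, convergence to an a priori known terminal linear feedback law, can be illustrated with a toy dual-mode simulation. Everything below is a hypothetical stand-in (a scalar system, a deadbeat outer move, and a robust invariant interval computed in closed form), not the paper's Markov-chain argument: once the disturbed state enters the terminal set it stays there, so the fraction of time spent under the terminal law tends to one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar disturbed system x+ = a*x + u + w with |w| <= w_bar, and a
# terminal linear law u = k*x (all values are illustrative assumptions).
a, k, w_bar = 1.1, -0.6, 0.1
phi = a + k                          # terminal closed-loop factor, |phi| < 1
x_set = w_bar / (1 - abs(phi))       # robust invariant interval [-x_set, x_set]

def policy(x):
    """Dual-mode stand-in for stochastic MPC: a deadbeat move outside the
    terminal set, the terminal feedback k*x inside it."""
    return (k*x, True) if abs(x) <= x_set else (-a*x, False)

x, terminal_steps, T = 5.0, 0, 2000
for t in range(T):
    u, in_terminal = policy(x)
    terminal_steps += in_terminal
    w = rng.uniform(-w_bar, w_bar)
    x = a*x + u + w

print(terminal_steps / T)   # fraction of steps spent under the terminal law
```

Because `[-x_set, x_set]` is robustly invariant under `u = k*x` (|phi*x + w| <= |phi|*x_set + w_bar = x_set), the controller switches to the terminal law after the first step and never leaves it, mirroring the convergence-to-terminal-mode property studied in the paper.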
Design of model-based controllers via parametric programming