A receding horizon generalization of pointwise min-norm controllers
Control Lyapunov functions (CLFs) are used in conjunction with receding horizon control to develop a new class of receding horizon control schemes. In the process, strong connections between the seemingly disparate approaches are revealed, leading to a unified picture that ties together the notions of pointwise min-norm, receding horizon, and optimal control. This framework is used to develop a CLF based receding horizon scheme, of which a special case provides an appropriate extension of Sontag's formula. The scheme is first presented as an idealized continuous-time receding horizon control law. The issue of implementation under discrete-time sampling is then discussed as a modification. These schemes are shown to possess a number of desirable theoretical and implementation properties. An example is provided, demonstrating their application to a nonlinear control problem. Finally, stronger connections to both optimal and pointwise min-norm control are proved.
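The pointwise min-norm ideas referenced above build on Sontag's universal formula, which constructs a stabilizing feedback directly from the CLF derivatives. A minimal sketch for a scalar-input control-affine system xdot = f(x) + g(x)u follows; the particular f, g, and V below are illustrative choices, not taken from the paper.

```python
# Sontag's universal formula: given a = LfV(x) and b = LgV(x) at the current
# state, return a stabilizing input. Yields Vdot = -sqrt(a^2 + b^4) < 0
# whenever b != 0 (illustrative sketch, not the paper's receding horizon law).
import numpy as np

def sontag_control(a, b):
    """Sontag's formula from the CLF derivatives a = LfV(x), b = LgV(x)."""
    if abs(b) < 1e-12:          # no control authority through V at this state
        return 0.0
    return -(a + np.sqrt(a**2 + b**4)) / b

# Toy example: xdot = x + u with CLF V(x) = 0.5 x^2, so LfV = x^2, LgV = x.
def closed_loop_step(x, dt=0.01):
    a, b = x * x, x
    u = sontag_control(a, b)
    return x + dt * (x + u)

x = 1.0
for _ in range(1000):
    x = closed_loop_step(x)
print(abs(x) < 0.1)   # state driven toward the origin
```

For this toy system the formula gives u = -(1 + sqrt(2)) x, i.e. closed-loop dynamics xdot = -sqrt(2) x, which is the exponential decay the CLF certifies.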
Optimal Switching Synthesis for Jump Linear Systems with Gaussian initial state uncertainty
This paper provides a method to design an optimal switching sequence for jump linear systems with a given Gaussian initial state uncertainty. From a practical perspective, the initial state contains uncertainties stemming from measurement errors or sensor inaccuracies, and we assume this uncertainty is Gaussian. To cope with Gaussian initial state uncertainty and to measure system performance, the Wasserstein metric, which defines a distance between probability density functions, is used. Combined with the receding horizon framework, an optimal switching sequence for jump linear systems is obtained by minimizing an objective function expressed in terms of the Wasserstein distance. The proposed optimal switching synthesis also guarantees mean square stability for jump linear systems. The proposed methods are validated by examples.
Comment: ASME Dynamic Systems and Control Conference (DSCC), 201
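For Gaussian densities the 2-Wasserstein distance used in such objectives has a well-known closed form. The sketch below implements that standard formula; it is generic background, not the paper's specific cost function.

```python
# Closed-form squared 2-Wasserstein distance between Gaussians N(m1, S1)
# and N(m2, S2):
#   W2^2 = ||m1 - m2||^2 + tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})
# Illustrative sketch of the kind of distance the paper's objective builds on.
import numpy as np
from scipy.linalg import sqrtm

def wasserstein2_gaussian(m1, S1, m2, S2):
    """Squared 2-Wasserstein distance between two Gaussian densities."""
    S1 = np.asarray(S1, dtype=float)
    S2 = np.asarray(S2, dtype=float)
    d2 = np.sum((np.asarray(m1, dtype=float) - np.asarray(m2, dtype=float))**2)
    S2h = sqrtm(S2)
    cross = sqrtm(S2h @ S1 @ S2h)          # may return a tiny imaginary part
    return float(d2 + np.trace(S1 + S2 - 2.0 * np.real(cross)))

# 1-D check: W2^2 = (m1 - m2)^2 + (s1 - s2)^2 for standard deviations s1, s2.
print(wasserstein2_gaussian([0.0], [[1.0]], [3.0], [[4.0]]))  # 9 + (1-2)^2 = 10
```

In the scalar case the formula reduces to the squared difference of means plus the squared difference of standard deviations, which makes the expected value easy to verify by hand.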
Multi-agent model predictive control for transport phenomena processes
Throughout the last decades, control systems theory has thrived, opening new areas of development, especially in chemical and biological process engineering. Production processes are becoming increasingly complex, and researchers, academics, and industry professionals must keep pace with this growing complexity and nonlinearity. Developing control architectures and incorporating novel control techniques to overcome optimization problems is a central focus for all involved.
Nonlinear Model Predictive Control (NMPC) has been one of academia's main responses to the exponential growth in process complexity and scale. Prediction algorithms are used to manage closed-loop stability and optimize results. Adaptation mechanisms are now seen as a natural extension of prediction methodologies for tackling uncertainty in distributed parameter systems (DPS) governed by partial differential equations (PDEs). Parameter observers and Lyapunov adaptation laws are further tools for the systems under study.
Stability and stabilization conditions, incorporated implicitly or explicitly into the NMPC formulation by means of pointwise min-norm techniques, are also used and combined to improve control performance and robustness and to reduce or contain computational effort without degrading the control action.
Under these assumptions, centralized (single-agent) as well as decentralized and distributed (multi-agent) Model Predictive Control (MPC) architectures have been applied to a series of nonlinear distributed parameter systems with transport phenomena, such as bioreactors, water delivery canals, and heat exchangers, demonstrating the importance and success of these control techniques.
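The receding-horizon principle underlying all of these MPC architectures is the same: at every sampling instant a finite-horizon optimal control problem is solved, only the first input is applied, and the problem is re-solved from the new state. A minimal sketch for a scalar linear system, using a backward Riccati recursion in place of a numerical solver, is shown below; the toy model and weights are illustrative, not the thesis's distributed-parameter plants.

```python
# Minimal receding-horizon loop for the scalar linear system x+ = a*x + b*u
# with stage cost q*x^2 + r*u^2. The finite-horizon problem is solved at
# every step by a backward Riccati recursion; only the first input is applied.
def finite_horizon_gain(a, b, q, r, N):
    """Backward Riccati recursion over horizon N; return the first-step gain."""
    p = q                                  # terminal cost weight P_N = q
    for _ in range(N):
        k = (b * p * a) / (r + b * p * b)  # feedback gain K_t
        p = q + a * p * (a - b * k)        # cost-to-go update P_t
    return k

a, b, q, r, N = 1.2, 1.0, 1.0, 0.1, 10    # open-loop unstable toy system
x = 5.0
for _ in range(50):                        # receding horizon: re-solve, apply u0
    k = finite_horizon_gain(a, b, q, r, N)
    x = a * x + b * (-k * x)
print(abs(x) < 1e-3)                       # closed loop is stabilized
```

For a time-invariant linear model the re-solved gain is of course constant; the value of re-solving appears precisely in the nonlinear and adaptive settings the thesis addresses, where the local problem changes with the state.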
On receding horizon extensions and control Lyapunov functions
Control Lyapunov functions (CLFs) are used in conjunction with receding horizon control (RHC) to develop a new class of control schemes. In the process, strong connections between the seemingly disparate approaches are revealed, leading to a unified picture that ties together the notions of pointwise min-norm, receding horizon, and optimal control. This framework is used to develop a control Lyapunov function based receding horizon scheme, of which a special case provides an appropriate extension of a variation on Sontag's formula. These schemes are shown to possess a number of desirable theoretical and implementation properties. An example is provided, demonstrating their application to a nonlinear control problem.
Flexible Lyapunov Functions and Applications to Fast Mechatronic Systems
Stability is the property every control system should possess, as it translates into safety in real-life applications. A central tool in systems theory for synthesizing control laws that achieve stability is the control Lyapunov function (CLF). Classically, a CLF enforces that the resulting closed-loop state trajectory is contained within a cone of fixed, predefined shape, centered at and converging to a desired equilibrium point. Such a requirement often proves overconservative, however, which is why most real-time controllers lack a stability guarantee. Recently, a novel idea was proposed that makes the design of CLFs more flexible. This approach focuses on optimization problems in which certain parameters defining the cone associated with a standard CLF become decision variables. In this way, non-monotonicity of the CLF is explicitly linked to a decision variable that can be optimized on-line. Conservativeness is significantly reduced compared to classical CLFs, making \emph{flexible CLFs} better suited to the stabilization of constrained discrete-time nonlinear systems and to real-time control. The purpose of this overview is to highlight the potential of flexible CLFs for real-time control of fast mechatronic systems, with sampling periods below one millisecond, which are widely employed in aerospace and automotive applications.
Robust Adaptive Control Barrier Functions: An Adaptive & Data-Driven Approach to Safety (Extended Version)
A new framework is developed for control of constrained nonlinear systems with structured parametric uncertainties. Forward invariance of a safe set is achieved through online parameter adaptation and data-driven model estimation. The new adaptive data-driven safety paradigm is merged with a recent adaptive control algorithm for systems that are nominally contracting in closed loop. This unification is more general than other safety controllers, as closed-loop contraction does not require that the system be invertible or in a particular form. Additionally, the approach is less expensive than nonlinear model predictive control, as it does not require a full desired trajectory but only a desired terminal state. The approach is illustrated on the pitch dynamics of an aircraft with uncertain nonlinear aerodynamics.
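The non-adaptive core that this framework extends is the control barrier function (CBF) safety filter: a quadratic program that minimally modifies a desired input so that the safe set stays forward invariant. For a scalar input the QP has a closed form, sketched below on a toy integrator; the adaptive and data-driven machinery of the paper is not reproduced here.

```python
# CBF safety filter: minimize (u - u_des)^2 subject to Lfh + Lgh*u >= -alpha*h,
# which keeps h(x) >= 0 forward invariant. With scalar u the QP reduces to
# clipping u_des against one linear constraint (illustrative sketch).
def cbf_filter(u_des, Lfh, Lgh, h, alpha=1.0):
    """Closed-form solution of the scalar CBF quadratic program."""
    if abs(Lgh) < 1e-12:
        return u_des                      # constraint does not involve u
    bound = (-alpha * h - Lfh) / Lgh      # constraint boundary in u
    if Lgh > 0:                           # feasible set: u >= bound
        return max(u_des, bound)
    return min(u_des, bound)              # feasible set: u <= bound

# Toy safe set h(x) = 1 - x >= 0 for xdot = u, so Lfh = 0 and Lgh = -1.
x, dt = 0.0, 0.01
for _ in range(500):
    u = cbf_filter(u_des=2.0, Lfh=0.0, Lgh=-1.0, h=1.0 - x)
    x += dt * u
print(x < 1.0 + 1e-6)   # the filtered input never drives x past the barrier
```

The filter leaves the desired input untouched whenever it is already safe and intervenes only near the boundary, which is what makes it attractive to combine with adaptation: only the constraint data (here Lfh, Lgh, h) need updating as the model estimate improves.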
Adaptive model predictive control
The problem of model predictive control (MPC) under parametric uncertainties is addressed for a class of nonlinear systems. An adaptive identifier is used to estimate the parameters and the state variables simultaneously. The proposed algorithm guarantees convergence of the parameters and state variables to their true values. The task is posed as an adaptive model predictive control problem in which the controller is required to steer the system to the setpoint that optimizes a user-specified objective function.
The technique of adaptive model predictive control is developed for two broad classes of systems. The first class considered consists of uncertain nonlinear systems with the input-to-state stability property. Using a generalization of the set-based adaptive estimation technique, the estimates of the parameters and state are updated to guarantee convergence to a neighborhood of their true values.
The second involves a method of determining appropriate excitation conditions for nonlinear systems. Since identification of the true cost surface is paramount to the success of the integration scheme, novel parameter estimation techniques with better convergence properties are developed. The estimation routine allows exact reconstruction of the system's unknown parameters in finite time. The applicability of the identifier to improving the performance of existing adaptive controllers is demonstrated. An adaptive nonlinear model predictive control strategy is then integrated with this estimation algorithm, incorporating robustness features to account for the effect of model uncertainty.
To study the practical applicability of the developed method, the estimation of state variables and unknown parameters in a stirred tank process has been performed. The results of the experimental application demonstrate the ability of the proposed techniques to estimate the state variables and parameters of an uncertain practical system.
Departamento de Ingeniería de Sistemas y Automática. Máster en Investigación en Ingeniería de Procesos y Sistemas Industriale
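On-line parameter identification of the kind adaptive MPC relies on can be sketched with recursive least squares for a linearly parameterized model y = phi^T theta. This is a generic textbook estimator shown for orientation, not the thesis's set-based identifier.

```python
# Recursive least squares (RLS): refine the parameter estimate theta and its
# covariance P with each new regressor/measurement pair (phi, y). With
# persistently exciting data the estimate converges to the true parameters.
import numpy as np

def rls_update(theta, P, phi, y):
    """One RLS step for the linearly parameterized model y = phi @ theta."""
    Pphi = P @ phi
    k = Pphi / (1.0 + phi @ Pphi)          # gain vector
    theta = theta + k * (y - phi @ theta)  # correct by the prediction error
    P = P - np.outer(k, Pphi)              # shrink the covariance
    return theta, P

rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])         # unknown plant parameters (toy)
theta, P = np.zeros(2), 1e3 * np.eye(2)    # large P0 = weak prior
for _ in range(200):
    phi = rng.standard_normal(2)           # persistently exciting regressor
    y = phi @ true_theta                   # noiseless measurement
    theta, P = rls_update(theta, P, phi, y)
print(np.allclose(theta, true_theta, atol=1e-3))
```

The persistence-of-excitation requirement visible here (the regressors must span the parameter space) is exactly the kind of excitation condition the thesis's second contribution addresses for nonlinear systems.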
Nonlinear Model Predictive Control of Robotic Systems with Control Lyapunov Functions
The theoretical unification of Nonlinear Model Predictive Control (NMPC) with Control Lyapunov Functions (CLFs) provides a framework for achieving optimal control performance while ensuring stability guarantees. In this paper we present the first real-time realization of a unified NMPC and CLF controller on a robotic system with limited computational resources. These limitations motivate a set of approaches for efficiently incorporating CLF stability constraints into a general NMPC formulation. We evaluate the performance of the proposed methods compared to baseline CLF and NMPC controllers with a robotic Segway platform both in simulation and on hardware. The addition of a prediction horizon provides a performance advantage over CLF based controllers, which operate optimally point-wise in time. Moreover, the explicitly imposed stability constraints remove the need for difficult cost function and parameter tuning required by NMPC. Therefore the unified controller improves the performance of each isolated controller and simplifies the overall design process.