Periodic optimal control, dissipativity and MPC
Recent research has established the importance of dissipativity for proving stability of economic MPC in the steady-state case. In many cases, though, steady-state operation is not economically optimal, and periodic operation of the system yields better performance. In this paper, we propose three different ways of extending the notion of dissipativity to periodic systems and illustrate them with three examples.
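One common way to phrase dissipativity for periodic operation is to require a storage function whose increase along the orbit is bounded by the stage cost minus its average over the period. The sketch below numerically checks such an inequality on a toy period-2 orbit; the dynamics, stage cost, and storage function are all illustrative choices, not taken from the paper:

```python
# Sketch: check a dissipation inequality along a P-periodic orbit,
#   lambda(f(x,u)) - lambda(x) <= ell(x,u) - ell_avg,
# where ell_avg is the average stage cost over the period.
# All functions and numbers below are illustrative.

def stage_cost(x, u):
    # Economic stage cost (may be sign-indefinite; hypothetical).
    return x * x + u

def storage(x):
    # Candidate storage function lambda(x) (hypothetical).
    return 0.75 * x

def dissipative_along_period(xs, us, f, tol=1e-9):
    """Verify the periodic dissipation inequality at every point
    of the candidate periodic orbit (xs, us)."""
    P = len(xs)
    ell_avg = sum(stage_cost(x, u) for x, u in zip(xs, us)) / P
    for k in range(P):
        x, u = xs[k], us[k]
        supply = stage_cost(x, u) - ell_avg
        if storage(f(x, u)) - storage(x) > supply + tol:
            return False
    return True

# Simple scalar dynamics with a period-2 orbit: 1 -> -1 -> 1.
f = lambda x, u: 0.5 * x + u
xs = [1.0, -1.0]
us = [-1.5, 1.5]
```

Running `dissipative_along_period(xs, us, f)` confirms that this particular storage candidate satisfies the inequality along the orbit.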
Economic MPC of Nonlinear Systems with Non-Monotonic Lyapunov Functions and Its Application to HVAC Control
This paper proposes a Lyapunov-based economic MPC scheme for nonlinear systems
with non-monotonic Lyapunov functions. Relaxed Lyapunov-based constraints are
used in the MPC formulation to improve the economic performance. These
constraints will enforce a Lyapunov decrease after every few steps. Recursive
feasibility and asymptotic convergence to the steady state can be achieved
using Lyapunov-like stability analysis. The proposed economic MPC can be
applied to minimize energy consumption in HVAC control of commercial buildings.
The Lyapunov-based constraints in the online MPC problem enable the tracking of
the desired set-point temperature. The performance is demonstrated by a virtual
building composed of two adjacent zones.
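The key relaxation described above is that the Lyapunov function need not decrease at every step, only over every window of a few steps. A minimal sketch of this acceptance test on a sequence of Lyapunov values (the window length `N` and the sample trajectory are illustrative, not from the paper):

```python
def satisfies_relaxed_decrease(V_vals, N, eps=1e-9):
    """Check the relaxed (non-monotonic) Lyapunov condition:
    V may rise between consecutive samples, but must strictly
    decrease over every window of N steps: V[k+N] < V[k]."""
    return all(V_vals[k + N] < V_vals[k] - eps
               for k in range(len(V_vals) - N))

# Illustrative closed-loop trajectory of Lyapunov values:
# it rises occasionally (non-monotonic) but decreases over
# every window of N = 3 steps.
V = [10.0, 11.0, 9.0, 8.5, 9.2, 7.0, 6.5]
```

Here `satisfies_relaxed_decrease(V, 3)` holds while the classical per-step condition `satisfies_relaxed_decrease(V, 1)` fails, which is exactly the extra freedom the relaxed constraints exploit to improve economic performance.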
Stochastic Learning of Energy System for Data-Driven Control in Manufacturing Process
Reducing the environmental impact of a manufacturing factory over its life cycle depends critically on sustainable energy effectiveness. For this reason, implementing energy-conservation technologies to improve energy efficiency has become an important concern for the majority of manufacturing plants. Data-driven control is a promising way to handle the energy efficiency of such complex systems, as machine learning becomes well established in the systems-engineering community. In this paper, a new approach combined with an optimal-control application is proposed to open up energy-saving opportunities by modelling the machines of a factory with machine learning, specifically Gaussian Process Regression (GPR), where the model is built by correlating the dynamics, complexity, and interrelated energy-consumption recordings. We connect this idea to controlling the energy of a manufacturing system in an optimized way, where a Model Predictive Control loop delivers an optimal solution at each control time step. Finally, a numerical example is presented to illustrate the potential of the proposed modelling method.
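The core of a GPR model of this kind is the posterior mean, which predicts energy consumption at a new operating point from recorded data. A self-contained sketch with an RBF kernel and a small dense solve (the load/consumption readings are made-up toy values, not data from the paper):

```python
import math

def rbf(a, b, length=1.0):
    # Squared-exponential (RBF) kernel for scalar inputs.
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small dense systems).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x

def gpr_mean(xs, ys, x_star, noise=1e-6):
    # GP posterior mean: k_*^T (K + noise*I)^{-1} y.
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)
    return sum(rbf(x, x_star) * a for x, a in zip(xs, alpha))

# Toy recordings: machine load fraction -> energy use (kWh).
loads = [0.0, 0.5, 1.0]
kwh = [2.0, 2.6, 3.1]
```

With near-zero noise the posterior mean interpolates the recordings, e.g. `gpr_mean(loads, kwh, 0.5)` returns approximately 2.6; inside an MPC loop this mean would serve as the predictive model of the machine's energy consumption.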
Data-driven Economic NMPC using Reinforcement Learning
Reinforcement Learning (RL) is a powerful tool to perform data-driven optimal
control without relying on a model of the system. However, RL struggles to
provide hard guarantees on the behavior of the resulting control scheme. In
contrast, Nonlinear Model Predictive Control (NMPC) and Economic NMPC (ENMPC)
are standard tools for the closed-loop optimal control of complex systems with
constraints and limitations, and benefit from a rich theory to assess their
closed-loop behavior. Unfortunately, the performance of (E)NMPC hinges on the
quality of the model underlying the control scheme. In this paper, we show that
an (E)NMPC scheme can be tuned to deliver the optimal policy of the real system
even when using a wrong model. This result also holds for real systems having
stochastic dynamics. This entails that ENMPC can be used as a new type of
function approximator within RL. Furthermore, we investigate our results in the
context of ENMPC and formally connect them to the concept of dissipativity,
which is central for the ENMPC stability. Finally, we detail how these results
can be used to deploy classic RL tools for tuning (E)NMPC schemes. We apply
these tools on both a classical linear MPC setting and a standard nonlinear
example from the ENMPC literature.
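The central idea above, an MPC scheme acting as a function approximator whose parameters are tuned by RL on the real system, can be caricatured on a scalar example. This is a heavily simplified illustration, not the paper's algorithm: the MPC is collapsed to a one-step lookahead with a parameterized terminal cost `theta * x^2`, its internal model is deliberately wrong, and a plain Q-learning update adjusts `theta` from real-system transitions. All dynamics, gains, and step sizes are invented for the sketch:

```python
# Sketch: tune a model-based, MPC-like Q-function by Q-learning.
# Real system:  x+ = A_TRUE * x + u   (unknown to the controller)
# MPC model:    x+ = A_MODEL * x + u  (deliberately wrong)
# Stage cost:   x^2 + u^2; terminal cost: theta * x^2 (tuned by RL).
A_TRUE = 0.9
A_MODEL = 0.7
GAMMA = 0.95

def q_theta(x, u, theta):
    # One-step "MPC" Q-function built on the wrong model.
    x_next_model = A_MODEL * x + u
    return x * x + u * u + theta * x_next_model * x_next_model

def policy(x, theta):
    # argmin_u q_theta: dQ/du = 2u + 2*theta*(A_MODEL*x + u) = 0.
    return -theta * A_MODEL * x / (1.0 + theta)

def train(theta=1.0, episodes=50, horizon=5, alpha=0.05):
    for _ in range(episodes):
        x = 1.0
        for _ in range(horizon):
            u = policy(x, theta)
            x_next = A_TRUE * x + u            # step the *real* system
            r = x * x + u * u                  # observed stage cost
            # Q-learning target: best next action under the same
            # parameterized Q-function.
            v_next = q_theta(x_next, policy(x_next, theta), theta)
            delta = r + GAMMA * v_next - q_theta(x, u, theta)
            grad = (A_MODEL * x + u) ** 2      # dQ/dtheta
            theta += alpha * delta * grad
            x = x_next
    return theta
```

Despite the model mismatch, the learned `theta` yields a stabilizing closed loop on the real system, which is the flavor of result the paper establishes rigorously for (E)NMPC parameterizations.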