Online-Computation Approach to Optimal Control of Noise-Affected Nonlinear Systems with Continuous State and Control Spaces
© 2007 EUCA. A novel online-computation approach to optimal control of nonlinear, noise-affected systems with continuous state and control spaces is presented. In the proposed algorithm, system noise is explicitly incorporated into the control decision, which leads to superior results compared with state-of-the-art nonlinear controllers that neglect this influence. The solution of an optimal nonlinear controller for a corresponding deterministic system is employed to find a meaningful state space restriction, obtained by means of approximate state prediction using the noisy system equation. Within this constrained state space, an optimal closed-loop solution for a finite decision-making horizon (prediction horizon) is determined within an adaptively restricted optimization space. Interleaving stochastic dynamic programming and value function approximation yields a solution to the considered optimal control problem. The enhanced performance of the proposed discrete-time controller is illustrated by means of a scalar example system. Nonlinear model predictive control is applied so that the finite-horizon controller approximately treats infinite-horizon problems.
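The core loop described in the abstract, interleaving stochastic dynamic programming with value function approximation on a restricted state space, can be sketched for a scalar system. Everything below (the dynamics f, the stage cost, the grids, the 3-point noise quadrature) is an illustrative assumption, not the paper's actual example:

```python
import numpy as np

# Hypothetical scalar noisy system: x_{k+1} = f(x_k, u_k) + w_k
def f(x, u):
    return 0.8 * np.sin(x) + u

sigma = 0.1                               # noise std (assumption)
xs = np.linspace(-2.0, 2.0, 81)           # restricted state-space grid
us = np.linspace(-1.0, 1.0, 21)           # control grid
ws = sigma * np.array([-1.0, 0.0, 1.0])   # crude 3-point noise quadrature
pw = np.array([1/6, 2/3, 1/6])            # matching quadrature weights

def cost(x, u):
    return x**2 + 0.1 * u**2              # quadratic stage cost (assumption)

N = 10                                    # prediction horizon
V = np.zeros_like(xs)                     # terminal value V_N = 0
policy = []
for k in range(N):                        # backward stochastic DP sweep
    Vnew = np.empty_like(xs)
    ubest = np.empty_like(xs)
    for i, x in enumerate(xs):
        q = []
        for u in us:
            # expected cost-to-go: interpolate V at the noisy successors
            xn = np.clip(f(x, u) + ws, xs[0], xs[-1])
            q.append(cost(x, u) + pw @ np.interp(xn, xs, V))
        j = int(np.argmin(q))
        Vnew[i], ubest[i] = q[j], us[j]
    V = Vnew                              # value function approximation on grid
    policy.append(ubest)
```

The key point mirrored from the abstract is that the expectation over the noise is taken *inside* the minimization, so the resulting feedback law accounts for the stochastic dynamics rather than planning along a single deterministic trajectory.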
Data-driven Economic NMPC using Reinforcement Learning
Reinforcement Learning (RL) is a powerful tool to perform data-driven optimal control without relying on a model of the system. However, RL struggles to provide hard guarantees on the behavior of the resulting control scheme. In contrast, Nonlinear Model Predictive Control (NMPC) and Economic NMPC (ENMPC) are standard tools for the closed-loop optimal control of complex systems with constraints and limitations, and benefit from a rich theory to assess their closed-loop behavior. Unfortunately, the performance of (E)NMPC hinges on the quality of the model underlying the control scheme. In this paper, we show that an (E)NMPC scheme can be tuned to deliver the optimal policy of the real system even when using a wrong model. This result also holds for real systems having stochastic dynamics. This entails that ENMPC can be used as a new type of function approximator within RL. Furthermore, we investigate our results in the context of ENMPC and formally connect them to the concept of dissipativity, which is central for ENMPC stability. Finally, we detail how these results can be used to deploy classic RL tools for tuning (E)NMPC schemes. We apply these tools on both a classical linear MPC setting and a standard nonlinear example from the ENMPC literature.
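The central idea, tuning a parameterized controller from closed-loop data of the true system rather than trusting the controller's internal model, can be illustrated with a deliberately simplified toy. This is not the paper's algorithm: instead of an (E)NMPC scheme, a scalar linear feedback gain theta stands in for the tunable controller parameters, and an RL-style finite-difference update adjusts it using only rollout costs measured on the true system. All dynamics and weights below are assumptions:

```python
import numpy as np

# True scalar dynamics x+ = a*x + b*u with input weight r (assumed values)
a_true, b_true, r = 0.9, 0.5, 0.1

def rollout_cost(theta, x0=1.0, T=300):
    """Closed-loop cost of u = -theta*x measured on the *true* system."""
    x, J = x0, 0.0
    for _ in range(T):
        u = -theta * x
        J += x * x + r * u * u
        x = a_true * x + b_true * u
    return J

# RL-style tuning: finite-difference estimate of dJ/dtheta from rollouts,
# followed by a gradient step -- no model of the system is consulted.
theta, lr, eps = 0.5, 0.05, 1e-4
for _ in range(2000):
    g = (rollout_cost(theta + eps) - rollout_cost(theta - eps)) / (2 * eps)
    theta -= lr * g

# Reference: the optimal gain from the scalar Riccati recursion
P = 1.0
for _ in range(500):
    P = 1.0 + a_true**2 * P - (a_true * b_true * P)**2 / (r + b_true**2 * P)
K_opt = a_true * b_true * P / (r + b_true**2 * P)
print(theta, K_opt)  # the tuned gain approaches the LQR-optimal gain
```

The takeaway matching the abstract: because the tuning signal comes from the real closed loop, the procedure can recover the optimal policy even when the controller's internal model (here deliberately absent) is wrong; the paper's contribution is to make this precise for (E)NMPC-structured function approximators.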