2 research outputs found

    Safe Reinforcement Learning Control for Water Distribution Networks


    Safe Intermittent Reinforcement Learning for Nonlinear Systems

    In this paper, an online intermittent actor-critic reinforcement learning method is used to stabilize nonlinear systems optimally while also guaranteeing safety. A barrier function-based transformation is introduced to ensure that the system does not violate the user-defined safety constraints. It is shown that the safety constraints of the original system can be guaranteed by ensuring the stability of the equilibrium point of an appropriately transformed system. Then, an online intermittent actor-critic learning framework is developed to learn the optimal safe intermittent controller. Moreover, Zeno behavior is shown to be excluded. Finally, numerical examples are provided to verify the efficacy of the learning algorithm.
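    The abstract does not give the exact form of the barrier function-based transformation. The sketch below assumes a standard logarithmic barrier of the kind commonly used in barrier-transformation-based safe RL: it maps a constrained interval (a, A), with a < 0 < A, onto the whole real line, so that keeping the transformed state bounded (e.g., by stabilizing its equilibrium) keeps the original state strictly inside the safety bounds. The function names, bounds, and test values are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def barrier_transform(x, a, A):
        # Assumed logarithmic barrier: maps x in (a, A), with a < 0 < A,
        # onto the real line. The origin maps to the origin, and the image
        # diverges as x approaches either bound, so a bounded transformed
        # state implies the original state stays inside the safe interval.
        return np.log((A * (a - x)) / (a * (A - x)))

    def barrier_inverse(s, a, A):
        # Recovers the original (constrained) state from the transformed one.
        return a * A * (np.exp(s) - 1.0) / (a * np.exp(s) - A)

    if __name__ == "__main__":
        a, A = -1.0, 2.0                       # hypothetical safety bounds
        x = np.linspace(a + 1e-3, A - 1e-3, 5)
        s = barrier_transform(x, a, A)         # finite for all safe states
        x_back = barrier_inverse(s, a, A)      # round-trips back to x
        print(np.allclose(x, x_back))          # True
    ```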