    Event-triggered robust control for multi-player nonzero-sum games with input constraints and mismatched uncertainties

    In this article, an event-triggered robust control (ETRC) method is investigated for multi-player nonzero-sum games of continuous-time input-constrained nonlinear systems with mismatched uncertainties. By constructing an auxiliary system and designing an appropriate value function, the robust control problem of input-constrained nonlinear systems is transformed into an optimal regulation problem. A critic neural network (NN) is then adopted to approximate each player's value function, solving the event-triggered coupled Hamilton-Jacobi equation and yielding the control laws. Based on a designed event-triggering condition, the control laws are updated only when events occur, which reduces both the computational burden and the communication bandwidth. Using Lyapunov's direct method, we prove that the weight approximation errors of the critic NNs and the states of the closed-loop uncertain multi-player system are all uniformly ultimately bounded. Finally, two examples demonstrate the effectiveness of the developed ETRC method.
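    The hold-and-update logic behind event-triggered control can be sketched as follows. This is a minimal illustration, not the paper's implementation: the state-gap trigger, the threshold function, and the controller here are hypothetical stand-ins for the designed event-triggering condition and the critic-NN-based control laws.

```python
import numpy as np

def event_triggered_step(x, x_last, u_last, controller, threshold):
    """Hold the last control input until the triggering condition fires.

    `controller` and `threshold` are hypothetical placeholders for the
    critic-NN control law and the designed triggering threshold.
    """
    gap = np.linalg.norm(x - x_last)   # gap between current and last sampled state
    if gap >= threshold(x):            # event: resample the state, update the control
        return controller(x), x
    return u_last, x_last              # no event: keep the held control and sample
```

    Between events the actuator simply holds u_last, so a new control input is computed and transmitted only at event instants; this is the source of the savings in computation and communication bandwidth.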

    Composite experience replay based deep reinforcement learning with application in wind farm control

    In this article, a deep reinforcement learning (RL)-based control approach with enhanced learning efficiency and effectiveness is proposed to address the wind farm control problem. Specifically, a novel composite experience replay (CER) strategy is designed and embedded in the deep deterministic policy gradient (DDPG) algorithm. CER provides a new sampling scheme that mines the information in stored transitions in depth by trading off rewards against temporal-difference (TD) errors. Modified importance-sampling weights are introduced into the training of the neural networks (NNs) to handle the distribution mismatch induced by CER. The CER-DDPG approach is then applied to optimizing the total power production of wind farms. The main challenge of this control problem comes from the strong wake effects among wind turbines and the stochastic nature of the environment, which render it intractable for conventional control approaches. A reward regularization process is designed alongside CER-DDPG, employing an additional NN to handle the bias in rewards caused by stochastic wind speeds. Tests with a dynamic wind farm simulator (WFSim) show that our method achieves higher rewards at lower training cost than conventional deep RL-based control approaches, and that it can increase the total power generation of wind farms with different specifications.
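    As a rough sketch of the CER sampling idea, the snippet below blends reward magnitude with TD error to form sampling priorities, then computes importance-sampling weights to correct the bias that non-uniform sampling introduces into NN training. The tradeoff coefficient and exponents are illustrative assumptions, not values from the paper.

```python
import numpy as np

def composite_sample(rewards, td_errors, batch_size,
                     kappa=0.5, alpha=0.6, beta=0.4, eps=1e-6):
    """Sketch of a CER-style sampler over a replay buffer.

    kappa trades off reward magnitude against TD error when forming
    priorities; alpha and beta follow the proportional-prioritization
    convention. All three are illustrative hyperparameters.
    """
    r = np.abs(rewards) / (np.abs(rewards).max() + eps)      # normalized rewards
    d = np.abs(td_errors) / (np.abs(td_errors).max() + eps)  # normalized TD errors
    priority = (kappa * r + (1.0 - kappa) * d + eps) ** alpha
    probs = priority / priority.sum()
    idx = np.random.choice(len(rewards), size=batch_size, p=probs)
    weights = (len(rewards) * probs[idx]) ** (-beta)  # importance-sampling weights
    weights /= weights.max()                          # normalize for stable updates
    return idx, weights
```

    A DDPG critic update would then scale each sampled transition's TD loss by its weight, mirroring how prioritized replay compensates for sampling some transitions more often than others.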