Społeczna Akademia Nauk w Łodzi. Polskie Towarzystwo Sieci Neuronowych
Abstract
In this paper we introduce an online algorithm that uses integral reinforcement learning
to find the solution of the continuous-time zero-sum game for nonlinear systems with
infinite-horizon cost and partial knowledge of the system dynamics. The algorithm
is a data-based approach to the solution of the Hamilton-Jacobi-Isaacs equation that
does not require explicit knowledge of the system's drift dynamics. A novel adaptive
control algorithm is given that is based on policy iteration and implemented using an
actor/disturbance/critic structure with three adaptive approximators. All three
approximation networks are adapted simultaneously. A persistence of excitation condition
is required to guarantee convergence of the critic to the actual optimal value function.
Novel adaptive tuning algorithms are given for the critic, disturbance, and actor networks.
Convergence to the Nash solution of the game is proven, and stability of the
closed-loop system is also guaranteed. Simulation examples support the theoretical results.
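To make the policy-iteration structure concrete, the following is a minimal sketch for a scalar linear-quadratic zero-sum game, where policy evaluation and improvement admit closed-form steps. The system coefficients (a, b, k) and cost weights (q, r, gamma) are illustrative assumptions, not values from the paper, and this model-based offline recursion is only a stand-in: the paper's actual algorithm is online, uses integral reinforcement to avoid the drift dynamics, and tunes three neural-network approximators simultaneously.

```python
import math

# Sketch (illustrative assumption, not the paper's algorithm): policy iteration
# for the scalar zero-sum game
#   dx/dt = a*x + b*u + k*w,
#   cost  = integral of (q*x^2 + r*u^2 - gamma^2*w^2) dt,
# with quadratic value function V(x) = p*x^2, control u = Ku*x, disturbance w = Kw*x.
a, b, k = -1.0, 1.0, 1.0       # hypothetical system coefficients
q, r, gamma = 1.0, 1.0, 2.0    # hypothetical cost weights and attenuation level

Ku, Kw = 0.0, 0.0              # initial stabilizing policies (a itself is stable)
for _ in range(30):
    # Policy evaluation: scalar Lyapunov equation
    #   2*(a + b*Ku + k*Kw)*p + q + r*Ku^2 - gamma^2*Kw^2 = 0
    a_cl = a + b * Ku + k * Kw
    p = -(q + r * Ku**2 - gamma**2 * Kw**2) / (2.0 * a_cl)
    # Policy improvement: minimizing controller, maximizing disturbance
    Ku = -b * p / r
    Kw = k * p / gamma**2

# Root of the game algebraic Riccati equation for these numbers:
#   0.75*p^2 + 2*p - 1 = 0  =>  p = (-2 + sqrt(7)) / 1.5  (approx. 0.4305)
p_exact = (-2.0 + math.sqrt(7.0)) / 1.5
print(p, p_exact)
```

In the paper's setting the Lyapunov step is replaced by an integral-reinforcement least-squares update of the critic weights along measured trajectories, which is what removes the need for the drift dynamics.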