Temporal Difference Learning
Abstract
Reinforcement learning has, in general, not been fully successful at solving complex real-world problems that can be described by nonlinear functions. However, temporal-difference learning, a form of reinforcement learning, has been researched and applied to a variety of prediction problems with promising results. This paper discusses the application of temporal-difference learning to the training of a neural network that plays a scaled-down version of the board game Chinese Chess. Preliminary results show that this technique is capable of producing the desired behavior. In test cases that present only a minimal subset of the game, the network responds favorably; when more complexity is introduced, the network performs less well but still generally produces reasonable results. These results indicate that temporal-difference learning has the potential to solve real-world problems of equal or greater complexity. Continued research into the application of neural networks to complex strategic games will most likely lead to more responsive and accurate systems in the future.
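For readers unfamiliar with the method, the core of temporal-difference prediction can be illustrated with a minimal sketch. The example below is a generic tabular TD(0) learner on a standard five-state random-walk task, not the paper's neural-network Chinese Chess system; the function name, state layout, and parameters are illustrative assumptions.

```python
import random

def td0_random_walk(episodes=5000, alpha=0.1, gamma=1.0, seed=0):
    """Tabular TD(0) prediction on a 5-state random walk.

    States 1..5 are non-terminal; 0 and 6 are terminal.
    Reward is +1 only on reaching state 6, so the true value
    of state s is s/6. This is a generic illustration of
    temporal-difference learning, not the paper's system.
    """
    rng = random.Random(seed)
    V = [0.0] * 7  # V[0] and V[6] stay 0 (terminal states)
    for _ in range(episodes):
        s = 3  # every episode starts in the middle state
        while s not in (0, 6):
            s_next = s + rng.choice((-1, 1))
            r = 1.0 if s_next == 6 else 0.0
            # TD(0) update: move V(s) toward the bootstrapped
            # target r + gamma * V(s')
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V[1:6]

print(td0_random_walk())
```

After several thousand episodes the estimates approach the true values 1/6, 2/6, 3/6, 4/6, 5/6; the same bootstrapped update, with the table replaced by a neural network's weights, is the idea behind training a game-playing evaluator by temporal differences.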