Assessing the Potential of Classical Q-learning in General Game Playing
After the recent groundbreaking results of AlphaGo and AlphaZero, we have seen strong interest in deep reinforcement learning and artificial general intelligence (AGI) in game playing. However, deep learning is resource-intensive and the theory is not yet well developed. For small games, simple classical table-based Q-learning might still be the algorithm of choice. General Game Playing (GGP) provides a good testbed for reinforcement learning research towards AGI. Q-learning is one of the canonical reinforcement learning methods, and was used by (Banerjee & Stone, IJCAI 2007) in GGP. In this paper we implement Q-learning in GGP for three small-board games (Tic-Tac-Toe, Connect Four, Hex; source code: https://github.com/wh1992v/ggp-rl), to allow comparison with Banerjee et al. We find that Q-learning converges to a high win rate in GGP. For the ε-greedy strategy, we propose a first enhancement, the dynamic ε algorithm. In addition, inspired by (Gelly & Silver, ICML 2007), we use online search (Monte Carlo Search) to enhance offline learning, and propose QM-learning for GGP. Both enhancements improve the performance of classical Q-learning. In this work, GGP allows us to show that, when augmented by appropriate enhancements, classical table-based Q-learning can perform well in small games.
Comment: arXiv admin note: substantial text overlap with arXiv:1802.0594
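The abstract does not include the authors' implementation, but the core technique is standard. Below is a minimal sketch (not the paper's code) of tabular Q-learning with an ε-greedy policy and a linearly decaying ("dynamic") ε, trained on Tic-Tac-Toe against a random opponent; the hyperparameters and the decay schedule are illustrative assumptions, not values from the paper.

    import random
    from collections import defaultdict

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        # Returns 'X' or 'O' for a win, 'draw' for a full board, None otherwise.
        for a, b, c in LINES:
            if board[a] != '.' and board[a] == board[b] == board[c]:
                return board[a]
        return 'draw' if '.' not in board else None

    def moves(board):
        return [i for i, c in enumerate(board) if c == '.']

    Q = defaultdict(float)      # tabular value: Q[(state, action)]
    ALPHA, GAMMA = 0.1, 0.99    # illustrative learning rate and discount

    def choose(board, eps):
        # Epsilon-greedy: explore with probability eps, otherwise exploit Q.
        if random.random() < eps:
            return random.choice(moves(board))
        return max(moves(board), key=lambda a: Q[(board, a)])

    def episode(eps):
        # One game: the learner plays 'X', a uniformly random opponent plays 'O'.
        board = '.' * 9
        while True:
            s, a = board, choose(board, eps)
            board = board[:a] + 'X' + board[a+1:]
            w = winner(board)
            if w is None:
                o = random.choice(moves(board))
                board = board[:o] + 'O' + board[o+1:]
                w = winner(board)
            reward = 1.0 if w == 'X' else -1.0 if w == 'O' else 0.0
            target = reward if w else GAMMA * max(Q[(board, b)] for b in moves(board))
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            if w:
                return reward

    EPISODES = 50_000
    for t in range(EPISODES):
        # "Dynamic epsilon" (assumed schedule): decay exploration linearly,
        # keeping a small floor so the policy never becomes fully greedy.
        eps = max(0.05, 1.0 - t / (0.8 * EPISODES))
        episode(eps)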
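The QM-learning enhancement, per the abstract, combines online Monte Carlo Search with offline Q-learning. One plausible reading (an assumption on my part, not the authors' code) is to let exploratory moves be chosen by shallow random playouts instead of uniformly at random; the sketch below builds on the Q-learning code above.

    def playout(board, player):
        # Finish the game with uniformly random moves; return the reward for 'X'.
        while True:
            w = winner(board)
            if w:
                return 1.0 if w == 'X' else -1.0 if w == 'O' else 0.0
            m = random.choice(moves(board))
            board = board[:m] + player + board[m+1:]
            player = 'O' if player == 'X' else 'X'

    def mc_choose(board, n_playouts=8):
        # Monte Carlo Search: score each legal move by averaging random playouts.
        def score(a):
            nxt = board[:a] + 'X' + board[a+1:]
            return sum(playout(nxt, 'O') for _ in range(n_playouts)) / n_playouts
        return max(moves(board), key=score)

    def qm_choose(board, eps):
        # QM-learning policy (assumed): exploratory moves are guided by Monte
        # Carlo Search rather than drawn at random; exploitation still uses Q.
        if random.random() < eps:
            return mc_choose(board)
        return max(moves(board), key=lambda a: Q[(board, a)])

Swapping qm_choose for choose inside episode() gives the QM variant: exploration then feeds better-than-random trajectories into the same tabular Q-update.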