
Performance comparison of different momentum techniques on deep reinforcement learning

By Sarigul M. and Avci M.

Abstract

2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA 2017), 3 July 2017 through 5 July 2017. The growing popularity of deep convolutional neural networks across many domains has led to their increasing use in reinforcement learning. Training a large deep neural network with plain gradient descent learning can take a very long time, so additional learning techniques must be employed. One such technique is momentum, which accelerates gradient descent learning. Although momentum techniques were developed mostly for supervised learning problems, they can also be applied to reinforcement learning; their effectiveness may differ, however, because the two training processes are dissimilar. In this paper, the performances of different momentum techniques are compared on a reinforcement learning problem, the Othello game benchmark. Test results show that the Nesterov accelerated momentum technique provided the most effective generalization on the benchmark. © 2017 IEEE
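The two update rules the abstract contrasts can be illustrated concretely. The sketch below shows classical (heavy-ball) momentum and Nesterov accelerated momentum on a toy quadratic objective; the function names, learning rate, and momentum coefficient are illustrative assumptions, not taken from the paper.

```python
def momentum_step(w, v, grad_fn, lr=0.01, mu=0.9):
    """Classical (heavy-ball) momentum: velocity accumulates past gradients,
    and the gradient is evaluated at the current point w."""
    v = mu * v - lr * grad_fn(w)
    return w + v, v

def nesterov_step(w, v, grad_fn, lr=0.01, mu=0.9):
    """Nesterov accelerated momentum: the gradient is evaluated at the
    look-ahead point w + mu * v, which tends to damp overshooting."""
    v = mu * v - lr * grad_fn(w + mu * v)
    return w + v, v

# Toy quadratic objective f(w) = 0.5 * w**2, so grad f(w) = w.
grad = lambda w: w

w_m, v_m = 5.0, 0.0  # classical momentum
w_n, v_n = 5.0, 0.0  # Nesterov momentum
for _ in range(100):
    w_m, v_m = momentum_step(w_m, v_m, grad)
    w_n, v_n = nesterov_step(w_n, v_n, grad)
```

The only difference between the two rules is where the gradient is evaluated; in the paper's setting these updates would be applied per-parameter to the weights of the deep network trained on the Othello benchmark.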

Topics: Deep reinforcement learning, momentum techniques, Nesterov momentum
Publisher: Institute of Electrical and Electronics Engineers Inc.
Year: 2017
DOI identifier: 10.1109/INISTA.2017.8001175
OAI identifier: oai:openaccess.cu.edu.tr:20.500.12605/18470
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • https://dx.doi.org/10.1109/INI... (external link)
  • https://hdl.handle.net/20.500.... (external link)
