PoPS: Policy Pruning and Shrinking for Deep Reinforcement Learning
The recent success of deep neural networks (DNNs) for function approximation
in reinforcement learning has triggered the development of Deep Reinforcement
Learning (DRL) algorithms in various fields, such as robotics, computer games,
natural language processing, computer vision, sensing systems, and wireless
networking. Unfortunately, DNNs suffer from high computational cost and memory
consumption, which limits the use of DRL algorithms in systems with limited
hardware resources. In recent years, pruning algorithms have demonstrated
considerable success in reducing the redundancy of DNNs in classification
tasks. However, existing algorithms suffer a significant performance
reduction when applied in the DRL domain. In this paper, we develop the first
effective solution to this problem and establish a working algorithm, named
Policy Pruning and Shrinking (PoPS), to
train DRL models with strong performance while achieving a compact
representation of the DNN. The framework is based on a novel iterative policy
pruning and shrinking method that leverages the power of transfer learning when
training the DRL model. We present an extensive experimental study that
demonstrates the strong performance of PoPS using the popular Cartpole, Lunar
Lander, Pong, and Pacman environments. Finally, we release open-source
software for the benefit of researchers and developers in related fields.

Comment: This paper has been accepted for publication in the IEEE Journal of
Selected Topics in Signal Processing.
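
To make the loop the abstract describes concrete, below is a minimal PyTorch sketch of iterative policy pruning with distillation from the dense policy standing in for the transfer-learning step. It is not the authors' PoPS implementation: the network sizes, the 20% per-round pruning amount, the random stand-in states, and the distillation loss are illustrative assumptions, and the architecture-shrinking step of PoPS is omitted.

# A minimal sketch of iterative prune-and-retrain for a policy network.
# NOT the authors' PoPS implementation; all hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class PolicyNet(nn.Module):
    def __init__(self, obs_dim=4, n_actions=2, hidden=64):
        super().__init__()
        self.fc1 = nn.Linear(obs_dim, hidden)
        self.fc2 = nn.Linear(hidden, n_actions)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

teacher = PolicyNet()                 # dense policy, assumed already trained
student = PolicyNet()
student.load_state_dict(teacher.state_dict())

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
kld = nn.KLDivLoss(reduction="batchmean")

for _round in range(5):                        # iterative prune-retrain rounds
    for module in (student.fc1, student.fc2):
        # remove 20% of the still-unpruned weights by L1 magnitude
        prune.l1_unstructured(module, name="weight", amount=0.2)
    for _step in range(200):                   # retrain by distilling teacher
        obs = torch.randn(32, 4)               # stand-in for replayed states
        with torch.no_grad():
            target = torch.softmax(teacher(obs), dim=-1)
        loss = kld(torch.log_softmax(student(obs), dim=-1), target)
        opt.zero_grad()
        loss.backward()
        opt.step()

for module in (student.fc1, student.fc2):      # bake masks into the weights
    prune.remove(module, "weight")

In the paper's framework, the pruned network is additionally shrunk into a smaller dense architecture between rounds; the sketch above keeps the original architecture and only zeroes weights, which is the simplest form of the prune-then-retrain idea.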