Playing Atari with Deep Reinforcement Learning
We present the first deep learning model to successfully learn control
policies directly from high-dimensional sensory input using reinforcement
learning. The model is a convolutional neural network, trained with a variant
of Q-learning, whose input is raw pixels and whose output is a value function
estimating future rewards. We apply our method to seven Atari 2600 games from
the Arcade Learning Environment, with no adjustment of the architecture or
learning algorithm. We find that it outperforms all previous approaches on six
of the games and surpasses a human expert on three of them. Comment: NIPS Deep Learning Workshop 2013
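The abstract above describes training with a variant of Q-learning, where the network's output estimates future rewards. A minimal sketch of the underlying Q-learning update, using a toy table rather than the paper's convolutional network (all sizes and hyperparameters here are illustrative assumptions):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99, terminal=False):
    """One Q-learning step: move Q[s, a] toward r + gamma * max_a' Q[s_next, a']."""
    target = r if terminal else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# Toy example: 2 states, 2 actions, all values initialized to zero.
Q = np.zeros((2, 2))
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q[0, 1])  # 0.1 * (1.0 + 0.99 * 0 - 0) = 0.1
```

In the deep variant, the table lookup `Q[s, a]` is replaced by a CNN that maps raw pixels to per-action value estimates, and the same target drives a gradient step instead of a tabular update.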
Learning Actions and Control of Focus of Attention with a Log-Polar-like Sensor
With the long-term goal of reducing the image processing time on an
autonomous mobile robot in mind we explore in this paper the use of log-polar
like image data with gaze control. The gaze control is not done on the
Cartesian image but on the log-polar like image data. For this we start out
from the classic deep reinforcement learning approach for Atari games. We
extend an A3C deep RL approach with an LSTM network, and we learn the policy
for playing three Atari games and a policy for gaze control. While the Atari
games already use low-resolution images of 80 by 80 pixels, we are able to
further reduce the number of image pixels by a factor of 5 without losing any
gaming performance.
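The log-polar-like sampling the abstract refers to can be sketched as resampling an image on exponentially spaced rings around a gaze point, giving fine resolution near the fovea and coarse resolution in the periphery. The ring and wedge counts below are illustrative assumptions, not the paper's exact layout:

```python
import numpy as np

def log_polar_sample(img, cx, cy, n_rings=16, n_wedges=32, r_max=None):
    """Sample img on a log-polar grid centered at the gaze point (cx, cy)."""
    h, w = img.shape
    if r_max is None:
        r_max = min(h, w) / 2
    out = np.zeros((n_rings, n_wedges), dtype=img.dtype)
    # Ring radii grow exponentially: dense sampling near the gaze point.
    radii = r_max ** (np.arange(1, n_rings + 1) / n_rings)
    angles = 2 * np.pi * np.arange(n_wedges) / n_wedges
    for i, r in enumerate(radii):
        for j, th in enumerate(angles):
            x = int(round(cx + r * np.cos(th)))
            y = int(round(cy + r * np.sin(th)))
            if 0 <= x < w and 0 <= y < h:
                out[i, j] = img[y, x]
    return out

# An 80x80 frame (the Atari input size mentioned above) reduces to 16x32 samples.
frame = np.arange(80 * 80, dtype=float).reshape(80, 80)
lp = log_polar_sample(frame, cx=40, cy=40)
print(lp.shape)  # (16, 32)
```

Because the policy also learns where to place the gaze point, the agent can keep the informative region of the screen in the high-resolution center of this grid.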
Deep learning for video game playing
In this article, we review recent Deep Learning advances in the context of
how they have been applied to play different types of video games such as
first-person shooters, arcade games, and real-time strategy games. We analyze
the unique requirements that different game genres pose to a deep learning
system and highlight important open challenges in the context of applying these
machine learning methods to video games, such as general game playing, dealing
with extremely large decision spaces, and sparse rewards.