Motivated by vision-based reinforcement learning (RL) problems, in particular
Atari games from the recent Arcade Learning Environment (ALE) benchmark, we
consider spatio-temporal prediction problems where future (image-)frames are
dependent on control variables or actions as well as previous frames. While not
composed of natural scenes, frames in Atari games are high-dimensional,
can involve tens of objects with one or more objects being controlled by the
actions directly and many other objects being influenced indirectly, can
involve entry and departure of objects, and can involve deep partial
observability. We propose and evaluate two deep neural network architectures
that consist of encoding, action-conditional transformation, and decoding
layers based on convolutional neural networks and recurrent neural networks.
Experimental results show that the proposed architectures are able to generate
visually-realistic frames that are also useful for control over approximately
100-step action-conditional futures in some games. To the best of our
knowledge, this paper is the first to make and evaluate long-term predictions
on high-dimensional video conditioned by control inputs.Comment: Published at NIPS 2015 (Advances in Neural Information Processing
Systems 28
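As a rough illustration of an encoding / action-conditional transformation / decoding pipeline of the kind described above, the sketch below wires the three stages together in PyTorch. All layer sizes, the multiplicative interaction used for the action-conditional step, and the names (ActionConditionalPredictor, feat_dim, factor_dim) are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch, not the paper's exact model: a convolutional encoder,
# a multiplicative action-conditional transformation, and a deconvolutional
# decoder, assuming 84x84 RGB frames and one-hot action vectors.
import torch
import torch.nn as nn

class ActionConditionalPredictor(nn.Module):
    def __init__(self, num_actions, feat_dim=2048, factor_dim=1024):
        super().__init__()
        # Encoding: convolutions map the current frame to a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=8, stride=4), nn.ReLU(),   # 84 -> 20
            nn.Conv2d(64, 128, kernel_size=4, stride=2), nn.ReLU(), # 20 -> 9
            nn.Flatten(),
            nn.LazyLinear(feat_dim), nn.ReLU(),
        )
        # Action-conditional transformation: elementwise (multiplicative)
        # interaction between encoded features and the action embedding.
        self.w_enc = nn.Linear(feat_dim, factor_dim, bias=False)
        self.w_act = nn.Linear(num_actions, factor_dim, bias=False)
        self.w_out = nn.Linear(factor_dim, feat_dim)
        # Decoding: deconvolutions map the transformed features to a frame.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 128 * 9 * 9), nn.ReLU(),
            nn.Unflatten(1, (128, 9, 9)),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2), nn.ReLU(),  # 9 -> 20
            nn.ConvTranspose2d(64, 3, kernel_size=8, stride=4),               # 20 -> 84
        )

    def forward(self, frame, action_onehot):
        h = self.encoder(frame)
        h = self.w_out(self.w_enc(h) * self.w_act(action_onehot))
        return self.decoder(h)

# Usage: predict the next frame from one frame and one action.
model = ActionConditionalPredictor(num_actions=18)
frame = torch.randn(1, 3, 84, 84)                 # one 84x84 RGB frame
action = torch.zeros(1, 18); action[0, 3] = 1.0   # one-hot action
next_frame = model(frame, action)                 # shape (1, 3, 84, 84)
```

Long-horizon prediction in this spirit would feed each predicted frame back in as the next input, conditioning on the chosen action at every step.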