Future frame prediction in videos is a promising avenue for unsupervised
video representation learning. Video frames are naturally generated by the
inherent pixel flows from preceding frames based on the appearance and motion
dynamics in the video. However, existing methods focus on directly
hallucinating pixel values, resulting in blurry predictions. In this paper, we
develop a dual motion Generative Adversarial Net (GAN) architecture, which
learns to explicitly enforce future-frame predictions to be consistent with the
pixel-wise flows in the video through a dual-learning mechanism. The primal
future-frame prediction and dual future-flow prediction form a closed loop,
generating informative feedback signals to each other for better video
prediction. To make both synthesized future frames and flows indistinguishable
from reality, a dual adversarial training method is proposed to ensure that the
future-flow prediction helps infer realistic future frames, while the
future-frame prediction in turn leads to realistic optical flows. Our dual
motion GAN also handles natural motion uncertainty in different pixel locations
with a new probabilistic motion encoder, which is based on variational
autoencoders. Extensive experiments demonstrate that the proposed dual motion
GAN significantly outperforms state-of-the-art approaches in synthesizing new
video frames and predicting future flows. Our model generalizes well across
diverse visual scenes and shows superiority in unsupervised video
representation learning.
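
To make the primal/dual closed loop concrete, the sketch below is a minimal, illustrative PyTorch training step, not the authors' implementation: the module names (ProbMotionEncoder, Decoder, Disc), the warp helper, the network sizes, and the loss weights are hypothetical placeholders. Only the structure follows the description above: a VAE-style motion code, a frame branch, a flow branch, a warping consistency term that ties the two predictions together, and two adversarial terms.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(frame, flow):
    """Backward-warp `frame` (B, C, H, W) with a dense flow field (B, 2, H, W) in pixels."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    base = torch.stack((xs, ys), dim=-1).to(frame).expand(b, h, w, 2)
    offs = torch.stack((flow[:, 0] / (w / 2), flow[:, 1] / (h / 2)), dim=-1)
    return F.grid_sample(frame, base + offs, align_corners=True)


class ProbMotionEncoder(nn.Module):
    """VAE-style encoder: maps stacked past frames to a latent motion code."""
    def __init__(self, in_ch, z_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.mu, self.logvar = nn.Linear(64, z_dim), nn.Linear(64, z_dim)

    def forward(self, frames):
        h = self.conv(frames).flatten(1)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return z, mu, logvar


class Decoder(nn.Module):
    """Tiny stand-in decoder used for both the frame and flow generators."""
    def __init__(self, z_dim, out_ch, size):
        super().__init__()
        self.size = size
        self.fc = nn.Linear(z_dim, 64 * 8 * 8)
        self.conv = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )

    def forward(self, z):
        h = F.relu(self.fc(z)).view(-1, 64, 8, 8)
        h = F.interpolate(h, size=self.size, mode="bilinear", align_corners=False)
        return self.conv(h)


class Disc(nn.Module):
    """Tiny discriminator; one instance scores frames, another scores flows."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, 2, 1),
        )

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))


def generator_step(past, target, enc, frame_gen, flow_gen, frame_disc, flow_disc):
    """One generator-side update illustrating the primal/dual closed loop."""
    last = past[:, -3:]                        # most recent observed RGB frame
    z, mu, logvar = enc(past)                  # probabilistic motion code
    frame_hat = frame_gen(z)                   # primal: predict the next frame
    flow_hat = flow_gen(z)                     # dual: predict the future flow

    # Closed loop: warping the last frame with the predicted flow should
    # reproduce the predicted frame, which should itself match the target.
    loop_loss = F.l1_loss(warp(last, flow_hat), frame_hat) + F.l1_loss(frame_hat, target)

    # Dual adversarial terms: the synthesized frame and the synthesized flow
    # should each look real to its own discriminator.
    adv_loss = -frame_disc(frame_hat).mean() - flow_disc(flow_hat).mean()

    # VAE regularizer on the motion code (weights are arbitrary placeholders).
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return loop_loss + 0.01 * adv_loss + 0.001 * kl


# Example with random tensors: 3 past RGB frames stacked along channels.
past = torch.randn(2, 9, 64, 64)
target = torch.randn(2, 3, 64, 64)
loss = generator_step(
    past, target,
    ProbMotionEncoder(in_ch=9),
    Decoder(64, 3, (64, 64)),   # frame generator
    Decoder(64, 2, (64, 64)),   # flow generator
    Disc(3), Disc(2),
)
loss.backward()
```

A symmetric discriminator step, scoring real versus synthesized frames and real versus synthesized flows, would complete the dual adversarial training; it is omitted here for brevity.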