Hierarchical Video Generation from Orthogonal Information: Optical Flow and Texture
Learning to represent and generate videos from unlabeled data is a very
challenging problem. To generate realistic videos, it is important not only to
ensure that the appearance of each frame is realistic, but also to ensure that
the video's motion is plausible and its appearance is consistent over time.
The process of video generation should be divided according to
these intrinsic difficulties. In this study, we focus on the motion and
appearance information as two important orthogonal components of a video, and
propose Flow-and-Texture-Generative Adversarial Networks (FTGAN) consisting of
FlowGAN and TextureGAN. To avoid a huge annotation cost, we need a way to
learn from unlabeled data; thus, we employ optical flow as
motion information to generate videos. FlowGAN generates optical flow, which
contains only the edges and motion of the videos to be generated. TextureGAN,
in turn, specializes in adding texture to the optical flow generated by
FlowGAN. This hierarchical approach yields more realistic videos with plausible
motion and appearance consistency. Our experiments show that our model
generates more plausible motion videos and also achieves significantly improved
performance for unsupervised action classification in comparison to previous
GAN works. In addition, because our model generates videos from two independent
sources of information, it can generate new combinations of motion and
attributes that are not seen in the training data, such as a video in which a person is doing
sit-ups on a baseball field.
Comment: Our supplemental material is available at
http://www.mi.t.u-tokyo.ac.jp/assets/publication/hierarchical_video_generation_sup/
Accepted to AAAI 2018.
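For intuition, the two-stage generation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the module names (FlowGenerator, TextureGenerator), layer sizes, frame counts, and resolutions are assumptions.

```python
# Illustrative sketch of a two-stage flow-then-texture video generator.
# Assumed shapes/sizes; not the FTGAN architecture from the paper.
import torch
import torch.nn as nn

class FlowGenerator(nn.Module):
    """Maps a motion latent code to a (2, T, H, W) optical-flow volume."""
    def __init__(self, z_dim=100, frames=8, size=32):
        super().__init__()
        self.frames, self.size = frames, size
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 2 * frames * size * size),
            nn.Tanh(),  # flow components normalized to [-1, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 2, self.frames, self.size, self.size)

class TextureGenerator(nn.Module):
    """Adds appearance: maps the flow volume plus a texture latent to RGB frames."""
    def __init__(self, z_dim=100, frames=8, size=32):
        super().__init__()
        self.frames, self.size = frames, size
        self.net = nn.Sequential(
            nn.Conv3d(2 + z_dim, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, flow, z):
        # Broadcast the appearance latent across time and space, then decode.
        z_map = z[:, :, None, None, None].expand(
            -1, -1, self.frames, self.size, self.size)
        return self.net(torch.cat([flow, z_map], dim=1))

if __name__ == "__main__":
    z_motion, z_texture = torch.randn(4, 100), torch.randn(4, 100)
    flow = FlowGenerator()(z_motion)              # (4, 2, 8, 32, 32)
    video = TextureGenerator()(flow, z_texture)   # (4, 3, 8, 32, 32)
    print(flow.shape, video.shape)
```

In the full model each stage would be trained adversarially against its own discriminator; only the generator side of the pipeline is sketched here.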
Unsupervised Learning of Visual Structure using Predictive Generative Networks
The ability to predict future states of the environment is a central pillar
of intelligence. At its core, effective prediction requires an internal model
of the world and an understanding of the rules by which the world changes.
Here, we explore the internal models developed by deep neural networks trained
using a loss based on predicting future frames in synthetic video sequences,
using a CNN-LSTM-deCNN framework. We first show that this architecture can
achieve excellent performance in visual sequence prediction tasks, including
state-of-the-art performance on a standard 'bouncing balls' dataset (Sutskever
et al., 2009). Using a weighted mean-squared error and adversarial loss
(Goodfellow et al., 2014), the same architecture successfully extrapolates
out-of-the-plane rotations of computer-generated faces. Furthermore, despite
being trained end-to-end to predict only pixel-level information, our
Predictive Generative Networks learn a representation of the latent structure
of the underlying three-dimensional objects themselves. Importantly, we find
that this representation is naturally tolerant to object transformations, and
generalizes well to new tasks, such as classification of static images. Similar
models trained solely with a reconstruction loss fail to generalize as
effectively. We argue that prediction can serve as a powerful unsupervised loss
for learning rich internal representations of high-level object features.
Comment: Under review as a conference paper at ICLR 2016.
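The CNN-LSTM-deCNN pipeline can be sketched roughly as below. This is an illustrative assumption of the structure, not the authors' code: channel counts, the 32x32 frame size, and the plain MSE loss are simplifications (the paper uses a weighted MSE plus an adversarial term).

```python
# Sketch of a CNN-LSTM-deCNN next-frame predictor (assumed sizes and layers).
import torch
import torch.nn as nn

class PredictiveGenerativeNet(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(          # 1x32x32 -> 64x8x8
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.lstm = nn.LSTM(64 * 8 * 8, hidden, batch_first=True)
        self.decoder = nn.Sequential(          # hidden -> 1x32x32
            nn.Linear(hidden, 64 * 8 * 8), nn.ReLU(inplace=True),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frames):                  # frames: (B, T, 1, 32, 32)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).flatten(1).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.decoder(out[:, -1])          # predicted frame t+1

if __name__ == "__main__":
    model = PredictiveGenerativeNet()
    seq = torch.rand(2, 5, 1, 32, 32)            # e.g. bouncing-ball frames
    pred = model(seq)                             # (2, 1, 32, 32)
    target = torch.rand(2, 1, 32, 32)
    # Plain pixel-wise MSE shown here; the paper adds weighting and an
    # adversarial loss on top of a prediction objective like this.
    loss = nn.functional.mse_loss(pred, target)
    print(pred.shape, loss.item())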
Controllable Image-to-Video Translation: A Case Study on Facial Expression Generation
The recent advances in deep learning have made it possible to generate
photo-realistic images by using neural networks and even to extrapolate video
frames from an input video clip. In this paper, for the sake of both furthering
this exploration and our own interest in a realistic application, we study
image-to-video translation and particularly focus on the videos of facial
expressions. This problem challenges deep neural networks with an additional
temporal dimension compared to image-to-image translation. Moreover, the
single input image defeats most existing video generation methods, which rely on
recurrent models. We propose a user-controllable approach to generate
video clips of various lengths from a single face image. The lengths and types
of the expressions are controlled by users. To this end, we design a novel
neural network architecture that can incorporate the user input into its skip
connections and propose several improvements to the adversarial training method
for the neural network. Experiments and user studies verify the effectiveness
of our approach. In particular, we highlight that even for face images in the
wild (downloaded from the Web and the authors' own photos), our model can
generate high-quality facial expression videos, about 50% of which are labeled
as real by Amazon Mechanical Turk workers.
Comment: 10 pages.
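The idea of feeding user input into the skip connections can be sketched as follows; the architecture, the control encoding (expression type plus progress), and all sizes are assumptions for illustration rather than the paper's exact design.

```python
# Sketch of a generator that injects a user control vector into its skip
# connections (assumed encoder-decoder layout and control encoding).
import torch
import torch.nn as nn

class ControllableGenerator(nn.Module):
    def __init__(self, ctrl_dim=8):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(inplace=True))   # 64 -> 32
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(inplace=True))  # 32 -> 16
        # Decoder; both the bottleneck and the skip connection receive the control map.
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(64 + ctrl_dim, 32, 4, 2, 1),
                                  nn.ReLU(inplace=True))                               # 16 -> 32
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(32 + 32 + ctrl_dim, 3, 4, 2, 1),
                                  nn.Tanh())                                           # 32 -> 64

    def forward(self, image, ctrl):
        # ctrl: (B, ctrl_dim) user input, broadcast spatially before concatenation.
        e1 = self.enc1(image)
        e2 = self.enc2(e1)
        c16 = ctrl[:, :, None, None].expand(-1, -1, e2.shape[2], e2.shape[3])
        d1 = self.dec1(torch.cat([e2, c16], dim=1))
        c32 = ctrl[:, :, None, None].expand(-1, -1, e1.shape[2], e1.shape[3])
        return self.dec2(torch.cat([d1, e1, c32], dim=1))

if __name__ == "__main__":
    face = torch.rand(1, 3, 64, 64)
    # Hypothetical control: one-hot expression type (7 dims) + progress in [0, 1].
    ctrl = torch.cat([torch.eye(7)[[3]], torch.tensor([[0.5]])], dim=1)
    frame = ControllableGenerator()(face, ctrl)
    print(frame.shape)  # (1, 3, 64, 64) -- one frame of the output clip
```

Generating a clip of a user-chosen length would then amount to sweeping the progress component of the control vector from 0 to 1 over the desired number of frames.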
Temporal Interpolation via Motion Field Prediction
Navigated 2D multi-slice dynamic Magnetic Resonance (MR) imaging enables high
contrast 4D MR imaging during free breathing and provides in-vivo observations
for treatment planning and guidance. Navigator slices are vital for
retrospective stacking of 2D data slices in this method. However, they also
prolong the acquisition sessions. Temporal interpolation of navigator slices can
be used to reduce the number of navigator acquisitions without degrading
specificity in stacking. In this work, we propose a convolutional neural
network (CNN) based method for temporal interpolation via motion field
prediction. The proposed formulation incorporates the prior knowledge that a
motion field underlies changes in the image intensities over time. Previous
approaches that interpolate directly in the intensity space are prone to
produce blurry images or even remove structures in the images. Our method
avoids such problems and faithfully preserves the information in the image.
Further, an important advantage of our formulation is that it provides an
unsupervised estimation of bi-directional motion fields. We show that these
motion fields can be used to halve the number of registrations required during
4D reconstruction, thus substantially reducing the reconstruction time.
Comment: Submitted to the 1st Conference on Medical Imaging with Deep Learning
(MIDL 2018), Amsterdam, The Netherlands.
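A simplified sketch of interpolation via predicted motion fields is given below; the small CNN, the normalized-coordinate flow convention, and the halfway-warp-and-average step are assumptions for illustration, not the authors' method.

```python
# Sketch of temporal interpolation via motion-field prediction: a CNN predicts
# bi-directional displacement fields from two neighbouring slices, and the
# intermediate slice is synthesized by warping each endpoint halfway.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionFieldNet(nn.Module):
    """Predicts forward and backward displacement fields (4 channels total)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 4, 3, padding=1),  # (dx, dy) forward + (dx, dy) backward
        )

    def forward(self, slice_a, slice_b):
        flows = self.net(torch.cat([slice_a, slice_b], dim=1))
        return flows[:, :2], flows[:, 2:]

def warp(image, flow):
    """Backward-warp `image` with a dense displacement field via grid_sample."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # Flow is assumed to be expressed in normalized [-1, 1] coordinates.
    grid = base + flow.permute(0, 2, 3, 1)
    return F.grid_sample(image, grid, align_corners=True)

if __name__ == "__main__":
    a, b = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)  # navigator slices
    fwd, bwd = MotionFieldNet()(a, b)
    # Interpolate the mid-point by warping each endpoint halfway and averaging.
    mid = 0.5 * (warp(a, 0.5 * fwd) + warp(b, 0.5 * bwd))
    print(mid.shape)
```

Because warping moves intensities rather than blending them, this kind of formulation avoids the blurring that direct intensity-space interpolation tends to produce.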
Unsupervised state representation learning with robotic priors: a robustness benchmark
Our understanding of the world depends highly on our capacity to produce
intuitive and simplified representations which can be easily used to solve
problems. We reproduce this simplification process using a neural network to
build a low-dimensional state representation of the world from images acquired
by a robot. As in Jonschkowski et al. 2015, we learn in an unsupervised way
using prior knowledge about the world as loss functions called robotic priors
and extend this approach to higher-dimensional, richer images to learn a 3D
representation of the hand position of a robot from RGB images. We propose a
quantitative evaluation of the learned representation using nearest neighbors
in the state space that allows us to assess its quality and show both the
potential and limitations of robotic priors in realistic environments. We
increase the image size and add distractors and domain randomization, all
crucial components for achieving transfer learning to real robots. Finally, we also
contribute a new prior to improve the robustness of the representation. The
applications of such a low-dimensional state representation range from easing
reinforcement learning (RL) and knowledge transfer across tasks, to
facilitating learning from raw data with more efficient and compact high level
representations. The results show that the robotic prior approach is able to
extract high-level representations such as the 3D position of an arm and organize
them into a compact and coherent space of states on a challenging dataset.
Comment: ICRA 2018 submission.
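Two of the robotic priors (temporal coherence and proportionality, following the general form given in Jonschkowski et al. 2015) can be written as unsupervised loss terms roughly as follows; the pair sampling, tensor shapes, and toy data are simplified assumptions, and the paper's additional priors are omitted.

```python
# Sketch of robotic priors as unsupervised losses on learned states.
import torch

def temporal_coherence_loss(states):
    """States should change slowly: penalize large steps in state space.
    states: (T, d) learned states for consecutive observations."""
    deltas = states[1:] - states[:-1]
    return deltas.pow(2).sum(dim=1).mean()

def proportionality_loss(states, actions):
    """Equal actions should produce state changes of equal magnitude.
    Compares step sizes for pairs of time steps with the same action."""
    deltas = (states[1:] - states[:-1]).norm(dim=1)
    acts = actions[:-1]
    loss, count = states.new_zeros(()), 0
    for i in range(len(acts)):
        for j in range(i + 1, len(acts)):
            if torch.equal(acts[i], acts[j]):
                loss = loss + (deltas[i] - deltas[j]) ** 2
                count += 1
    return loss / max(count, 1)

if __name__ == "__main__":
    # Toy example: 10 time steps, 3-D learned states, discretized 1-D actions.
    states = torch.randn(10, 3, requires_grad=True)
    actions = torch.randint(0, 2, (10, 1))
    total = temporal_coherence_loss(states) + proportionality_loss(states, actions)
    total.backward()
    print(total.item())
```

In practice the states would be produced by the image encoder, so minimizing these terms shapes the representation without any labels, which is the sense in which the priors act as unsupervised loss functions.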