TV-GAN: Generative Adversarial Network Based Thermal to Visible Face Recognition
This work tackles the face recognition task on images captured using thermal
camera sensors, which can operate in the absence of light. While this can
greatly increase the scope and benefits of current security surveillance
systems, performing such a task using thermal images is challenging compared
to face recognition in the Visible Light Domain (VLD). This is partly due to
the much smaller amount of thermal imagery data collected compared to VLD
data. Unfortunately, directly applying existing strong face recognition
models trained on VLD data to thermal imagery does not produce satisfactory
performance. This is due to the
existence of the domain gap between the thermal and VLD images. To this end, we
propose a Thermal-to-Visible Generative Adversarial Network (TV-GAN) that is
able to transform thermal face images into their corresponding VLD images
whilst maintaining identity information sufficient for the
existing VLD face recognition models to perform recognition. Unlike previous
methods, our proposed TV-GAN uses an
explicit closed-set face recognition loss to regularize the discriminator
network training. This information is then conveyed to the generator network
in the form of gradients. In our experiments, we show that by using
this additional explicit regularization for the discriminator network, the
TV-GAN is able to preserve more identity information when translating a
thermal image of a person it has not seen before.
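The key ingredient here is a discriminator trained not only to separate real
from generated visible images but also to classify the identity of real
images, so that identity gradients reach the generator. Below is a minimal
PyTorch sketch of that idea; the layer shapes, names, and unweighted sum of
losses are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TVDiscriminator(nn.Module):
    """Discriminator with a real/fake head and a closed-set identity head."""
    def __init__(self, num_identities: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(128, 1)               # real vs. fake
        self.id_head = nn.Linear(128, num_identities)   # closed-set identity

    def forward(self, x):
        h = self.features(x)
        return self.adv_head(h), self.id_head(h)

def discriminator_loss(disc, real_vis, fake_vis, labels):
    # Adversarial loss plus the explicit closed-set recognition regularizer.
    adv_real, id_real = disc(real_vis)
    adv_fake, _ = disc(fake_vis.detach())
    gan = F.binary_cross_entropy_with_logits(adv_real, torch.ones_like(adv_real)) \
        + F.binary_cross_entropy_with_logits(adv_fake, torch.zeros_like(adv_fake))
    ident = F.cross_entropy(id_real, labels)  # identity regularization
    return gan + ident                        # equal weighting is an assumption

def generator_loss(disc, fake_vis, labels):
    # Identity gradients from the discriminator flow back into the generator,
    # pushing translated faces toward the correct identity.
    adv_fake, id_fake = disc(fake_vis)
    return F.binary_cross_entropy_with_logits(adv_fake, torch.ones_like(adv_fake)) \
        + F.cross_entropy(id_fake, labels)
```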
Learning Robust Object Recognition Using Composed Scenes from Generative Models
Recurrent feedback connections in the mammalian visual system have been
hypothesized to play a role in synthesizing input in the theoretical framework
of analysis by synthesis. The comparison of internally synthesized
representation with that of the input provides a validation mechanism during
perceptual inference and learning. Inspired by these ideas, we propose that
the synthesis machinery can compose new, unobserved images by imagination to
train the network itself so as to increase the robustness of the system in
novel scenarios. As a proof of concept, we investigated whether images composed
by imagination could help an object recognition system to deal with occlusion,
which is challenging for the current state-of-the-art deep convolutional neural
networks. We fine-tuned a network on images containing objects in various
occlusion scenarios that are imagined or self-generated through a deep
generator network. Trained on imagined occluded scenarios under the object
persistence constraint, our network discovered more subtle and localized image
features that were neglected by the original network for object classification,
obtaining better separability of different object classes in the feature space.
This leads to significant improvement of object recognition under occlusion for
our network relative to the original network trained only on un-occluded
images. In addition to providing practical benefits in object recognition under
occlusion, this work demonstrates that self-generated composition of visual
scenes through the synthesis loop, combined with the object persistence
constraint, can provide opportunities for neural networks to discover new
relevant patterns in the data and to become more flexible in dealing with
novel situations.
Comment: Accepted by the 14th Conference on Computer and Robot Vision
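The recipe described above combines two signals: a classification loss on
generator-composed occluded images, and an object persistence constraint that
ties occluded and unoccluded views of the same object together in feature
space. The PyTorch sketch below shows one plausible form of such a
fine-tuning step; `backbone`, `classifier`, and `compose_occluded` are
hypothetical stand-ins, and the paper may weight or formulate the constraint
differently.

```python
import torch
import torch.nn.functional as F

def persistence_finetune_step(backbone, classifier, images, labels,
                              compose_occluded, optimizer):
    """One fine-tuning step on self-generated occluded scenes."""
    occluded = compose_occluded(images)   # imagined/composed occlusion variants
    feat_clean = backbone(images)
    feat_occ = backbone(occluded)
    # Standard classification loss on the occluded versions.
    cls_loss = F.cross_entropy(classifier(feat_occ), labels)
    # Object persistence: the same object should map to nearby features
    # whether or not it is occluded.
    persist_loss = F.mse_loss(feat_occ, feat_clean.detach())
    loss = cls_loss + persist_loss        # equal weighting is an assumption
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```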
Hierarchical Video Generation from Orthogonal Information: Optical Flow and Texture
Learning to represent and generate videos from unlabeled data is a very
challenging problem. To generate realistic videos, it is important not only to
ensure that the appearance of each frame is realistic, but also to ensure the
plausibility of the video's motion and the temporal consistency of its
appearance. The process of video generation should be divided according to
these intrinsic difficulties. In this study, we focus on the motion and
appearance information as two important orthogonal components of a video, and
propose Flow-and-Texture-Generative Adversarial Networks (FTGAN) consisting of
FlowGAN and TextureGAN. To avoid a large annotation cost, we explore a way to
learn from unlabeled data. Thus, we employ optical flow as motion information
to generate videos. FlowGAN generates optical flow, which captures only the
edges and motion of the videos to be generated. On the other
hand, TextureGAN specializes in giving a texture to optical flow generated by
FlowGAN. This hierarchical approach yields more realistic videos with
plausible motion and appearance consistency. Our experiments show that our
model generates videos with more plausible motion and also achieves
significantly improved performance on unsupervised action classification in
comparison to previous
GAN works. In addition, because our model generates videos from two
independent sources of information, it can generate new combinations of
motion and attributes that are not seen in the training data, such as a video
in which a person is doing sit-ups on a baseball ground.
Comment: Our supplemental material is available at
http://www.mi.t.u-tokyo.ac.jp/assets/publication/hierarchical_video_generation_sup/
Accepted to AAAI 2018
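To make the two-stage decomposition concrete, the following Python sketch
shows how sampling might proceed: FlowGAN maps a motion latent to optical
flow, and TextureGAN textures that flow into frames. `flow_gan`,
`texture_gan`, the latent sizes, and the tensor shapes are hypothetical
placeholders, not the paper's actual interfaces.

```python
import torch

def sample_video(flow_gan, texture_gan, z_motion=None, z_appearance=None,
                 device="cpu"):
    """Hierarchical sampling: motion first, then texture."""
    if z_motion is None:
        z_motion = torch.randn(1, 100, device=device)      # motion latent
    if z_appearance is None:
        z_appearance = torch.randn(1, 100, device=device)  # appearance latent
    flow = flow_gan(z_motion)                # e.g. (1, T, 2, H, W): motion only
    video = texture_gan(flow, z_appearance)  # e.g. (1, T, 3, H, W): textured frames
    return video

# Because motion and appearance come from independent latents, resampling
# z_appearance while holding z_motion fixed recombines a known motion with a
# new appearance, which is the novel-combination property the abstract
# describes (e.g. a familiar action in an unseen setting).
```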