Smile detection in the wild based on transfer learning
Smile detection from unconstrained facial images is a specialized and
challenging problem. As one of the most informative expressions, smiles convey
basic underlying emotions, such as happiness and satisfaction, enabling
applications such as human behavior analysis and interactive control.
Compared to the size of face recognition databases, far less
labeled data is available for training smile detection systems. To leverage the
large amount of labeled data from face recognition datasets and to alleviate
overfitting on smile detection, an efficient transfer learning-based smile
detection approach is proposed in this paper. Unlike previous works, which
either use hand-engineered features or train deep convolutional networks from
scratch, a well-trained deep face recognition model is explored and fine-tuned
for smile detection in the wild. Three different models are built as a result
of fine-tuning the face recognition model with different inputs, including
aligned, unaligned and grayscale images generated from the GENKI-4K dataset.
Experiments show that the proposed approach improves on the previous
state of the art. The model's robustness to noise and blur artifacts is also
evaluated.
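As a rough illustration of this fine-tuning strategy, the sketch below replaces the final layer of a pretrained backbone with a binary smile/non-smile head. This is a minimal PyTorch sketch, not the paper's exact setup: the paper fine-tunes a face-recognition CNN, whereas an ImageNet-pretrained ResNet-50 stands in here, and all hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in backbone: ImageNet ResNet-50 (the paper fine-tunes a
# face-recognition CNN; any pretrained backbone plays the same role here).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Replace the final classification layer with a binary smile/non-smile head.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# Small learning rate, so pretrained features are adapted rather than
# overwritten (values are illustrative).
optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of face crops and 0/1 smile labels."""
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```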
Learning Residual Images for Face Attribute Manipulation
Face attributes are interesting due to their detailed description of human
faces. Unlike prior research on attribute prediction, we address an
inverse and more challenging problem called face attribute manipulation which
aims at modifying a face image according to a given attribute value. Instead of
manipulating the whole image, we propose to learn the corresponding residual
image defined as the difference between images before and after the
manipulation. In this way, the manipulation can be performed efficiently with
modest pixel modifications. The framework of our approach is based on the
Generative Adversarial Network. It consists of two image transformation
networks and a discriminative network. The transformation networks are
responsible for the attribute manipulation and its dual operation and the
discriminative network is used to distinguish the generated images from real
images. We also apply dual learning to allow transformation networks to learn
from each other. Experiments show that residual images can be effectively
learned and used for attribute manipulations. The generated images retain
most of the details in attribute-irrelevant areas.
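A minimal sketch of the residual idea, assuming a toy convolutional transformation network (layer sizes are illustrative, and the paper's dual networks and GAN discriminator are omitted): the generator predicts only a residual that is added to the input, so near-zero residuals leave attribute-irrelevant regions untouched.

```python
import torch
import torch.nn as nn

class ResidualManipulator(nn.Module):
    """Toy transformation network: predicts a residual image that is added
    to the input, so regions with near-zero residual pass through unchanged."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # residual in [-1, 1]
        )

    def forward(self, x):
        residual = self.net(x)
        return x + residual, residual

model = ResidualManipulator()
manipulated, residual = model(torch.randn(1, 3, 128, 128))
# An L1 penalty on the residual encourages sparse, localized edits.
sparsity_loss = residual.abs().mean()
```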
Unsupervised learning of clutter-resistant visual representations from natural videos
Populations of neurons in inferotemporal cortex (IT) maintain an explicit
code for object identity that also tolerates transformations of object
appearance, e.g., position, scale, and viewing angle [1, 2, 3]. Though the learning
rules are not known, recent results [4, 5, 6] suggest the operation of an
unsupervised temporal-association-based method, e.g., Foldiak's trace rule [7].
Such methods exploit the temporal continuity of the visual world by assuming
that visual experience over short timescales will tend to have invariant
identity content. Thus, by associating representations of frames from nearby
times, a representation that tolerates whatever transformations occurred in the
video may be achieved. Many previous studies verified that such rules can work
in simple situations without background clutter, but the presence of visual
clutter has remained problematic for this approach. Here we show that temporal
association based on large class-specific filters (templates) avoids the
problem of clutter. Our system learns in an unsupervised way from natural
videos gathered from the internet, and is able to perform a difficult
unconstrained face recognition task on natural images: Labeled Faces in the
Wild [8].
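The sketch below illustrates a generic Foldiak-style trace rule of the kind cited above. It is a minimal NumPy sketch of the general principle, not the paper's template-based system; the learning rate, trace decay, and weight normalization are assumptions.

```python
import numpy as np

def trace_rule_update(W, frames, lr=0.01, decay=0.8):
    """One pass of a Foldiak-style trace rule over consecutive video frames:
    a decaying trace of past unit activity is associated with the current
    input, so temporally adjacent frames map to similar representations."""
    trace = np.zeros(W.shape[0])
    for x in frames:                      # frames: sequence of input vectors
        y = W @ x                         # linear unit responses
        trace = decay * trace + (1 - decay) * y
        W += lr * np.outer(trace, x)      # Hebbian update gated by the trace
        W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded
    return W

# Example: 16 units learning from 10 consecutive 256-d frame vectors.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 256))
frames = rng.standard_normal((10, 256))
W = trace_rule_update(W, frames)
```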
Every Smile is Unique: Landmark-Guided Diverse Smile Generation
Each smile is unique: one person surely smiles in different ways (e.g.,
closing/opening the eyes or mouth). Given one input image of a neutral face,
can we generate multiple smile videos with distinctive characteristics? To
tackle this one-to-many video generation problem, we propose a novel deep
learning architecture named Conditional Multi-Mode Network (CMM-Net). To better
encode the dynamics of facial expressions, CMM-Net explicitly exploits facial
landmarks for generating smile sequences. Specifically, a variational
auto-encoder is used to learn a facial landmark embedding. This single
embedding is then exploited by a conditional recurrent network which generates
a landmark embedding sequence conditioned on a specific expression (e.g.,
spontaneous smile). Next, the generated landmark embeddings are fed into a
multi-mode recurrent landmark generator, producing a set of landmark sequences
still associated with the given smile class but clearly distinct from each other.
Finally, these landmark sequences are translated into face videos. Our
experimental results demonstrate the effectiveness of our CMM-Net in generating
realistic videos of multiple smile expressions.

Comment: Accepted as a poster at the Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
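To make the recurrent stage concrete, here is a toy stand-in (not the authors' CMM-Net) that unrolls an LSTM from a single landmark embedding, conditioned on a smile-class code, to produce an embedding sequence; all dimensions and the class encoding are hypothetical, and the VAE and video-translation stages are omitted.

```python
import torch
import torch.nn as nn

class ConditionalLandmarkRNN(nn.Module):
    """Toy conditional recurrent generator: unrolls an LSTM from one
    landmark embedding plus a one-hot class code and emits a sequence
    of landmark embeddings (dimensions are illustrative)."""
    def __init__(self, embed_dim=64, class_dim=2, hidden=128):
        super().__init__()
        self.hidden = hidden
        self.rnn = nn.LSTMCell(embed_dim + class_dim, hidden)
        self.out = nn.Linear(hidden, embed_dim)

    def forward(self, z0, cls, steps=16):
        h = z0.new_zeros(z0.size(0), self.hidden)
        c = z0.new_zeros(z0.size(0), self.hidden)
        z, seq = z0, []
        for _ in range(steps):
            h, c = self.rnn(torch.cat([z, cls], dim=1), (h, c))
            z = self.out(h)               # next embedding in the sequence
            seq.append(z)
        return torch.stack(seq, dim=1)    # (batch, steps, embed_dim)

model = ConditionalLandmarkRNN()
z0 = torch.randn(4, 64)                   # initial landmark embeddings
cls = torch.eye(2)[torch.tensor([1, 0, 1, 1])]  # one-hot smile-class codes
seq = model(z0, cls)                      # (4, 16, 64) embedding sequences
```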