Self-supervised learning of a facial attribute embedding from video
We propose a self-supervised framework for learning facial attributes by
simply watching videos of a human face speaking, laughing, and moving over
time. To perform this task, we introduce a network, Facial Attributes-Net
(FAb-Net), that is trained to embed multiple frames from the same video
face-track into a common low-dimensional space. With this approach, we make
three contributions: first, we show that the network can leverage information
from multiple source frames by predicting confidence/attention masks for each
frame; second, we demonstrate that using a curriculum learning regime improves
the learned embedding; finally, we demonstrate that the network learns a
meaningful face embedding that encodes information about head pose, facial
landmarks and facial expression, i.e. facial attributes, without having been
supervised with any labelled data. Our performance is comparable or superior to
that of state-of-the-art self-supervised methods on these tasks and approaches
that of supervised methods.
Comment: To appear in BMVC 2018. Supplementary material can be found at
http://www.robots.ox.ac.uk/~vgg/research/unsup_learn_watch_faces/fabnet.htm
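The frame-fusion idea above can be sketched as follows. This is an illustrative toy, not the authors' code: here each source frame contributes a scalar confidence that is softmax-normalised into an attention weight over per-frame embeddings (FAb-Net itself predicts confidence/attention masks inside the network).

```python
import numpy as np

def aggregate_frames(embeddings, confidences):
    """Fuse per-source-frame embeddings with softmax attention weights.

    embeddings:  (num_frames, dim) array of per-frame embeddings
    confidences: (num_frames,) predicted scalar confidence per frame
    """
    w = np.exp(confidences - confidences.max())  # numerically stable softmax
    w = w / w.sum()
    # weighted sum over frames -> one fused embedding
    return (w[:, None] * embeddings).sum(axis=0)
```

With equal confidences this reduces to a plain average; a dominant confidence makes the fused embedding follow that frame.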
Interspecies Knowledge Transfer for Facial Keypoint Detection
We present a method for localizing facial keypoints on animals by
transferring knowledge gained from human faces. Instead of directly finetuning
a network trained to detect keypoints on human faces to animal faces (which is
sub-optimal since human and animal faces can look quite different), we propose
to first adapt the animal images to the pre-trained human detection network by
correcting for the differences in animal and human face shape. We first find
the nearest human neighbors for each animal image using an unsupervised shape
matching method. We use these matches to train a thin plate spline warping
network to warp each animal face to look more human-like. The warping network
is then jointly finetuned with a pre-trained human facial keypoint detection
network using an animal dataset. We demonstrate state-of-the-art results on
both horse and sheep facial keypoint detection, and significant improvement
over simple finetuning, especially when training data is scarce. Additionally,
we present a new dataset of 3717 images with horse face and facial keypoint
annotations.
Comment: CVPR 2017 Camera Ready
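The shape-correction step relies on a thin plate spline (TPS) warp fitted between matched animal and human landmark sets. A minimal NumPy sketch of TPS fitting and evaluation (the paper learns the warp with a network; this is the classical closed-form fit, shown only to make the mechanism concrete):

```python
import numpy as np

def tps_kernel(d2):
    # U(r) = r^2 log(r^2), with U(0) defined as 0
    return np.where(d2 == 0, 0.0, d2 * np.log(d2 + 1e-12))

def fit_tps(src, dst):
    """Fit a thin plate spline mapping src -> dst; src, dst are (n, 2)."""
    n = src.shape[0]
    d2 = np.sum((src[:, None] - src[None, :]) ** 2, axis=-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), src])      # affine part: 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)               # (n+3, 2) warp coefficients

def tps_transform(params, src, pts):
    """Apply a fitted TPS to arbitrary query points pts (m, 2)."""
    d2 = np.sum((pts[:, None] - src[None, :]) ** 2, axis=-1)
    U = tps_kernel(d2)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ params[: len(src)] + P @ params[len(src):]
```

Because a TPS interpolates its control points exactly, warping the source landmarks reproduces the target landmarks; in between, the warp is maximally smooth.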
BRULÈ: Barycenter-Regularized Unsupervised Landmark Extraction
Unsupervised retrieval of image features is vital for many computer vision
tasks where the annotation is missing or scarce. In this work, we propose a new
unsupervised approach to detect the landmarks in images, validating it on the
popular task of human face key-points extraction. The method is based on the
idea of auto-encoding the desired landmarks in the latent space while discarding
the non-essential information (thus effectively preserving
interpretability). The interpretable latent-space representation (a
bottleneck containing nothing but the desired key-points) is achieved by a new
two-step regularization approach. The first regularization step evaluates
transport distance from a given set of landmarks to some average value (the
barycenter by Wasserstein distance). The second regularization step controls
deviations from the barycenter by applying random geometric deformations
synchronously to the initial image and to the encoded landmarks. We demonstrate
the effectiveness of the approach both in unsupervised and semi-supervised
training scenarios using 300-W, CelebA, and MAFL datasets. The proposed
regularization paradigm is shown to prevent overfitting, and the detection
quality is shown to improve beyond that of state-of-the-art face models.
Comment: 10 main pages with 6 figures and 1 table; 14 pages total with 6
supplementary figures. I.B. and N.B. contributed equally. D.V.D. is the
corresponding author.
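The first regularization term measures a transport distance from a predicted landmark set to a barycenter configuration. A toy sketch of that quantity, assuming equal-size, uniformly weighted point sets so the exact squared 2-Wasserstein distance reduces to a minimum-cost matching (brute-force here; real pipelines use proper OT solvers):

```python
import numpy as np
from itertools import permutations

def w2_to_barycenter(landmarks, barycenter):
    """Exact squared 2-Wasserstein distance between two equal-size,
    uniformly weighted point sets, via exhaustive matching.
    Fine for a handful of key-points; illustrative only."""
    n = len(landmarks)
    best = np.inf
    for perm in permutations(range(n)):
        # mean squared distance under this one-to-one matching
        cost = np.mean(np.sum((landmarks - barycenter[list(perm)]) ** 2, axis=-1))
        best = min(best, cost)
    return float(best)
```

Driving this distance down during training pulls every predicted landmark configuration toward the shared barycenter shape, which is what keeps the bottleneck interpretable.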
Unsupervised learning of object landmarks by factorized spatial embeddings
Learning automatically the structure of object categories remains an
important open problem in computer vision. In this paper, we propose a novel
unsupervised approach that can discover and learn landmarks in object
categories, thus characterizing their structure. Our approach is based on
factorizing image deformations, as induced by a viewpoint change or an object
deformation, by learning a deep neural network that detects landmarks
consistently with such visual effects. Furthermore, we show that the learned
landmarks establish meaningful correspondences between different object
instances in a category without having to impose this requirement explicitly.
We assess the method qualitatively on a variety of object types, natural and
man-made. We also show that our unsupervised landmarks are highly predictive of
manually-annotated landmarks in face benchmark datasets, and can be used to
regress these with a high degree of accuracy.
Comment: To be published in ICCV 2017
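The consistency constraint behind factorizing image deformations can be sketched as an equivariance loss: landmarks detected on a deformed image should coincide with the known deformation applied to the landmarks detected on the original image. The function below is a hypothetical illustration of that objective, not the paper's implementation:

```python
import numpy as np

def equivariance_loss(pts_on_warped, pts_on_orig, warp):
    """Penalise a landmark detector for failing to commute with a known
    deformation g: detections on g(image) should equal g(detections on image).

    pts_on_warped: (n, 2) landmarks detected on the deformed image
    pts_on_orig:   (n, 2) landmarks detected on the original image
    warp:          callable applying the same deformation g to 2-D points
    """
    return float(np.mean(np.sum((pts_on_warped - warp(pts_on_orig)) ** 2, axis=-1)))
```

A detector that is perfectly equivariant to the sampled deformations drives this loss to zero, without ever seeing landmark annotations.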