Quantifying Facial Age by Posterior of Age Comparisons
We introduce a novel approach for annotating a large quantity of in-the-wild
facial images with high-quality posterior age distribution as labels. Each
posterior provides a probability distribution of estimated ages for a face. Our
approach is motivated by observations that it is easier to distinguish who is
the older of two people than to determine the person's actual age. Given a
reference database with samples of known ages and a dataset to label, we can
transfer reliable annotations from the former to the latter via
human-in-the-loop comparisons. We show an effective way to transform such
comparisons into posteriors via fully-connected and softmax layers, so as to
permit end-to-end training in a deep network. Thanks to the efficient and
effective annotation approach, we collect a new large-scale facial age dataset,
dubbed `MegaAge', which consists of 41,941 images. Data can be downloaded from
our project page mmlab.ie.cuhk.edu.hk/projects/MegaAge and
github.com/zyx2012/Age_estimation_BMVC2017. With the dataset, we train a
network that jointly performs ordinal hyperplane classification and posterior
distribution learning. Our approach achieves state-of-the-art results on
popular benchmarks such as MORPH2, Adience, and the newly proposed MegaAge.
Comment: To appear at BMVC 2017 (oral), revised version
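The comparison-to-posterior idea above can be sketched as follows. This is a minimal illustration, not the authors' released code: the weights of the fully-connected layer are fixed placeholder values (in the paper they are learned end-to-end), and the reference ages and comparison outcomes are hypothetical.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical setup: a query face compared against reference faces of
# known ages; +1 means "query looks older", -1 means "looks younger".
ref_ages = np.array([10, 20, 30, 40, 50])
comparisons = np.array([1.0, 1.0, 1.0, -1.0, -1.0])

# A fully-connected layer maps comparison outcomes to logits over age
# bins; identity weights here are purely illustrative, not learned.
W = np.eye(5)
b = np.zeros(5)
logits = W @ comparisons + b

# Softmax turns the logits into a posterior age distribution, which is
# what permits end-to-end training inside a deep network.
posterior = softmax(logits)
```

With these toy values the posterior concentrates on the age bins the query was judged older than, while still summing to one like any probability distribution.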
DeepCoder: Semi-parametric Variational Autoencoders for Automatic Facial Action Coding
Human face exhibits an inherent hierarchy in its representations (i.e.,
holistic facial expressions can be encoded via a set of facial action units
(AUs) and their intensity). Variational (deep) auto-encoders (VAE) have shown
great results in unsupervised extraction of hierarchical latent representations
from large amounts of image data, while being robust to noise and other
undesired artifacts. Potentially, this makes VAEs a suitable approach for
learning facial features for AU intensity estimation. Yet, most existing
VAE-based methods apply classifiers learned separately from the encoded
features. By contrast, the non-parametric (probabilistic) approaches, such as
Gaussian Processes (GPs), typically outperform their parametric counterparts,
but cannot deal easily with large amounts of data. To this end, we propose a
novel VAE semi-parametric modeling framework, named DeepCoder, which combines
the modeling power of parametric (convolutional) and nonparametric (ordinal
GPs) VAEs, for joint learning of (1) latent representations at multiple levels
in a task hierarchy, and (2) classification of multiple ordinal outputs. We
show on benchmark datasets for AU intensity estimation that the proposed
DeepCoder outperforms the state-of-the-art approaches, and related VAEs and
deep learning models.
Comment: ICCV 2017, accepted
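To make the VAE machinery this abstract builds on concrete, here is a minimal sketch of the encoder's reparameterization step and the closed-form KL term of the VAE objective. It is not DeepCoder itself: the convolutional encoder is replaced by a linear map, and the ordinal-GP component is omitted entirely; all names and shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Linear stand-in for the convolutional encoder: produces the mean
    # and log-variance of the approximate posterior q(z|x).
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # z = mu + sigma * eps keeps the sampling step differentiable,
    # which is what lets a VAE be trained end-to-end.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL(q(z|x) || N(0, I)), the regularization term
    # of the VAE objective.
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

x = rng.normal(size=(8,))              # toy input feature vector
W_mu = rng.normal(size=(8, 2)) * 0.1   # illustrative weights
W_logvar = rng.normal(size=(8, 2)) * 0.1
mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
kl = kl_to_standard_normal(mu, logvar)
```

In a semi-parametric design like the one described, the latent code produced by such an encoder would then feed a non-parametric (GP) model for the ordinal AU-intensity outputs rather than a separately trained classifier.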