Improving Face Recognition from Caption Supervision with Multi-Granular Contextual Feature Aggregation
We introduce caption-guided face recognition (CGFR) as a new framework to
improve the performance of commercial-off-the-shelf (COTS) face recognition
(FR) systems. In contrast to combining soft biometrics (e.g., facial marks,
gender, and age) with face images, in this work we use facial descriptions
provided by face examiners as auxiliary information. However, due to
the heterogeneity of the modalities, improving the performance by directly
fusing the textual and facial features is very challenging, as both lie in
different embedding spaces. In this paper, we propose a contextual feature
aggregation module (CFAM) that addresses this issue by effectively exploiting
the fine-grained word-region interaction and global image-caption association.
Specifically, CFAM adopts a self-attention and a cross-attention scheme for
improving the intra-modality and inter-modality relationships between the image
and textual features, respectively. Additionally, we design a textual feature
refinement module (TFRM) that refines the textual features of the pre-trained
BERT encoder by updating the contextual embeddings. This module enhances the
discriminative power of textual features with a cross-modal projection loss and
realigns the word and caption embeddings with visual features by incorporating
a visual-semantic alignment loss. We implemented the proposed CGFR framework on
two face recognition models (ArcFace and AdaFace) and evaluated its performance
on the Multi-Modal CelebA-HQ dataset. Our framework significantly improves the
performance of ArcFace in both 1:1 verification and 1:N identification
protocols.
Comment: This article has been accepted for publication in the IEEE
International Joint Conference on Biometrics (IJCB), 202
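The core of CFAM as described, self-attention within a modality followed by word-to-region cross-attention, is compact enough to sketch. Below is a minimal PyTorch sketch; the dimensions, module layout, and use of nn.MultiheadAttention are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of intra-modality self-attention plus word-region
# cross-attention in the spirit of CFAM. All shapes are assumptions.
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, words, regions):
        # words:   (B, T, dim) caption token features (e.g. projected BERT)
        # regions: (B, R, dim) image region features (e.g. a flattened CNN map)
        h, _ = self.self_attn(words, words, words)       # intra-modality
        words = self.norm1(words + h)
        h, _ = self.cross_attn(words, regions, regions)  # word-region interaction
        return self.norm2(words + h)

block = CrossModalBlock()
words = torch.randn(2, 16, 512)    # 16 caption tokens
regions = torch.randn(2, 49, 512)  # a 7x7 feature map, flattened
fused = block(words, regions)      # (2, 16, 512)
```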
Improving Landmark Localization with Semi-Supervised Learning
We present two techniques to improve landmark localization in images from
partially annotated datasets. Our primary goal is to leverage the common
situation where precise landmark locations are only provided for a small data
subset, but where class labels for classification or regression tasks related
to the landmarks are more abundantly available. First, we propose the framework
of sequential multitasking and explore it here through an architecture for
landmark localization where training with class labels acts as an auxiliary
signal to guide the landmark localization on unlabeled data. A key aspect of
our approach is that errors can be backpropagated through a complete landmark
localization model. Second, we propose and explore an unsupervised learning
technique for landmark localization based on having a model predict equivariant
landmarks with respect to transformations applied to the image. We show that
these techniques improve landmark prediction considerably and can learn
effective detectors even when only a small fraction of the dataset has landmark
labels. We present results on two toy datasets and four real datasets, with
hands and faces, and report new state-of-the-art results on two in-the-wild
datasets; e.g., with only 5% of labeled images we outperform the previous
state-of-the-art trained on the AFLW dataset.
Comment: Published as a conference paper in CVPR 201
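The unsupervised equivariance objective is simple to state in code: landmarks predicted on a transformed image should equal the transformed landmarks of the original image. The sketch below uses an affine warp in normalized coordinates and a toy regressor; both are assumptions for illustration, and the paper's transformations and architecture may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyLandmarkNet(nn.Module):
    # Toy stand-in for a landmark regressor: (B, 1, H, W) -> (B, K, 2) in [-1, 1].
    def __init__(self, k=5):
        super().__init__()
        self.k = k
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(16 * 64, k * 2), nn.Tanh())

    def forward(self, x):
        return self.net(x).view(-1, self.k, 2)

def equivariance_loss(model, img, A, t):
    # Landmark coordinates transform as q' = A @ q + t (normalized coords).
    pts = model(img)                                  # (B, K, 2)
    pts_t = pts @ A.transpose(1, 2) + t.unsqueeze(1)  # transformed landmarks
    # affine_grid's theta maps output coords back to input coords, i.e. the
    # inverse of the transform applied to the landmark coordinates.
    Ainv = torch.inverse(A)
    theta = torch.cat([Ainv, -Ainv @ t.unsqueeze(-1)], dim=2)  # (B, 2, 3)
    grid = F.affine_grid(theta, img.shape, align_corners=False)
    warped = F.grid_sample(img, grid, align_corners=False)
    # Predictions on the warped image should match the warped predictions.
    return F.mse_loss(model(warped), pts_t)

model = TinyLandmarkNet()
img = torch.rand(2, 1, 64, 64)
A = torch.eye(2).repeat(2, 1, 1) * 0.9  # mild scaling
t = torch.full((2, 2), 0.1)             # small translation
equivariance_loss(model, img, A, t).backward()
```

Because the loss needs no landmark annotations at all, it can be applied to the abundant unlabeled images alongside the supervised loss on the small labeled subset.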
Synthesizing Normalized Faces from Facial Identity Features
We present a method for synthesizing a frontal, neutral-expression image of a
person's face given an input face photograph. This is achieved by learning to
generate facial landmarks and textures from features extracted from a
facial-recognition network. Unlike previous approaches, our encoding feature
vector is largely invariant to lighting, pose, and facial expression.
Exploiting this invariance, we train our decoder network using only frontal,
neutral-expression photographs. Since these photographs are well aligned, we
can decompose them into a sparse set of landmark points and aligned texture
maps. The decoder then predicts landmarks and textures independently and
combines them using a differentiable image warping operation. The resulting
images can be used for a number of applications, such as analyzing facial
attributes, exposure and white balance adjustment, or creating a 3-D avatar.
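The last step, combining independently predicted landmarks and textures through a differentiable warp, can be illustrated with a toy landmark-driven warp: densify sparse landmark offsets into a sampling grid and resample the texture. The inverse-distance weighting below is an assumed stand-in for the paper's interpolation scheme; only the use of a differentiable sampling operation reflects the abstract.

```python
import torch
import torch.nn.functional as F

def landmark_warp(texture, src_pts, dst_pts):
    # texture: (B, C, H, W) aligned texture map
    # src_pts: (B, K, 2) landmarks in the texture, normalized to [-1, 1]
    # dst_pts: (B, K, 2) predicted landmarks in the output image
    B, C, H, W = texture.shape
    gy, gx = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    grid = torch.stack([gx, gy], dim=-1).view(1, H * W, 2)
    grid = grid.expand(B, -1, -1).contiguous()

    # For each output pixel, decide where to sample in the texture by
    # inverse-distance weighting of the per-landmark (dst -> src) offsets.
    d = torch.cdist(grid, dst_pts)                   # (B, HW, K)
    w = 1.0 / (d + 1e-4) ** 2
    w = w / w.sum(dim=-1, keepdim=True)
    sample = grid + torch.bmm(w, src_pts - dst_pts)  # (B, HW, 2)
    return F.grid_sample(texture, sample.view(B, H, W, 2),
                         align_corners=True)

tex = torch.rand(1, 3, 64, 64)
src = torch.rand(1, 5, 2) * 2 - 1        # landmarks in the texture
dst = src + 0.05 * torch.randn(1, 5, 2)  # slightly displaced targets
out = landmark_warp(tex, src, dst)       # (1, 3, 64, 64), differentiable
```

Gradients flow through both the texture and the landmark coordinates, which is the property that lets the landmark and texture branches be trained end to end.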
Learning a face space for experiments on human identity
Generative models of human identity and appearance have broad applicability
to behavioral science and technology, but the exquisite sensitivity of human
face perception means that their utility hinges on the alignment of the model's
representation to human psychological representations and the photorealism of
the generated images. Meeting these requirements is an exacting task, and
existing models of human identity and appearance are often unworkably abstract,
artificial, uncanny, or biased. Here, we use a variational autoencoder with an
autoregressive decoder to learn a face space from a uniquely diverse dataset of
portraits that control much of the variation irrelevant to human identity and
appearance. Our method generates photorealistic portraits of fictive identities
with a smooth, navigable latent space. We validate our model's alignment with
human sensitivities by introducing a psychophysical Turing test for images,
which humans mostly fail. Lastly, we demonstrate an initial application of our
model to the problem of fast search in mental space to obtain detailed "police
sketches" in a small number of trials.Comment: 10 figures. Accepted as a paper to the 40th Annual Meeting of the
Cognitive Science Society (CogSci 2018). *JWS and JCP contributed equally to
this submissio
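One thing a smooth, navigable latent space enables is morphing between identities by interpolating latent codes. The sketch below uses spherical interpolation, a common choice for Gaussian latents; `encode` and `decode` are hypothetical stand-ins for the paper's VAE encoder and autoregressive decoder, not the authors' API.

```python
import torch

def slerp(z0, z1, t):
    # Spherical interpolation between latent vectors; unlike linear mixing,
    # it stays near the shell where a Gaussian prior concentrates its mass.
    z0n, z1n = z0 / z0.norm(), z1 / z1.norm()
    omega = torch.acos(torch.clamp(torch.dot(z0n, z1n), -1.0, 1.0))
    return (torch.sin((1 - t) * omega) * z0 +
            torch.sin(t * omega) * z1) / torch.sin(omega)

# Hypothetical usage (encode/decode names assumed):
# z_a, z_b = encode(img_a), encode(img_b)
# morph = [decode(slerp(z_a, z_b, s)) for s in torch.linspace(0, 1, 8)]
```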