Efficient illumination independent appearance-based face tracking
One of the major challenges that visual tracking algorithms face nowadays is being
able to cope with changes in the appearance of the target during tracking. Linear
subspace models have been extensively studied and are possibly the most popular
way of modelling target appearance. We introduce a linear subspace representation
in which the appearance of a face is represented by the addition of two approximately independent linear subspaces modelling facial expressions and illumination respectively. This model is more compact than previous bilinear or multilinear approaches, and the independence assumption notably simplifies system training: we only require two image sequences, one in which a single facial expression is seen under all possible illuminations, and another in which the face adopts all facial expressions under one particular illumination. This simple model enables us to train the system with
no manual intervention. We also revisit the problem of efficiently fitting a linear
subspace-based model to a target image and introduce an additive procedure for
solving this problem. We prove that Matthews and Baker’s Inverse Compositional
Approach makes a smoothness assumption on the subspace basis that is equivalent to Hager and Belhumeur's, which worsens convergence. Our approach differs from Hager and Belhumeur's additive and Matthews and Baker's compositional approaches in that we make no smoothness assumptions on the subspace basis. In the
experiments conducted we show that the model introduced accurately represents
the appearance variations caused by illumination changes and facial expressions.
We also verify experimentally that our fitting procedure is more accurate and has a better convergence rate than the other related approaches, albeit at the expense of a slight increase in computational cost. Our approach can be used to track a human face at standard video frame rates on an average personal computer.
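A minimal sketch of the two-subspace appearance model and an additive least-squares fit, in Python/NumPy. The names (`mu`, `B_expr`, `B_illum`), the basis sizes, and the fixed-warp simplification are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Appearance model: image ~ mu + B_expr @ c_expr + B_illum @ c_illum,
# where the two bases are learned independently from the two training
# sequences (hypothetical sizes: d pixels, k_e and k_i basis vectors).
d, k_e, k_i = 4096, 8, 5
rng = np.random.default_rng(0)
mu = rng.normal(size=d)                               # mean appearance (stand-in data)
B_expr = np.linalg.qr(rng.normal(size=(d, k_e)))[0]   # expression basis
B_illum = np.linalg.qr(rng.normal(size=(d, k_i)))[0]  # illumination basis

def synthesize(c_expr, c_illum):
    """Reconstruct an appearance vector from the two coefficient sets."""
    return mu + B_expr @ c_expr + B_illum @ c_illum

def fit_additive(image, n_iters=10):
    """Additive least-squares fit of both coefficient vectors to an image.

    With a fixed (identity) warp this reduces to ordinary least squares on
    the stacked basis; in the full tracker the motion parameters would be
    updated additively in the same loop.
    """
    B = np.hstack([B_expr, B_illum])    # stacked basis, d x (k_e + k_i)
    c = np.zeros(k_e + k_i)
    for _ in range(n_iters):
        residual = image - (mu + B @ c)  # current reconstruction error
        c += np.linalg.lstsq(B, residual, rcond=None)[0]  # additive update
    return c[:k_e], c[k_e:]

target = synthesize(rng.normal(size=k_e), rng.normal(size=k_i))
c_e, c_i = fit_additive(target)
print(np.linalg.norm(target - synthesize(c_e, c_i)))  # ~0 for this toy case
```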
Improving Facial Analysis and Performance Driven Animation through Disentangling Identity and Expression
We present techniques for improving performance-driven facial animation,
emotion recognition, and facial key-point or landmark prediction using learned
identity invariant representations. Established approaches to these problems
can work well if sufficient examples and labels for a particular identity are
available and factors of variation are highly controlled. However, labeled
examples of facial expressions, emotions and key-points for new individuals are
difficult and costly to obtain. In this paper we improve the ability of these techniques to generalize to new and unseen individuals by explicitly modeling
previously seen variations related to identity and expression. We use a
weakly-supervised approach in which identity labels are used to learn the
different factors of variation linked to identity separately from factors
related to expression. We show how probabilistic modeling of these sources of
variation allows one to learn identity-invariant representations for
expressions which can then be used to identity-normalize various procedures for
facial expression analysis and animation control. We also show how to extend the widely used techniques of active appearance models and constrained local models by replacing the underlying point distribution models, which are typically constructed using principal component analysis, with identity-expression factorized representations. We present a wide variety of
experiments in which we consistently improve performance on emotion
recognition, markerless performance-driven facial animation and facial
key-point tracking.
Comment: to appear in Image and Vision Computing Journal (IMAVIS).
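A minimal sketch of swapping a PCA point distribution model for an identity-expression factorized one, as the abstract describes; the basis names, sizes, and the additive identity-plus-expression form are assumptions for illustration, not the paper's exact model:

```python
import numpy as np

# Point distribution model over n 2-D landmarks (flattened to length 2n).
# Classic PDM: shape ~ mean + P @ b, with P from PCA over all shapes.
# Factorized PDM: shape ~ mean + Q_id @ a + Q_expr @ c, so the expression
# coefficients c are (approximately) identity-invariant.
n, k_id, k_expr = 68, 10, 6
rng = np.random.default_rng(1)
mean_shape = rng.normal(size=2 * n)
Q_id = np.linalg.qr(rng.normal(size=(2 * n, k_id)))[0]      # identity basis
Q_expr = np.linalg.qr(rng.normal(size=(2 * n, k_expr)))[0]  # expression basis

def decompose(shape):
    """Split a shape into identity and expression coefficients (least squares)."""
    Q = np.hstack([Q_id, Q_expr])
    coeffs = np.linalg.lstsq(Q, shape - mean_shape, rcond=None)[0]
    return coeffs[:k_id], coeffs[k_id:]

def identity_normalize(shape):
    """Re-synthesize the shape with identity coefficients zeroed out."""
    _, c_expr = decompose(shape)
    return mean_shape + Q_expr @ c_expr

shape = mean_shape + Q_id @ rng.normal(size=k_id) + Q_expr @ rng.normal(size=k_expr)
neutral_id = identity_normalize(shape)  # expression kept, identity removed
```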
Error Correction for Dense Semantic Image Labeling
Pixelwise semantic image labeling is an important, yet challenging, task with
many applications. Typical approaches to tackle this problem involve either the
training of deep networks on vast numbers of images to directly infer the
labels or the use of probabilistic graphical models to jointly model the
dependencies of the input (i.e. images) and output (i.e. labels). Yet, the
former approaches do not capture the structure of the output labels, which is
crucial for the performance of dense labeling, and the latter rely on carefully
hand-designed priors that require costly parameter tuning via optimization
techniques, which in turn leads to long inference times. To alleviate these
restrictions, we explore how to arrive at dense semantic pixel labels given
both the input image and an initial estimate of the output labels. We propose a
parallel architecture that: 1) exploits context information through a
LabelPropagation network to propagate correct labels from nearby pixels to
improve the object boundaries, 2) uses a LabelReplacement network to directly
replace possibly erroneous, initial labels with new ones, and 3) combines the
different intermediate results via a Fusion network to obtain the final
per-pixel label. We experimentally validate our approach on two different
datasets for the semantic segmentation and face parsing tasks respectively,
where we show improvements over the state-of-the-art. We also provide both a
quantitative and qualitative analysis of the generated results.
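A toy PyTorch sketch of the parallel correction idea: two branches refine an initial label map and a fusion head combines them. The class and layer choices here are illustrative stand-ins under those names, not the paper's architecture:

```python
import torch
import torch.nn as nn

class ErrorCorrection(nn.Module):
    """Two parallel branches over (image + initial labels), fused per pixel.
    Channel counts and depths are illustrative assumptions."""

    def __init__(self, n_classes, n_feats=32):
        super().__init__()
        in_ch = 3 + n_classes  # RGB image concatenated with initial label scores
        def branch():
            return nn.Sequential(
                nn.Conv2d(in_ch, n_feats, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(n_feats, n_classes, 3, padding=1))
        self.label_propagation = branch()  # pushes correct labels across boundaries
        self.label_replacement = branch()  # rewrites likely-erroneous labels
        self.fusion = nn.Conv2d(2 * n_classes, n_classes, 1)  # per-pixel combination

    def forward(self, image, init_scores):
        x = torch.cat([image, init_scores], dim=1)
        prop = self.label_propagation(x)
        repl = self.label_replacement(x)
        return self.fusion(torch.cat([prop, repl], dim=1))  # final per-pixel scores

model = ErrorCorrection(n_classes=21)
img = torch.randn(1, 3, 64, 64)
init = torch.randn(1, 21, 64, 64)  # e.g. scores from a base segmentation network
refined = model(img, init)         # shape: (1, 21, 64, 64)
```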
One-to-many face recognition with bilinear CNNs
The recent explosive growth in convolutional neural network (CNN) research
has produced a variety of new architectures for deep learning. One intriguing
new architecture is the bilinear CNN (B-CNN), which has shown dramatic
performance gains on certain fine-grained recognition problems [15]. We apply
this new CNN to the challenging new face recognition benchmark, the IARPA Janus
Benchmark A (IJB-A) [12]. It features faces from a large number of identities
in challenging real-world conditions. Because the face images were not identified automatically by a computerized face detection system, the benchmark does not have the bias inherent in such databases. We demonstrate the performance
of the B-CNN model beginning from an AlexNet-style network pre-trained on
ImageNet. We then show results for fine-tuning using a moderate-sized and
public external database, FaceScrub [17]. We also present results with
additional fine-tuning on the limited training data provided by the protocol.
In each case, the fine-tuned bilinear model shows substantial improvements over
the standard CNN. Finally, we demonstrate how a standard CNN pre-trained on a
large face database, the recently released VGG-Face model [20], can be
converted into a B-CNN without any additional feature training. This B-CNN
improves upon the CNN performance on the IJB-A benchmark, achieving 89.5%
rank-1 recall.
Comment: Published version at WACV 2016.
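A minimal sketch of the bilinear pooling at the heart of a B-CNN, with the standard signed-square-root and L2 normalization steps; the tiny backbone is a stand-in, not the AlexNet-style or VGG-Face networks used in the paper:

```python
import torch
import torch.nn as nn

def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling of two conv feature maps of shape (B, C, H, W):
    outer product of the two descriptors at each location, averaged over
    locations, then signed sqrt and L2 normalization."""
    b, c_a, h, w = feat_a.shape
    c_b = feat_b.shape[1]
    fa = feat_a.reshape(b, c_a, h * w)
    fb = feat_b.reshape(b, c_b, h * w)
    x = torch.bmm(fa, fb.transpose(1, 2)) / (h * w)        # (B, C_a, C_b)
    x = x.reshape(b, c_a * c_b)
    x = torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10)   # signed sqrt
    return nn.functional.normalize(x, dim=1)               # L2 normalize

# In a symmetric B-CNN both streams share one backbone, which is why a
# pre-trained CNN can be converted by pooling its features with themselves.
backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())  # stand-in
img = torch.randn(2, 3, 56, 56)
feats = backbone(img)
descriptor = bilinear_pool(feats, feats)  # (2, 64 * 64) face descriptor
```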
Social Scene Understanding: End-to-End Multi-Person Action Localization and Collective Activity Recognition
We present a unified framework for understanding human social behaviors in
raw image sequences. Our model jointly detects multiple individuals, infers
their social actions, and estimates the collective actions with a single
feed-forward pass through a neural network. We propose a single architecture
that does not rely on external detection algorithms but rather is trained
end-to-end to generate dense proposal maps that are refined via a novel
inference scheme. Temporal consistency is handled via a person-level
matching Recurrent Neural Network. The complete model takes as input a sequence
of frames and outputs detections along with the estimates of individual actions
and collective activities. We demonstrate state-of-the-art performance of our
algorithm on multiple publicly available benchmarks.
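A toy PyTorch sketch in the spirit of the abstract's single feed-forward pass: one backbone producing a dense person-proposal map, per-location action logits, and a pooled collective-activity head. All module names and sizes are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class SocialSceneNet(nn.Module):
    """Joint detection, individual-action, and collective-activity heads
    over a shared backbone; all sizes are illustrative assumptions."""

    def __init__(self, n_actions=5, n_activities=4, n_feats=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, n_feats, 3, padding=1), nn.ReLU(inplace=True))
        self.detect = nn.Conv2d(n_feats, 1, 1)            # dense person-proposal map
        self.actions = nn.Conv2d(n_feats, n_actions, 1)   # per-location action logits
        self.activity = nn.Linear(n_feats, n_activities)  # scene-level head
        # Person-level temporal matching across frames could be added with a
        # recurrent module (e.g. nn.GRU) over matched per-person features.

    def forward(self, frame):
        f = self.backbone(frame)
        proposals = torch.sigmoid(self.detect(f))  # (B, 1, H, W)
        actions = self.actions(f)                  # (B, A, H, W)
        pooled = f.mean(dim=(2, 3))                # global scene feature
        collective = self.activity(pooled)         # (B, K)
        return proposals, actions, collective

net = SocialSceneNet()
out = net(torch.randn(1, 3, 64, 64))  # one feed-forward pass per frame
```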