10,663 research outputs found
Recognising facial expressions in video sequences
We introduce a system that processes a sequence of images of a front-facing human face and recognises a set of facial expressions. We use an efficient appearance-based face tracker to locate the face in the image sequence and estimate the deformation of its non-rigid components. The tracker works in real time. It is robust to strong illumination changes and factors out changes in appearance caused by illumination from changes due to face deformation. We adopt a model-based approach to facial expression recognition. In our model, an image of a face is represented by a point in a deformation space. The variability of the classes of images associated with facial expressions is represented by a set of samples that model a low-dimensional manifold in the space of deformations. We introduce a probabilistic procedure based on a nearest-neighbour approach that combines the information provided by the incoming image sequence with the prior information stored in the expression manifold to compute a posterior probability for each facial expression. In the experiments conducted, we show that this system is able to work in an unconstrained environment with strong changes in illumination and face location. It achieves an 89% recognition rate on a set of 333 sequences from the Cohn-Kanade database.
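As an illustrative sketch only (not the paper's implementation), the nearest-neighbour posterior over expressions might look like the following, assuming a Gaussian kernel on distances to the manifold samples in deformation space; all names and the single-frame simplification are assumptions:

```python
import numpy as np

def expression_posterior(x, samples, labels, sigma=0.5):
    """Hypothetical sketch: posterior over expression classes from a
    nearest-neighbour Gaussian kernel on the deformation-space samples.
    x: deformation vector for the current frame.
    samples/labels: stored manifold points and their expression labels."""
    classes = sorted(set(labels))
    scores = []
    for c in classes:
        pts = np.array([s for s, l in zip(samples, labels) if l == c])
        d2 = np.sum((pts - x) ** 2, axis=1)            # squared distances to class samples
        scores.append(np.exp(-d2.min() / (2 * sigma ** 2)))  # nearest-neighbour likelihood
    p = np.array(scores)
    return dict(zip(classes, p / p.sum()))             # normalise to a posterior
```

A full system would accumulate these per-frame likelihoods over the incoming sequence; this sketch shows only the per-frame step.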
Efficient illumination independent appearance-based face tracking
One of the major challenges that visual tracking algorithms face nowadays is coping with changes in the appearance of the target during tracking. Linear subspace models have been extensively studied and are possibly the most popular way of modelling target appearance. We introduce a linear subspace representation in which the appearance of a face is represented by the addition of two approximately independent linear subspaces modelling facial expressions and illumination respectively. This model is more compact than previous bilinear or multilinear approaches. The independence assumption notably simplifies system training: we only require two image sequences, one in which a single facial expression is subjected to all possible illuminations, and one in which the face adopts all facial expressions under one particular illumination. This simple model enables us to train the system with no manual intervention. We also revisit the problem of efficiently fitting a linear subspace-based model to a target image and introduce an additive procedure for solving this problem. We prove that Matthews and Baker's Inverse Compositional Approach makes a smoothness assumption on the subspace basis that is equivalent to Hager and Belhumeur's, which worsens convergence. Our approach differs from Hager and Belhumeur's additive and Matthews and Baker's compositional approaches in that we make no smoothness assumptions on the subspace basis. In the experiments conducted, we show that the model introduced accurately represents the appearance variations caused by illumination changes and facial expressions. We also verify experimentally that our fitting procedure is more accurate and has a better convergence rate than the other related approaches, albeit at the expense of a slight increase in computational cost. Our approach can be used for tracking a human face at standard video frame rates on an average personal computer.
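The additive two-subspace idea can be sketched numerically as follows. This is an illustration under assumptions, not the paper's iterative fitting procedure: the appearance is the sum of an expression subspace and an illumination subspace, and here the coefficients are recovered with a single joint least-squares solve:

```python
import numpy as np

def reconstruct(mean, A, B, c_expr, c_illum):
    # appearance = mean + expression-subspace term + illumination-subspace term
    return mean + A @ c_expr + B @ c_illum

def fit_coefficients(image, mean, A, B):
    """Illustrative stand-in for the additive fitting step: solve for
    both subspaces' coefficients jointly by least squares."""
    M = np.hstack([A, B])                              # stacked basis matrix
    c, *_ = np.linalg.lstsq(M, image - mean, rcond=None)
    return c[:A.shape[1]], c[A.shape[1]:]
```

Because the two subspaces are approximately independent, a joint linear solve like this separates expression and illumination coefficients cleanly; the actual tracker additionally estimates motion parameters per frame.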
Age Progression and Regression with Spatial Attention Modules
Age progression and regression refers to aesthetically rendering a given face image to present the effects of face aging and rejuvenation, respectively. Although numerous studies have been conducted on this topic, two major problems remain: 1) multiple models are usually trained to simulate different age mappings, and 2) the photo-realism of generated face images is heavily influenced by the variation of training images in terms of pose, illumination, and background. To address these issues, in this paper we propose a framework based on conditional Generative Adversarial Networks (cGANs) to achieve age progression and regression simultaneously. In particular, since face aging and rejuvenation differ largely in terms of image translation patterns, we model these two processes using two separate generators, each dedicated to one age-changing process. In addition, we exploit spatial attention mechanisms to limit image modifications to regions closely related to age changes, so that images with high visual fidelity can be synthesized for in-the-wild cases. Experiments on multiple datasets demonstrate the ability of our model to synthesize lifelike face images at desired ages with personalized features well preserved, while keeping age-irrelevant regions unchanged.
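The spatial-attention mechanism described above amounts, at composition time, to a per-pixel blend between the generator's output and the input image. A minimal sketch, with all names assumed for illustration:

```python
import numpy as np

def attention_blend(x, generated, attention):
    """Illustrative spatial-attention compositing: the generated image
    replaces the input only where the attention mask is high, so
    age-irrelevant regions of x pass through unchanged."""
    return attention * generated + (1.0 - attention) * x
```

In the full model, a separate generator would be trained for each direction (aging and rejuvenation), and the attention mask would itself be predicted by the network rather than supplied.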
Cephalometric studies of the mandible, its masticatory muscles and vasculature of growing Göttingen Minipigs — A comparative anatomical study to refine experimental mandibular surgery
Over many decades, the Göttingen Minipig has been used as a large animal model in experimental surgical research of the mandible. Recently, several authors have raised concerns over the use of the Göttingen Minipig in this research area, observing problems with post-operative wound healing and loosening implants. To reduce these complications during and after surgery and to improve animal welfare in mandibular surgery research, the present study elucidated how comparable the mandible of minipigs is to that of humans and whether these complications could be caused by specific anatomical characteristics of the minipigs' mandible, its masticatory muscles and associated vasculature. Twenty-two mandibular cephalometric parameters were measured on CT scans of Göttingen Minipigs aged between 12 and 21 months. Ultimately, we compared these data with human data reported in the scientific literature. In addition, image segmentation was used to determine the masticatory muscle morphology and the configuration of the mandibular blood vessels. Compared to human data, significant differences in the mandibular anatomy of minipigs were found: of the 22 parameters measured, only four were highly comparable. The 3D examinations of the minipigs' vasculature showed a very prominent deep facial vein directly medial to the mandibular ramus, potentially interfering with the sectional plane of mandibular distraction osteogenesis; damage to this vessel could result in inaccessible bleeding. The findings of this study suggest that Göttingen Minipigs are not ideal animal models for experimental mandibular surgery research. Nevertheless, if these minipigs are used, the authors recommend that radiographic techniques, such as computed tomography, be used in the specific planning procedures for mandibular surgical experiments.
In addition, it is advisable to choose suitable age groups and customize implants based on the mandibular dimensions reported in this study.
SCULPTOR: Skeleton-Consistent Face Creation Using a Learned Parametric Generator
Recent years have seen growing interest in 3D human face modelling due to its wide applications in digital humans, character generation and animation. Existing approaches overwhelmingly emphasize modelling the exterior shapes, textures and skin properties of faces, ignoring the inherent correlation between inner skeletal structures and appearance. In this paper, we present SCULPTOR, 3D face creation with Skeleton Consistency Using a Learned Parametric facial generaTOR, aiming to facilitate easy creation of both anatomically correct and visually convincing face models via a hybrid parametric-physical representation. At the core of SCULPTOR is LUCY, the first large-scale shape-skeleton face dataset, built in collaboration with plastic surgeons. Named after the fossil of one of the oldest known human ancestors, our LUCY dataset contains high-quality Computed Tomography (CT) scans of the complete human head before and after orthognathic surgeries, which are critical for evaluating surgery results. LUCY consists of 144 scans of 72 subjects (31 male and 41 female), where each subject has two CT scans taken pre- and post-orthognathic operation. Based on our LUCY dataset, we learn a novel skeleton-consistent parametric facial generator, SCULPTOR, which can create the unique and nuanced facial features that help define a character while maintaining physiological soundness. SCULPTOR jointly models the skull, face geometry and face appearance under a unified data-driven framework, separating the depiction of a 3D face into a shape blend shape, a pose blend shape and a facial expression blend shape. SCULPTOR preserves both anatomical correctness and visual realism in facial generation tasks compared with existing methods. Finally, we showcase the robustness and effectiveness of SCULPTOR in various novel applications.
Comment: 16 pages, 13 figures