In-the-wild Facial Expression Recognition in Extreme Poses
Facial expression recognition is an active problem in computer vision research.
In recent years, the focus has moved from controlled lab environments to
in-the-wild settings, which are challenging, especially under extreme head
poses. Current expression recognition systems typically try to avoid pose
effects in order to remain generally applicable. In this work, we take the
opposite approach: we explicitly consider head poses and detect expressions
within specific head poses. Our method consists of two parts: detecting the
head pose and grouping it into one of several pre-defined pose classes, and
then performing facial expression recognition within each pose class. Our
experiments show that recognition with pose-class grouping performs much
better than direct recognition that does not consider pose. We combine
hand-crafted features (SIFT, LBP, and geometric features) with deep learning
features to represent the expressions; the hand-crafted features are fed into
the deep learning framework along with the high-level deep features. For
comparison, we implement SVM and random forest as the prediction models. To
train and test our methodology, we labeled a face dataset with the 6 basic
expressions. Comment: Published at ICGIP201
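As an illustration of the two-stage pipeline described above, the following is a minimal sketch, assuming hypothetical inputs (pre-computed yaw angles and deep feature vectors) and using an LBP histogram as a stand-in for the paper's full SIFT/LBP/geometric feature set; the function names and pose-class boundaries are assumptions, not taken from the paper:

```python
# Hedged sketch (not the authors' code): group faces by estimated head
# pose, then train one expression classifier per pose class on a
# concatenation of hand-crafted (here: LBP histogram) and deep features.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

YAW_EDGES = [-30.0, 30.0]  # assumed boundaries: left / frontal / right

def pose_class(yaw_deg: float) -> int:
    """Map a yaw angle (degrees) to one of 3 pre-defined pose classes."""
    return int(np.digitize(yaw_deg, YAW_EDGES))

def lbp_histogram(gray: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Stand-in hand-crafted feature: a normalized uniform-LBP histogram."""
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2))
    return hist / max(hist.sum(), 1)

def train_per_pose(grays, yaws, deep_feats, labels, use_svm=True):
    """Train one expression classifier per pose class on combined features."""
    X = np.stack([np.concatenate([lbp_histogram(g), d])
                  for g, d in zip(grays, deep_feats)])
    pose = np.array([pose_class(y) for y in yaws])
    y = np.asarray(labels)
    models = {}
    for c in np.unique(pose):
        clf = SVC() if use_svm else RandomForestClassifier()
        clf.fit(X[pose == c], y[pose == c])
        models[c] = clf
    return models
```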
CNN-based Real-time Dense Face Reconstruction with Inverse-rendered Photo-realistic Face Images
With the power of convolutional neural networks (CNNs), CNN-based face
reconstruction has recently shown promising performance in reconstructing
detailed face shape from 2D face images. The success of CNN-based methods
relies on a large amount of labeled data. The state of the art synthesizes
such data using a coarse morphable face model, which, however, has difficulty
generating detailed photo-realistic face images (with wrinkles). This paper
presents a novel face data generation method. Specifically, we render a large
number of photo-realistic face images with different attributes based on
inverse rendering. Furthermore, we construct a fine-detailed face image dataset
by transferring different scales of detail from one image to another. We also
construct a large number of video-style adjacent frame pairs by simulating the
distribution of real video data. With these carefully constructed datasets, we
propose a coarse-to-fine learning framework consisting of three convolutional
networks. The networks are trained for real-time detailed 3D face
reconstruction from monocular video as well as from a single image. Extensive
experimental results demonstrate that our framework produces high-quality
reconstructions with much less computation time than the state of the art.
Moreover, our method is robust to pose, expression, and lighting thanks to the
diversity of the data. Comment: Accepted by IEEE Transactions on Pattern
Analysis and Machine Intelligence, 201
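As a purely illustrative sketch of what a coarse-to-fine cascade of three convolutional networks can look like (the layer sizes, parameter dimensionality, and residual-refinement wiring below are assumptions, not the paper's architecture):

```python
# Hedged sketch of a coarse-to-fine cascade of three CNNs; each stage
# refines the previous shape-parameter estimate with a residual.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                         nn.ReLU(inplace=True))

class Stage(nn.Module):
    """One stage: image -> a vector of (residual) shape parameters."""
    def __init__(self, in_ch, out_dim):
        super().__init__()
        self.net = nn.Sequential(conv_block(in_ch, 32), conv_block(32, 64),
                                 conv_block(64, 128), nn.AdaptiveAvgPool2d(1),
                                 nn.Flatten(), nn.Linear(128, out_dim))
    def forward(self, x):
        return self.net(x)

class CoarseToFine(nn.Module):
    """Coarse -> medium -> fine refinement of face shape parameters."""
    def __init__(self, dim=100):  # assumed parameter dimensionality
        super().__init__()
        self.coarse = Stage(3, dim)
        self.medium = Stage(3, dim)
        self.fine = Stage(3, dim)
    def forward(self, img):
        p = self.coarse(img)
        p = p + self.medium(img)  # residual refinement of the estimate
        p = p + self.fine(img)    # fine-scale residual detail
        return p
```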
Fully Automatic Expression-Invariant Face Correspondence
We consider the problem of computing accurate point-to-point correspondences
among a set of human face scans with varying expressions. Our fully automatic
approach does not require any manually placed markers on the scan. Instead, the
approach learns the locations of a set of landmarks present in a database and
uses this knowledge to automatically predict the locations of these landmarks
on a newly available scan. The predicted landmarks are then used to compute
point-to-point correspondences between a template model and the newly available
scan. To accurately fit the expression of the template to that of the scan, we
use a blendshape model as the template. Our algorithm was tested on a
database of human faces of different ethnic groups with strongly varying
expressions. Experimental results show that the obtained point-to-point
correspondence is both highly accurate and consistent for most of the tested 3D
face models.
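As one hedged illustration of the landmark-driven fitting step, assuming the landmark positions on a new scan are already available from the learned predictor: a rigid Kabsch alignment of the template's landmarks to the scan's landmarks, which would precede any blendshape or non-rigid refinement (all names and the demo data below are hypothetical):

```python
# Hedged sketch: rigidly align template landmarks to predicted scan
# landmarks with the Kabsch algorithm, as a first step before fitting.
import numpy as np

def rigid_align(template_pts: np.ndarray, scan_pts: np.ndarray):
    """Return R, t minimizing ||R @ p + t - q|| over landmark pairs."""
    ct, cs = template_pts.mean(0), scan_pts.mean(0)
    H = (template_pts - ct).T @ (scan_pts - cs)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T  # proper rotation (no reflection)
    return R, cs - R @ ct

# Usage with synthetic landmarks: rotate/translate, then recover pose.
tmpl = np.random.rand(10, 3)
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
scan = tmpl @ Rz.T + np.array([0.1, -0.2, 0.3])
R, t = rigid_align(tmpl, scan)
assert np.allclose(tmpl @ R.T + t, scan, atol=1e-8)
```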
Visibility Constrained Generative Model for Depth-based 3D Facial Pose Tracking
In this paper, we propose a generative framework that unifies depth-based 3D
facial pose tracking and face model adaptation on the fly in unconstrained
scenarios with heavy occlusions and arbitrary facial expression variations.
Specifically, we introduce a statistical 3D morphable model that flexibly
describes the distribution of points on the surface of the face model, with an
efficient switchable online adaptation that gradually captures the identity of
the tracked subject and rapidly constructs a suitable face model when the
subject changes. Moreover, unlike prior art that employed ICP-based facial pose
estimation, we propose a ray visibility constraint that regularizes the pose
based on the face model's visibility with respect to the input point cloud,
which improves robustness to occlusions. Ablation studies and experimental
results on the Biwi and ICT-3DHP datasets demonstrate that the proposed
framework is effective and outperforms competing state-of-the-art depth-based
methods.
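A minimal sketch of one plausible form of a ray visibility test, assuming model points expressed in camera coordinates and a pinhole depth camera; this is an assumed formulation for illustration, not the paper's exact constraint:

```python
# Hedged sketch (assumed formulation): a model point contributes to the
# pose objective only if it is not behind the observed depth along its
# camera ray, i.e. not occluded by the measured surface.
import numpy as np

def visibility_weights(model_pts, depth, fx, fy, cx, cy, slack=0.01):
    """model_pts: (N,3) points in camera coords; depth: (H,W) map in meters."""
    z = model_pts[:, 2]
    u = np.round(fx * model_pts[:, 0] / z + cx).astype(int)
    v = np.round(fy * model_pts[:, 1] / z + cy).astype(int)
    h, w = depth.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (z > 0)
    weights = np.zeros(len(model_pts))
    d = depth[v[inside], u[inside]]
    # Visible if the point lies at or in front of the observed surface
    # (within a small slack); occluded points get zero weight.
    weights[inside] = (z[inside] <= d + slack).astype(float)
    return weights
```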
Effective Face Frontalization in Unconstrained Images
"Frontalization" is the process of synthesizing frontal facing views of faces
appearing in single unconstrained photos. Recent reports have suggested that
this process may substantially boost the performance of face recognition
systems. This, by transforming the challenging problem of recognizing faces
viewed from unconstrained viewpoints to the easier problem of recognizing faces
in constrained, forward facing poses. Previous frontalization methods did this
by attempting to approximate 3D facial shapes for each query image. We observe
that 3D face shape estimation from unconstrained photos may be a harder problem
than frontalization and can potentially introduce facial misalignments.
Instead, we explore the simpler approach of using a single, unmodified, 3D
surface as an approximation to the shape of all input faces. We show that this
leads to a straightforward, efficient and easy to implement method for
frontalization. More importantly, it produces aesthetic new frontal views and
is surprisingly effective when used for face recognition and gender estimation
- …
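A minimal sketch of the single-reference idea, assuming detected 2D landmarks, a fixed generic 3D face with matching 3D landmark positions, and known camera intrinsics; the pose estimation uses OpenCV's solvePnP, and the frontal camera placement is an assumed constant, so this is an illustration rather than the authors' implementation:

```python
# Hedged sketch: estimate the camera pose of one fixed reference 3D face
# from 2D landmarks, project its dense surface points with both the
# estimated and a frontal camera, and sample the photo accordingly.
import cv2
import numpy as np

def frontalize(img, lm2d, lm3d, surf3d, K):
    """img: input photo; lm2d: (L,2) detected landmarks; lm3d: (L,3)
    matching points on the fixed reference model; surf3d: (N,3) dense
    reference surface points; K: 3x3 camera intrinsics."""
    ok, rvec, tvec = cv2.solvePnP(lm3d.astype(np.float64),
                                  lm2d.astype(np.float64), K, None)
    # Where each reference surface point appears in the input photo.
    src, _ = cv2.projectPoints(surf3d.astype(np.float64), rvec, tvec, K, None)
    # Where it should appear in a frontal view: identity rotation, model
    # pushed back along z by an assumed constant depth offset.
    front_t = np.array([[0.0], [0.0], [0.5]])
    dst, _ = cv2.projectPoints(surf3d.astype(np.float64),
                               np.zeros((3, 1)), front_t, K, None)
    src = np.round(src.reshape(-1, 2)).astype(int)
    dst = np.round(dst.reshape(-1, 2)).astype(int)
    h, w = img.shape[:2]
    keep = ((src[:, 0] >= 0) & (src[:, 0] < w) & (src[:, 1] >= 0) &
            (src[:, 1] < h) & (dst[:, 0] >= 0) & (dst[:, 0] < w) &
            (dst[:, 1] >= 0) & (dst[:, 1] < h))
    out = np.zeros_like(img)
    out[dst[keep, 1], dst[keep, 0]] = img[src[keep, 1], src[keep, 0]]
    return out
```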