Facial expression recognition based on local region specific features and support vector machines
Facial expressions are one of the most powerful, natural and immediate means
for human beings to communicate their emotions and intentions. Recognition of
facial expressions has many applications, including human-computer interaction,
cognitive science, human emotion analysis and personality development. In this
paper, we propose a new method for recognizing facial expressions from a
single image frame that combines appearance and geometric features with
support vector machine classification. In general, appearance features
for facial expression recognition are computed by dividing the face region
into a regular grid (holistic representation). In this paper, however, we
extract region-specific appearance features by dividing the whole face region
into domain-specific local regions. Geometric features are also extracted from
the corresponding domain-specific regions. In addition, the important local
regions are determined using an incremental search approach, which reduces the
feature dimension and improves recognition accuracy. The results of facial
expression recognition using features from domain-specific regions are also
compared with the results obtained using the holistic representation. The
performance of the proposed facial expression recognition system has been
validated on publicly available extended Cohn-Kanade (CK+) facial expression
data sets.
Comment: Facial expressions, Local representation, Appearance features,
Geometric features, Support vector machine
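As a rough illustration of the incremental search idea above, the sketch below greedily adds one region-specific feature block at a time as long as cross-validated SVM accuracy improves. The region count, the synthetic features and the stopping rule are illustrative assumptions, not the paper's exact procedure:

```python
# Hypothetical sketch: greedy incremental selection of local face regions
# with an SVM, on synthetic per-region features (all data is illustrative).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_regions, dim = 120, 6, 8
y = rng.integers(0, 3, n_samples)               # 3 expression classes
# One feature block per local face region (e.g. brows, eyes, mouth ...).
regions = [rng.normal(size=(n_samples, dim)) for _ in range(n_regions)]
for r in (1, 4):                                # make two regions informative
    regions[r][:, 0] += y

def score(selected):
    """Cross-validated SVM accuracy on the concatenated selected regions."""
    X = np.hstack([regions[r] for r in selected])
    return cross_val_score(LinearSVC(max_iter=5000), X, y, cv=3).mean()

selected, best = [], 0.0
while len(selected) < n_regions:
    cand = [(score(selected + [r]), r)
            for r in range(n_regions) if r not in selected]
    s, r = max(cand)
    if s <= best:                               # stop when no region helps
        break
    selected.append(r)
    best = s

print(sorted(selected), round(best, 2))
```

Greedy selection like this typically keeps only the few informative regions, which is the claimed source of both the dimensionality reduction and the accuracy gain.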
Face Recognition in Low Quality Images: A Survey
Low-resolution face recognition (LRFR) has received increasing attention over
the past few years. Its applications lie widely in the real-world environment
when high-resolution or high-quality images are hard to capture. One of the
biggest demands for LRFR technologies is video surveillance. As the number of
surveillance cameras in cities increases, the captured videos will need to be
processed automatically. However, those videos or images are usually captured
with large standoffs, arbitrary illumination conditions, and diverse angles of
view. Faces in these images are generally small in size. Several studies have
addressed this problem using techniques such as super-resolution, deblurring,
or learning a relationship between different resolution domains. In
this paper, we provide a comprehensive review of approaches to low-resolution
face recognition in the past five years. First, a general problem definition is
given. Then, a systematic analysis of the work on this topic is presented by
category. In addition to describing the methods, we also focus on datasets
and experiment settings. We further address the related works on unconstrained
low-resolution face recognition and compare them with results that use
synthetic low-resolution data. Finally, we summarize the general limitations
and speculate on priorities for future effort.
Comment: There are some mistakes in this paper which may mislead the reader,
and we won't have a new version in the short term. We will resubmit once it is
corrected.
A Survey of the Trends in Facial and Expression Recognition Databases and Methods
Automated facial identification and facial expression recognition have been
topics of active research over the past few decades. Facial and expression
recognition find applications in human-computer interfaces, subject tracking,
real-time security surveillance systems and social networking. Several holistic
and geometric methods have been developed to identify faces and expressions
using public and local facial image databases. In this work we present the
evolution in facial image data sets and the methodologies for facial
identification and recognition of expressions such as anger, sadness,
happiness, disgust, fear and surprise. We observe that most of the earlier
methods for facial and expression recognition aimed at improving the
recognition rates for facial feature-based methods using static images.
However, the recent methodologies have shifted focus towards robust
implementation of facial/expression recognition from large image databases that
vary with space (gathered from the internet) and time (video recordings). The
evolution trends in databases and methodologies for facial and expression
recognition can be useful for assessing the next-generation topics that may
have applications in security systems or personal identification systems that
involve "Quantitative face" assessments.
Comment: 16 pages, 4 figures, 3 tables, International Journal of Computer
Science and Engineering Survey, October, 201
Face Identification with Second-Order Pooling
Automatic face recognition has received significant performance improvement
by developing specialised facial image representations. On the other hand,
generic object recognition methods have rarely been applied to face
recognition.
Spatial pyramid pooling of features encoded by an over-complete dictionary has
been the key component of many state-of-the-art image classification systems.
Inspired by its success, in this work we develop a new face image
representation method based on the second-order pooling of Carreira et al.
[1], which was originally proposed for image segmentation. The proposed method
differs from previous methods in that we encode the densely extracted
local patches by a small-size dictionary; and the facial image signatures are
obtained by pooling the second-order statistics of the encoded features. We
show the importance of pooling on encoded features, which is bypassed by the
original second-order pooling method to avoid the high computational cost.
Equipped with a simple linear classifier, the proposed method outperforms the
state-of-the-art face identification performance by large margins. For example,
on the LFW database, the proposed method performs better than the previous
best by around 13% accuracy.
Comment: 9 pages
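The core signature construction can be sketched as follows: local patches are encoded against a small dictionary, and the second-order statistics of the codes (their average outer product) are pooled into a compact vector. The soft-assignment encoder, the random dictionary and all the sizes here are assumptions for illustration, not the paper's pipeline:

```python
# Illustrative sketch of second-order pooling over encoded local patches.
import numpy as np

rng = np.random.default_rng(1)
n_patches, patch_dim, dict_size = 200, 16, 8

patches = rng.normal(size=(n_patches, patch_dim))  # densely extracted patches
D = rng.normal(size=(dict_size, patch_dim))        # small dictionary (random here)

# Encode each patch: soft-assignment codes against the dictionary atoms.
sims = patches @ D.T                               # (n_patches, dict_size)
codes = np.exp(sims - sims.max(axis=1, keepdims=True))
codes /= codes.sum(axis=1, keepdims=True)

# Second-order pooling: average outer product of the codes over all patches.
G = codes.T @ codes / n_patches                    # symmetric (dict_size, dict_size)
iu = np.triu_indices(dict_size)
signature = G[iu]                                  # compact face image signature

print(signature.shape)  # (dict_size * (dict_size + 1) // 2,) = (36,)
```

Because the pooled matrix is symmetric, only its upper triangle is kept, so a small dictionary still yields a short signature that a linear classifier can consume directly.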
Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-related Applications
Facial expressions are an important way through which humans interact
socially. Building a system capable of automatically recognizing facial
expressions from images and video has been an intense field of study in recent
years. Interpreting such expressions remains challenging and much research is
needed about the way they relate to human affect. This paper presents a general
overview of automatic RGB, 3D, thermal and multimodal facial expression
analysis. We define a new taxonomy for the field, encompassing all steps from
face detection to facial expression recognition, and describe and classify the
state of the art methods accordingly. We also present the important datasets
and the benchmarking of the most influential methods. We conclude with a
general discussion about trends, important questions and future lines of
research.
Face Recognition: A Novel Multi-Level Taxonomy based Survey
In a world where security issues have been gaining growing importance, face
recognition systems have attracted increasing attention in multiple application
areas, ranging from forensics and surveillance to commerce and entertainment.
To help understand the landscape and abstraction levels relevant for face
recognition systems, face recognition taxonomies allow a deeper dissection and
comparison of the existing solutions. This paper proposes a new, more
encompassing and richer multi-level face recognition taxonomy, facilitating the
organization and categorization of available and emerging face recognition
solutions; this taxonomy may also guide researchers in the development of more
efficient face recognition solutions. The proposed multi-level taxonomy
considers levels related to the face structure, feature support and feature
extraction approach. Following the proposed taxonomy, a comprehensive survey of
representative face recognition solutions is presented. The paper concludes
with a discussion on current algorithmic and application related challenges
which may define future research directions for face recognition.
Comment: This paper is a preprint of a paper submitted to IET Biometrics. If
accepted, the copy of record will be available at the IET Digital Library.
Deep Facial Expression Recognition: A Survey
With the transition of facial expression recognition (FER) from
laboratory-controlled to challenging in-the-wild conditions and the recent
success of deep learning techniques in various fields, deep neural networks
have increasingly been leveraged to learn discriminative representations for
automatic FER. Recent deep FER systems generally focus on two important issues:
overfitting caused by a lack of sufficient training data and
expression-unrelated variations, such as illumination, head pose and identity
bias. In this paper, we provide a comprehensive survey on deep FER, including
datasets and algorithms that provide insights into these intrinsic problems.
First, we describe the standard pipeline of a deep FER system with the related
background knowledge and suggestions of applicable implementations for each
stage. We then introduce the available datasets that are widely used in the
literature and provide accepted data selection and evaluation principles for
these datasets. For the state of the art in deep FER, we review existing novel
deep neural networks and related training strategies that are designed for FER
based on both static images and dynamic image sequences, and discuss their
advantages and limitations. Competitive performances on widely used benchmarks
are also summarized in this section. We then extend our survey to additional
related issues and application scenarios. Finally, we review the remaining
challenges and corresponding opportunities in this field as well as future
directions for the design of robust deep FER systems.
Dynamic Pose-Robust Facial Expression Recognition by Multi-View Pairwise Conditional Random Forests
Automatic facial expression classification (FER) from videos is a critical
problem for the development of intelligent human-computer interaction systems.
Still, it is a challenging problem that involves capturing high-dimensional
spatio-temporal patterns describing the variation of one's appearance over
time. Such representations are subject to great variability in facial
morphology and environmental factors, as well as head pose variations. In this
paper, we
use Conditional Random Forests to capture low-level expression transition
patterns. More specifically, heterogeneous derivative features (e.g. feature
point movements or texture variations) are evaluated upon pairs of images. When
testing on a video frame, pairs are created between this current frame and
previous ones and predictions for each previous frame are used to draw trees
from Pairwise Conditional Random Forests (PCRF) whose pairwise outputs are
averaged over time to produce robust estimates. Moreover, PCRF collections can
also be conditioned on head pose estimation for multi-view dynamic FER. As
such, our approach appears as a natural extension of Random Forests for
learning spatio-temporal patterns, potentially from multiple viewpoints.
Experiments on popular datasets show that our method leads to significant
improvements over standard Random Forests as well as state-of-the-art
approaches on several scenarios, including a novel multi-view video corpus
generated from a publicly available database.
Comment: Extension of an ICCV 2015 paper
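A much-simplified sketch of the pairwise averaging step: a forest trained on pair features produces one probability estimate per (current frame, previous frame) pair, and these estimates are averaged over time. The synthetic derivative features and the plain random forest below stand in for the paper's conditional PCRF machinery:

```python
# Simplified sketch of averaging per-pair forest predictions over time
# (synthetic data; not the paper's Pairwise Conditional Random Forests).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_pairs, feat_dim, n_classes = 300, 10, 3

# Training pairs: derivative features between two frames, expression label.
X_train = rng.normal(size=(n_pairs, feat_dim))
y_train = rng.integers(0, n_classes, n_pairs)
X_train[:, 0] += y_train                         # inject some class signal

forest = RandomForestClassifier(n_estimators=50, random_state=0)
forest.fit(X_train, y_train)

# Test time: pair the current frame with several previous frames and
# average the per-pair probability estimates into a robust prediction.
n_prev = 5
pair_feats = rng.normal(size=(n_prev, feat_dim))
pair_feats[:, 0] += 2                            # all pairs hint at class 2
probs = forest.predict_proba(pair_feats)         # one row per (current, prev)
avg = probs.mean(axis=0)
print(avg.argmax())
```

Averaging over many pairs is what gives the temporal robustness: a single noisy previous frame cannot flip the prediction on its own.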
Learning Deep Representation for Face Alignment with Auxiliary Attributes
In this study, we show that the landmark detection, or face alignment, task is
not an isolated, independent problem. Instead, its robustness can be greatly
improved with auxiliary information. Specifically, we jointly optimize landmark
detection together with the recognition of heterogeneous but subtly correlated
facial attributes, such as gender, expression, and appearance attributes. This
is non-trivial since different attribute inference tasks have different
learning difficulties and convergence rates. To address this problem, we
formulate a novel tasks-constrained deep model, which not only learns the
inter-task correlation but also employs dynamic task coefficients to facilitate
the optimization convergence when learning multiple complex tasks. Extensive
evaluations show that the proposed task-constrained learning (i) outperforms
existing face alignment methods, especially in dealing with faces with severe
occlusion and pose variation, and (ii) reduces model complexity drastically
compared to state-of-the-art methods based on cascaded deep models.
Comment: to be published in the IEEE Transactions on Pattern Analysis and
Machine Intelligence (TPAMI)
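One simple way to picture dynamic task coefficients is to weight each auxiliary task by its recent convergence rate, so that a task whose loss is still dropping keeps a larger share of the total loss while a converged task is down-weighted. The rule below is a hypothetical stand-in for illustration, not the formulation used in the paper:

```python
# Hypothetical sketch of dynamic task coefficients in a multi-task loss.
import numpy as np

def dynamic_weights(loss_history, eps=1e-8):
    """One weight per task, from the relative loss drop over the last step."""
    prev, cur = loss_history[-2], loss_history[-1]
    drop = np.maximum(prev - cur, 0.0) / (prev + eps)  # per-task convergence rate
    w = drop + eps
    return w / w.sum()                                 # normalize to sum to 1

# Two auxiliary tasks: task 0 still improving fast, task 1 nearly converged.
history = np.array([[1.00, 0.40],
                    [0.70, 0.39]])
w = dynamic_weights(history)
total_loss = float(w @ history[-1])                    # weighted multi-task loss
print(w, total_loss)
```

Under this toy rule, task 0 (still improving) receives most of the weight, mimicking how dynamic coefficients keep hard, unconverged tasks from being drowned out by easy ones.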
Deep 3D Face Identification
We propose a novel 3D face recognition algorithm using a deep convolutional
neural network (DCNN) and a 3D augmentation technique. The performance of 2D
face recognition algorithms has significantly increased by leveraging the
representational power of deep neural networks and the use of large-scale
labeled training data. As opposed to 2D face recognition, training
discriminative deep features for 3D face recognition is very difficult due to
the lack of large-scale 3D face datasets. In this paper, we show that transfer
learning from a CNN trained on 2D face images can effectively work for 3D face
recognition by fine-tuning the CNN with a relatively small number of 3D facial
scans. We also propose a 3D face augmentation technique which synthesizes a
number of different facial expressions from a single 3D face scan. Our proposed
method shows excellent recognition results on Bosphorus, BU-3DFE, and 3D-TEC
datasets, without using hand-crafted features. The 3D identification using our
deep features also scales well to large databases.
Comment: 9 pages, 5 figures, 2 tables