1,644 research outputs found
Robust Face Recognition with Structural Binary Gradient Patterns
This paper presents a computationally efficient yet powerful binary framework
for robust facial representation based on image gradients, termed
structural binary gradient patterns (SBGP). To discover underlying local
structures in the gradient domain, we compute image gradients from multiple
directions and simplify them into a set of binary strings. The SBGP is derived
from certain types of these binary strings that have meaningful local
structures and are capable of resembling fundamental textural information. They
detect micro orientational edges and possess strong orientation and locality
capabilities, thus enabling great discrimination. The SBGP also benefits from
the advantages of the gradient domain and exhibits profound robustness against
illumination variations. The binary strategy realized by pixel correlations in
a small neighborhood substantially reduces the computational complexity and
achieves extremely efficient processing with only 0.0032s in Matlab for a
typical face image. Furthermore, the discrimination power of the SBGP can be
enhanced on a set of defined orientational image gradient magnitudes, further
enforcing locality and orientation. Results of extensive experiments on various
benchmark databases illustrate significant improvements of the SBGP based
representations over the existing state-of-the-art local descriptors in terms
of discrimination, robustness and complexity. Codes for the SBGP methods
will be available at
http://www.eee.manchester.ac.uk/research/groups/sisp/software/
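As a rough illustration of the binary-gradient idea, the sketch below binarizes central-difference gradients along four orientations into a 4-bit code per pixel and histograms the resulting binary strings. The function name, the four-direction choice, and the zero threshold are assumptions for illustration, not the paper's exact SBGP construction (which additionally selects only the structurally meaningful strings).

```python
import numpy as np

def binary_gradient_pattern(img, threshold=0.0):
    """Binarize directional gradients at each pixel (hedged sketch).

    For each of four orientations (0, 90, 45, 135 degrees), the sign of
    the central-difference gradient forms one bit; the 4-bit code is the
    pattern index, and the normalized histogram of codes is the
    descriptor.
    """
    img = img.astype(np.float64)
    # Pairs of opposite shifts defining the four central differences.
    shifts = [((0, 1), (0, -1)), ((1, 0), (-1, 0)),
              ((1, 1), (-1, -1)), ((1, -1), (-1, 1))]
    codes = np.zeros(img.shape, dtype=np.uint8)
    for bit, (a, b) in enumerate(shifts):
        grad = np.roll(img, a, axis=(0, 1)) - np.roll(img, b, axis=(0, 1))
        codes |= ((grad > threshold).astype(np.uint8) << bit)
    # Histogram over the 16 possible binary strings.
    hist = np.bincount(codes.ravel(), minlength=16).astype(np.float64)
    return hist / hist.sum()

img = np.arange(36).reshape(6, 6)  # toy "face" patch
desc = binary_gradient_pattern(img)
```

Because only sign comparisons in a small neighborhood are involved, the cost per pixel is a handful of subtractions and shifts, which is consistent with the very low runtime the abstract reports.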
A Survey on Periocular Biometrics Research
Periocular refers to the facial region in the vicinity of the eye, including
eyelids, lashes and eyebrows. While face and irises have been extensively
studied, the periocular region has emerged as a promising trait for
unconstrained biometrics, following demands for increased robustness of face or
iris systems. With a surprisingly high discrimination ability, this region can
be easily obtained with existing setups for face and iris, and the requirement
of user cooperation can be relaxed, thus facilitating the interaction with
biometric systems. It is also available over a wide range of distances even
when the iris texture cannot be reliably obtained (low resolution) or under
partial face occlusion (close distances). Here, we review the state of the art
in periocular biometrics research. A number of aspects are described,
including: i) existing databases, ii) algorithms for periocular detection
and/or segmentation, iii) features employed for recognition, iv) identification
of the most discriminative regions of the periocular area, v) comparison with
iris and face modalities, vi) soft-biometrics (gender/ethnicity
classification), and vii) impact of gender transformation and plastic surgery
on the recognition accuracy. This work is expected to provide insight into the
most relevant issues in periocular biometrics, giving comprehensive coverage
of the existing literature and current state of the art.
Comment: Published in Pattern Recognition Letters
Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-related Applications
Facial expressions are an important way through which humans interact
socially. Building a system capable of automatically recognizing facial
expressions from images and video has been an intense field of study in recent
years. Interpreting such expressions remains challenging and much research is
needed about the way they relate to human affect. This paper presents a general
overview of automatic RGB, 3D, thermal and multimodal facial expression
analysis. We define a new taxonomy for the field, encompassing all steps from
face detection to facial expression recognition, and describe and classify the
state of the art methods accordingly. We also present the important datasets
and the benchmarking of the most influential methods. We conclude with a general
discussion about trends, important questions and future lines of research.
Deep Representation of Facial Geometric and Photometric Attributes for Automatic 3D Facial Expression Recognition
In this paper, we present a novel approach to automatic 3D Facial Expression
Recognition (FER) based on deep representation of facial 3D geometric and 2D
photometric attributes. A 3D face is firstly represented by its geometric and
photometric attributes, including the geometry map, normal maps, normalized
curvature map and texture map. These maps are then fed into a pre-trained deep
convolutional neural network to generate the deep representation. Then the
facial expression prediction is simply achieved by training linear SVMs over the
deep representation for different maps and fusing these SVM scores. The
visualizations show that the deep representation provides a complete and highly
discriminative coding scheme for 3D faces. Comprehensive experiments on the
BU-3DFE database demonstrate that the proposed deep representation can
outperform the widely used hand-crafted descriptors (i.e., LBP, SIFT, HOG,
Gabor) and the state-of-the-art approaches under the same experimental protocols.
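The final fusion step described above can be sketched as a weighted sum of per-map SVM decision scores followed by an argmax. The score values and the equal weights below are illustrative assumptions; the preceding stages (attribute maps through a pre-trained CNN, per-map SVM training) are omitted.

```python
import numpy as np

def fuse_svm_scores(score_maps, weights=None):
    """Score-level fusion over per-map SVM decision scores (sketch).

    score_maps: list of (n_samples, n_classes) decision-score arrays,
    one per attribute map (e.g. geometry, normals, curvature, texture).
    Equal weights are an assumption; they could also be tuned.
    """
    if weights is None:
        weights = [1.0] * len(score_maps)
    fused = sum(w * s for w, s in zip(weights, score_maps))
    return fused.argmax(axis=1)  # fused expression class per sample

# Illustrative decision scores for 3 samples over 4 expression classes.
geo = np.array([[0.9, 0.1, 0.0, 0.0],
                [0.2, 0.5, 0.2, 0.1],
                [0.1, 0.1, 0.1, 0.7]])
tex = np.array([[0.6, 0.3, 0.1, 0.0],
                [0.1, 0.1, 0.6, 0.2],
                [0.0, 0.2, 0.1, 0.7]])
pred = fuse_svm_scores([geo, tex])
```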
Face Retrieval using Frequency Decoded Local Descriptor
Local descriptors have been the backbone of many computer vision
problems. Most existing local descriptors are generated over the raw
input images. In order to increase the discriminative power of the local
descriptors, some researchers converted the raw image into multiple images with
the help of some high and low pass frequency filters, then the local
descriptors are computed over each filtered image and finally concatenated into
a single descriptor. However, these approaches do not exploit the
inter-frequency relationship, which limits the gain in discriminative power
that could otherwise be achieved. In this paper, this problem is
solved by utilizing the decoder concept of multi-channel decoded local binary
pattern over the multi-frequency patterns. A frequency decoded local binary
pattern (FDLBP) is proposed with two decoders. Each decoder works with one low
frequency pattern and two high frequency patterns. Finally, the descriptors
from both decoders are concatenated to form a single descriptor. The face
retrieval experiments are conducted over four benchmark and challenging
databases, namely PaSC, LFW, PubFig, and ESSEX. The experimental results
confirm the superiority of the FDLBP descriptor as compared to the
state-of-the-art descriptors such as LBP, SOBEL_LBP, BoF_LBP, SVD_S_LBP, mdLBP,
etc.
Comment: Accepted in Multimedia Tools and Applications, Springer
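The decoder idea can be sketched as follows: at each neighbour position, the bit pair from two frequency channels addresses one of four decoded patterns, so the inter-frequency relationship is kept instead of simply concatenating per-channel histograms. This is a simplification for illustration (two channels and one decoder, plain box filters), not the exact FDLBP construction with its two decoders over one low- and two high-frequency patterns each.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple low-pass filter via a separable box blur (wrap padding)."""
    out = img.astype(np.float64)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for s in range(-(k // 2), k // 2 + 1):
            acc += np.roll(out, s, axis=axis)
        out = acc / k
    return out

def lbp_code(img):
    """Standard 8-neighbour LBP code for interior pixels."""
    c = img[1:-1, 1:-1]
    nbrs = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
            img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros(c.shape, dtype=np.uint8)
    for k, nb in enumerate(nbrs):
        code |= ((nb >= c).astype(np.uint8) << k)
    return code

def decoded_histograms(code_a, code_b):
    """Decode bit pairs from two channels into four joint LBP maps and
    concatenate their histograms (illustrative decoder sketch)."""
    hists = []
    for pat in range(4):
        dec = np.zeros(code_a.shape, dtype=np.uint8)
        for k in range(8):
            bit_a = (code_a >> k) & 1
            bit_b = (code_b >> k) & 1
            match = ((bit_a << 1) | bit_b) == pat
            dec |= (match.astype(np.uint8) << k)
        hists.append(np.bincount(dec.ravel(), minlength=256))
    return np.concatenate(hists)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (16, 16)).astype(np.float64)
low = box_blur(img)   # low-frequency channel
high = img - low      # high-frequency residual
desc = decoded_histograms(lbp_code(low), lbp_code(high))  # 4 x 256 bins
```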
Facial expression recognition based on local region specific features and support vector machines
Facial expressions are one of the most powerful, natural and immediate means
for human beings to communicate their emotions and intentions. Recognition of
facial expression has many applications including human-computer interaction,
cognitive science, human emotion analysis, personality development etc. In this
paper, we propose a new method for the recognition of facial expressions from
single image frame that uses combination of appearance and geometric features
with support vector machines classification. In general, appearance features
for the recognition of facial expressions are computed by dividing the face
region into a regular grid (holistic representation). In this paper, by
contrast, we extract region-specific appearance features by dividing the whole
face region into domain-specific local regions. Geometric features are also
extracted from the corresponding domain-specific regions. In addition,
important local regions are determined using an incremental search approach,
which reduces the feature dimension and improves recognition accuracy. The
results of
facial expression recognition using features from domain-specific regions are
also compared with the results obtained using holistic representation. The
performance of the proposed facial expression recognition system has been
validated on publicly available extended Cohn-Kanade (CK+) facial expression
data sets.
Comment: Facial expressions, Local representation, Appearance features,
Geometric features, Support vector machine
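The incremental search over local regions can be sketched as a greedy forward selection: start empty and repeatedly add the region whose inclusion most improves a validation score, stopping when no addition helps. The evaluation function below is a toy stand-in; in practice it would wrap cross-validated SVM accuracy on the selected regions' features.

```python
def greedy_region_selection(regions, labels, eval_fn, max_regions=None):
    """Greedy forward (incremental) search over candidate face regions.

    Hedged sketch of the paper's incremental search: eval_fn scores a
    subset of regions (e.g. classifier accuracy); regions are added one
    at a time while the score keeps improving.
    """
    remaining = list(regions)
    chosen, best_score = [], float('-inf')
    while remaining and (max_regions is None or len(chosen) < max_regions):
        scored = [(eval_fn(chosen + [r], labels), r) for r in remaining]
        score, region = max(scored, key=lambda t: t[0])
        if score <= best_score:
            break  # no region improves the score any further
        chosen.append(region)
        remaining.remove(region)
        best_score = score
    return chosen, best_score

# Toy evaluation: assume 'eyes' and 'mouth' are the informative regions
# and penalize feature dimension; purely illustrative.
useful = {'eyes', 'mouth'}
toy_eval = lambda regs, _labels: len(set(regs) & useful) - 0.1 * len(regs)
chosen, best = greedy_region_selection(['nose', 'eyes', 'mouth', 'chin'],
                                       None, toy_eval)
```

The stopping rule is what yields the reduced feature dimension the abstract mentions: uninformative regions never enter the selected set.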
Spontaneous Facial Micro-Expression Recognition using 3D Spatiotemporal Convolutional Neural Networks
Facial expression recognition in videos is an active area of research in
computer vision. However, fake facial expressions are difficult to recognize,
even for humans. Facial micro-expressions, on the other hand, generally
represent the actual emotion of a person, as they are spontaneous reactions
expressed through the human face. Despite a few attempts at recognizing
micro-expressions, the problem is still far from solved, as reflected in the
poor accuracy of state-of-the-art methods. A few CNN-based approaches in the
literature recognize facial micro-expressions from still images, whereas a
spontaneous micro-expression video contains multiple frames that must be
processed together to encode both spatial and temporal information. This paper
proposes two 3D-CNN methods: MicroExpSTCNN and MicroExpFuseNet, for spontaneous
facial micro-expression recognition by exploiting the spatiotemporal
information in CNN framework. The MicroExpSTCNN considers the full spatial
information, whereas the MicroExpFuseNet is based on the 3D-CNN feature fusion
of the eyes and mouth regions. The experiments are performed over CAS(ME)^2 and
SMIC micro-expression databases. The proposed MicroExpSTCNN model outperforms
the state-of-the-art methods.
Comment: Accepted in the 2019 International Joint Conference on Neural
Networks (IJCNN)
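The core difference from frame-wise 2D CNNs is that a 3D convolution mixes spatial and temporal neighbourhoods in one operation. The minimal sketch below applies one 'valid' 3D filter (a temporal frame-difference kernel, chosen purely for illustration) to a clip; an actual MicroExpSTCNN-style network would stack many learned filters with pooling and fully connected layers.

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Naive 'valid' 3D cross-correlation over a (T, H, W) clip,
    illustrating how one 3D-CNN filter spans frames as well as pixels."""
    t, h, w = clip.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i+kt, j:j+kh, k:k+kw] * kernel)
    return out

clip = np.random.default_rng(1).standard_normal((8, 16, 16))  # 8 frames
temporal_diff = np.zeros((3, 3, 3))
# Responds to intensity change at the same pixel two frames apart.
temporal_diff[0, 1, 1], temporal_diff[2, 1, 1] = -1.0, 1.0
feat = np.maximum(conv3d_valid(clip, temporal_diff), 0)  # ReLU activation
```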
Vision-based Human Gender Recognition: A Survey
Gender is an important demographic attribute of people. This paper provides a
survey of human gender recognition in computer vision. A review of approaches
exploiting information from face and whole body (either from a still image or
gait sequence) is presented. We highlight the challenges faced and survey the
representative methods of these approaches. Based on the results, good
performance has been achieved for datasets captured under controlled
environments, but there is still much work to be done to improve the
robustness of gender recognition in real-life environments.
Comment: 30 pages
LDOP: Local Directional Order Pattern for Robust Face Retrieval
Local descriptors have gained a wide range of attention due to their
enhanced discriminative abilities. It has been proved that the consideration of
multi-scale local neighborhood improves the performance of the descriptor,
though at the cost of increased dimension. This paper proposes a novel method
to construct a local descriptor using multi-scale neighborhood by finding the
local directional order among the intensity values at different scales in a
particular direction. Local directional order is the multi-radius relationship
factor in a particular direction. The proposed local directional order pattern
(LDOP) for a particular pixel is computed by finding the relationship between
the center pixel and the local directional order indexes, which requires
transforming the center value into the range of the neighboring orders.
Finally, the histogram of LDOP is computed over the whole image to construct
the descriptor. In
contrast to the state-of-the-art descriptors, the dimension of the proposed
descriptor does not depend upon the number of neighbors involved to compute the
order; it only depends upon the number of directions. The introduced descriptor
is evaluated over the image retrieval framework and compared with the
state-of-the-art descriptors over challenging face databases such as PaSC, LFW,
PubFig, FERET, AR, AT&T, and ExtendedYale. The experimental results confirm the
superiority and robustness of the LDOP descriptor.
Comment: Published in Multimedia Tools and Applications, Springer
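The directional-order idea can be sketched like this: along each direction, the intensities at several radii are reduced to a rank-order index, the centre value is mapped into the same order range, and one bit per direction records their relationship, so the histogram dimension (2^#directions = 16 here) depends only on the number of directions, not on the radii compared. The mapping and the comparison rule below are simplifications, not the paper's exact LDOP definition.

```python
import math
import numpy as np

def perm_rank(vals):
    """Lexicographic rank of the argsort permutation of vals."""
    order = list(np.argsort(vals, kind='stable'))
    rank = 0
    for i, v in enumerate(order):
        smaller = sum(1 for u in order[i + 1:] if u < v)
        rank += smaller * math.factorial(len(order) - i - 1)
    return rank

def ldop_sketch(img, radii=(1, 2),
                directions=((0, 1), (1, 0), (0, -1), (-1, 0))):
    """Hedged sketch of a local directional order pattern descriptor."""
    img = img.astype(np.float64)
    h, w = img.shape
    m = max(radii)
    n_perm = math.factorial(len(radii))  # number of possible orders
    hist = np.zeros(2 ** len(directions))
    for y in range(m, h - m):
        for x in range(m, w - m):
            # Map the centre intensity into the order-index range.
            c = img[y, x] / 255.0 * (n_perm - 1)
            code = 0
            for b, (dy, dx) in enumerate(directions):
                vals = np.array([img[y + r * dy, x + r * dx] for r in radii])
                code |= int(perm_rank(vals) >= c) << b
            hist[code] += 1
    return hist / hist.sum()

img = np.random.default_rng(3).integers(0, 256, (10, 10))
desc = ldop_sketch(img)  # 16-bin descriptor, independent of len(radii)
```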
Spatiotemporal Recurrent Convolutional Networks for Recognizing Spontaneous Micro-expressions
Recently, the recognition task of spontaneous facial micro-expressions has
attracted much attention with its various real-world applications. Plenty of
handcrafted or learned features have been employed for a variety of classifiers
and achieved promising performance for recognizing micro-expressions. However,
micro-expression recognition is still challenging due to the subtle
spatiotemporal changes of micro-expressions. To exploit the merits of deep
learning, we propose a novel deep recurrent convolutional networks based
micro-expression recognition approach, capturing the spatial-temporal
deformations of micro-expression sequence. Specifically, the proposed deep
model consists of several recurrent convolutional layers for extracting
visual features and a classification layer for recognition. It is optimized in
an end-to-end manner and obviates manual feature design. To handle sequential
data, we exploit two ways of extending the connectivity of convolutional
networks across the temporal domain, in which the spatiotemporal deformations are
modeled in views of facial appearance and geometry separately. Besides, to
overcome the shortcomings of limited and imbalanced training samples, temporal
data augmentation strategies as well as a balanced loss are jointly used for
our deep network. By performing the experiments on three spontaneous
micro-expression datasets, we verify the effectiveness of our proposed
micro-expression recognition approach compared to the state-of-the-art
methods.
Comment: Submitted to IEEE TM
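The balanced-loss idea for imbalanced training samples can be sketched as a class-balanced cross-entropy: each sample's negative log-likelihood is weighted inversely to its class frequency, so rare micro-expression classes are not swamped by common ones. This is the standard weighting scheme, offered as an assumption; the paper's exact balanced loss may differ.

```python
import numpy as np

def balanced_ce_loss(probs, labels):
    """Class-balanced cross-entropy sketch: weight each sample's
    log-loss inversely to its class frequency, then take the
    weighted mean."""
    labels = np.asarray(labels)
    counts = np.bincount(labels, minlength=probs.shape[1]).astype(np.float64)
    # Inverse-frequency weights; a perfectly balanced batch gives weight 1.
    weights = counts.sum() / (len(counts) * counts.clip(min=1.0))
    sample_w = weights[labels]
    nll = -np.log(probs[np.arange(len(labels)), labels].clip(min=1e-12))
    return float(np.sum(sample_w * nll) / np.sum(sample_w))

# Imbalanced toy batch: two class-0 samples, one class-1 sample.
probs = np.array([[0.9, 0.1], [0.9, 0.1], [0.1, 0.9]])
loss = balanced_ce_loss(probs, [0, 0, 1])
```

With these weights, the single class-1 sample contributes as much to the loss as the two class-0 samples combined.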