Facial expression transfer method based on frequency analysis
We propose a novel expression transfer method based on a frequency analysis of multi-expression facial images. We locate the facial features automatically and describe the shape deformations between a neutral expression and non-neutral expressions. Subtle expression changes are important visual clues for distinguishing different expressions, and these changes are more salient in the frequency domain than in the image domain. We extract the subtle local expression deformations of the source subject, coded in the wavelet decomposition. This expression information is transferred to a target subject. The resulting synthesized image preserves both the facial appearance of the target subject and the expression details of the source subject. The method extends to dynamic expression transfer, allowing a more precise interpretation of facial expressions. Experiments on the Japanese Female Facial Expression (JAFFE), extended Cohn-Kanade (CK+), and PIE facial expression databases show the superiority of our method over state-of-the-art methods.
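The wavelet step described above can be sketched with a single-level Haar transform in which the target keeps its coarse appearance while the detail subbands carry the source's expression changes. This is a minimal illustration, not the authors' multi-level pipeline; the function names are hypothetical:

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar decomposition: approximation + 3 detail subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low (coarse appearance)
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def transfer_details(target_neutral, source_delta):
    """Keep the target's coarse subband; take the detail subbands from the
    source's expression change image (a one-level simplification)."""
    ll_t, _, _, _ = haar2d(target_neutral)
    _, lh_s, hl_s, hh_s = haar2d(source_delta)
    return ihaar2d(ll_t, lh_s, hl_s, hh_s)
```

The round trip `ihaar2d(*haar2d(x))` reconstructs `x` exactly, which is what makes swapping individual subbands safe.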
Facial Data Classification Through Enhanced Local Binary Patterns (LBP) and Dynamic Range Local Binary Patterns (DRLBP) Algorithms
Contemporary computer vision and pattern recognition rely heavily on facial data classification. This research presents a novel method for effective and precise facial data classification using the Enhanced Local Binary Patterns (LBP) and Dynamic Range Local Binary Patterns (DRLBP) algorithms. The Enhanced LBP method extends the classic LBP methodology with adaptive thresholding techniques and spatial histogram features, enabling a more thorough analysis of texture and resilience under a variety of lighting conditions. The DRLBP algorithm improves upon this by continuously adjusting the local binary pattern range to better accommodate nuanced facial features and expressions. Extensive trials on commonly used facial datasets demonstrate that our proposed system beats state-of-the-art alternatives in accuracy, speed, and adaptability. Our findings suggest that human-computer interaction (HCI), digital forensics, and security systems could all gain a great deal from a solution that combines the Enhanced LBP and DRLBP algorithms for facial data classification.
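For reference, the classic LBP operator that both variants build on thresholds each 3x3 neighbourhood against its centre pixel. A minimal sketch of that base operator (the enhanced and DRLBP variants are not reproduced here):

```python
import numpy as np

def lbp_code(patch):
    """Classic 8-neighbour LBP code of the centre pixel of a 3x3 patch:
    each neighbour >= centre contributes one bit, clockwise from top-left."""
    center = patch[1, 1]
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(idx):
        if patch[r, c] >= center:
            code |= 1 << bit
    return code

def lbp_image(img):
    """LBP code for every interior pixel of a grayscale image; histograms of
    these codes over spatial cells form the usual LBP descriptor."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for r in range(h - 2):
        for c in range(w - 2):
            out[r, c] = lbp_code(img[r:r + 3, c:c + 3])
    return out
```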
Subspace Representations for Robust Face and Facial Expression Recognition
Analyzing human faces and modeling their variations have always been of interest to the computer vision community. Face analysis based on 2D intensity images is a challenging problem, complicated by variations in pose, lighting, blur, and non-rigid facial deformations due to facial expressions. Among the different sources of variation, facial expressions are of interest as important channels of non-verbal communication. Facial expression analysis is also affected by changes in view-point and inter-subject variations in performing different expressions. This dissertation makes an attempt to address some of the challenges involved in developing robust algorithms for face and facial expression recognition by exploiting the idea of proper subspace representations for data.
Variations in the visual appearance of an object mostly arise due to changes in illumination and pose. So we first present a video-based sequential algorithm for estimating the face albedo as an illumination-insensitive signature for face recognition. We show that by knowing/estimating the pose of the face at each frame of a sequence, the albedo can be efficiently estimated using a Kalman filter. Then we extend this to the case of unknown pose by simultaneously tracking the pose as well as updating the albedo through an efficient Bayesian inference method performed using a Rao-Blackwellized particle filter.
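In the known-pose case, each pixel reduces to estimating a constant albedo from noisy linear measurements, which a scalar Kalman filter handles directly. A hypothetical sketch under a Lambertian model with known shading (function and parameter names are illustrative, not the paper's):

```python
def kalman_albedo(measurements, shadings, r=0.01, a0=0.5, p0=1.0):
    """Sequentially estimate a constant per-pixel albedo 'a' from frames where
    intensity_t ~= shading_t * a + noise (Lambertian model, known pose and
    lighting give the shading). State = albedo, measurement matrix H_t = shading_t."""
    a, p = a0, p0
    for y, h in zip(measurements, shadings):
        # No process noise: the albedo is assumed constant over the sequence.
        s = h * p * h + r            # innovation variance
        k = p * h / s                # Kalman gain
        a = a + k * (y - h * a)      # measurement update
        p = (1.0 - k * h) * p        # covariance update
    return a
```

Each frame shrinks the posterior variance `p`, so the estimate tightens as the sequence progresses; the unknown-pose extension in the abstract wraps this update inside a particle filter over pose.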
Since understanding the effects of blur, especially motion blur, is an important problem in unconstrained visual analysis, we then propose a blur-robust recognition algorithm for faces with spatially varying blur. We model a blurred face as a weighted average of geometrically transformed instances of its clean face. We then build a matrix, for each gallery face, whose column space spans the space of all the motion blurred images obtained from the clean face. This matrix representation is then used to define a proper objective function and perform blur-robust face recognition.
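The matrix construction can be illustrated with horizontal shifts standing in for the geometric transforms: any blur that mixes those shifts lies in the column space, so the least-squares residual acts as a match score. A toy sketch, not the paper's actual transform set:

```python
import numpy as np

def blur_matrix(clean, max_shift=2):
    """Columns span a toy version of the space of motion-blurred images:
    each column is the clean face shifted horizontally, so any blur formed as
    a combination of these shifts lies in the column space."""
    cols = [np.roll(clean, s, axis=1).ravel()
            for s in range(-max_shift, max_shift + 1)]
    return np.stack(cols, axis=1)

def residual(test, A):
    """Distance from the test image to the column space of A (least squares);
    the gallery face with the smallest residual is the match."""
    y = test.ravel()
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.linalg.norm(y - A @ coef)
```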
To develop robust and generalizable models for expression analysis one needs to break the dependence of the models on the choice of the coordinate frame of the camera. To this end, we build models for expressions on the affine shape-space (Grassmann manifold), as an approximation to the projective shape-space, by using a Riemannian interpretation of deformations that facial expressions cause on different parts of the face. This representation enables us to perform various expression analysis and recognition algorithms without the need for pose normalization as a preprocessing step.
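Distances on the Grassmann manifold mentioned above are computed from principal angles between subspaces. A minimal sketch via the SVD of the product of orthonormal bases:

```python
import numpy as np

def grassmann_distance(X, Y):
    """Geodesic distance between the subspaces spanned by the columns of X and
    Y (e.g. affine shape spaces of two landmark configurations): the principal
    angles are the arccosines of the singular values of Q_X^T Q_Y."""
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    angles = np.arccos(np.clip(s, -1.0, 1.0))
    return np.linalg.norm(angles)
```

Because the distance depends only on the spanned subspace, any affine (coordinate-frame) change of the landmarks leaves it unchanged, which is what removes the need for pose normalization.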
There is a large degree of inter-subject variations in performing various expressions. This poses an important challenge on developing robust facial expression recognition algorithms. To address this challenge, we propose a dictionary-based approach for facial expression analysis by decomposing expressions in terms of action units (AUs). First, we construct an AU-dictionary using domain experts' knowledge of AUs. To incorporate the high-level knowledge regarding expression decomposition and AUs, we then perform structure-preserving sparse coding by imposing two layers of grouping over AU-dictionary atoms as well as over the test image matrix columns. We use the computed sparse code matrix for each expressive face to perform expression decomposition and recognition.
Most of the existing methods for the recognition of faces and expressions consider either the expression-invariant face recognition problem or the identity-independent facial expression recognition problem. We propose joint face and facial expression recognition using a dictionary-based component separation (DCS) algorithm. In this approach, the given expressive face is viewed as a superposition of a neutral face component with a facial expression component, which is sparse with respect to the whole image. This assumption leads to a dictionary-based component separation algorithm, which benefits from the idea of sparsity and morphological diversity. The DCS algorithm uses data-driven dictionaries to decompose an expressive test face into its constituent components. The sparse codes we obtain as a result of this decomposition are then used for joint face and expression recognition.
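The separation idea can be illustrated with greedy sparse coding over two concatenated dictionaries; matching pursuit here stands in for the paper's sparsity-based separation, and the atoms are assumed unit-norm:

```python
import numpy as np

def matching_pursuit(y, D, n_iter=10):
    """Greedy sparse coding of y over dictionary D (unit-norm columns assumed:
    the argmax over correlations is only valid for normalised atoms)."""
    r = y.astype(float).copy()
    code = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ r
        j = np.argmax(np.abs(corr))
        code[j] += corr[j]
        r -= corr[j] * D[:, j]
    return code

def separate(y, D_neutral, D_expr, n_iter=20):
    """Toy component separation: sparse-code y over the concatenated
    dictionaries, then reconstruct the neutral and expression parts from
    their own atoms (a simplification of the DCS idea)."""
    D = np.hstack([D_neutral, D_expr])
    code = matching_pursuit(y, D, n_iter)
    k = D_neutral.shape[1]
    return D_neutral @ code[:k], D_expr @ code[k:]
```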
Automatic construction of robust spherical harmonic subspaces
In this paper we propose a method to automatically recover a class-specific low dimensional spherical harmonic basis from a set of in-the-wild facial images. We combine existing techniques for uncalibrated photometric stereo and low rank matrix decompositions in order to robustly recover a combined model of shape and identity. We build this basis without aid from a 3D model and show how it can be combined with recent efficient sparse facial feature localisation techniques to recover dense 3D facial shape. Unlike previous works in the area, our method is very efficient and is an order of magnitude faster to train, taking only a few minutes to build a model with over 2000 images. Furthermore, it can be used for real-time recovery of facial shape.
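For context, the low-dimensional spherical harmonic basis for Lambertian reflectance is built from per-pixel surface normals. A sketch of the standard order-2 basis (normalisation constants omitted for clarity):

```python
import numpy as np

def sh_basis(normals):
    """First 9 spherical-harmonic basis values from unit normals (n x 3 array),
    the order-2 basis commonly used for Lambertian lighting subspaces."""
    nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        np.ones_like(nx),        # l = 0 (ambient)
        nx, ny, nz,              # l = 1 (linear in the normal)
        nx * ny, nx * nz, ny * nz,
        nx * nx - ny * ny,
        3.0 * nz * nz - 1.0,     # l = 2 (quadratic terms)
    ], axis=1)
```

Scaling each basis column by the albedo gives the 9D subspace that approximates the images of a convex Lambertian surface under arbitrary distant lighting.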
3D Face Reconstruction and Emotion Analytics with Part-Based Morphable Models
3D face reconstruction and facial expression analytics using 3D facial data are new and active research topics in computer graphics and computer vision. In this proposal, we first review the background knowledge for emotion analytics using 3D morphable face models, including geometry feature-based methods, statistical model-based methods, and more advanced deep learning-based methods. Then, we introduce a novel 3D face modeling and reconstruction solution that robustly and accurately acquires 3D face models from a couple of images captured by a single smartphone camera. Two selfie photos of a subject, taken from the front and side, are used to guide our Non-Negative Matrix Factorization (NMF) induced part-based face model to iteratively reconstruct an initial 3D face of the subject. An iterative detail-updating method is then applied to the initial 3D face to reconstruct facial details by optimizing lighting parameters and local depths. Our iterative 3D face reconstruction method permits fully automatic registration of a part-based face representation to the acquired face data and uses the detailed 2D/3D features to build a high-quality 3D face model. The NMF part-based face representation, learned from a 3D face database, facilitates effective global fitting and adaptive local detail fitting in alternation. Our system is flexible and allows users to conduct the capture in any uncontrolled environment. We demonstrate the capability of our method by allowing users to capture and reconstruct their 3D faces by themselves.
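The NMF factorisation underlying the part-based model can be sketched with the standard Lee-Seung multiplicative updates. This is a generic implementation, not the authors' constrained part-based variant:

```python
import numpy as np

def nmf(V, k, n_iter=500, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~= W H with nonnegative factors.
    In a part-based face model, columns of W act as localized face parts and
    H holds the per-face combination weights."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    for _ in range(n_iter):
        # Each update multiplies by a nonnegative ratio, so W, H stay >= 0
        # and the Frobenius reconstruction error is non-increasing.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```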
Based on the reconstructed 3D face model, we can analyze the facial expression and the related emotion in 3D space. We present a novel approach for analyzing facial expressions from images and a quantitative information visualization scheme for exploring this type of visual data. From the result reconstructed with the NMF part-based morphable 3D face model, basis parameters and a displacement map are extracted as features for facial emotion analysis and visualization. Based upon these features, two Support Vector Regressions (SVRs) are trained to determine the fuzzy Valence-Arousal (VA) values that quantify the emotions. The continuously changing emotion status can be intuitively analyzed by visualizing the VA values in VA-space. Our emotion analysis and visualization system, based on the 3D NMF morphable face model, detects expressions robustly across various head poses, face sizes, and lighting conditions, and is fully automatic in computing the VA values from images or a video sequence with various facial expressions. To evaluate our novel method, we test our system on publicly available databases and evaluate the emotion analysis and visualization results. We also apply our method to quantifying emotion changes during motivational interviews. These experiments and applications demonstrate the effectiveness and accuracy of our method.
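The two-regressor setup maps one feature vector to the two affective dimensions independently. A hypothetical sketch using closed-form ridge regression as a stand-in for the SVRs (the function names are illustrative):

```python
import numpy as np

def fit_ridge(X, y, lam=1e-6):
    """Closed-form ridge regression with a bias term (a simple stand-in for
    the paper's Support Vector Regressions)."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias column
    w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)
    return w

def predict(X, w):
    return np.hstack([X, np.ones((X.shape[0], 1))]) @ w

def fit_va(features, valence, arousal):
    """One regressor per affective dimension, mirroring the two-SVR setup."""
    return fit_ridge(features, valence), fit_ridge(features, arousal)
```

The predicted (valence, arousal) pairs can then be plotted directly in VA-space to trace an emotion trajectory over a sequence.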
To improve expression recognition accuracy, we present a facial expression recognition approach based on a 3D Mesh Convolutional Neural Network (3DMCNN) and a visual-analytics-guided 3DMCNN design and optimization scheme. The geometric properties of the surface are computed using the 3D face model of a subject with facial expressions. Instead of using a regular Convolutional Neural Network (CNN) to learn intensities of the facial images, we convolve the geometric properties on the surface of the 3D model using the 3DMCNN. We design a geodesic distance-based convolution method to overcome the difficulties raised by the irregular sampling of the face surface mesh. We further present an interactive visual analytics tool for designing and modifying the networks, analyzing the learned features, and clustering similar nodes in the 3DMCNN. By removing low-activity nodes in the network, the performance of the network is greatly improved. We compare our method with the regular CNN-based method by interactively visualizing each layer of the networks, and we analyze the effectiveness of our method by studying representative cases. Testing on public datasets, our method achieves a higher recognition accuracy than traditional image-based CNNs and other 3D CNNs. The presented framework, including the 3DMCNN and interactive visual analytics of the CNN, can be extended to other applications.
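Geodesic neighbourhoods on an irregular mesh can be approximated with shortest paths along mesh edges; the receptive field of a geodesic convolution is then all vertices within a chosen radius of the centre. A minimal Dijkstra-based sketch (helper names are hypothetical, not the paper's API):

```python
import heapq

def geodesic_distances(n_vertices, edges, source):
    """Single-source shortest paths along mesh edges (Dijkstra), a common
    approximation of geodesic distance on a triangle mesh."""
    adj = {v: [] for v in range(n_vertices)}
    for a, b, w in edges:          # undirected weighted edges
        adj[a].append((b, w))
        adj[b].append((a, w))
    dist = {v: float("inf") for v in adj}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue               # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def geodesic_patch(n_vertices, edges, center, radius):
    """Vertices within a geodesic radius of the centre: the support over which
    a geodesic convolution would aggregate surface features."""
    d = geodesic_distances(n_vertices, edges, center)
    return sorted(v for v in d if d[v] <= radius)
```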
Recent Advances in Deep Learning Techniques for Face Recognition
In recent years, researchers have proposed many deep learning (DL) methods for various tasks, and face recognition (FR) in particular has made an enormous leap using these techniques. Deep FR systems benefit from the hierarchical architecture of DL methods to learn discriminative face representations. DL techniques have therefore significantly improved state-of-the-art performance of FR systems and encouraged diverse and efficient real-world applications. In this paper, we present a comprehensive analysis of various FR systems that leverage the different types of DL techniques, and for the study we summarize 168 recent contributions from this area. We discuss the papers related to different algorithms, architectures, loss functions, activation functions, datasets, challenges, improvement ideas, and current and future trends of DL-based FR systems. We provide a detailed discussion of various DL methods to understand the current state of the art, and then we discuss various activation and loss functions for the methods. Additionally, we summarize different datasets widely used for FR tasks and discuss challenges related to illumination, expression, pose variations, and occlusion. Finally, we discuss improvement ideas and current and future trends of FR tasks.
Comment: 32 pages; citation: M. T. H. Fuad et al., "Recent Advances in Deep Learning Techniques for Face Recognition," in IEEE Access, vol. 9, pp. 99112-99142, 2021, doi: 10.1109/ACCESS.2021.309613
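Among the loss functions such surveys cover, the additive-angular-margin family is representative: class logits are cosine similarities between normalised embeddings and class weights, with a margin added to the target-class angle. A hedged numpy sketch (without the edge-case handling real implementations add when the shifted angle exceeds pi):

```python
import numpy as np

def margin_logits(embeddings, weights, labels, s=64.0, m=0.5):
    """ArcFace-style logits: cosines between L2-normalised embeddings (n x d)
    and class weights (d x C), with margin m added to the true class's angle
    before scaling by s. Feed these logits to a standard softmax cross-entropy."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = np.clip(e @ w, -1.0, 1.0)
    theta = np.arccos(cos)
    rows = np.arange(len(labels))
    theta[rows, labels] += m          # penalise the target class's angle
    return s * np.cos(theta)
```

The margin forces the embedding to sit closer to its class weight than the plain softmax requires, which is what tightens the learned face representation.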
Geometric Expression Invariant 3D Face Recognition using Statistical Discriminant Models
Currently there is no complete face recognition system that is invariant to all facial expressions. Although humans find it easy to identify and recognise faces regardless of changes in illumination, pose, and expression, producing a computer system with a similar capability has proved to be particularly difficult. Three-dimensional face models are geometric in nature and therefore have the advantage of being invariant to head pose and lighting. However, they are still susceptible to facial expressions. This can be seen in the decrease in recognition results using principal component analysis (PCA) when expressions are added to a data set.
To achieve expression-invariant face recognition, we have employed a tensor algebra framework to represent 3D face data with facial expressions in a parsimonious space. Face variation factors are organised in separate subject and facial expression modes. We manipulate this representation using singular value decomposition on sub-tensors representing one variation mode. This framework can deal with the shortcomings of PCA in less constrained environments while still preserving the integrity of the 3D data. The results show improved recognition rates for faces and facial expressions, even recognising high-intensity expressions that are not in the training datasets.
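The per-mode subspaces in such a tensor framework come from singular value decompositions of the tensor's mode unfoldings, as in higher-order SVD. A minimal sketch:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: rows are indexed by the chosen mode, so the columns
    of the resulting matrix are the tensor's mode-n fibres."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_bases(T):
    """Left singular vectors of each unfolding: one orthonormal basis per
    variation mode (e.g. subject, expression, vertex) of the data tensor."""
    return [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
            for m in range(T.ndim)]
```

Projecting a new face onto the subject-mode basis while fixing the expression-mode coordinates is the kind of sub-tensor manipulation the framework uses for recognition and expression neutralisation.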
We have determined, experimentally, a set of anatomical landmarks that best describe facial expressions effectively. We found that the best placement of landmarks for distinguishing different facial expressions is in areas around the prominent features, such as the cheeks and eyebrows. Recognition results using landmark-based face recognition could be improved with better placement.
We looked into the possibility of achieving expression-invariant face recognition by reconstructing and manipulating realistic facial expressions. We proposed a tensor-based statistical discriminant analysis method to reconstruct facial expressions and, in particular, to neutralise facial expressions. The synthesised facial expressions are visually more realistic than facial expressions generated using conventional active shape modelling (ASM). We then used the reconstructed neutral faces in the sub-tensor framework for recognition purposes. The recognition results showed slight improvement. Besides biometric recognition, this novel tensor-based synthesis approach could be used in computer games and real-time animation applications.