Computational analysis of smile weight distribution across the face for accurate distinction between genuine and posed smiles
In this paper, we report the results of our recent research into the distribution of a smile across the face, in particular the difference in weight distribution between a genuine and a posed smile. To this end, we have developed a computational framework for analysing the dynamic motion of various parts of the face during a facial expression, focusing on the smile. At the heart of our dynamic smile analysis framework is the optical flow intensity variation across the face during a smile, which can be used to efficiently map the dynamic motion of individual regions of the face such as the mouth, cheeks and the areas around the eyes. Through this computational framework, we infer the distribution of the weights of the smile across the face. Further, using two publicly available datasets, namely the CK+ dataset with 83 subjects expressing posed smiles and the MUG dataset with 35 subjects expressing genuine smiles, we show that there is far greater activity, or weight, around the regions of the eyes in the case of a genuine smile.

Supported in part by the European Union's Horizon 2020 Programme H2020-MSCA-RISE-2017, under the project PDE-GIR with grant number 778035.
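The region-weighting idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes per-pixel optical-flow magnitudes have already been computed for a frame pair (e.g., with a dense optical-flow method such as Farneback's), and that boolean masks for facial regions are available; the function name, region layout, and toy numbers are all illustrative.

```python
import numpy as np

def region_weights(flow_mag, regions):
    """Aggregate per-pixel optical-flow magnitude into per-region weights.

    flow_mag : 2-D array of flow magnitudes for one frame pair.
    regions  : dict mapping region name -> boolean mask of the same shape.
    Returns a dict of non-negative weights that sum to 1.
    """
    totals = {name: float(flow_mag[mask].sum()) for name, mask in regions.items()}
    grand = sum(totals.values()) or 1.0  # avoid division by zero for a still face
    return {name: t / grand for name, t in totals.items()}

# Toy 4x4 "frame": the eye rows move twice as much as the mouth row.
mag = np.zeros((4, 4))
mag[0, :] = 2.0   # top row stands in for the eye region
mag[3, :] = 1.0   # bottom row stands in for the mouth region
regions = {
    "eyes":  np.zeros((4, 4), dtype=bool),
    "mouth": np.zeros((4, 4), dtype=bool),
}
regions["eyes"][0, :] = True
regions["mouth"][3, :] = True

w = region_weights(mag, regions)  # eyes carry 2/3 of the motion energy here
```

In this toy frame the eye region receives weight 8/12 and the mouth 4/12, mirroring the paper's observation that genuine smiles shift weight toward the eyes.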
The computational face for facial emotion analysis: Computer based emotion analysis from the face
Facial expressions are considered to be the most revealing way of understanding the human psychological state during face-to-face communication. It is believed that a more natural interaction between humans and machines can be undertaken through the detailed understanding of the different facial expressions which imitate the manner by which humans communicate with each other.
In this research, we study different aspects of facial emotion detection and analysis, and investigate possible hidden identity clues within facial expressions. We study a deeper aspect of facial expressions, attempting to identify gender and human identity - which can be considered a form of emotional biometric - using only the dynamic characteristics of the smile expression. Further, we present a statistical model for analysing the relationship between facial features and Duchenne (genuine) and non-Duchenne (posed) smiles. Thus, we identify that the expressions in the eyes contain features that discriminate between Duchenne and non-Duchenne smiles.
Our results indicate that facial expressions can be identified through facial movement analysis models, with an accuracy of 86% for classifying the six universal facial expressions and 94% for classifying the 18 common facial action units. Further, we successfully identify gender using only the dynamic characteristics of the smile expression, obtaining an 86% classification rate. Likewise, we present a framework for studying the use of the smile as a biometric, showing that the human smile is unique and stable.

Al-Zaytoonah University
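A Duchenne/non-Duchenne decision based on eye-region activity can be sketched as a simple threshold rule. This is only an illustration of the idea, not the statistical model from the paper; the function name and the 0.35 threshold are assumptions chosen for the example.

```python
def classify_smile(eye_weight: float, threshold: float = 0.35) -> str:
    """Label a smile as Duchenne (genuine) when the share of motion
    energy in the eye regions exceeds a threshold; otherwise posed.
    The threshold value is illustrative, not taken from the paper.
    """
    return "genuine" if eye_weight > threshold else "posed"

label_high = classify_smile(0.52)  # eye-dominated motion -> genuine
label_low = classify_smile(0.10)   # mouth-dominated motion -> posed
```

In practice the decision boundary would be fitted from labelled data (e.g., the CK+ and MUG datasets mentioned above) rather than set by hand.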
Computational Face Recognition Using Machine Learning Models
Faces are among the most complex stimuli that the human visual system processes. Growing commercial interest in face recognition is encouraging, but it also turns out to be a challenging endeavour. These challenges arise when the situations are complex and cause varied facial appearance due to, for example, occlusion, low resolution, and ageing. Computer-based face recognition using partial facial data is still a largely unexplored area of research, and it remains unclear how a computer interprets the various parts of the face. Another challenge is age progression and regression, which is considered the most revealing topic for understanding how the human face changes during life.
In this research, various computational face recognition models are investigated to overcome the challenges posed by ageing and occlusions/partial faces. For partial face recognition, a pre-trained VGGF model is employed for feature extraction, followed by popular classifiers such as SVMs and Cosine Similarity (CS) for classification. In this framework, parts of the face such as the eyes, nose, and forehead are used individually for training and testing. The results show an improvement in recognition for small parts: for example, the recognition rate for the forehead improved from about 0% to nearly 35%, and for the eyes from about 22% to approximately 65%. In the second framework, five sub-models were built based on Convolutional Neural Networks (CNNs), named Eyes-CNNs, Nose-CNNs, Mouth-CNNs, Forehead-CNNs, and combined EyesNose-CNNs. The experimental results show a high recognition rate for small parts: for example, the eyes increased to about 90.83% and the forehead reached about 44.5%. Furthermore, the challenge of face ageing is approached by proposing an age-template based framework, generating an age-based face template for enhanced face generation and recognition. The results show that the generated aged faces are more reliable compared with the state of the art.
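The cosine-similarity matching step used alongside the extracted features can be sketched as below. This is a minimal illustration under stated assumptions: the toy 4-dimensional vectors stand in for deep features (the real framework uses VGGF features of much higher dimension), and the function and subject names are hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_identity(probe, gallery):
    """Return the enrolled label whose feature vector has the highest
    cosine similarity to the probe's features."""
    return max(gallery, key=lambda label: cosine_similarity(probe, gallery[label]))

# Toy vectors standing in for deep features of a face part (e.g., the eyes).
gallery = {
    "subject_A": [0.9, 0.1, 0.0, 0.2],
    "subject_B": [0.1, 0.8, 0.3, 0.0],
}
probe = [0.85, 0.15, 0.05, 0.18]
best = match_identity(probe, gallery)  # probe is closest to subject_A
```

The same nearest-neighbour matching applies whichever face part supplies the features; only the feature extractor (eyes, nose, forehead sub-model) changes.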