Computational analysis of smile weight distribution across the face for accurate distinction between genuine and posed smiles
In this paper, we report the results of our recent research into the distribution of a smile across the face, in particular the difference in weight distribution between a genuine and a posed smile. To this end, we have developed a computational framework for analysing the dynamic motion of various parts of the face during a facial expression, with a focus on the smile. At the heart of our dynamic smile analysis framework is the variation in optical flow intensity across the face during a smile, which can be used to efficiently map the dynamic motion of individual facial regions such as the mouth, the cheeks and the areas around the eyes. Through this framework, we infer the exact distribution of the weights of the smile across the face. Furthermore, using two publicly available datasets, namely the CK+ dataset (83 subjects expressing posed smiles) and the MUG dataset (35 subjects expressing genuine smiles), we show that there is far greater activity, or weight, around the regions of the eyes in the case of a genuine smile.

Supported in part by the European Union's Horizon 2020 Programme H2020-MSCA-RISE-2017, under the project PDE-GIR, grant number 778035.
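The core idea of aggregating optical flow intensity per facial region into a normalised "weight distribution" can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the flow field and region masks here are synthetic placeholders, and a real system would compute dense optical flow (e.g. between consecutive video frames) and use landmark-driven region masks.

```python
import numpy as np

def region_weights(flow, regions):
    """Aggregate optical-flow magnitude per facial region and normalise.

    flow:    (H, W, 2) array of per-pixel displacement vectors.
    regions: dict mapping region name -> (H, W) boolean mask.
    Returns a dict of weights summing to 1.
    """
    magnitude = np.linalg.norm(flow, axis=2)  # per-pixel motion intensity
    raw = {name: magnitude[mask].sum() for name, mask in regions.items()}
    total = sum(raw.values())
    return {name: v / total for name, v in raw.items()}

# Synthetic example: a 64x64 "face" with motion injected around the eyes,
# mimicking the stronger eye-region activity reported for genuine smiles.
H = W = 64
rng = np.random.default_rng(0)
flow = rng.normal(0.0, 0.1, size=(H, W, 2))     # low-level background motion
eyes = np.zeros((H, W), bool);  eyes[10:20, 10:54] = True
mouth = np.zeros((H, W), bool); mouth[44:56, 20:44] = True
flow[eyes] += 2.0                                # strong eye-region displacement

weights = region_weights(flow, {"eyes": eyes, "mouth": mouth})
print(weights)
```

Normalising the per-region sums makes the weights comparable across subjects and videos regardless of overall motion magnitude.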
The computational face for facial emotion analysis: Computer based emotion analysis from the face
Facial expressions are considered the most revealing window into a person's psychological state during face-to-face communication. It is believed that more natural interaction between humans and machines can be achieved through a detailed understanding of the different facial expressions, which mirror the manner in which humans communicate with each other.
In this research, we study different aspects of facial emotion detection and analysis, and investigate possible hidden identity clues within facial expressions. We examine a deeper aspect of facial expressions, attempting to identify gender and human identity - which can be considered a form of emotional biometric - using only the dynamic characteristics of the smile expression. Further, we present a statistical model for analysing the relationship between facial features and Duchenne (real) and non-Duchenne (posed) smiles, and identify that the expressions around the eyes contain features that discriminate between the two.
Our results indicate that facial expressions can be identified through facial movement analysis models, with an accuracy of 86% for classifying the six universal facial expressions and 94% for classifying the 18 common facial action units. Further, we successfully identify gender using only the dynamic characteristics of the smile expression, achieving an 86% classification rate. Likewise, we present a framework for studying the possibility of using the smile as a biometric, and show that the human smile is both unique and stable.
On Gender Identification Using the Smile Dynamics
Gender classification has multiple applications including, but not limited to, face perception; age, ethnicity and identity analysis; video surveillance; and smart human-computer interaction. The majority of computer-based gender classification algorithms analyse the appearance of facial features, predominantly based on the texture of a static image of the face. In this paper, we propose a novel algorithm for gender classification using smile dynamics, without resorting to any facial texture information. Our experiments suggest that this method has great potential for finding indicators of gender dimorphism. Our approach was tested on two databases, namely the CK+ and the MUG, comprising a total of 80 subjects. Using the KNN algorithm with 10-fold cross-validation, we achieve a classification rate of 80% for gender based purely on the dynamics of a person's smile.
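The evaluation protocol described above - KNN classification scored by 10-fold cross-validation - can be sketched in a few lines. This is a hedged illustration, not the authors' code: the feature vectors below are synthetic stand-ins for the real smile-dynamics features, and the value of k is an assumption since the abstract does not state it.

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=5):
    """Classify each test point by majority vote among its k nearest training points."""
    preds = []
    for x in test_X:
        dists = np.linalg.norm(train_X - x, axis=1)   # Euclidean distance to all training points
        nearest = train_y[np.argsort(dists)[:k]]      # labels of the k closest
        preds.append(np.bincount(nearest).argmax())   # majority vote
    return np.array(preds)

def cross_validate(X, y, k_folds=10, k=5):
    """k-fold cross-validation: average held-out accuracy over the folds."""
    idx = np.arange(len(X))
    rng = np.random.default_rng(0)
    rng.shuffle(idx)
    folds = np.array_split(idx, k_folds)
    accs = []
    for i in range(k_folds):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k_folds) if j != i])
        preds = knn_predict(X[train], y[train], X[test], k)
        accs.append((preds == y[test]).mean())
    return float(np.mean(accs))

# Synthetic stand-in for smile-dynamics features: 80 subjects (as in the paper),
# two classes drawn from separated Gaussian feature distributions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (40, 6)), rng.normal(1.5, 1.0, (40, 6))])
y = np.array([0] * 40 + [1] * 40)
accuracy = cross_validate(X, y)
print(f"mean CV accuracy: {accuracy:.2f}")
```

Averaging accuracy over ten held-out folds, rather than a single train/test split, gives a more stable estimate on a dataset of only 80 subjects.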