60,680 research outputs found

    Facial Emotion Expressions in Human-Robot Interaction: A Survey

    Facial expressions are an ideal means of communicating one's emotions or intentions to others. This overview focuses on human facial expression recognition as well as robotic facial expression generation. For human facial expression recognition, both recognition on predefined datasets and recognition in real time are covered. For robotic facial expression generation, both hand-coded and automated methods are covered, i.e., methods in which a robot's facial expressions are produced by moving its features (eyes, mouth) either through hand-coding or automatically using machine learning techniques. Plenty of studies already achieve high accuracy for emotion expression recognition on predefined datasets, but accuracy for facial expression recognition in real time is comparatively lower. As for expression generation, while most robots are capable of making basic facial expressions, few studies enable them to do so automatically. The overview discusses state-of-the-art research on facial emotion expressions during human-robot interaction, leading to several possible directions for future research. (Pre-print; accepted in the International Journal of Social Robotics.)
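
    As a rough illustration of the recognition half of this pipeline, the sketch below runs real-time expression classification with OpenCV: a Haar cascade finds the face, the crop is normalised, and a pre-trained classifier scores seven basic emotions. The model file fer_model.onnx, its 48x48 grayscale input, and the label ordering are illustrative assumptions, not details from the survey.

        # Minimal real-time facial expression recognition loop (illustrative sketch).
        # The ONNX model file and the emotion label order below are hypothetical.
        import cv2
        import numpy as np

        EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        net = cv2.dnn.readNetFromONNX("fer_model.onnx")  # hypothetical model file

        cap = cv2.VideoCapture(0)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
                crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype(np.float32)
                net.setInput((crop / 255.0).reshape(1, 1, 48, 48))  # NCHW blob
                label = EMOTIONS[int(np.argmax(net.forward()))]
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
                cv2.putText(frame, label, (x, y - 8),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
            cv2.imshow("expression", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()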

    Facial expression transfer method based on frequency analysis

    We propose a novel expression transfer method based on a frequency-domain analysis of multi-expression facial images. We locate the facial features automatically and describe the shape deformations between a neutral expression and non-neutral expressions. The subtle expression changes are important visual clues for distinguishing different expressions, and they are more salient in the frequency domain than in the image domain. We extract the subtle local expression deformations of the source subject, coded in a wavelet decomposition, and transfer this expression information to a target subject. The resulting synthesized image preserves both the facial appearance of the target subject and the expression details of the source subject. The method extends to dynamic expression transfer, allowing a more precise interpretation of facial expressions. Experiments on the Japanese Female Facial Expression (JAFFE), extended Cohn-Kanade (CK+), and PIE facial expression databases show the superiority of our method over the state-of-the-art method.
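
    A minimal sketch of the frequency-domain transfer idea, assuming grayscale, pre-aligned face images scaled to [0, 1] and using PyWavelets; the function name, wavelet choice and strength parameter are illustrative assumptions, not the paper's. The source's expression change is isolated as the difference of detail coefficients between its expressive and neutral faces, then added to the target's decomposition before reconstruction.

        # Sketch of wavelet-domain expression transfer (a simplification of the idea).
        import numpy as np
        import pywt

        def transfer_expression(src_neutral, src_expressive, tgt_neutral,
                                wavelet="db4", level=3, strength=1.0):
            """Add the source's expression-induced detail changes onto the target."""
            c_n = pywt.wavedec2(src_neutral, wavelet, level=level)
            c_e = pywt.wavedec2(src_expressive, wavelet, level=level)
            c_t = pywt.wavedec2(tgt_neutral, wavelet, level=level)

            fused = [c_t[0]]  # keep the target's approximation band (its appearance)
            for (nh, nv, nd), (eh, ev, ed), (th, tv, td) in zip(c_n[1:], c_e[1:], c_t[1:]):
                # detail bands carry the subtle local expression deformations
                fused.append((th + strength * (eh - nh),
                              tv + strength * (ev - nv),
                              td + strength * (ed - nd)))
            out = pywt.waverec2(fused, wavelet)
            h, w = tgt_neutral.shape
            return np.clip(out[:h, :w], 0.0, 1.0)  # crop padding, keep valid range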

    The Influence of Pain Intensity and Executive Functioning on Facial Pain Expression

    Facial pain expressions are frequently used to assess pain in populations that cannot verbally express their suffering. The present study investigated the usefulness of facial expressions as an assessment tool and the influence of executive functioning on facial pain expression. Pain ratings to mechanical nociceptive stimuli were obtained from 57 healthy elderly participants; their facial pain expressions were filmed and coded, and working memory and cognitive inhibition were assessed. Results showed a positive correlation between stimulus intensity and pain expression that was moderated by cognitive inhibition: pain intensity had a stronger effect on facial pain expression at low levels of inhibition.
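
    The reported moderation corresponds to a regression with an interaction term. A minimal statsmodels sketch, with hypothetical file and column names standing in for the study's variables:

        # Moderation sketch: does cognitive inhibition moderate the effect of
        # stimulus intensity on facial pain expression? Names are hypothetical.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("pain_expression.csv")  # hypothetical coded dataset

        # Centre the predictors so the main effects stay interpretable alongside
        # the interaction term.
        df["intensity_c"] = df["stimulus_intensity"] - df["stimulus_intensity"].mean()
        df["inhibition_c"] = df["inhibition_score"] - df["inhibition_score"].mean()

        # A significant intensity_c:inhibition_c coefficient is the moderation.
        model = smf.ols("facial_expression ~ intensity_c * inhibition_c",
                        data=df).fit()
        print(model.summary())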

    MoFaNeRF: Morphable Facial Neural Radiance Field

    We propose a parametric model that maps free-view images into a vector space of coded facial shape, expression and appearance with a neural radiance field, namely Morphable Facial NeRF. Specifically, MoFaNeRF takes the coded facial shape, expression and appearance along with the space coordinate and view direction as input to an MLP, and outputs the radiance of the space point for photo-realistic image synthesis. Compared with conventional 3D morphable models (3DMM), MoFaNeRF shows superiority in directly synthesizing photo-realistic facial details, even for eyes, mouths and beards. Continuous face morphing can also be easily achieved by interpolating the input shape, expression and appearance codes. By introducing identity-specific modulation and a texture encoder, our model synthesizes accurate photometric details and shows strong representation ability. Our model performs well on multiple applications, including image-based fitting, random generation, face rigging, face editing and novel view synthesis. Experiments show that our method achieves higher representation ability than previous parametric models and competitive performance in several applications. To the best of our knowledge, our work is the first facial parametric model built upon a neural radiance field that can be used in fitting, generation and manipulation. The code and data are available at https://github.com/zhuhao-nju/mofanerf. (Accepted to ECCV 2022.)
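
    To make the input/output contract concrete, below is a toy PyTorch skeleton of such a conditioned radiance field. Latent code sizes, layer widths and frequency counts are guesses for illustration, not the released MoFaNeRF architecture.

        # Toy skeleton of a morphable facial NeRF core MLP, following the
        # abstract's contract: (shape, expression, appearance codes, 3D point,
        # view direction) -> (density, RGB). Dimensions are illustrative.
        import torch
        import torch.nn as nn

        def positional_encoding(x, n_freqs):
            """Standard NeRF-style sin/cos encoding of coordinates/directions."""
            feats = [x]
            for i in range(n_freqs):
                feats += [torch.sin((2.0 ** i) * x), torch.cos((2.0 ** i) * x)]
            return torch.cat(feats, dim=-1)

        class MorphableFacialNeRF(nn.Module):
            def __init__(self, d_shape=64, d_expr=64, d_app=64, width=256):
                super().__init__()
                d_pt = 3 * (1 + 2 * 10)   # point encoded with 10 frequencies
                d_dir = 3 * (1 + 2 * 4)   # view direction with 4 frequencies
                self.trunk = nn.Sequential(
                    nn.Linear(d_pt + d_shape + d_expr, width), nn.ReLU(),
                    nn.Linear(width, width), nn.ReLU(),
                    nn.Linear(width, width), nn.ReLU())
                self.sigma = nn.Linear(width, 1)  # volume density head
                self.rgb = nn.Sequential(
                    nn.Linear(width + d_dir + d_app, width // 2), nn.ReLU(),
                    nn.Linear(width // 2, 3), nn.Sigmoid())  # view-dependent colour

            def forward(self, pts, dirs, shape_code, expr_code, app_code):
                h = self.trunk(torch.cat(
                    [positional_encoding(pts, 10), shape_code, expr_code], dim=-1))
                sigma = torch.relu(self.sigma(h))
                rgb = self.rgb(torch.cat(
                    [h, positional_encoding(dirs, 4), app_code], dim=-1))
                return sigma, rgb

    Under this sketch, continuous face morphing amounts to linearly interpolating two subjects' code triples before the forward pass.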

    Facial expression recognition using histogram variances faces

    In human facial expression recognition, the representation of expression features is essential for recognition accuracy. In this work we propose a novel approach for extracting dynamic expression features from facial expression videos. Rather than utilising statistical models such as the Hidden Markov Model (HMM), our approach integrates dynamic expression features into a static image, the Histogram Variances Face (HVF), by fusing histogram variances among the frames in a video. HVFs can be obtained automatically from videos with different frame rates and are immune to illumination interference. In our experiments, videos picturing the same facial expression (e.g., surprise, happiness, sadness) yield similar HVFs, even when the performers and frame rates differ. Static facial recognition approaches can therefore be utilised for dynamic expression recognition. We applied this approach to the well-known Cohn-Kanade AU-Coded Facial Expression database, classified the HVFs using PCA and Support Vector Machines (SVMs), and found the classification accuracy very encouraging. © 2009 IEEE
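
    A loose sketch of the pipeline as one plausible reading of the abstract (the exact fusion rule here is an assumption, not the paper's): equalise each frame's histogram to blunt illumination changes, collapse the video into a per-pixel variance image, and classify the flattened images with PCA plus a linear SVM.

        # Loose sketch of a Histogram-Variances-Face style pipeline.
        import cv2
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        def histogram_variance_face(frames, size=(64, 64)):
            """frames: iterable of uint8 grayscale face crops from one video."""
            eq = [cv2.equalizeHist(cv2.resize(f, size)).astype(np.float32)
                  for f in frames]
            return np.var(np.stack(eq, axis=0), axis=0)  # one static image

        # X: one flattened HVF per video, y: expression labels, e.g.
        # X = np.stack([histogram_variance_face(v).ravel() for v in videos])
        clf = make_pipeline(PCA(n_components=50), SVC(kernel="linear"))
        # clf.fit(X_train, y_train); print(clf.score(X_test, y_test))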

    Expression Dependence in the Perception of Facial Identity

    We recognise familiar faces irrespective of their expression. This ability, crucial for social interactions, is a fundamental feature of face perception. We ask whether this constancy of facial identity may be compromised by changes in expression. This, in turn, addresses the issue of whether facial identity and expression are processed separately or interact. Using an identification task, participants learned the identities of two actors from naturalistic (so-called ambient) face images taken from movies. Training was either with neutral images or their expressive counterparts, perceived expressiveness having been determined experimentally. Expressive training responses were slower and more erroneous than neutral training responses. When tested with novel images of the actors that varied in expressiveness, neutrally trained participants gave slower and less accurate responses to images of high compared with low expressiveness. These findings clearly demonstrate that facial expressions impede the processing and learning of facial identity. Because this expression dependence is consistent with a late bifurcation model of face processing, in which changeable facial aspects and identity are coded in a common framework, it suggests that expressions are a part of facial identity representation.

    Development and application of CatFACS: are human cat adopters influenced by cat facial expressions?

    The domestic cat (Felis silvestris catus) is quickly becoming the most popular animal companion in the world. The evolutionary processes that occur during domestication are known to have wide effects on the morphology, behaviour, cognition and communicative abilities of a species. Since facial expression is central to human communication, it is possible that cat facial expression has been subjected to selection during domestication. Standardised measurement techniques to study cat facial expression are, however, currently lacking. Here, as a first step to enable cat facial expression to be studied in an anatomically based and objective way, CatFACS (Cat Facial Action Coding System) was developed. Fifteen individual facial movements (Action Units), six miscellaneous movements (Action Descriptors) and seven Ear Action Descriptors were identified in the domestic cat. CatFACS was then applied to investigate the impact of cat facial expression on human preferences in an adoption shelter setting, using rehoming speed from cat shelters as a proxy for human selective pressure. The behaviour of 106 cats ready for adoption in three different shelters was recorded during a standardised encounter with an experimenter. This experimental setup aimed to mimic the first encounter of a cat with a potential adopter, i.e., an unfamiliar human. Each video was coded for proximity to the experimenter, body movements, tail movements and face movements. Cat facial movements were not related to rehoming speed, suggesting that cat facial expression may not have undergone significant selection. In contrast, rubbing frequency was positively related to rehoming speed. The findings suggest that humans are more influenced by overt prosocial behaviours than by subtle facial expression in domestic cats.
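
    Analytically, the shelter study reduces to relating coded behaviour frequencies to rehoming speed. A minimal sketch, with invented file and column names standing in for the coded data:

        # Sketch: relate coded behaviour frequencies to rehoming speed.
        # File and column names are hypothetical stand-ins for the coded data.
        import pandas as pd
        from scipy.stats import spearmanr

        cats = pd.read_csv("catfacs_shelter.csv")  # hypothetical dataset

        for behaviour in ["rubbing_freq", "au_total_freq", "tail_up_freq"]:
            rho, p = spearmanr(cats[behaviour], cats["days_to_rehoming"])
            # A negative rho means the behaviour goes with faster rehoming.
            print(f"{behaviour}: rho={rho:.2f}, p={p:.3f}")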

    Objective Classes for Micro-Facial Expression Recognition

    Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different from normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset are based on Action Units and self-reports, creating conflicts during machine learning training. We show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP, HOOF and HOG 3D feature descriptors. The experiments are evaluated on two benchmark FACS-coded datasets: CASME II and SAMM. The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the state-of-the-art 5-class emotion-based classification on CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition. (11 pages, 4 figures and 5 tables; submitted for journal review.)
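
    As a flavour of the descriptors involved, below is a deliberately simplified LBP-TOP sketch using scikit-image: uniform LBP histograms from the central XY, XT and YT planes of a video cube are concatenated. The full descriptor aggregates over all pixel positions and block regions, which this reduction omits.

        # Simplified LBP-TOP features for a micro-expression clip: LBP
        # histograms on the three orthogonal central planes, concatenated.
        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_top(clip, p=8, r=1):
            """clip: (T, H, W) grayscale cube -> concatenated LBP histograms."""
            t, h, w = clip.shape
            planes = [clip[t // 2],        # XY plane (appearance)
                      clip[:, h // 2, :],  # XT plane (horizontal motion)
                      clip[:, :, w // 2]]  # YT plane (vertical motion)
            feats = []
            for plane in planes:
                codes = local_binary_pattern(plane, p, r, method="uniform")
                hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2),
                                       density=True)
                feats.append(hist)
            return np.concatenate(feats)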