
    Mapping the emotional face. How individual face parts contribute to successful emotion recognition

    Wegrzyn M, Vogt M, Kireclioglu B, Schneider J, Kißler J. Mapping the emotional face. How individual face parts contribute to successful emotion recognition. PLOS ONE. 2017;12(5): e0177239.
    Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, at a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, making it possible to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (reliance on the mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the Facial Action Coding System. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion by delineating the mapping from facial features to psychological representation.
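
The tile-based analysis lends itself to a simple illustration. Below is a minimal Python sketch (not the authors' analysis code) of how a per-tile diagnostic value could be estimated, assuming each trial records which tiles were uncovered when the observer stopped and whether the label was correct; the 6x8 grid layout and all names are assumptions.

```python
import numpy as np

N_TILES = 48

def tile_diagnostic_values(trials):
    """trials: list of (revealed_tiles, correct) pairs, revealed_tiles a set of tile indices."""
    values = np.zeros(N_TILES)
    for t in range(N_TILES):
        seen = [correct for revealed, correct in trials if t in revealed]
        unseen = [correct for revealed, correct in trials if t not in revealed]
        if seen and unseen:
            # Diagnostic value: how much seeing this tile raises recognition accuracy.
            values[t] = np.mean(seen) - np.mean(unseen)
    return values.reshape(6, 8)  # assumed 6x8 tile grid, for visualisation only

# Toy usage with random data:
rng = np.random.default_rng(0)
toy_trials = [
    (set(rng.choice(N_TILES, size=10, replace=False)), bool(rng.random() < 0.6))
    for _ in range(200)
]
print(tile_diagnostic_values(toy_trials).round(2))
```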

    Face Recognition with Attention Mechanisms

    Face recognition has been widely used in people’s daily lives due to its contactless process and high accuracy. Existing works can be divided into two categories: global and local approaches. The mainstream global approaches usually extract features from whole faces. However, global faces tend to suffer from dramatic appearance changes under scenarios with large pose variations, heavy occlusions, and so on. On the other hand, since some local patches may remain similar, they can play an important role in such scenarios. Existing local approaches mainly rely on cropping local patches around facial landmarks and then extracting the corresponding local representations. However, facial landmark detection may be inaccurate or even fail, which limits their applications. To address this issue, attention mechanisms are applied to automatically locate discriminative facial parts while suppressing noisy parts. Following this motivation, several models are proposed: the Local multi-Scale Convolutional Neural Networks (LS-CNN), Hierarchical Pyramid Diverse Attention (HPDA) networks, Contrastive Quality-aware Attentions (CQA-Face), Diverse and Sparse Attentions (DSA-Face), and Attention Augmented Networks (AAN-Face).
    Firstly, a novel spatial attention module (local aggregation networks, LANet) is proposed to adaptively locate useful facial parts. Meanwhile, different facial parts may appear at different scales due to pose variations and expression changes. To solve this issue, LS-CNN is proposed to extract discriminative local information at different scales. Secondly, it is observed that some important facial parts may be neglected without proper guidance. Besides, hierarchical features from different layers, which contain rich low-level and high-level information, are not fully exploited. To overcome these two issues, HPDA is proposed. Specifically, a diverse learning scheme is proposed to enlarge the Euclidean distance between each pair of spatial attention maps, locating diverse facial parts, and hierarchical bilinear pooling is adopted to effectively combine features from different layers. Thirdly, despite the decent performance of HPDA, the Euclidean distance may not be flexible enough to control the distances between attention maps. Further, it is also important to assign different quality scores to different local patches, because facial parts carry information of varying importance, especially for faces with heavy occlusions, large pose variations, or quality changes. CQA-Face is therefore proposed, consisting mainly of contrastive attention learning and a quality-aware network: the former introduces a better distance function to enlarge the distances between attention maps, while the latter applies a graph convolutional network to learn the relations among different facial parts, assigning higher quality scores to important patches and lower scores to less useful ones. Fourthly, the attention subset problem may occur, where some attention maps are subsets of other attention maps. Consequently, the learned facial parts are not diverse enough to cover every facial detail, leading to inferior results. In our DSA-Face model, a new pairwise self-contrastive attention is proposed, which considers the complement of subset attention maps in the loss function to address this problem. Moreover, an attention sparsity loss is proposed to suppress the responses around noisy image regions, especially for masked faces. Lastly, in existing popular face datasets, some characteristics of facial images (e.g. frontal faces) are over-represented, while others (e.g. profile faces) are under-represented. In the AAN-Face model, attention erasing is proposed to simulate various occlusion levels, and an attention center loss is proposed to control the responses on each attention map, guiding it to focus on a consistent facial part. Our works have greatly improved the performance of cross-pose, cross-quality, cross-age, cross-modality, and masked face matching tasks.
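
As a rough illustration of two of the loss ideas described above (not the released code for any of the named models), the PyTorch sketch below implements an HPDA-style diversity term that enlarges pairwise Euclidean distances between spatial attention maps, plus a simple entropy-based sparsity term that penalises diffuse maps; the tensor shapes, the entropy formulation, and the loss weight are assumptions.

```python
import torch
import torch.nn.functional as F

def attention_diversity_loss(attn):
    """attn: (B, K, H, W) spatial attention maps from K local branches."""
    b, k, h, w = attn.shape
    flat = attn.reshape(b, k, -1)
    dist = torch.cdist(flat, flat, p=2)                 # (B, K, K) pairwise Euclidean distances
    mean_dist = dist.sum(dim=(1, 2)) / (k * (k - 1))    # diagonal entries are zero
    return -mean_dist.mean()                            # minimising this pushes the maps apart

def attention_sparsity_loss(attn):
    """Penalise diffuse maps: high entropy means responses spread over the whole image."""
    b, k, h, w = attn.shape
    p = F.softmax(attn.reshape(b, k, -1), dim=-1)
    entropy = -(p * (p + 1e-8).log()).sum(dim=-1)
    return entropy.mean()

# Toy usage on random attention maps:
maps = torch.rand(4, 8, 14, 14, requires_grad=True)
loss = attention_diversity_loss(maps) + 0.1 * attention_sparsity_loss(maps)
loss.backward()
```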

    A survey on mouth modeling and analysis for Sign Language recognition

    © 2015 IEEE. Around 70 million Deaf people worldwide use Sign Languages (SLs) as their native languages. At the same time, they have limited reading/writing skills in the spoken language. This puts them at a severe disadvantage in many contexts, including education, work, and use of computers and the Internet. Automatic Sign Language Recognition (ASLR) can support the Deaf in many ways, e.g. by enabling the development of systems for Human-Computer Interaction in SL and translation between sign and spoken language. Research in ASLR usually revolves around automatic understanding of manual signs. Recently, the ASLR research community has started to appreciate the importance of non-manuals, since they are related to the lexical meaning of a sign, the syntax, and the prosody. Non-manuals include body and head pose, movement of the eyebrows and the eyes, as well as blinks and squints. Arguably, the mouth is one of the most involved parts of the face in non-manuals. Mouth actions relevant to ASLR can be either mouthings, i.e. visual syllables articulated with the mouth while signing, or non-verbal mouth gestures. Both are very important in ASLR. In this paper, we present the first survey on mouth non-manuals in ASLR. We start by showing why mouth motion is important in SL and the relevant techniques that exist within ASLR. Since limited research has been conducted regarding automatic analysis of mouth motion in the context of ASLR, we proceed by surveying relevant techniques from the areas of automatic mouth expression and visual speech recognition which can be applied to the task. Finally, we conclude by presenting the challenges and potential of automatic analysis of mouth motion in the context of ASLR.

    Artificial Intelligence Tools for Facial Expression Analysis.

    Inner emotions show visibly upon the human face and are understood as a basic guide to an individual’s inner world. It is therefore possible to determine a person’s attitudes, and the effects of others’ behaviour on their deeper feelings, by examining facial expressions. In real-world applications, machines that interact with people need strong facial expression recognition. Such recognition holds advantages for varied applications in affective computing, advanced human-computer interaction, security, stress and depression analysis, robotic systems, and machine learning. This thesis starts by proposing a benchmark of dynamic versus static methods for facial Action Unit (AU) detection. An AU activation is a set of local facial muscle movements that occur in unison, constituting a natural facial expression event. Detecting AUs automatically can provide explicit benefits since it considers both static and dynamic facial features. For this research, AU occurrence detection was conducted by extracting both static and dynamic features, using hand-crafted as well as deep learning representations, from each static image of a video. This confirmed the superior ability of a pretrained model, which yielded a clear leap in performance. Next, temporal modelling was investigated to detect the underlying temporal variation phases in dynamic sequences, using supervised and unsupervised methods. During these processes, the importance of stacking dynamic features on top of static ones was discovered for encoding deep features that capture temporal information when combining the spatial and temporal schemes simultaneously. This study also found that fusing spatial and temporal features provides more long-term temporal pattern information. Moreover, we hypothesised that using an unsupervised method would enable the learning of invariant information from dynamic textures. Recently, fresh cutting-edge developments have been created by approaches based on Generative Adversarial Networks (GANs). In the second section of this thesis, we propose a model based on an unsupervised DCGAN for facial feature extraction and classification, to achieve the following: the creation of facial expression images under different arbitrary poses (frontal, multi-view, and in the wild), and the recognition of emotion categories and AUs, in an attempt to resolve the problem of recognising the seven static emotion classes in the wild. Thorough cross-database experimentation demonstrates that this approach can improve generalization. Additionally, we showed that the features learnt by the DCGAN are poorly suited to encoding facial expressions when observed under multiple views, or when trained from a limited number of positive examples. Finally, this research focuses on disentangling identity from expression for facial expression recognition. A novel technique was implemented for emotion recognition from a single monocular image. A large-scale dataset (Face vid) was created from facial image videos rich in variations and distribution of facial dynamics, appearance, identities, expressions, and 3D poses. This dataset was used to train a DCNN (ResNet) to regress the expression parameters of a 3D Morphable Model jointly with a back-end classifier.
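
The final stage described above (a ResNet regressing 3D Morphable Model expression parameters, followed by a back-end classifier) can be sketched roughly as follows; the parameter dimensionality, layer sizes, and class count are illustrative assumptions, not the thesis configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ExpressionRegressor(nn.Module):
    def __init__(self, n_expr_params=29, n_emotions=7):
        super().__init__()
        backbone = resnet18(weights=None)
        # Replace the classification head with a regression head for 3DMM expression parameters.
        backbone.fc = nn.Linear(backbone.fc.in_features, n_expr_params)
        self.backbone = backbone
        # Back-end classifier mapping the regressed parameters to emotion categories.
        self.classifier = nn.Sequential(
            nn.Linear(n_expr_params, 64),
            nn.ReLU(),
            nn.Linear(64, n_emotions),
        )

    def forward(self, images):
        expr = self.backbone(images)      # regressed expression parameters
        logits = self.classifier(expr)    # emotion logits predicted from the parameters
        return expr, logits

model = ExpressionRegressor()
expr, logits = model(torch.randn(2, 3, 224, 224))
print(expr.shape, logits.shape)           # torch.Size([2, 29]) torch.Size([2, 7])
```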

    Featural and configurational processes in the recognition of faces of different familiarity

    Previous research suggests that face recognition may involve both configurational and piecemeal (featural) processing. To explore the relationship between these processing modes, we examined the patterns of recognition impairment produced by blurring, inversion, and scrambling, both singly and in various combinations. Two tasks were used: recognition of unfamiliar faces (seen once before) and recognition of highly familiar faces (celebrities). The results provide further support for a configurational-featural distinction. Recognition performance remained well above chance if faces were blurred, scrambled, inverted, or simultaneously inverted and scrambled: each of these manipulations disrupts either configurational or piecemeal processing, leaving the other mode available as a route to recognition. However, blurred/scrambled and blurred/inverted faces were recognised at or near chance levels, presumably because both configurational and featural processing were disrupted. Similar patterns of effects were found for both familiar and unfamiliar faces, suggesting that the relationship between configurational and featural processing is qualitatively similar in both cases.
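
For concreteness, toy versions of the three stimulus manipulations (blurring, inversion, and tile scrambling) might look like the Python sketch below; the blur strength and the 4x4 scrambling grid are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur(face, sigma=8):
    """Low-pass filtering removes featural detail while sparing coarse configuration."""
    return gaussian_filter(face, sigma=sigma)

def invert(face):
    """Turning the face upside down disrupts configurational processing."""
    return face[::-1, :]

def scramble(face, grid=4, seed=0):
    """Shuffling tiles destroys configuration while preserving local features."""
    h, w = face.shape
    th, tw = h // grid, w // grid
    tiles = [face[r*th:(r+1)*th, c*tw:(c+1)*tw] for r in range(grid) for c in range(grid)]
    order = np.random.default_rng(seed).permutation(len(tiles))
    rows = [np.hstack([tiles[order[r*grid + c]] for c in range(grid)]) for r in range(grid)]
    return np.vstack(rows)

face = np.random.rand(128, 128)      # stand-in for a greyscale face image
combined = blur(scramble(face))      # combined manipulations drove recognition toward chance
```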