
    Decoding Faces: The Contribution of Self-Expressiveness Level and Mimicry Processes to Emotional Understanding

    Facial expressions provide valuable information in making judgments about internal emotional states. Evaluation of facial expressions can occur through mimicry processes via the mirror neuron system (MNS) pathway, where a decoder mimics a target’s facial expression and proprioceptive perception prompts emotion recognition. Female participants rated emotional facial expressions when mimicry was inhibited by immobilization of facial muscles and when mimicry was uncontrolled, and were evaluated for self-expressiveness level. A mixed ANOVA was conducted to determine how self-expressiveness level and manipulation of facial muscles impacted recognition accuracy for facial expressions. Main effects of self-expressiveness level and facial muscle manipulation were not found to be significant (p > .05), nor did these variables appear to interact (p > .05). The results of this study suggest that an individual’s self-expressiveness level and use of mimicry processes may not play a central role in emotion recognition.
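
    As a concrete illustration of the analysis reported here, the sketch below runs a two-way mixed ANOVA (between-subjects self-expressiveness, within-subjects mimicry manipulation) on simulated data with the pingouin library. The sample size, column names, and data are assumptions for illustration, not the study's materials.

```python
# Minimal sketch of a mixed ANOVA on simulated data (hypothetical values).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n = 40  # hypothetical number of participants

# Long format: one row per participant x mimicry condition.
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),
    "expressiveness": np.repeat(rng.choice(["low", "high"], n), 2),  # between
    "mimicry": np.tile(["inhibited", "uncontrolled"], n),            # within
    "accuracy": rng.uniform(0.5, 0.9, 2 * n),                        # DV
})

# Mixed ANOVA: main effects of each factor plus their interaction.
aov = pg.mixed_anova(data=df, dv="accuracy", within="mimicry",
                     subject="subject", between="expressiveness")
print(aov.round(3))  # F, p, and effect size for each term
```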

    Web-based visualisation of head pose and facial expressions changes: monitoring human activity using depth data

    Despite significant recent advances in the field of head pose estimation and facial expression recognition, raising the cognitive level when analysing human activity presents serious challenges to current concepts. Motivated by the need to generate comprehensible visual representations from different sets of data, we introduce a system capable of monitoring human activity through head pose and facial expression changes, utilising an affordable 3D sensing technology (Microsoft Kinect sensor). An approach built on discriminative random regression forests was selected in order to rapidly and accurately estimate head pose changes in unconstrained environments. To complete the secondary process of recognising four universal dominant facial expressions (happiness, anger, sadness and surprise), emotion recognition via facial expressions (ERFE) was adopted. A lightweight data exchange format (JavaScript Object Notation, JSON) is then employed to manipulate the data extracted from the two aforementioned settings. Such a mechanism can yield a platform for objective and effortless assessment of human activity within the context of serious gaming and human-computer interaction.
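
    To make the JSON exchange step concrete, here is a sketch of what a per-frame record combining the head-pose and expression estimates might look like before being pushed to the web visualisation client. The schema and field names are assumptions, not the paper's actual format.

```python
# Hypothetical per-frame payload for the web front end.
import json
import time

frame_record = {
    "timestamp": time.time(),
    "head_pose": {            # Euler angles (degrees) from the regression forest
        "yaw": 12.4,
        "pitch": -3.1,
        "roll": 0.8,
    },
    "expression": {           # scores for the four recognised expressions
        "happiness": 0.72,
        "anger": 0.05,
        "sadness": 0.11,
        "surprise": 0.12,
    },
}

payload = json.dumps(frame_record)  # lightweight, language-neutral exchange
print(payload)
```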

    Speech-based recognition of self-reported and observed emotion in a dimensional space

    The differences between self-reported and observed emotion have only marginally been investigated in the context of speech-based automatic emotion recognition. We address this issue by comparing self-reported emotion ratings to observed emotion ratings and examining how differences between these two types of ratings affect the development and performance of automatic emotion recognizers developed with these ratings. A dimensional approach to emotion modeling is adopted: the ratings are based on continuous arousal and valence scales. We describe the TNO-Gaming Corpus, which contains spontaneous vocal and facial expressions elicited via a multiplayer videogame and includes emotion annotations obtained via self-report and observation by outside observers. Comparisons show that there are discrepancies between self-reported and observed emotion ratings which are also reflected in the performance of the emotion recognizers developed. Using Support Vector Regression in combination with acoustic and textual features, recognizers of arousal and valence are developed that can predict points in a 2-dimensional arousal-valence space. The results of these recognizers show that self-reported emotion is much harder to recognize than observed emotion, and that averaging ratings from multiple observers improves performance.
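
    The regression setup described above can be sketched with scikit-learn: one Support Vector Regressor per affective dimension, jointly predicting points in the arousal-valence plane. The features, data, and hyperparameters below are placeholders, not the corpus pipeline itself.

```python
# Sketch: SVR mapping feature vectors to (arousal, valence) on toy data.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))         # stand-in acoustic/textual features
y = rng.uniform(-1, 1, size=(200, 2))  # columns: [arousal, valence]

# One RBF-kernel SVR per output dimension, with feature standardisation.
model = MultiOutputRegressor(make_pipeline(StandardScaler(), SVR(kernel="rbf")))
model.fit(X[:150], y[:150])

pred = model.predict(X[150:])          # points in the 2-D arousal-valence space
print(pred[:3].round(2))
```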

    Genetic algorithms reveal profound individual differences in emotion recognition.

    Emotional communication relies on a mutual understanding, between expresser and viewer, of facial configurations that broadcast specific emotions. However, we do not know whether people share a common understanding of how emotional states map onto facial expressions. This is because expressions exist in a high-dimensional space too large to explore in conventional experimental paradigms. Here, we address this by adapting genetic algorithms and combining them with photorealistic three-dimensional avatars to efficiently explore the high-dimensional expression space. A total of 336 people used these tools to generate facial expressions that represent happiness, fear, sadness, and anger. We found substantial variability in the expressions generated via our procedure, suggesting that different people associate different facial expressions with the same emotional state. We then examined whether variability in the facial expressions created could account for differences in performance on standard emotion recognition tasks by asking people to categorize different test expressions. We found that emotion categorization performance was explained by the extent to which test expressions matched the expressions generated by each individual. Our findings reveal the breadth of variability in people's representations of facial emotions, even among typical adult populations. This has profound implications for the interpretation of responses to emotional stimuli, which may reflect individual differences in the emotional category people attribute to a particular facial expression, rather than differences in the brain mechanisms that produce emotional responses.
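
    The core idea, a genetic algorithm searching a high-dimensional expression space under human judgements, can be illustrated with a toy loop in which a hidden target vector stands in for a participant's internal representation of an emotion. The dimensionality, operators, and parameters are assumptions for illustration only.

```python
# Toy GA: evolve expression parameter vectors toward a hidden "ideal".
import numpy as np

rng = np.random.default_rng(0)
DIM, POP, GENERATIONS = 20, 30, 50     # e.g. DIM avatar blendshape weights
target = rng.uniform(0, 1, DIM)        # stand-in for the participant's ideal

def fitness(pop):
    # In the real task, participant choices provide this signal.
    return -np.linalg.norm(pop - target, axis=1)

pop = rng.uniform(0, 1, (POP, DIM))
for _ in range(GENERATIONS):
    parents = pop[np.argsort(fitness(pop))[-POP // 2:]]   # keep the best half
    pairs = rng.integers(0, len(parents), (POP, 2))
    mask = rng.random((POP, DIM)) < 0.5                   # uniform crossover
    pop = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
    pop = np.clip(pop + rng.normal(0, 0.02, pop.shape), 0, 1)  # mutation

best = pop[np.argmax(fitness(pop))]
print(np.abs(best - target).max().round(2))  # distance shrinks over generations
```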

    The emotional valence of subliminal priming effects perception of facial expressions

    We investigated, in young healthy subjects, how the affective content of subliminally presented priming images and their specific visual attributes impacted conscious perception of facial expressions. The priming images were broadly categorised as aggressive, pleasant, or neutral and further subcategorised by the presence of a face and by the centricity (egocentric or allocentric vantage-point) of the image content. Subjects responded to the emotion portrayed in a pixelated target-face by indicating via key-press whether the expression was angry or neutral. Priming images containing a face, compared to those not containing a face, significantly impaired performance on neutral or angry target-face evaluation. Recognition of angry target-face expressions was selectively impaired by pleasant prime images which contained a face. For egocentric primes, recognition of neutral target-face expressions was significantly better than of angry expressions. Our results suggest that, first, the affective primacy hypothesis, which predicts that affective information can be accessed automatically, preceding conscious cognition, holds true in subliminal priming only when the priming image contains a face. Second, egocentric primes interfere with the perception of angry target-face expressions, suggesting that this vantage-point, directly relevant to the viewer, perhaps engages processes involved in action preparation which may weaken the priority of affect processing.

    Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected.

    Objectives: Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye-region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye-gaze. Methods: In a double-blind, within-subjects, randomized controlled experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye-region was assessed on both occasions while participants completed a static facial emotion recognition task using medium-intensity facial expressions. Results: Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT. This effect was marginally significant (p < .06). Consistent with a previous study using dynamic stimuli, OXT had no effect on eye-gaze patterns when viewing static emotional faces, and this was not related to recognition accuracy or face processing time. Conclusions: These findings suggest that OXT-induced enhanced facial emotion recognition is not necessarily mediated by an increase in attention to the eye-region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest that the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2016, 22, 1–11)

    Experience-based human perception of facial expressions in Barbary macaques (Macaca sylvanus)

    Background: Facial expressions convey key cues of human emotions, and may also be important for interspecies interactions. The universality hypothesis suggests that six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) should be expressed by similar facial expressions in close phylogenetic species such as humans and nonhuman primates. However, some facial expressions have been shown to differ in meaning between humans and nonhuman primates like macaques. This ambiguity in signalling emotion can lead to an increased risk of aggression and injuries for both humans and animals. This raises serious concerns for activities such as wildlife tourism, where humans closely interact with wild animals. Understanding what factors (i.e., experience and type of emotion) affect the ability to recognise the emotional state of nonhuman primates, based on their facial expressions, can enable us to test the validity of the universality hypothesis, as well as reduce the risk of aggression and potential injuries in wildlife tourism. Methods: The present study investigated whether different levels of experience with Barbary macaques, Macaca sylvanus, affect the ability to correctly assess different facial expressions related to aggressive, distressed, friendly or neutral states, using an online questionnaire. Participants’ level of experience was defined as either: (1) naïve: never worked with nonhuman primates and never or rarely encountered live Barbary macaques; (2) exposed: shown pictures of the different Barbary macaque facial expressions along with a description and the corresponding emotion prior to undertaking the questionnaire; (3) expert: worked with Barbary macaques for at least two months. Results: Experience with Barbary macaques was associated with better performance in judging their emotional state. Simple exposure to pictures of macaques’ facial expressions improved the ability of inexperienced participants to discriminate neutral and distressed faces, and a trend was found for aggressive faces. However, these participants, even when previously exposed to pictures, had difficulties in recognising aggressive, distressed and friendly faces above chance level. Discussion: These results do not support the universality hypothesis, as exposed and naïve participants had difficulties in correctly identifying aggressive, distressed and friendly faces, although exposure to facial expressions improved their correct recognition. In addition, the findings suggest that providing simple exposure to 2D pictures (for example, information signs explaining animals’ facial signalling in zoos or animal parks) is not a sufficient educational tool to reduce tourists’ misinterpretations of macaque emotion. Additional measures, such as keeping a safe distance between tourists and wild animals, as well as reinforcing learning via videos or supervised visits led by expert guides, could reduce such issues and improve both animal welfare and the tourist experience.

    The role of infants’ mother-directed gaze, maternal sensitivity, and emotion recognition in childhood callous unemotional behaviours

    While some children with callous unemotional (CU) behaviours show difficulty recognizing emotional expressions, the underlying developmental pathways are not well understood. Reduced infant attention to the caregiver's face and a lack of sensitive parenting have previously been associated with emerging CU features. The current study examined whether facial emotion recognition mediates the association between infants' mother-directed gaze, maternal sensitivity, and later CU behaviours. Participants were 206 full-term infants and their families from a prospective longitudinal study, the Durham Child Health and Development Study (DCHDS). Measures of infants' mother-directed gaze and maternal sensitivity were collected at 6 months, facial emotion recognition performance at 6 years, and CU behaviours at 7 years. A path analysis showed a significant effect of emotion recognition predicting CU behaviours (β = -0.275, S.E. = 0.084, p = 0.001). While the main effects of infants' mother-directed gaze and maternal sensitivity were not significant, their interaction significantly predicted CU behaviours (β = 0.194, S.E. = 0.081, p = 0.016), with region-of-significance analysis showing a significant negative relationship between infant gaze and later CU behaviours only for those with low maternal sensitivity. There were no indirect effects of infants' mother-directed gaze, maternal sensitivity, or their interaction via emotion recognition. Emotion recognition appears to act as an independent predictor of CU behaviours, rather than mediating the relationships of infants' mother-directed gaze and maternal sensitivity with later CU behaviours. This supports the idea of multiple risk factors for CU behaviours.
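
    A minimal sketch of the mediation logic tested above, using plain OLS regressions on simulated data rather than the study's full path model; all variable names and values are illustrative assumptions.

```python
# Two-step mediation check: predictors -> mediator, then mediator -> outcome.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 206  # matches the reported sample size; the values are simulated
df = pd.DataFrame({
    "gaze": rng.normal(size=n),         # infant mother-directed gaze (6 months)
    "sensitivity": rng.normal(size=n),  # maternal sensitivity (6 months)
    "emo_recog": rng.normal(size=n),    # emotion recognition (6 years)
    "cu": rng.normal(size=n),           # CU behaviours (7 years)
})

# Path a: do the predictors and their interaction explain the mediator?
path_a = smf.ols("emo_recog ~ gaze * sensitivity", data=df).fit()

# Paths b and c': does the mediator predict CU behaviours over and above
# the predictors and their interaction?
path_b = smf.ols("cu ~ emo_recog + gaze * sensitivity", data=df).fit()

print(path_a.params, path_b.params, sep="\n\n")
```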

    Impaired Recognition of Basic Emotions from Facial Expressions in Young People with Autism Spectrum Disorder: Assessing the Importance of Expression Intensity

    It has been proposed that impairments in emotion recognition in ASD are greater for more subtle expressions of emotion. We measured recognition of 6 basic facial expressions at 8 intensity levels in young people (6–16 years) with ASD (N = 63) and controls (N = 64) via an Internet platform. Participants with ASD were less accurate than controls at labelling expressions across intensity levels, although differences at very low intensity levels were not detected due to floor effects. Recognition accuracy did not correlate with parent-reported social functioning in either group. These findings provide further evidence for an impairment in recognition of basic emotion in ASD and do not support the idea that this impairment is limited solely to low-intensity expressions. Sarah Griffiths was supported by a University of Bristol Science PhD Scholarship. This study was supported in part by the Medical Research Council and the University of Bristol (MC_UU_12013/6).
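
    The intensity-level analysis can be illustrated by tabulating per-group accuracy at each of the 8 levels and flagging levels that sit near chance (the floor effect noted above). The data, accuracy curve, and chance rate (1/6 for six emotions) below are simulated assumptions.

```python
# Per-intensity accuracy table with a simple chance-level (floor) check.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
rows = []
for group, base in [("ASD", 0.55), ("control", 0.65)]:
    for intensity in range(1, 9):
        p = min(0.95, base * intensity / 8 + 1 / 6)   # toy accuracy curve
        for c in rng.binomial(1, p, size=60):         # 60 toy trials per cell
            rows.append({"group": group, "intensity": intensity, "correct": c})
df = pd.DataFrame(rows)

acc = df.groupby(["group", "intensity"])["correct"].mean().unstack()
print(acc.round(2))
# Levels where either group is within 5 points of chance suggest a floor.
print("near-chance levels:", list(acc.columns[(acc <= 1 / 6 + 0.05).any()]))
```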

    Audience mood estimation for the Waseda Anthropomorphic Saxophonist 5 (WAS-5) using cloud cognitive services

    In this paper, a system to recognize the emotions of the audience from facial expressions is proposed for the Waseda Anthropomorphic Saxophonist ver. 5. This system will be used to assess the mood of the audience and adapt the emotional connotation of the robot's musical performance. General models for emotion definition have been analyzed and Ekman's model has been chosen. Emotion recognition is performed via cloud computing to implement scalable solutions.
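
    As a sketch of the cloud-recognition step, the snippet below posts a captured audience frame to a hypothetical emotion-recognition endpoint and reads off the dominant emotion. The URL, authentication scheme, and response fields are placeholders, not any specific vendor's actual API.

```python
# Hedged sketch: send an audience frame to a cloud emotion service.
import requests

ENDPOINT = "https://example-cognitive-service/v1/emotion"  # placeholder URL
API_KEY = "YOUR_KEY_HERE"                                  # placeholder key

def audience_mood(image_path: str) -> str:
    """Return the dominant emotion label for one audience frame."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}",
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
            timeout=10,
        )
    resp.raise_for_status()
    scores = resp.json()["emotions"]    # assumed shape: {"happiness": 0.8, ...}
    return max(scores, key=scores.get)  # label with the highest score

# mood = audience_mood("audience_frame.jpg")  # drives the robot's performance
```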