
    The role of spatial frequency information for ERP components sensitive to faces and emotional facial expression

    To investigate the impact of spatial frequency on emotional facial expression analysis, ERPs were recorded in response to low spatial frequency (LSF), high spatial frequency (HSF), and unfiltered broad spatial frequency (BSF) faces with fearful or neutral expressions, houses, and chairs. In line with previous findings, BSF fearful facial expressions elicited a greater frontal positivity than BSF neutral facial expressions, starting at about 150 ms after stimulus onset. In contrast, this emotional expression effect was absent for HSF and LSF faces. Given that some brain regions involved in emotion processing, such as the amygdala and connected structures, are selectively tuned to LSF visual inputs, these data suggest that ERP effects of emotional facial expression do not directly reflect activity in these regions. It is argued that higher-order neocortical brain systems are involved in the generation of emotion-specific waveform modulations. The face-sensitive N170 component was affected neither by emotional facial expression nor by spatial frequency information.
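
    The LSF/HSF/BSF manipulation described above is commonly implemented by low- and high-pass filtering face images. The sketch below is a minimal, hedged illustration of that general idea using a Gaussian filter; the sigma value and the image are placeholders, not the cutoff frequencies or stimuli used in the study.

```python
# Hedged sketch: Gaussian low-/high-pass filtering of a face image.
# The cutoff (sigma) and the stand-in image are illustrative assumptions.
import numpy as np
from scipy import ndimage

def spatial_frequency_versions(image, sigma=6.0):
    """Return (low-SF, high-SF, broad-SF) versions of a 2-D grayscale image.

    Larger sigma keeps only lower spatial frequencies in the LSF image.
    The HSF image is the residual (original minus low-pass), recentred
    around the mean gray level for display.
    """
    image = image.astype(float)
    lsf = ndimage.gaussian_filter(image, sigma=sigma)   # low-pass
    hsf = image - lsf                                    # high-pass residual
    hsf_display = hsf + image.mean()                     # recenter for viewing
    bsf = image                                          # unfiltered broadband
    return lsf, hsf_display, bsf

if __name__ == "__main__":
    face = np.random.rand(256, 256) * 255.0  # stand-in for a face photograph
    lsf, hsf, bsf = spatial_frequency_versions(face)
    print(lsf.shape, hsf.shape, bsf.shape)
```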

    Holistic gaze strategy to categorize facial expression of varying intensities

    Using faces representing exaggerated emotional expressions, recent behavioural and eye-tracking studies have suggested a dominant role of individual facial features in transmitting diagnostic cues for decoding facial expressions. Considering that in everyday life we frequently view low-intensity expressive faces in which local facial cues are more ambiguous, we probably need to combine expressive cues from more than one facial feature to reliably decode naturalistic facial affects. In this study we applied a morphing technique to systematically vary the intensities of six basic facial expressions of emotion, and employed a self-paced expression categorization task to measure participants’ categorization performance and associated gaze patterns. The analysis of pooled data from all expressions showed that increasing expression intensity improved categorization accuracy, shortened reaction time, and reduced the number of fixations directed at faces. The proportion of fixations and viewing time directed at internal facial features (the eyes, nose, and mouth region), however, was not affected by varying levels of intensity. Further comparison between individual facial expressions revealed that although proportional gaze allocation at individual facial features was quantitatively modulated by the viewed expressions, the overall gaze distribution in face viewing was qualitatively similar across different facial expressions and different intensities. It seems that we adopt a holistic viewing strategy to extract expressive cues from all internal facial features in the processing of naturalistic facial expressions.
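
    The gaze measures reported above (proportion of fixations and viewing time on internal facial features) can be computed by intersecting fixations with regions of interest. The sketch below is a minimal illustration under assumed rectangular ROIs and an assumed fixation format; it is not the study's actual coordinate system or eye-tracking pipeline.

```python
# Hedged sketch: proportion of fixations and viewing time on internal facial
# features. ROI boxes and the fixation format are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float
    y: float
    duration_ms: float

# Rectangular ROIs as (x_min, y_min, x_max, y_max) in image coordinates.
ROIS = {
    "eyes":  (60, 70, 200, 120),
    "nose":  (100, 120, 160, 180),
    "mouth": (90, 180, 170, 230),
}

def in_roi(fix, box):
    x0, y0, x1, y1 = box
    return x0 <= fix.x <= x1 and y0 <= fix.y <= y1

def internal_feature_stats(fixations):
    """Return (proportion of fixations, proportion of viewing time) directed
    at the eyes/nose/mouth region."""
    if not fixations:
        return 0.0, 0.0
    on_internal = [f for f in fixations
                   if any(in_roi(f, box) for box in ROIS.values())]
    prop_fix = len(on_internal) / len(fixations)
    total_time = sum(f.duration_ms for f in fixations)
    prop_time = sum(f.duration_ms for f in on_internal) / total_time
    return prop_fix, prop_time

if __name__ == "__main__":
    trial = [Fixation(130, 95, 240), Fixation(120, 200, 180), Fixation(20, 20, 90)]
    print(internal_feature_stats(trial))
```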

    Motion-capture patterns of dynamic facial expressions in children and adolescents with and without ASD

    Research shows that neurotypical individuals struggle to interpret the emotional facial expressions of people with Autism Spectrum Disorder (ASD). The current study uses motion capture to objectively quantify differences between the movement patterns of emotional facial expressions of individuals with and without ASD. Participants volitionally mimicked emotional expressions while wearing facial markers. Recorded marker movement was grouped by expression valence and intensity. We used Growth Curve Analysis to test whether movement patterns were predictable by expression type and participant group. Results show significant interactions between expression type and group, and little effect of emotion valence on ASD expressions. Together, the results support perceptions that the expressions of individuals with ASD are different from, and more ambiguous than, those of neurotypical individuals.
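
    Growth Curve Analysis of this kind typically fits polynomial time terms to the movement trajectories in a mixed-effects model, with group and expression type as predictors. The sketch below is a minimal, hedged example on simulated data; the column names, simulated group difference, and model structure are assumptions for illustration, not the study's actual dataset or model specification.

```python
# Hedged sketch: a minimal Growth Curve Analysis, fitting orthogonal polynomial
# time terms to simulated marker-displacement curves with group as a predictor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
frames = np.arange(30)                                   # time samples within an expression
poly = np.polynomial.legendre.legvander(                 # orthogonal linear/quadratic terms
    np.linspace(-1, 1, len(frames)), 2)[:, 1:]

rows = []
for group in ("ASD", "NT"):
    for pid in range(10):
        base = rng.normal(0, 0.2)
        slope = 1.0 if group == "NT" else 0.6            # assumed group difference
        disp = (base + slope * poly[:, 0] + 0.3 * poly[:, 1]
                + rng.normal(0, 0.1, len(frames)))
        for t, d in zip(frames, disp):
            rows.append({"group": group, "pid": f"{group}{pid}",
                         "ot1": poly[t, 0], "ot2": poly[t, 1], "disp": d})
data = pd.DataFrame(rows)

# Mixed model: fixed effects of time terms and group, random intercepts by participant.
model = smf.mixedlm("disp ~ (ot1 + ot2) * group", data, groups=data["pid"]).fit()
print(model.summary())
```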

    Sex Differences in Emotion Recognition and Emotional Inferencing Following Severe Traumatic Brain Injury

    The primary objective of the current study was to determine whether men and women with traumatic brain injury (TBI) differ in their emotion recognition and emotional inferencing abilities. In addition to overall accuracy, we explored whether differences were contingent upon the target emotion for each task, or upon high- and low-intensity facial and vocal emotion expressions. A total of 160 participants (116 men) with severe TBI completed three tasks: facial emotion recognition (DANVA-Faces), vocal emotion recognition (DANVA-Voices), and emotional inferencing (Emotional Inference from Stories Test, EIST). Results showed that women with TBI were significantly more accurate in their recognition of vocal emotion expressions and also in emotional inferencing. Further analyses of task performance showed that women were significantly better than men at recognising fearful facial expressions and facial emotion expressions high in intensity. Women also displayed increased response accuracy for sad vocal expressions and low-intensity vocal emotion expressions. Analysis of the EIST task showed that women were more accurate than men at emotional inferencing in sad and fearful stories. A similar proportion of women and men with TBI were impaired (≥ 2 SDs relative to normative means) at facial emotion perception, χ2 = 1.45, p = 0.228, but a larger proportion of men were impaired at vocal emotion recognition, χ2 = 7.13, p = 0.008, and emotional inferencing, χ2 = 7.51, p = 0.006.
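
    The group comparisons of impairment rates reported above are chi-square tests of independence on sex-by-impairment contingency tables. The sketch below shows how such a test is computed with SciPy; the cell counts are made-up placeholders, since the abstract reports only the chi-square statistics and p-values, not the underlying frequencies.

```python
# Hedged sketch: chi-square test of independence comparing the proportion of
# men vs. women classified as impaired on a task. Counts are hypothetical.
from scipy.stats import chi2_contingency

# Rows: men, women; columns: impaired, not impaired (placeholder counts).
table = [[40, 76],
         [ 8, 36]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```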

    Faking Faces: Psychopathic Traits and Feigned Emotional Expressions

    The purpose of this study was to determine what effect psychopathic traits have on the ability to express both genuine and feigned emotional expressions, through a detailed analysis of facial characteristics of emotion. Despite the wide array of research on psychopathic traits and emotional dysfunction, most studies have focused on recognition rather than expression of emotion. Participants (n = 121) were assessed for psychopathic traits, randomly assigned to a feigned or genuine emotion condition, and asked to display each of the six core emotions (i.e., happiness, fear, anger, surprise, disgust, and sadness). Each face was then coded for the presence of facial musculature action units using a standardized coding system. Results indicated that those in the feigned group produced more authentic facial expressions than their genuine counterparts. Limited main effects were found relating psychopathy to overall facial expressions; however, interesting patterns of specific action units were noted. Specifically, those high in psychopathic traits engaged in more authentic and pronounced expressions of specific facial musculature movements for some emotional expressions (i.e., fear and disgust). Implications concerning methods of coding, emotion induction, and facial affective mimicry are discussed. Discipline: Psychology. Honours Faculty Mentor: Dr. Kristine Peace.
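
    Action-unit coding of this kind produces, for each posed face, a set of AUs judged present, which can then be tallied by condition and emotion. The sketch below is a small, hedged illustration: the coded records are invented placeholders, and the Duchenne marker (AU6 together with AU12) is shown only as one widely used authenticity cue for happiness, not necessarily the criterion used in this study.

```python
# Hedged sketch: tallying coded action units (AUs) per condition and emotion,
# and flagging the Duchenne combination (AU6 + AU12) for happiness.
# The coded data below are invented placeholders.
from collections import Counter

# Each record: (condition, displayed emotion, set of AUs coded as present).
coded_faces = [
    ("feigned", "happiness", {12, 25}),
    ("genuine", "happiness", {6, 12}),
    ("feigned", "fear",      {1, 2, 5}),
    ("genuine", "fear",      {1, 2, 4, 5, 20}),
]

def duchenne(aus):
    """AU6 (cheek raiser) together with AU12 (lip corner puller)."""
    return {6, 12} <= aus

au_counts = Counter()
for condition, emotion, aus in coded_faces:
    au_counts[(condition, emotion)] += len(aus)
    if emotion == "happiness":
        print(condition, "Duchenne marker present:", duchenne(aus))

print(au_counts)
```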

    Cultural-based visual expression: Emotional analysis of human face via Peking Opera Painted Faces (POPF)

    © 2015 The Author(s). Peking Opera, a branch of traditional Chinese culture and arts, features very distinct, colourful facial make-up for all actors in stage performance. This make-up is stylised into nonverbal symbolic semantics that combine to form the painted faces, which describe and symbolise the background, character, and emotional status of specific roles. A study of Peking Opera Painted Faces (POPF) was taken as an example of how information and meaning can be effectively expressed through changes in facial expression based on facial motion, in both natural and emotional aspects. The study found that POPF provides exaggerated features of facial motion through images, and that the symbolic semantics of POPF provide a high-level expression of human facial information. The study has presented and demonstrated a creative structure for the analysis and expression of information based on POPF to improve the understanding of human facial motion and emotion.

    On the connection between level of education and the neural circuitry of emotion perception

    Through education, a social group transmits accumulated knowledge, skills, customs, and values to its members. So far, to the best of our knowledge, the association between educational attainment and the neural correlates of emotion processing has been left unexplored. In a retrospective analysis of The Netherlands Study of Depression and Anxiety (NESDA) functional magnetic resonance imaging (fMRI) study, we compared two groups of fourteen healthy volunteers with intermediate and high educational attainment, matched for age and gender. The data concerned event-related fMRI of brain activation during perception of facial emotional expressions. The region of interest (ROI) analysis showed stronger right amygdala activation to facial expressions in participants with lower relative to higher educational attainment (HE). The psychophysiological interaction analysis revealed that participants with HE exhibited stronger right amygdala-right insula connectivity during perception of emotional and neutral facial expressions. This exploratory study suggests the relevance of educational attainment to the neural mechanisms of facial expression processing.

    FAMOS: a framework for investigating the use of face features to identify spontaneous emotions

    © 2017, Springer-Verlag London Ltd., part of Springer Nature. Emotion-based analysis has attracted considerable interest, particularly in areas such as forensics, medicine, music, psychology, and human-machine interfaces. Following this trend, facial analysis (either automatic or human-based) is the most commonly investigated approach, because this type of data can easily be collected and is well accepted in the literature as a metric for inferring emotional states. Despite this popularity, due to several constraints found in real-world scenarios (e.g. lighting, complex backgrounds, facial hair, and so on), automatically and accurately obtaining affective information from the face is very challenging. This work presents a framework that aims to analyse emotional experiences through spontaneous facial expressions. The method consists of a new four-dimensional model, called FAMOS, which describes emotional experiences in terms of appraisal, facial expressions, mood, and subjective experiences, using a semi-automatic facial expression analyser as ground truth for describing the facial actions. In addition, we present an experiment using a new protocol proposed to obtain spontaneous emotional reactions. The results suggest that the initial emotional state described by the participants was different from that described after exposure to the eliciting stimulus, showing that the stimuli used were capable of inducing the expected emotional states in most individuals. Moreover, our results indicate that spontaneous facial reactions to emotions differ considerably from prototypic expressions, especially in terms of expressiveness.

    Analysing the Direction of Emotional Influence in Nonverbal Dyadic Communication: A Facial-Expression Study

    Identifying the direction of emotional influence in a dyadic dialogue is of increasing interest in the psychological sciences, with applications in psychotherapy, the analysis of political interactions, and interpersonal conflict behavior. Facial expressions are widely described as being automatic and thus hard to overtly influence. As such, they are a perfect measure for better understanding unintentional behavioral cues about social-emotional cognitive processes. With this view, this study is concerned with the analysis of the direction of emotional influence in dyadic dialogue based on facial expressions only. We exploit computer vision capabilities along with causal inference theory for quantitative verification of hypotheses on the direction of emotional influence, i.e., causal effect relationships, in dyadic dialogues. We address two main issues. First, in a dyadic dialogue, emotional influence occurs over transient time intervals and with intensity and direction that vary over time. To this end, we propose a relevant interval selection approach that we use prior to causal inference to identify those transient intervals where causal inference should be applied. Second, we propose to use fine-grained facial expressions that are present when strong distinct facial emotions are not visible. To specify the direction of influence, we apply the concept of Granger causality to the time series of facial expressions over the selected relevant intervals. We tested our approach on new, experimentally obtained data. Based on the quantitative verification of hypotheses on the direction of emotional influence, we were able to show that the proposed approach is promising for revealing the causal effect pattern in various instructed interaction conditions.
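
    The core statistical step described above is a Granger causality test on facial expression time series within selected intervals. The sketch below is a minimal, hedged illustration using statsmodels on simulated intensity traces; the simulated coupling, lag order, and feature choice are assumptions, and the paper's interval selection and fine-grained expression features are not reproduced here.

```python
# Hedged sketch: testing whether person B's expression intensity Granger-causes
# person A's, using statsmodels. The simulated series and lag order are
# illustrative assumptions, not the study's actual data or settings.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 300
b = rng.normal(size=n)                       # e.g., smile intensity of person B
a = np.zeros(n)                              # person A partly follows B with a lag
for t in range(2, n):
    a[t] = 0.5 * a[t - 1] + 0.4 * b[t - 2] + rng.normal(scale=0.5)

# Column order matters: the test asks whether the 2nd column helps predict the 1st.
data = np.column_stack([a, b])
results = grangercausalitytests(data, maxlag=3)
for lag, res in results.items():
    f_stat, p_value = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {f_stat:.2f}, p = {p_value:.4f}")
```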