
    The Social Perception of Emotional Abilities: Expanding What We Know About Observer Ratings of Emotional Intelligence

    We examine the social perception of emotional intelligence (EI) through the use of observer ratings. Individuals frequently judge others’ emotional abilities in real-world settings, yet we know little about the properties of such ratings. This article examines the social perception of EI and expands the evidence to evaluate its reliability and cross-judge agreement, as well as its convergent, divergent, and predictive validity. Three studies use real-world colleagues as observers and data from 2,521 participants. Results indicate significant consensus across observers about targets’ EI, moderate but significant self–observer agreement, and modest but relatively consistent discriminant validity across the components of EI. Observer ratings significantly predicted interdependent task performance, even after controlling for numerous factors. Notably, predictive validity was greater for observer-rated than for self-rated or ability-tested EI. We discuss the minimal associations of observer ratings with ability-tested EI, study limitations, future directions, and practical implications.

    Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time

    Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements that support a near-optimal system of signaling and decoding. Although facial expressions are highly dynamic, little is known about the form and function of their temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously, to optimize categorization of the six classic emotions, or sequentially, to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of “biologically basic to socially specific” information over time. Early in the signaling dynamics, facial expressions systematically transmit a few biologically rooted face signals that support the categorization of a small number of elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication comprises six basic (i.e., psychologically irreducible) categories and instead suggesting four.

    Holistic processing, contact, and the other-race effect in face recognition

    Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring OREs in a set of face recognition tasks and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in holistic processing.

    Human Perception of Fear in Dogs Varies According to Experience with Dogs

    To investigate the role of experience in humans’ perception of emotion using canine visual signals, we asked adults with various levels of dog experience to interpret the emotions of dogs displayed in videos. The video stimuli had been pre-categorized by an expert panel of dog behavior professionals as showing examples of happy or fearful dog behavior. In a sample of 2,163 participants, the level of dog experience strongly predicted identification of fearful, but not of happy, emotional examples. The probability of selecting the “fearful” category to describe fearful examples increased with experience, ranging from .30 among those who had never lived with a dog to greater than .70 among dog professionals. In contrast, the probability of selecting the “happy” category to describe happy emotional examples varied little by experience, ranging from .90 to .93. In addition, the number of physical features of the dog that participants reported using for emotional interpretations increased with experience; in particular, more-experienced respondents were more likely to attend to the ears. Lastly, more-experienced respondents provided lower difficulty and higher accuracy self-ratings than less-experienced respondents when interpreting both happy and fearful emotional examples. The human perception of emotion in other humans has previously been shown to be sensitive to individual differences in social experience, and the results of the current study extend the notion of experience-dependent processes from the intraspecific to the interspecific domain.

    Exploring Emotion Representation to Support Dialogue in Police Training on Child Interviewing

    When interviewing children, police officers have to cope with a complex set of emotions from a vulnerable witness. Triggers for recognising those emotions, and ways of building rapport, are often the basis of learning exercises. However, current training brings together the full complexity of emotions during role-playing, which can be overwhelming and reduce appropriate learning focus. Interestingly, a serious game’s interface can provide valuable training not because it represents fully complex, multimedia interactions, but because it can restrict emotional complexity and increase focus during the interactions on key factors for emotion recognition. The focus of this paper is to report on a specific aspect explored during the development of a serious game that aims to address current police-training needs in child interviewing techniques, where the recognition of emotions plays an important role in understanding how to build rapport with children. The literature review reveals that emotion recognition through facial expressions can contribute significantly to the perceived quality of communication. For this study, an ‘emotions map’ was created and tested by 41 participants, to be used in the development of a targeted interface design that supports different levels of emotion recognition. The emotions identified were validated with 70% agreement across experts and non-experts, highlighting the innate role of emotion recognition. We discuss the role of emotions and of game-based systems in supporting their identification for work-based training. As part of the graphical development of the Child Interview Stimulator (CIS), we examined different levels of emotional recognition that can be used to support the in-game graphical representation of a child’s response during a police interview.

    Facial expression training optimises viewing strategy in children and adults

    This study investigated whether training-related improvements in facial expression categorization are facilitated by spontaneous changes in gaze behaviour in adults and nine-year-old children. Four sessions of a self-paced, free-viewing training task required participants to categorize happy, sad, and fear expressions of varying intensities. No instructions about eye movements were given. Eye movements were recorded in the first and fourth training sessions. New faces were introduced in session four to establish transfer effects of learning. Adults focused most on the eyes in all sessions, and increased expression categorization accuracy after training coincided with a strengthening of this eye bias in gaze allocation. In children, training-related behavioural improvements coincided with an overall shift in gaze focus towards the eyes (resulting in more adult-like gaze distributions) and towards the mouth for happy faces in the second fixation. Gaze distributions were not influenced by expression intensity or by the introduction of new faces. It is proposed that training enhanced the use of a uniform, predominantly eye-biased gaze strategy in children, optimising the extraction of cues relevant for discriminating between subtle facial expressions.

    On the Perception of Religious Group Membership from Faces

    BACKGROUND: The study of social categorization has largely been confined to examining groups distinguished by perceptually obvious cues. Yet many ecologically important group distinctions are less clear, permitting insights into the general processes involved in person perception. Although religious group membership is thought to be perceptually ambiguous, folk beliefs suggest that Mormons and non-Mormons can be categorized from their appearance. We tested whether Mormons could be distinguished from non-Mormons and investigated the basis for this effect to gain insight into how subtle perceptual cues can support complex social categorizations. METHODOLOGY/PRINCIPAL FINDINGS: Participants categorized Mormons’ and non-Mormons’ faces or facial features according to their group membership. Individuals could distinguish between the two groups significantly better than chance from full faces and from faces without hair, with the eyes and mouth covered, without the outer face shape, and inverted 180°, but not from isolated features (i.e., eyes, nose, or mouth). Perceivers’ estimations of their accuracy did not match their actual accuracy. Exploration of the remaining features showed that Mormons and non-Mormons differed significantly in perceived health, and that these perceptions were related to perceptions of skin quality, as demonstrated in a structural equation model representing the contributions of skin color and skin texture. Other judgments related to health (facial attractiveness, facial symmetry, and structural aspects related to body weight) did not differ between the two groups. Perceptions of health were also responsible for differences in perceived spirituality, explaining folk hypotheses that Mormons are distinct because they appear more spiritual than non-Mormons. CONCLUSIONS/SIGNIFICANCE: Subtle markers of group membership can influence how others are perceived and categorized. Perceptions of health from non-obvious and minimal cues distinguished individuals according to their religious group membership. These data illustrate how the non-conscious detection of very subtle differences in others’ appearances supports cognitively complex judgments such as social categorization.