
    When facial expressions do and do not signal minds: the role of face inversion, expression dynamism, and emotion type

    Recent research has linked facial expressions to mind perception. Specifically, Bowling and Banissy (2017) found that ambiguous doll-human morphs were judged as more likely to have a mind when smiling. Herein, we investigate three key potential boundary conditions of this “expression-to-mind” effect. First, we demonstrate that face inversion impairs the ability of happy expressions to signal mindful states in static faces. Second, we show that inversion does not disrupt this effect for dynamic displays of emotion. Finally, we demonstrate that not all emotions have equivalent effects: whereas happy faces generate more mind ascription than neutral faces, expressions of disgust actually generate less mind ascription than those of happiness.

    Facial Mimicry and Social Context Affect Smile Interpretation

    Theoretical accounts and extant research suggest that people use various sources of information, including sensorimotor simulation and social context, when judging emotional displays. However, the evidence on how these factors interact is limited. The present research tested whether social context information has a greater impact on perceivers’ smile judgments when mimicry is experimentally restricted. In Study 1, participants viewed images of affiliative smiles paired with verbal descriptions of situations associated with happiness or politeness. Half the participants could freely move their faces while rating the extent to which the smiles communicated affiliation, whereas for the other half mimicry was restricted via a pen-in-mouth procedure. As predicted, smiles were perceived as more affiliative when the social context was polite than when it was happy. Importantly, the effect of context information was significantly larger among participants who could not freely mimic the facial expressions. In Study 2, we replicated this finding using a different set of stimuli, manipulating context in a within-subjects design, and controlling for empathy and mood. Together, the findings demonstrate that mimicry importantly modulates the impact of social context information on smile perception.

    AI Hyperrealism: Why AI Faces Are Perceived as More Real Than Human Ones

    Recent evidence shows that AI-generated faces are now indistinguishable from human faces. However, algorithms are trained disproportionately on White faces, and thus White AI faces may appear especially realistic. In Experiment 1 (N = 124 adults), alongside our reanalysis of previously published data, we showed that White AI faces are judged as human more often than actual human faces, a phenomenon we term AI hyperrealism. Paradoxically, the people who made the most errors in this task were the most confident (a Dunning-Kruger effect). In Experiment 2 (N = 610 adults), we used face-space theory and participants’ qualitative reports to identify key facial attributes that distinguish AI from human faces but were misinterpreted by participants, leading to AI hyperrealism. The same attributes, however, permitted highly accurate classification via machine learning. These findings illustrate how psychological theory can inform our understanding of AI outputs and provide direction for debiasing AI algorithms, thereby promoting the ethical use of AI.

    Human and machine validation of 14 databases of dynamic facial expressions

    With a shift in interest toward dynamic expressions, numerous corpora of dynamic facial stimuli have been developed over the past two decades. The present research aimed to test existing sets of dynamic facial expressions (published between 2000 and 2015) in a cross-corpus validation effort. To this end, 14 dynamic databases were selected that featured facial expressions of the six basic emotions (anger, disgust, fear, happiness, sadness, surprise) in posed or spontaneous form. In Study 1, a subset of stimuli from each database (N = 162) was presented to human observers and machine analysis, yielding considerable variance in emotion recognition performance across the databases. Classification accuracy further varied with the perceived intensity and naturalness of the displays, with posed expressions being judged more accurately and as more intense, but less natural, than spontaneous ones. Study 2 aimed for a full validation of the 14 databases by subjecting the entire stimulus set (N = 3812) to machine analysis. A FACS-based Action Unit (AU) analysis revealed that facial AU configurations were more prototypical in posed than in spontaneous expressions. The prototypicality of an expression in turn predicted emotion classification accuracy, with higher performance observed for more prototypical facial behavior. Furthermore, technical features of each database (i.e., duration, face box size, head rotation, and motion) had a significant impact on recognition accuracy. Together, the findings suggest that existing databases vary in their ability to signal specific emotions, facing a trade-off between realism and ecological validity on the one hand, and expression uniformity and comparability on the other.

    Mind Perception of Robots Varies With Their Economic Versus Social Function

    While robots were traditionally built to achieve economic efficiency and financial profits, their roles are likely to change in the future, with the aim of providing social support and companionship. In this research, we examined whether a robot’s proposed function (social vs. economic) impacts judgments of mind and moral treatment. Studies 1a and 1b demonstrated that robots with a social function were perceived to possess a greater capacity for emotional experience, but not for cognition, compared to robots with an economic function or robots whose function was not mentioned explicitly. Study 2 replicated this finding and further showed that low economic value reduced ascriptions of cognitive capacity, whereas high social value increased ascriptions of emotional experience. In Study 3, robots with high social value were more likely to be afforded protection from harm, and this effect was related to levels of ascribed emotional experience. Together, the findings demonstrate a dissociation between function type (social vs. economic) and ascribed mind (emotion vs. cognition). In addition, the two types of function exert asymmetric influences on the moral treatment of robots. Theoretical and practical implications for the fields of social psychology and human-computer interaction are discussed.

    Dynamics Matter: Recognition of Reward, Affiliative, and Dominance Smiles From Dynamic vs. Static Displays

    Smiles are distinct and easily recognizable facial expressions, yet they differ markedly in their meanings. According to a recent theoretical account, smiles can be classified by the three fundamental social functions they serve: expressing positive affect and rewarding self and others (reward smiles), creating and maintaining social bonds (affiliative smiles), and negotiating social status (dominance smiles) (Niedenthal et al., 2010; Martin et al., 2017). While there is evidence for distinct morphological features of these smiles, their categorization in human faces has only begun to be investigated. Moreover, the factors influencing this process, such as facial mimicry or display mode, remain largely unknown. In the present study, we examine the recognition of reward, affiliative, and dominance smiles in static and dynamic portrayals, and explore how interfering with facial mimicry affects such classification. Participants (N = 190) were presented with either static or dynamic displays of the three smile types, while their ability to mimic was either free or restricted via a pen-in-mouth procedure. For each stimulus, they rated the extent to which the expression represented a reward, an affiliative, or a dominance smile. Higher-than-chance accuracy rates revealed that participants were generally able to differentiate between the three smile types. In line with our predictions, recognition performance was lower in the static than in the dynamic condition, but this difference was only significant for affiliative smiles. No significant effects of facial muscle restriction were observed, suggesting that the ability to mimic may not be necessary for distinguishing between the three functional smiles. Together, our findings support previous evidence on reward, affiliative, and dominance smiles by documenting their perceptual distinctiveness. They also replicate extant observations on the dynamic advantage in expression perception and suggest that this effect may be especially pronounced for ambiguous facial expressions, such as affiliative smiles.

    When memory is better for out-group faces: on negative emotions and gender roles

    Memory for in-group faces tends to be better than memory for out-group faces. Ackerman et al. (Psychological Science 17:836–840, 2006) found that this effect reverses when male faces display anger, supposedly due to the functional value of angry out-group faces in signaling intergroup threat. We explored the generalizability of this reversal. White participants viewed Black and White male or female faces displaying angry, fearful, or neutral expressions. Recognition accuracy for White male faces was better than for Black male faces when the faces were neutral, but this pattern reversed when the faces displayed anger or fear. For female targets, Black faces were generally better recognized than White faces; furthermore, female faces were better remembered when they displayed anger rather than fear, whereas male faces were better remembered when they displayed fear rather than anger. These findings are difficult to reconcile with a functional account and suggest (a) that the processing of male out-group faces is influenced by negative emotional expressions in general, and (b) that gender role expectations lead to differential remembering of male and female faces as a function of emotional expression.

    Visual attention mechanisms in happiness versus trustworthiness processing of facial expressions

    A happy facial expression makes a person look (more) trustworthy. Do perceptions of happiness and trustworthiness rely on the same face regions and visual attention processes? In an eye-tracking study, eye movements and fixations were recorded while participants judged the un/happiness or the un/trustworthiness of dynamic facial expressions in which the eyes and/or the mouth unfolded from neutral to happy or vice versa. A smiling mouth and happy eyes enhanced perceived happiness and trustworthiness similarly, with a greater contribution of the smile relative to the eyes. This comparable judgement output for happiness and trustworthiness was reached through shared as well as distinct attentional mechanisms: (a) entry times and (b) initial fixation thresholds for each face region were equivalent for both judgements, thereby revealing the same attentional orienting in happiness and trustworthiness processing. However, (c) denser and (d) longer fixations on the mouth region in the happiness task, and on the eye region in the trustworthiness task, demonstrated different selective attentional engagement. Relatedly, (e) mean fixation duration across face regions was longer in the trustworthiness task, thus showing increased attentional intensity or processing effort.