When facial expressions do and do not signal minds: the role of face inversion, expression dynamism, and emotion type
Recent research has linked facial expressions to mind perception. Specifically, Bowling and Banissy (2017) found that ambiguous doll-human morphs were judged as more likely to have a mind when smiling. Herein, we investigate three key potential boundary conditions of this 'expression-to-mind' effect. First, we demonstrate that face inversion impairs the ability of happy expressions to signal mindful states in static faces. Second, we show that inversion does not disrupt this effect for dynamic displays of emotion. Finally, we demonstrate that not all emotions have equivalent effects: whereas happy faces generate more mind ascription than neutral faces, expressions of disgust actually generate less mind ascription than those of happiness.
Facial Mimicry and Social Context Affect Smile Interpretation
Theoretical accounts and extant research suggest that people use various sources of information, including sensorimotor simulation and social context, while judging emotional displays. However, the evidence on how those factors can interplay is limited. The present research tested whether social context information has a greater impact on perceivers' smile judgments when mimicry is experimentally restricted. In Study 1, participants watched images of affiliative smiles presented with verbal descriptions of situations associated with happiness or politeness. Half the participants could freely move their faces while rating the extent to which the smiles communicated affiliation, whereas for the other half mimicry was restricted via a pen-in-mouth procedure. As predicted, smiles were perceived as more affiliative when the social context was polite than when it was happy. Importantly, the effect of context information was significantly larger among participants who could not freely mimic the facial expressions. In Study 2, we replicated this finding using a different set of stimuli, manipulating context in a within-subjects design, and controlling for empathy and mood. Together, the findings demonstrate that mimicry importantly modulates the impact of social context information on smile perception.
The reciprocal relationship between smiles and situational contexts
Smiles provide information about a social partner's affect and intentions during social interaction. Although smiles are always encountered within a specific situation, the influence of contextual information on smile evaluation has not been widely investigated. Moreover, little is known about the reciprocal effect of smiles on evaluations of their accompanying situations. In this research, we assessed how different smile types and situational contexts affected participants' social evaluations. In Study 1, 85 participants rated reward, affiliation, and dominance smiles embedded within either enjoyable, polite, or negative (unpleasant) situations. Context had a strong effect on smile ratings, such that smiles in enjoyable situations were rated as more genuine and joyful, as well as indicating less superiority, than those in negative situations. In Study 2, 200 participants evaluated the situations in which these smiles were embedded (rather than the smiles themselves). Although situations paired with reward (vs. affiliation) smiles tended to be rated more positively, this effect was absent for negative situations. Ultimately, the findings point toward a reciprocal relationship between smiles and contexts, whereby the face influences evaluations of the situation and vice versa.
Editorial: Dynamic Emotional Communication
Acting surprised: comparing perceptions of different dynamic deliberate expressions
People are accurate at classifying emotions from facial expressions but much poorer at determining whether such expressions are spontaneously felt or deliberately posed. We explored whether the method used by senders to produce an expression influences the decoder's ability to discriminate authenticity, drawing inspiration from two well-known acting techniques: the Stanislavski method (internal) and the Mimic method (external). We compared spontaneous surprise expressions in response to a jack-in-the-box (genuine condition) with posed displays of senders who focused either on their past affective state (internal condition) or on the outward expression (external condition). Although decoders performed better than chance at discriminating the authenticity of all expressions, their accuracy was lower in classifying external surprise compared to internal surprise. Decoders also found it harder to discriminate external surprise from spontaneous surprise and were less confident in their decisions, perceiving these to be similarly intense but less genuine-looking. The findings suggest that senders are capable of voluntarily producing genuine-looking expressions of emotion with minimal effort, especially by mimicking a genuine expression. Implications for research on emotion recognition are discussed.
Opportunities and challenges for using automatic human affect analysis in consumer research
The ability to automatically assess emotional responses via contact-free video recording taps into a rapidly growing market aimed at predicting consumer choices. If consumer attention and engagement are measurable in a reliable and accessible manner, relevant marketing decisions could be informed by objective data. Although significant advances have been made in automatic human affect analysis (AHAA), several practical and theoretical issues remain largely unresolved. These concern the lack of cross-system validation, a historical emphasis on posed over spontaneous expressions, and more fundamental issues regarding the weak association between subjective experience and facial expressions. To address these limitations, the present paper argues that extant commercial and free facial expression classifiers should be rigorously validated in cross-system research. Furthermore, academics and practitioners must better leverage fine-grained emotional response dynamics, with a stronger emphasis on understanding naturally occurring spontaneous expressions in naturalistic choice settings. We posit that applied consumer research may be better situated than decontextualized laboratory studies to examine facial behavior in socio-emotional contexts, and we highlight how AHAA can be successfully employed to this end. Facial activity should also be considered less as a single outcome variable and more as a starting point for further analyses. Implications of this approach and potential obstacles that need to be overcome are discussed within the context of consumer research.
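Taken purely as an illustrative sketch, and not code from the paper, the kind of cross-system check and fine-grained dynamic analysis argued for here might start from something like the following, assuming two hypothetical classifiers that each export frame-level joy scores for the same video clips:

```python
# Illustrative sketch of a cross-system check: correlate frame-level "joy"
# scores produced by two hypothetical facial-expression classifiers on the
# same clips. File and column names are assumptions, not any real tool's
# output format.
import pandas as pd

# Each CSV is assumed to hold one row per video frame:
# columns: clip_id, frame, joy  (joy in [0, 1])
system_a = pd.read_csv("classifier_a_frames.csv")
system_b = pd.read_csv("classifier_b_frames.csv")

# Align the two systems on clip and frame before comparing them.
merged = system_a.merge(system_b, on=["clip_id", "frame"], suffixes=("_a", "_b"))

# Per-clip agreement on the fine-grained time course, not just the clip mean.
per_clip_r = (
    merged.groupby("clip_id")
    .apply(lambda clip: clip["joy_a"].corr(clip["joy_b"]))
    .rename("frame_level_r")
)

print(per_clip_r.describe())           # distribution of agreement across clips
print("median r:", per_clip_r.median())
```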
AI Hyperrealism: Why AI Faces Are Perceived as More Real Than Human Ones
Recent evidence shows that AI-generated faces are now indistinguishable from human faces. However, algorithms are trained disproportionately on White faces, and thus White AI faces may appear especially realistic. In Experiment 1 (N = 124 adults), alongside our reanalysis of previously published data, we showed that White AI faces are judged as human more often than actual human faces, a phenomenon we term AI hyperrealism. Paradoxically, the people who made the most errors in this task were the most confident (a Dunning-Kruger effect). In Experiment 2 (N = 610 adults), we used face-space theory and participant qualitative reports to identify key facial attributes that distinguish AI from human faces but were misinterpreted by participants, leading to AI hyperrealism. However, the same attributes permitted high classification accuracy when used as input to machine learning. These findings illustrate how psychological theory can inform understanding of AI outputs and provide direction for debiasing AI algorithms, thereby promoting the ethical use of AI.
Acknowledgments: We thank Sophie J. Nightingale and Hany Farid for providing open access to their stimuli and data. Funding: This research is supported by the Australian Government through the Australian Research Council's Discovery Projects funding scheme (Project No. DP220101026), a TRANSFORM Career Development Fellowship to A. Dawel from the Australian National University College of Health and Medicine, and an Experimental Psychology Society Small Grant to C. A. M. Sutherland. The funders had no role in developing or conducting this research.
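As a purely illustrative sketch of the kind of attribute-based classification reported above, and not the study's own analysis, a cross-validated model predicting AI versus human faces from a few hypothetical attribute ratings might look like this:

```python
# Illustrative sketch only: a cross-validated classifier predicting whether a
# face is AI-generated from a small set of rated facial attributes. The data
# file and attribute names are hypothetical placeholders, not the measures
# used in the study.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

faces = pd.read_csv("face_attribute_ratings.csv")    # one row per face

# Hypothetical attribute ratings (e.g., averaged across raters).
attributes = ["proportionality", "familiarity", "memorability", "symmetry"]
X = faces[attributes]
y = faces["is_ai_generated"]                          # 1 = AI face, 0 = human face

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = cross_val_score(model, X, y, cv=10, scoring="accuracy")

print(f"10-fold CV accuracy: {accuracy.mean():.2f} (+/- {accuracy.std():.2f})")
```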
Blocking mimicry makes true and false smiles look the same
Recent research suggests that facial mimicry underlies accurate interpretation of subtle facial expressions. In three experiments, we manipulated mimicry and tested its role in judgments of the genuineness of true and false smiles. Experiment 1 used facial EMG to show that a new mouthguard technique for blocking mimicry modifies both the amount and the time course of facial reactions. In Experiments 2 and 3, participants rated true and false smiles either while wearing mouthguards or when allowed to freely mimic the smiles, with or without an additional distraction, namely holding a squeeze ball or wearing a finger-cuff heart rate monitor. Results showed that blocking mimicry compromised the decoding of true and false smiles such that they were judged as equally genuine. Together, the experiments highlight the role of facial mimicry in judging the subtle meanings of facial expressions.