
    Facial feedback affects valence judgments of dynamic and static emotional expressions

    The ability to judge others' emotions is required for the establishment and maintenance of smooth interactions in a community. Several lines of evidence suggest that the attribution of meaning to a face is influenced by the facial actions produced by an observer during the observation of that face. However, empirical studies testing causal relationships between observers' facial actions and emotion judgments have reported mixed findings. This issue was investigated by measuring emotion judgments in terms of valence and arousal dimensions while comparing dynamic vs. static presentations of facial expressions. We presented pictures and videos of facial expressions of anger and happiness. Participants (N = 36) were asked to differentiate the gender of the faces while activating either the corrugator supercilii muscle (brow lowering) or the zygomaticus major muscle (cheek raising). They were also asked to evaluate the internal states of the stimuli using the affect grid while maintaining the facial action until they finished responding. The cheek-raising condition increased the attributed valence scores compared with the brow-lowering condition. This effect of facial actions was observed for static as well as for dynamic facial expressions. These data suggest that facial feedback mechanisms contribute to judgments of the valence of emotional facial expressions.

    Naturalistic Emotion Decoding From Facial Action Sets

    Researchers have theoretically proposed that humans decode other individuals' emotions or elementary cognitive appraisals from particular sets of facial action units (AUs). However, only a few empirical studies have systematically tested the relationships between the decoding of emotions/appraisals and sets of AUs, and the results are mixed. Furthermore, the previous studies relied on facial expressions posed by actors, and no study used spontaneous and dynamic facial expressions in naturalistic settings. We investigated this issue using video recordings of facial expressions filmed unobtrusively in a real-life emotional situation, specifically loss of luggage at an airport. The AUs observed in the videos were annotated using the Facial Action Coding System. Male participants (n = 98) were asked to decode emotions (e.g., anger) and appraisals (e.g., suddenness) from the facial expressions. We explored the relationships between emotion/appraisal decoding and AUs using stepwise multiple regression analyses. The results revealed that all the rated emotions and appraisals were associated with sets of AUs. The profiles of the regression equations showed AUs both consistent and inconsistent with those in theoretical proposals. The results suggest that (1) the decoding of emotions and appraisals in facial expressions is implemented by the perception of sets of AUs, and (2) the profiles of such AU sets could be different from those in previous theories.
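    The stepwise regression approach described above can be sketched as forward selection of AU predictors against emotion ratings. The data, AU labels, effect sizes, and the R²-gain stopping rule below are illustrative assumptions for demonstration, not the study's actual data or criteria:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: per-video AU intensities (columns) and mean anger
    # ratings. The AU labels and coefficients are illustrative only.
    au_labels = ["AU4", "AU5", "AU7", "AU12", "AU23"]
    X = rng.random((60, 5))                                      # 60 clips, 5 AUs
    y = 2.0 * X[:, 0] + 1.5 * X[:, 4] + rng.normal(0, 0.3, 60)   # driven by AU4, AU23

    def r_squared(X_sub, y):
        """R^2 of an ordinary-least-squares fit with intercept."""
        A = np.column_stack([np.ones(len(y)), X_sub])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        return 1 - resid.var() / y.var()

    # Forward stepwise selection: repeatedly add the AU that most improves
    # R^2, stopping when the gain falls below a threshold.
    selected, remaining = [], list(range(X.shape[1]))
    current_r2 = 0.0
    while remaining:
        best_r2, best_j = max((r_squared(X[:, selected + [j]], y), j)
                              for j in remaining)
        if best_r2 - current_r2 < 0.01:
            break
        selected.append(best_j)
        remaining.remove(best_j)
        current_r2 = best_r2

    print("Selected AU set:", [au_labels[j] for j in selected])
    ```

    The resulting AU set is the "profile" of the regression equation for one rated emotion; repeating the procedure per emotion/appraisal yields the per-rating AU sets the abstract refers to.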

    Contextual effects on smile perception and recognition memory

    Most past research has focused on the role played by social context information in emotion classification, such as whether a display is perceived as belonging to one emotion category or another. The current study aims to investigate whether the effect of context extends to the interpretation of emotion displays, i.e., smiles that could be judged either as posed or as spontaneous readouts of underlying positive emotion. A between-subjects design (N = 93) was used to investigate the perception and recall of posed smiles presented together with a happy or a polite social context scenario. Results showed that smiles seen in a happy context were judged as more spontaneous than the same smiles presented in a polite context. Also, smiles were misremembered as having more of the physical attributes (i.e., the Duchenne marker) associated with spontaneous enjoyment when they appeared in the happy than in the polite context condition. Together, these findings indicate that social context information is routinely encoded during emotion perception, thereby shaping the interpretation and recognition memory of facial expressions.

    Perturbance: Unifying Research on Emotion, Intrusive Mentation and Other Psychological Phenomena with AI

    Intrusive mentation, rumination, obsession, and worry, referred to by Watkins as "repetitive thought" (RT), are of great interest to psychology. This is partly because every typical adult is subject to "RT". In particular, a critical feature of "RT" is also of transdiagnostic significance: for example, obsessive-compulsive disorder, insomnia, and addictions involve unconstructive "RT". We argue that "RT" cannot be understood in isolation from models of whole minds. Researchers must adopt the designer stance in the tradition of Artificial Intelligence, augmented by systematic conceptual analysis. This means developing, exploring, and implementing cognitive-affective architectures. Empirical research on "RT" needs to be driven by such theories, and theorizing about "RT" needs to consider such data. We draw attention to the H-CogAff theory of mind (motive processing, emotion, etc.) and a class of emotions it posits called perturbance (or tertiary emotions), as a foundation for the research programme we advocate. Briefly, a perturbance is a mental state in which motivators tend to disrupt executive processes. We argue that grief, limerence (the attraction phase of romantic love), and a host of other psychological phenomena involving "RT" should be conceptualized in terms of perturbance and related design-based constructs. We call for new taxonomies of "RT" in terms of information-processing architectures such as H-CogAff. We claim that general theories of emotion also need to recognize perturbance and other architecture-based aspects of emotion. Meanwhile, "cognitive" architectures need to consider the requirements of autonomous agency, leading to cognitive-affective architectures.

    Automatic Recognition of Facial Displays of Unfelt Emotions

    Humans modify their facial expressions in order to communicate their internal states and sometimes to mislead observers regarding their true emotional states. Evidence in experimental psychology shows that discriminative facial responses are short and subtle. This suggests that such behavior would be easier to distinguish when captured in high resolution at an increased frame rate. We propose SASE-FE, the first dataset of facial expressions that are either congruent or incongruent with underlying emotional states. We show that, overall, the problem of recognizing whether facial movements are expressions of authentic emotions or not can be successfully addressed by learning spatio-temporal representations of the data. For this purpose, we propose a method that aggregates features along fiducial trajectories in a deeply learnt space. Performance of the proposed model shows that, on average, it is easier to distinguish among genuine facial expressions of emotion than among unfelt facial expressions of emotion, and that certain emotion pairs, such as contempt and disgust, are more difficult to distinguish than the rest. Furthermore, the proposed methodology improves state-of-the-art results on the CK+ and OULU-CASIA datasets for video emotion recognition, and achieves competitive results when classifying facial action units on the BP4D dataset.

    Rift Racers - Effect of Balancing and Competition on Exertion, Enjoyment, and Motivation in an Immersive Exergame

    By immersing themselves in a game, users may exert themselves more than they would in everyday life. One important driving factor in games and many forms of exercise is competition, at once engaging socially in the activity and trying to outdo an opponent or oneself. Large differences in fitness levels make competition infeasible between some opponents, but exergaming can remedy this with the use of balancing via exertion. We developed a fully immersive virtual cycling race and balanced the competition between opponents by scaling their speed according to how close they were to their target heart rate. Incorporating a virtual reality headset and a vibrant 3D world, users were exhilarated and pushed themselves to high levels of exertion. Our results suggest that balanced games can reduce the performance gap between opponents, and might increase motivation and enjoyment for users with lower fitness levels. However, heart-rate balancing might be demotivating for very fit users.
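    The heart-rate balancing mechanic can be sketched as a simple scaling rule. The function name and the linear proximity mapping below are illustrative assumptions, not the authors' actual formula:

    ```python
    def balanced_speed(base_speed, heart_rate, target_hr):
        """Scale a rider's in-game speed by how close their heart rate is
        to their personal target: at target they get full speed, and the
        further they deviate, the slower they go. This linear mapping is
        an illustrative assumption, not the exergame's actual formula.
        """
        proximity = max(0.0, 1.0 - abs(heart_rate - target_hr) / target_hr)
        return base_speed * proximity

    # Because targets are individualised, a less-fit rider exerting near
    # their own (lower) target can keep pace with a fitter rider near theirs.
    print(balanced_speed(10.0, 160, 160))  # at target -> 10.0
    print(balanced_speed(10.0, 120, 160))  # below target -> 7.5
    ```

    The key design point is that each rider is scored against their own target heart rate rather than absolute speed, which is what narrows the performance gap between opponents of different fitness levels.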

    Audio-based narratives for the trenches of World War I : intertwining stories, places and interaction for an evocative experience

    We report in detail the co-design, setup, and evaluation of a technological intervention for a complex outdoor heritage site: a World War I fortified camp and trenches located in the natural setting of the Italian Alps. Sound was used as the only means of content delivery, as it was considered particularly effective in engaging visitors at an emotional level and had the potential to enhance the physical experience of being at an historical place. The implemented prototype is a visitor-aware, personalised, multi-point auditory narrative system that automatically plays sounds and stories depending on a combination of features such as physical location, visitor proximity, and visitor preferences. The curators created multiple narratives for the trail to capture the different voices of the War. The stories are all personal accounts (as opposed to objective and detached reporting of the facts); they were designed to trigger empathy and understanding while leaving the visitors free to interpret the content and the place on the basis of their own understanding and sensitivity. The result is an evocative embodied experience that does not describe the place in a traditional sense, but leaves its interpretation open. It takes visitors beyond the traditional view of heritage as a source of information toward a sensorial experience of feeling the past. A prototype was set up and tested with a group of volunteers, showing that a design that carefully combines content design, sound design, and tangible and embodied interaction can bring archaeological remains, with very little to see, back to life.

    Different Aspects of Emotional Awareness in Relation to Motor Cognition and Autism Traits

    Data Availability Statement: The datasets generated for this study are available on request to the corresponding author. Funding: Research was conducted as part of CH's PhD studies, supported by a studentship from the Northwood Trust. Acknowledgments: The authors would like to thank Margaret Jackson for providing testing space and assistance with the ethics procedures. Peer reviewed. Publisher PDF.

    Facial Expressions of Basic Emotions in Japanese Laypeople

    Demonstrating that Japanese facial expressions differ from Ekman's theory: the first report of Japanese facial expressions for the six basic emotions. Kyoto University press release, 2019-02-14. Facial expressions that show emotion play an important role in human social interactions. In previous theoretical studies, researchers have suggested that there are universal, prototypical facial expressions specific to basic emotions. However, the results of some empirical studies that tested the production of emotional facial expressions based on particular scenarios only partially supported the theoretical predictions. In addition, all of the previous studies were conducted in Western cultures. We investigated Japanese laypeople (n = 65) to provide further empirical evidence regarding the production of emotional facial expressions. The participants produced facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) in specific scenarios. Under the baseline condition, the participants imitated photographs of prototypical facial expressions. The produced facial expressions were automatically coded using FaceReader in terms of the intensities of emotions and facial action units. In contrast to the photograph condition, where all target emotions were shown clearly, the scenario condition elicited the target emotions clearly only for happy and surprised expressions. The photograph and scenario conditions yielded different profiles for the intensities of emotions and facial action units associated with all of the facial expressions tested. These results provide partial support for the theory of universal, prototypical facial expressions for basic emotions, but suggest the possibility that the theory may need to be modified based on empirical evidence.

    Non-verbal expression perception and mental state attribution by third parties

    Understanding the internal states of others is essential in social exchanges. The aim of this thesis is to provide a deeper understanding of the impact nonverbal cues have on the perception of internal states, namely emotions and associated cognitive appraisals. First, we explored naturalistic behaviour recorded by a hidden camera, described with technical coding systems: FACS for perceived facial muscle movements, and a coding scheme we defined ourselves for hand, arm, and torso movements. Participants were asked to judge the observed persons' internal states. These descriptions of behaviours and perceptive judgments allow us to make a link between internal state attributions and concrete physical expressions. Second, a novel method was used for expression exploration by transposing naturalistic behaviours to a virtual agent, Greta, which enables fine-tuning of expressions. In order to improve the synchronisation of behaviours, the Multimodal Sequential Expression model was created for the Greta agent. Complex expressions were manipulated, one cue at a time, and the expressions were judged by participants, who were asked to attribute internal states to the agent. Results support componential approaches to expression, in which particular cues are considered meaningful.