Recognizing Emotions in a Foreign Language
Expressions of basic emotions (joy, sadness, anger, fear, disgust) can be recognized pan-culturally from the face, and it is assumed that these emotions can likewise be recognized from a speaker's voice, regardless of an individual's culture or linguistic ability. Here, we compared how monolingual speakers of Argentine Spanish recognize basic emotions from pseudo-utterances ("nonsense speech") produced in their native language and in three foreign languages (English, German, Arabic). Results indicated that vocal expressions of basic emotions could be decoded in each language condition at accuracy levels exceeding chance, although Spanish listeners performed significantly better overall in their native language ("in-group advantage"). Our findings argue that the ability to understand vocally-expressed emotions in speech is partly independent of linguistic ability and involves universal principles, although this ability is also shaped by linguistic and cultural variables.
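As a concrete reading of "accuracy levels exceeding chance": with five response alternatives, chance is 20%, and a one-sided binomial test can check whether an observed hit rate is reliably above it. A minimal sketch; the trial and hit counts below are illustrative, not data from the study:

```python
# Minimal sketch: testing whether recognition accuracy exceeds chance.
# Assumes a five-alternative forced choice (joy, sadness, anger, fear,
# disgust), so chance = 1/5. Counts are illustrative, not the study's data.
from scipy.stats import binomtest

n_trials = 100   # hypothetical number of pseudo-utterance trials
n_correct = 41   # hypothetical number of correct categorizations
chance = 1 / 5   # five response alternatives

result = binomtest(n_correct, n_trials, p=chance, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.4g}")
```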
Emotional persistence in online chatting communities
How do users behave in online chatrooms, where they instantaneously read and write posts? We analyzed about 2.5 million posts covering various topics in Internet relay channels and found that user activity patterns follow known power-law and stretched exponential distributions, indicating that online chat activity is not different from other forms of communication. Analysing the emotional expressions (positive, negative, neutral) of users, we revealed a remarkable persistence both for individual users and channels. That is, despite their anonymity, users tend to follow social norms in repeated interactions in online chats, which results in a specific emotional "tone" of the channels. We provide an agent-based model of emotional interaction, which recovers qualitatively both the activity patterns in chatrooms and the emotional persistence of users and channels. While our assumptions about agents' emotional expressions are rooted in psychology, the model allows us to test different hypotheses regarding their emotional impact in online communication.
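To make the modeling idea concrete, here is a toy agent-based sketch, not the paper's actual model: each agent's expressed valence is pulled toward the channel's recent emotional tone, which is enough to produce a persistent channel "tone". All parameter values are assumptions for illustration:

```python
# Toy agent-based sketch of emotional persistence in a chat channel.
# NOT the paper's model; it only illustrates how coupling agents'
# expressed valence to the channel's recent tone yields persistence.
import random

def simulate(n_agents=50, n_posts=5000, coupling=0.3, noise=0.4):
    valence = [random.uniform(-1, 1) for _ in range(n_agents)]  # agent states
    tone = 0.0       # running average of recently expressed emotions
    history = []
    for _ in range(n_posts):
        i = random.randrange(n_agents)
        # expression: own valence, pulled toward channel tone, plus noise
        expressed = (1 - coupling) * valence[i] + coupling * tone \
                    + random.gauss(0, noise)
        expressed = max(-1.0, min(1.0, expressed))
        valence[i] = 0.9 * valence[i] + 0.1 * expressed  # slow state update
        tone = 0.95 * tone + 0.05 * expressed            # channel memory
        history.append(expressed)
    return history

posts = simulate()
pos = sum(v > 0.1 for v in posts) / len(posts)
print(f"fraction of positive posts: {pos:.2f}")
```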
Towards Real-Time Head Pose Estimation: Exploring Parameter-Reduced Residual Networks on In-the-wild Datasets
Head poses are a key component of human bodily communication and thus a decisive element of human-computer interaction. Real-time head pose estimation is crucial in the context of human-robot interaction or driver assistance systems. The most promising approaches for head pose estimation are based on Convolutional Neural Networks (CNNs). However, CNN models are often too complex to achieve real-time performance. To face this challenge, we explore a popular subgroup of CNNs, the Residual Networks (ResNets), and modify them in order to reduce their number of parameters. The ResNets are modified for different image sizes, including low-resolution images, and combined with a varying number of layers. They are trained on in-the-wild datasets to ensure real-world applicability. As a result, we demonstrate that the performance of the ResNets can be maintained while reducing the number of parameters. The modified ResNets achieve state-of-the-art accuracy and provide fast inference for real-time applicability.
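One way to realize such parameter reduction, sketched here under assumptions since the paper's exact architecture is not given above, is to shrink the number of residual blocks per stage while keeping a small regression head for yaw, pitch and roll:

```python
# Hedged sketch of parameter-reduced ResNets for head pose estimation:
# fewer BasicBlocks per stage than ResNet-18, with a 3-value pose head
# (yaw, pitch, roll). Layer counts and the head are illustrative
# assumptions, not the paper's exact architecture.
import torch
from torchvision.models.resnet import ResNet, BasicBlock

def n_params(model):
    return sum(p.numel() for p in model.parameters())

resnet18 = ResNet(BasicBlock, [2, 2, 2, 2], num_classes=3)  # standard depth
resnet10 = ResNet(BasicBlock, [1, 1, 1, 1], num_classes=3)  # one block/stage

x = torch.randn(1, 3, 64, 64)   # low-resolution input image
yaw_pitch_roll = resnet10(x)    # shape: (1, 3)

print(f"ResNet-18 params: {n_params(resnet18):,}")
print(f"reduced params:   {n_params(resnet10):,}")
```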
Towards a Human-Centered Approach for VRET Systems: Case Study for Acrophobia
This paper presents a human-centered methodology for designing and developing Virtual Reality Exposure Therapy (VRET) systems. By following the steps proposed by the methodology (User Analysis, Domain Analysis, Task Analysis and Representational Analysis), we developed a system for acrophobia therapy composed of 9 functional, interrelated modules responsible for patient, scene, audio and graphics management, as well as for physiological monitoring and event triggering. The therapist visualizes the patient's biophysical signals in real time and adapts the exposure scenario accordingly, lowering or increasing the level of exposure. There are 3 scenes in the game, depicting a ride by cable car, a ride by ski lift and a walk on foot in a mountain landscape. A reward system is implemented, and emotion dimension ratings are collected at predefined points in the scenario. They will be stored and later used for constructing an automatic machine learning module for emotion recognition and exposure adaptation.
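A minimal sketch of the kind of physiology-driven adaptation loop described above; the signal names, thresholds and one-step adjustment rule are hypothetical, not the system's actual logic:

```python
# Hypothetical exposure-adaptation rule driven by biophysical signals.
# Thresholds and the adjust() policy are illustrative assumptions.
def adjust_exposure(level, heart_rate, gsr, hr_high=110.0, gsr_high=8.0):
    """Return a new exposure level (0 = ground, 10 = summit)."""
    if heart_rate > hr_high or gsr > gsr_high:
        return max(0, level - 1)   # patient distressed: lower exposure
    return min(10, level + 1)      # patient tolerating: raise exposure

level = 3
for hr, gsr in [(95, 4.2), (118, 7.9), (102, 8.6)]:  # simulated readings
    level = adjust_exposure(level, hr, gsr)
    print(f"HR={hr}, GSR={gsr} -> exposure level {level}")
```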
From affect programs to dynamical discrete emotions
According to Discrete Emotion Theory, a number of emotions are distinguishable on the basis of neural, physiological, behavioral and expressive features. Critics of this view emphasize the variability and context-sensitivity of emotions. This paper discusses some of these criticisms and argues that they do not undermine the claim that emotions are discrete. It also presents some work in dynamical affective science and argues that conceiving of discrete emotions as self-organizing, softly assembled patterns of various processes accounts more naturally for the variability and context-sensitivity of emotions than traditional Discrete Emotion Theory does.
Darwin's Duchenne: Eye constriction during infant joy and distress
Darwin proposed that smiles with eye constriction (Duchenne smiles) index strong positive emotion in infants, while cry-faces with eye constriction index strong negative emotion. Research has supported Darwin's proposal with respect to smiling, but there has been little parallel research on cry-faces (open-mouth expressions with lateral lip stretching). To investigate the possibility that eye constriction indexes the affective intensity of positive and negative emotions, we first conducted the Face-to-Face/Still-Face (FFSF) procedure at 6 months. In the FFSF, three minutes of naturalistic infant-parent play interaction (which elicits more smiles than cry-faces) are followed by two minutes in which the parent holds an unresponsive still-face (which elicits more cry-faces than smiles). Consistent with Darwin's proposal, eye constriction was associated with stronger smiling and with stronger cry-faces. In addition, the proportion of smiles with eye constriction was higher during the positive-emotion-eliciting play episode than during the still-face. In parallel, the proportion of cry-faces with eye constriction was higher during the negative-emotion-eliciting still-face than during play. These results are consonant with the hypothesis that eye constriction indexes the affective intensity of both positive and negative facial configurations. A preponderance of eye constriction during cry-faces was observed in a second elicitor of intense negative emotion, vaccination injections, at both 6 and 12 months of age. The results support the existence of a Duchenne distress expression that parallels the better-known Duchenne smile. This suggests that eye constriction (the Duchenne marker) has a systematic association with early facial expressions of intense negative and positive emotion.
Changes in Health Perceptions after Exposure to Human Suffering: Using Discrete Emotions to Understand Underlying Processes
Background: The aim of this study was to examine whether exposure to human suffering is associated with negative changes in perceptions about personal health. We further examined the relation of possible health perception changes to changes in five discrete emotions (i.e., fear, guilt, hostility/anger, sadness, and joviality), as a guide to understanding the processes underlying health perception changes, given that each emotion conveys information regarding triggering conditions. Methodology/Findings: An experimental group (N = 47) was exposed to images of human affliction, whereas a control group (N = 47) was exposed to relaxing images. Participants in the experimental group reported more health anxiety and health value, as well as lower health-related optimism and internal health locus of control, compared to participants exposed to relaxing images. They also reported more fear, guilt, hostility and sadness, as well as less joviality. Changes in each health perception were related to changes in particular emotions. Conclusion: These findings imply that health perceptions are shaped in a constant dialogue with representations of the broader world. Furthermore, it seems that the core of health perception changes lies in the acceptance that personal well-being is subject to several potential threats, and that people cannot fully control many of the factors that determine their own well-being.
Subjective and objective measures
One of the greatest challenges in the study of emotions and emotional states is their measurement. The techniques used to measure emotions depend essentially on the authors' definition of the concept of emotion. Currently, two types of measures are used: subjective and objective. While subjective measures focus on assessing the conscious recognition of one's own emotions, objective measures allow researchers to quantify and assess both conscious and unconscious emotional processes. In this sense, when the objective is to evaluate the emotional experience from the subjective point of view of an individual in relation to a given event, subjective measures such as self-report should be used. When the objective is instead to evaluate the emotional experience at the most unconscious level of processing, such as the physiological response, objective measures should be used. There are no better or worse measures, only measures that allow access to the same phenomenon from different points of view. The chapter's main objective is to survey the measures of emotions and emotional states that are most relevant in the current scientific panorama.
Facial expression training optimises viewing strategy in children and adults
This study investigated whether training-related improvements in facial expression categorization are facilitated by spontaneous changes in gaze behaviour in adults and nine-year-old children. Four sessions of a self-paced, free-viewing training task required participants to categorize happy, sad and fear expressions with varying intensities. No instructions about eye movements were given. Eye movements were recorded in the first and fourth training sessions. New faces were introduced in session four to establish transfer effects of learning. Adults focused most on the eyes in all sessions, and the training-related increase in expression categorization accuracy coincided with a strengthening of this eye-bias in gaze allocation. In children, training-related behavioural improvements coincided with an overall shift in gaze focus towards the eyes (resulting in more adult-like gaze distributions) and towards the mouth for happy faces in the second fixation. Gaze distributions were not influenced by expression intensity or by the introduction of new faces. It is proposed that training enhanced the use of a uniform, predominantly eyes-biased gaze strategy in children in order to optimise extraction of the cues relevant for discriminating between subtle facial expressions.
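The reported eye-bias is the kind of result an area-of-interest (AOI) analysis yields: the share of fixation time falling in an eyes versus mouth region. A minimal sketch with made-up AOI rectangles and fixation data:

```python
# Sketch of an area-of-interest (AOI) analysis of gaze allocation.
# AOI rectangles and fixations are illustrative values, not study data.
AOIS = {
    "eyes":  (100, 80, 300, 150),   # (x_min, y_min, x_max, y_max)
    "mouth": (150, 220, 250, 280),
}

def aoi_of(x, y):
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"

# (x, y, duration_ms) fixations for one trial
fixations = [(180, 110, 240), (210, 130, 310), (200, 250, 150), (60, 40, 90)]

total = sum(d for _, _, d in fixations)
for region in ("eyes", "mouth", "other"):
    t = sum(d for x, y, d in fixations if aoi_of(x, y) == region)
    print(f"{region}: {t / total:.2f} of fixation time")
```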