Food-induced Emotional Resonance Improves Emotion Recognition
The effect of food substances on emotional states has been widely investigated, showing, for example, that eating chocolate can reduce negative mood. Here we show, for the first time, that consuming specific food substances not only induces particular emotional states but, more importantly, facilitates recognition of the corresponding emotional facial expressions in others. Participants performed an emotion recognition task before and after eating either a piece of chocolate or a small amount of fish sauce, which we expected to induce happiness or disgust, respectively. Our results showed that being in a specific emotional state improves recognition of the corresponding emotional facial expression: eating chocolate improved recognition of happy faces, while disgusted expressions were more readily recognized after eating fish sauce. In line with the embodied account of emotion understanding, we suggest that people are better at inferring the emotional state of others when their own emotional state resonates with the observed one.
Data collection in multimodal language and communication research: A flexible decision framework
The contemporary study of human language and communication has expanded beyond its traditional focus on spoken and written forms to incorporate gestures, facial expressions, and sign languages. This shift has been accompanied by methodological advancements that extend beyond classical tools such as tape recorders or video cameras to include motion-tracking systems, depth cameras, and multimodal data fusion techniques. While these tools enable richer empirical insights, they also introduce significant conceptual and practical challenges, particularly for researchers new to multimodal data collection. This paper provides a structured exploration of the methodological workflow essential to multimodal language and communication research. We present a flexible decision-making framework that guides researchers through key considerations, from data selection and its alignment with research questions to data collection methods, technical requirements, and data management, including ethical considerations and data sharing. We also address critical factors such as equipment choice, data synchronization, and ethical concerns (e.g., privacy and data protection), while illustrating these processes with examples from different research contexts (e.g., lab-based experiments, large-scale annotated corpora, and field studies, including those with non-human primates). Rather than advocating a one-size-fits-all approach, our discussion emphasizes key decision points, trade-offs, and real-world examples to help researchers navigate the complexities of multimodal data collection. By integrating perspectives from different disciplines, our flexible decision-making framework is intended as a practical tool that helps newcomers address common conceptual and methodological challenges in the rapidly developing area of multimodal data collection.
Dysfunctions in brain networks supporting empathy: An fMRI study in adults with autism spectrum disorders
The present study aimed to identify dysfunctions in brain networks that may underlie disturbed empathic behavior in autism spectrum disorders (ASD). During functional magnetic resonance imaging, subjects were asked to identify the emotional state observed in a facial stimulus (other-task) or to evaluate their own emotional response to it (self-task). Behaviorally, ASD subjects performed as well as the control group during the other-task, but showed less emotionally congruent responses in the self-task. Activations in brain regions related to theory of mind were observed in both groups. Activations of the medial prefrontal cortex (MPFC) were located in dorsal subregions in ASD subjects and in ventral areas in control subjects. During the self-task, ASD subjects activated an additional network of frontal and inferior temporal areas. Frontal areas previously associated with the human mirror system were activated in both tasks in control subjects, while ASD subjects recruited these areas during the self-task only. Activations in the ventral MPFC may provide the basis for one’s “emotional bond” with other persons’ emotions. Such atypical patterns of activation may underlie disturbed empathy in individuals with ASD. Subjects with ASD may use an atypical cognitive strategy to gain access to their own emotional state in response to other people’s emotions.
Advancing our understanding of the neurobiology of anorexia nervosa: translation into treatment
A roadmap for technological innovation in multimodal communication research
Multimodal communication research focuses on how different means of signalling coordinate to communicate effectively. This line of research has traditionally been influenced by fields such as cognitive science and neuroscience, human-computer interaction, and linguistics. With new technologies becoming available in fields such as natural language processing and computer vision, the field can increasingly avail itself of new ways of analyzing and understanding multimodal communication. As a result, there is a general hope that multimodal research may be at the “precipice of greatness” due to technological advances in computer science and the resulting extended empirical coverage. However, for this to come about, there must be sufficient guidance on the key (theoretical) needs of innovation in the field of multimodal communication. Absent such guidance, the research focus of computer scientists might increasingly diverge from crucial issues in multimodal communication. With this paper, we want to further promote interaction between these fields, which may enormously benefit both communities. The multimodal research community (represented here by a consortium of researchers from the Visual Communication [ViCom] Priority Programme) can engage in this innovation by clearly stating which technological tools are needed to make progress in the field of multimodal communication. In this article, we try to facilitate the establishment of a much-needed common ground on feasible expectations (e.g., in terms of terminology and the measures needed to train machine learning algorithms) and to critically reflect on possibly idle hopes for technical advances, informed by recent successes and challenges in computer science, social signal processing, and related domains.
Prefrontal Cortex Glutamate Correlates with Mental Perspective-Taking
Background: Dysfunctions in theory of mind and empathic abilities have been suggested as core symptoms in major psychiatric disorders, including schizophrenia and autism. Since self-monitoring, perspective taking, and empathy have been linked to prefrontal cortex (PFC) and anterior cingulate cortex (ACC) function, neurotransmitter variations in these areas may account for normal and pathological variations in these functions. Converging evidence indicates an essential role of glutamatergic neurotransmission in psychiatric diseases with pronounced deficits in empathy. However, the role of the glutamate system in different dimensions of empathy has not been investigated so far. Methodology/Principal Findings: Absolute concentrations of cerebral glutamate in the ACC, left dorsolateral PFC, and left hippocampus were determined by 3-tesla proton magnetic resonance spectroscopy (1H-MRS) in 17 healthy individuals. Three dimensions of empathy were estimated with a self-rating questionnaire, the Interpersonal Reactivity Index (IRI). Linear regression analysis showed that dorsolateral PFC glutamate concentration was predicted by the IRI factor “perspective taking” (T = −2.710, p = 0.018; Bonferroni-adjusted alpha level of 0.017) but not by “empathic concern” or “personal distress”. No significant relationship between IRI subscores and glutamate levels in the ACC or left hippocampus was detected. Conclusions/Significance: This is the first study to investigate the role of the glutamate system in dimensions of theory of mind and empathy. The results are in line with recent concepts that executive top-down control of behavior is mediated by the prefrontal cortex.
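As a minimal sketch of the analysis logic this abstract describes, the following regresses a glutamate measure on the three IRI subscales and applies a Bonferroni-adjusted alpha of 0.05/3 ≈ 0.017. The data are simulated and all variable names are hypothetical placeholders; this is not the study's actual model or data.

```python
# Sketch: multiple regression of a glutamate measure on IRI subscales,
# with Bonferroni correction across the three predictors.
# Data are simulated; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 17  # sample size reported in the abstract
df = pd.DataFrame({
    "perspective_taking": rng.normal(18, 4, n),
    "empathic_concern":   rng.normal(20, 3, n),
    "personal_distress":  rng.normal(12, 4, n),
})
# Simulated outcome loosely tied to perspective taking, for illustration only.
df["dlpfc_glutamate"] = 8.0 - 0.1 * df["perspective_taking"] + rng.normal(0, 0.5, n)

model = smf.ols(
    "dlpfc_glutamate ~ perspective_taking + empathic_concern + personal_distress",
    data=df,
).fit()

alpha = 0.05 / 3  # Bonferroni-adjusted threshold, as in the abstract
for term in ["perspective_taking", "empathic_concern", "personal_distress"]:
    print(f"{term}: t = {model.tvalues[term]:.3f}, "
          f"p = {model.pvalues[term]:.3f}, "
          f"significant at {alpha:.3f}: {model.pvalues[term] < alpha}")
```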
Yawn Contagion and Empathy in Homo sapiens
The ability to share others' emotions, or empathy, is crucial for complex social interactions. Clinical, psychological, and neurobiological clues suggest a link between yawn contagion and empathy in humans (Homo sapiens). However, no behavioral evidence has been provided so far. We tested the effect of different variables (e.g., country of origin, sex, yawn characteristics) on yawn contagion by running mixed models on observational data collected over one year on adult (>16 years old) human subjects. Only social bonding predicted the occurrence, frequency, and latency of yawn contagion. As with other measures of empathy, the rate of contagion was greatest in response to kin, then friends, then acquaintances, and lastly strangers. Related individuals (r ≥ 0.25) showed the greatest contagion, in terms of both occurrence of yawning and frequency of yawns. Strangers and acquaintances showed a longer delay in the yawn response (latency) compared with friends and kin. This outcome suggests that the magnitude of neuronal activation related to yawn contagion can differ as a function of subject familiarity. In conclusion, our results demonstrate that yawn contagion is primarily driven by emotional closeness between individuals and not by other variables, such as gender and nationality.
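To give a rough sense of the mixed-model approach mentioned in this abstract, here is a hypothetical sketch using statsmodels' MixedLM, with contagion latency as the outcome, social bond and sex as fixed effects, and a random intercept per responder. The data, column names, and effect sizes are invented; the study's actual model structure is not reproduced.

```python
# Sketch of a mixed model for yawn-contagion latency.
# Repeated observations per responder motivate the random intercept.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
bonds = ["stranger", "acquaintance", "friend", "kin"]
rows = []
for subject in range(40):               # responders, each observed repeatedly
    subject_offset = rng.normal(0, 10)   # per-responder random intercept
    for _ in range(10):
        bond = rng.choice(bonds)
        base = {"stranger": 90, "acquaintance": 80, "friend": 55, "kin": 45}[bond]
        rows.append({
            "subject": subject,
            "bond": bond,
            "sex": rng.choice(["F", "M"]),
            "latency_s": base + subject_offset + rng.normal(0, 15),
        })
df = pd.DataFrame(rows)

# Fixed effects: social bond (reference = stranger) and sex;
# random intercept grouped by responder.
model = smf.mixedlm("latency_s ~ C(bond, Treatment('stranger')) + sex",
                    data=df, groups=df["subject"]).fit()
print(model.summary())
```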
fNIRS reproducibility varies with data quality, analysis pipelines, and researcher experience
As data analysis pipelines grow more complex in brain imaging research, understanding how methodological choices affect results is essential for ensuring reproducibility and transparency. This is especially relevant for functional near-infrared spectroscopy (fNIRS), a rapidly growing technique for assessing brain function in naturalistic settings and across the lifespan, yet one that still lacks standardized analysis approaches. In the fNIRS Reproducibility Study Hub (FRESH) initiative, we asked 38 research teams worldwide to independently analyze the same two fNIRS datasets. Despite using different pipelines, nearly 80% of teams agreed on group-level results, particularly when hypotheses were strongly supported by the literature. Teams with higher self-reported analysis confidence, which correlated with years of fNIRS experience, showed greater agreement. At the individual level, agreement was lower but improved with better data quality. The main sources of variability were how poor-quality data were handled, how responses were modeled, and how statistical analyses were conducted. These findings suggest that while flexible analytical tools are valuable, clearer methodological and reporting standards could greatly enhance reproducibility. By identifying key drivers of variability, this study highlights current challenges and offers direction for improving transparency and reliability in fNIRS research.
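To make the abstract's point about pipeline choices concrete, the sketch below simulates a single HbO channel and walks through two of the decision points the study identifies as drivers of variability: how the signal is filtered and how the response is modeled (here, a single-regressor GLM with a canonical HRF). Every signal, cutoff, and parameter is an assumption for illustration, not a recommended standard or the study's pipeline.

```python
# Sketch of two common fNIRS pipeline choices: band-pass filtering an
# HbO time series and fitting a GLM with a canonical HRF regressor.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import gamma

fs = 10.0                              # assumed sampling rate (Hz)
t = np.arange(0, 300, 1 / fs)          # 5-minute simulated recording
rng = np.random.default_rng(2)

# Simplified double-gamma HRF and a 20 s-on / 40 s-off block design.
ht = np.arange(0, 30, 1 / fs)
hrf = gamma.pdf(ht, 6) - gamma.pdf(ht, 16) / 6
boxcar = ((t % 60) < 20).astype(float)
design = np.convolve(boxcar, hrf)[: t.size]
design /= design.max()

# Simulated HbO channel: slow drift + noise + a known task response.
hbo = 0.002 * t + rng.normal(0, 0.05, t.size) + 0.3 * design

# One common (but by no means universal) choice: 0.01-0.5 Hz band-pass.
b, a = butter(3, [0.01, 0.5], btype="band", fs=fs)
hbo_filt = filtfilt(b, a, hbo)

# GLM: least-squares beta for the task regressor (plus an intercept).
X = np.column_stack([design, np.ones_like(design)])
beta, *_ = np.linalg.lstsq(X, hbo_filt, rcond=None)
print(f"estimated task beta: {beta[0]:.3f}")
```

Swapping any of these choices (filter band, artifact handling before filtering, HRF shape, statistical model) changes the beta estimate, which is exactly the between-team variability the study quantifies.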
Gender difference in N170 elicited under oddball task
Background: Some studies have reported gender differences in the N170, a face-selective event-related potential (ERP) component. This study investigated gender differences in the N170 elicited under an oddball paradigm in order to clarify the effect of task demand on gender differences in early face processing. Findings: Twelve males and ten females discriminated targets (emotional faces) from non-targets (emotionally neutral faces) under an oddball paradigm, pressing a button as quickly as possible in response to each target. A clear N170 was elicited in response to target and non-target stimuli in both males and females. However, females showed a more negative N170 amplitude in response to targets than to non-targets, whereas males showed no difference in N170 between the two. Conclusions: The present results suggest that females allocate attention to faces at an early processing stage when responding to them actively (targets) compared with viewing them passively (non-targets). This supports previous findings suggesting that task demand is an important factor in gender differences in the N170.
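To illustrate how an N170 comparison like the one above is typically quantified, here is a minimal sketch that averages simulated target and non-target epochs and takes the mean amplitude in a 130-200 ms post-stimulus window. The epochs, sampling rate, and measurement window are hypothetical choices, not the study's parameters.

```python
# Sketch: mean N170 amplitude (130-200 ms) for target vs. non-target epochs.
# Epochs are simulated single-channel data; all numbers are illustrative.
import numpy as np

fs = 500                                  # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)          # epoch window: -100 to 500 ms
rng = np.random.default_rng(3)

def simulate_epochs(n, n170_amp):
    """Epochs with a negative deflection peaking near 170 ms plus noise."""
    n170 = n170_amp * np.exp(-((t - 0.17) ** 2) / (2 * 0.02 ** 2))
    return n170 + rng.normal(0, 2.0, (n, t.size))

target = simulate_epochs(80, n170_amp=-6.0)      # e.g., emotional faces
nontarget = simulate_epochs(80, n170_amp=-4.0)   # e.g., neutral faces

window = (t >= 0.13) & (t <= 0.20)               # N170 measurement window
amp_target = target.mean(axis=0)[window].mean()
amp_nontarget = nontarget.mean(axis=0)[window].mean()
print(f"N170 target: {amp_target:.2f} uV, non-target: {amp_nontarget:.2f} uV")
```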
