Culture shapes preschoolers' emotion recognition but not emotion comprehension: a cross-cultural study in Germany and Singapore
Contemporary approaches suggest that emotions are shaped by culture. Children growing up in different cultures experience culture-specific emotion socialization practices. As a result, children growing up in Western societies (e.g., the US or UK) rely on explicit, semantic information, whereas children from East Asian cultures (e.g., China or Japan) are more sensitive to implicit, contextual cues when confronted with others' emotions. The aim of the present study was to investigate two aspects of preschoolers' emotion understanding (emotion recognition and emotion comprehension) in a cross-cultural setting. To this end, Singaporean and German preschoolers were tested with an emotion recognition task employing European-American and East Asian children's faces and the Test of Emotion Comprehension (TEC; Pons et al., 2004). In total, 129 German and Singaporean preschoolers (mean age 5.34 years) participated. Results indicate that preschoolers were able to recognize emotions in children's faces above chance level. In line with previous findings, Singaporean preschoolers were more accurate than German preschoolers in recognizing emotions from facial stimuli. Accordingly, Singaporean preschoolers outperformed German preschoolers on the Recognition component of the TEC. Overall performance on the TEC did not differ between the two samples. The findings of this study provide further evidence that emotion understanding is culturally shaped in accordance with culture-specific emotion socialization practices.
Robotic Faces: Exploring Dynamical Patterns of Social Interaction between Humans and Robots
Thesis (Ph.D.) - Indiana University, Informatics, 2015

The purpose of this dissertation is two-fold: 1) to develop an empirically based design for an interactive robotic face, and 2) to understand how dynamical aspects of social interaction may be leveraged to design better interactive technologies and/or further our understanding of social cognition.
Understanding the role that dynamics plays in social cognition is a challenging problem. This is particularly true when studying cognition via human-robot interaction, which entails both the natural social cognition of the human and the "artificial intelligence" of the robot. Clearly, humans who are interacting with other humans (or even other mammals, such as dogs) are cognizant of the social nature of the interaction; their behavior in those cases differs from their behavior when interacting with inanimate objects such as tools. Humans (and many other animals) have some awareness of the "social", some sense of other agents. However, it is not clear how or why.
Social interaction patterns vary across culture, context, and individual characteristics of the human interactor. These factors are subsumed into the larger interaction system, influencing the unfolding of the system over time (i.e. the dynamics). The overarching question is whether we can figure out how to utilize factors that influence the dynamics of the social interaction in order to imbue our interactive technologies (robots, clinical AI, decision support systems, etc.) with some "awareness of social", and potentially create more natural interaction paradigms for those technologies.
In this work, we explore the above questions across a range of studies, including lab-based experiments, field observations, and placing autonomous, interactive robotic faces in public spaces. We also discuss future work, how this research relates to making sense of what a robot "sees", creating data-driven models of robot social behavior, and the development of robotic face personalities.
Analysis and Construction of Engaging Facial Forms and Expressions: Interdisciplinary Approaches from Art, Anatomy, Engineering, Cultural Studies, and Psychology
The topic of this dissertation is the anatomical, psychological, and cultural examination of the human face in order to construct an effective anatomy-driven 3D virtual face customization and action model. To gain a broad perspective on all aspects of the face, theories and methodologies from the fields of art, engineering, anatomy, psychology, and cultural studies have been analyzed and implemented. The computer-generated facial customization and action model was designed based on the collected data. Using this customization system, a culturally specific attractive face in Korean popular culture, the "kot-mi-nam" (flower-like beautiful guy), was modeled and analyzed as a case study. The "kot-mi-nam" phenomenon is surveyed in its textual, visual, and contextual aspects, which reveals the gender- and sexuality-fluidity of its masculinity. The analysis and the actual development of the model organically co-construct each other, requiring an interwoven process. Chapter 1 introduces anatomical studies of the human face, psychological theories of face recognition and facial attractiveness, and state-of-the-art face construction projects in various fields. Chapters 2 and 3 present the Bezier curve-based 3D facial customization (BCFC) and the Multi-layered Facial Action Model (MFAM), based on an analysis of human anatomy, to achieve a cost-effective yet realistic quality of facial animation without using 3D scanned data. In the experiments, results for facial customization by gender, race, fat, and age showed that BCFC achieved enhanced performance of 25.20% compared to the existing program Facegen, and 44.12% compared to Facial Studio. The experimental results also demonstrated the realistic quality and effectiveness of MFAM compared with the blend-shape technique, enhancing the facial area by 2.87% and 0.03% per second for happiness and anger expressions, respectively.
In Chapter 4, according to the analysis based on BCFC, the 3D face of an average kot-mi-nam is close to gender neutral (male: 50.38%, female: 49.62%) and Caucasian (66.42-66.40%). Culturally specific images can be misinterpreted in different cultures due to differences in language, history, and context. This research demonstrates that facial images can be affected by the cultural tastes of their makers and can also be interpreted differently by viewers in different cultures.
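The BCFC model above builds facial contours from Bezier curves rather than 3D scan data. As a minimal illustration of the underlying curve primitive (not the dissertation's actual parameterization, which the abstract does not give), a cubic Bezier curve can be evaluated with De Casteljau's algorithm; the control points here are hypothetical:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by
    repeatedly linearly interpolating adjacent control points."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # Each pass interpolates neighboring points, shrinking the list by one.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical cubic curve sketching a 2D jawline segment.
jaw = [(0.0, 0.0), (0.3, -0.4), (0.7, -0.4), (1.0, 0.0)]
mid = de_casteljau(jaw, 0.5)  # point halfway along the curve
```

Adjusting a handful of control points like these reshapes an entire contour smoothly, which is what makes curve-based customization cheaper than per-vertex editing of scanned geometry.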
Emotion recognition in the human face and voice
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.

At a perceptual level, faces and voices consist of very different sensory inputs, and therefore information processing in one modality can be independent of information processing in another (Adolphs & Tranel, 1999). However, there may also be a shared neural emotion network that processes stimuli independent of modality (Peelen, Atkinson, & Vuilleumier, 2010), or emotions may be processed at a more abstract cognitive level, based on meaning rather than on perceptual signals. This thesis therefore aimed to examine emotion recognition across two separate modalities in a within-subject design, comprising a cognitive Chapter 1 with 45 British adults, a developmental Chapter 2 with 54 British children, and a cross-cultural Chapter 3 with 98 German and British children and 78 German and British adults. Intensity ratings, choice reaction times, and correlations of confusion analyses of emotions across modalities were analysed throughout. Further, an ERP chapter investigated the time course of emotion recognition across the two modalities. Highly correlated rating profiles of emotions in faces and voices were found, suggesting a similarity in emotion recognition across modalities. Emotion recognition in primary-school children improved with age for both modalities, although young children relied mainly on faces. British and German participants showed comparable patterns when rating basic emotions, but subtle differences were also noted, and German participants perceived emotions as less intense than British participants did. Overall, the behavioural results reported in the present thesis are consistent with the idea of a general, more abstract level of emotion processing that may act independently of modality.
This could be based, for example, on a shared emotion brain network or on more general, higher-level cognitive processes that are activated across a range of modalities. Although emotion recognition abilities are already evident during childhood, this thesis argued for a contribution of "nurture" to emotion mechanisms, as recognition was influenced by external factors such as development and culture.

Funded by the Economic and Social Research Council.
A developmental perspective on the importance of context in emotion perception
Emotion perception is a context-sensitive process (Barrett et al., 2011). However, the developmental origins of the importance of context in emotion perception have remained unspecified to date. The major goal of this dissertation was to extend Barrett et al.'s (2011) framework of the importance of context in emotion perception into the developmental domain.
Facial Expression Rendering in Medical Training Simulators: Current Status and Future Directions
Recent technological advances in robotic sensing and actuation methods have prompted the development of a range of new medical training simulators with multiple feedback modalities. Learning to interpret a patient's facial expressions during medical examinations or procedures has been one of the key focus areas in medical training. This paper reviews the facial expression rendering systems in medical training simulators that have been reported to date. Facial expression rendering approaches in other domains are also summarized so that knowledge from those works can inform the development of systems for medical training simulators. Classifications and comparisons of medical training simulators with facial expression rendering are presented, and important design features, merits, and limitations are outlined. Medical educators, students, and developers are identified as the three key stakeholders involved with these systems, and their considerations and needs are presented. Physical-virtual (hybrid) approaches provide multimodal feedback, render facial expressions accurately, and can simulate patients of different age, gender, and ethnicity groups, making them more versatile than purely virtual or purely physical systems. The overall findings of this review and the proposed future directions will benefit researchers interested in initiating or developing facial expression rendering systems for medical training simulators.

This work was supported by the Robopatient project funded by the EPSRC Grant No EP/T00519X/