    Culture shapes preschoolers’ emotion recognition but not emotion comprehension: a cross-cultural study in Germany and Singapore

    Contemporary approaches suggest that emotions are shaped by culture. Children growing up in different cultures experience culture-specific emotion socialization practices. As a result, children growing up in Western societies (e.g., the US or UK) rely on explicit, semantic information, whereas children from East Asian cultures (e.g., China or Japan) are more sensitive to implicit, contextual cues when confronted with others’ emotions. The aim of the present study was to investigate two aspects of preschoolers’ emotion understanding (emotion recognition and emotion comprehension) in a cross-cultural setting. To this end, Singaporean and German preschoolers were tested with an emotion recognition task employing European-American and East Asian children’s faces and the Test of Emotion Comprehension (TEC; Pons et al., 2004). In total, 129 German and Singaporean preschoolers (mean age 5.34 years) participated. Results indicate that preschoolers were able to recognize emotions from children’s faces above chance level. In line with previous findings, Singaporean preschoolers were more accurate in recognizing emotions from facial stimuli than German preschoolers. Accordingly, Singaporean preschoolers outperformed German preschoolers in the Recognition component of the TEC. Overall performance on the TEC did not differ between the two samples. The findings of this study provide further evidence that emotion understanding is culturally shaped in accordance with culture-specific emotion socialization practices.

    Robotic Faces: Exploring Dynamical Patterns of Social Interaction between Humans and Robots

    Thesis (Ph.D.), Indiana University, Informatics, 2015. The purpose of this dissertation is two-fold: 1) to develop an empirically-based design for an interactive robotic face, and 2) to understand how dynamical aspects of social interaction may be leveraged to design better interactive technologies and/or further our understanding of social cognition. Understanding the role that dynamics plays in social cognition is a challenging problem. This is particularly true in studying cognition via human-robot interaction, which entails both the natural social cognition of the human and the “artificial intelligence” of the robot. Clearly, humans who are interacting with other humans (or even other mammals such as dogs) are cognizant of the social nature of the interaction; their behavior in those cases differs from that when interacting with inanimate objects such as tools. Humans (and many other animals) have some awareness of the “social”, some sense of other agents. However, it is not clear how or why. Social interaction patterns vary across culture, context, and individual characteristics of the human interactor. These factors are subsumed into the larger interaction system, influencing the unfolding of the system over time (i.e., the dynamics). The overarching question is whether we can figure out how to utilize factors that influence the dynamics of social interaction in order to imbue our interactive technologies (robots, clinical AI, decision support systems, etc.) with some “awareness of social”, and potentially create more natural interaction paradigms for those technologies. In this work, we explore the above questions across a range of studies, including lab-based experiments, field observations, and placing autonomous, interactive robotic faces in public spaces. We also discuss future work, how this research relates to making sense of what a robot “sees”, creating data-driven models of robot social behavior, and the development of robotic face personalities.

    Analysis and Construction of Engaging Facial Forms and Expressions: Interdisciplinary Approaches from Art, Anatomy, Engineering, Cultural Studies, and Psychology

    The topic of this dissertation is the anatomical, psychological, and cultural examination of the human face in order to effectively construct an anatomy-driven 3D virtual face customization and action model. In order to gain a broad perspective on all aspects of the face, theories and methodology from the fields of art, engineering, anatomy, psychology, and cultural studies have been analyzed and implemented. The computer-generated facial customization and action model were designed based on the collected data. Using this customization system, a culturally specific attractive face in Korean popular culture, “kot-mi-nam (flower-like beautiful guy),” was modeled and analyzed as a case study. The “kot-mi-nam” phenomenon is surveyed in its textual, visual, and contextual aspects, which reveals the gender- and sexuality-fluidity of its masculinity. The analysis and the actual development of the model organically co-construct each other, requiring an interwoven process. Chapter 1 introduces anatomical studies of the human face, psychological theories of face recognition and facial attractiveness, and state-of-the-art face construction projects in various fields. Chapters 2 and 3 present the Bezier curve-based 3D facial customization (BCFC) and the Multi-layered Facial Action Model (MFAM), based on the analysis of human anatomy, to achieve cost-effective yet realistic facial animation without using 3D scanned data. In the experiments, results for facial customization by gender, race, fat, and age showed that BCFC achieved enhanced performance of 25.20% compared to the existing program Facegen and 44.12% compared to Facial Studio. The experimental results also demonstrated the realistic quality and effectiveness of MFAM compared with the blend shape technique, enhancing 2.87% and 0.03% of the facial area per second for happiness and anger expressions, respectively. In Chapter 4, according to the analysis based on BCFC, the 3D face of an average kot-mi-nam is close to gender neutral (male: 50.38%, female: 49.62%) and Caucasian (66.42-66.40%). Culturally-specific images can be misinterpreted in different cultures due to their different languages, histories, and contexts. This research demonstrates that facial images can be affected by the cultural tastes of their makers and can also be interpreted differently by viewers in different cultures.
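
    A minimal sketch of the curve-based idea behind a BCFC-style customization step, assuming a cubic Bezier parameterization of a facial contour; the cubic_bezier helper and the jawline control-point values below are illustrative assumptions rather than details from the dissertation. The point it shows is that a facial feature is edited by moving a handful of control points instead of thousands of mesh vertices.

    import numpy as np

    def cubic_bezier(p0, p1, p2, p3, t):
        """Evaluate a cubic Bezier curve at parameter values t in [0, 1]."""
        t = np.asarray(t)[:, None]
        return ((1 - t) ** 3 * p0
                + 3 * (1 - t) ** 2 * t * p1
                + 3 * (1 - t) * t ** 2 * p2
                + t ** 3 * p3)

    # Hypothetical (x, y) control points for a neutral jawline contour.
    jawline = np.array([[0.0, 0.0], [0.3, -0.5], [0.7, -0.5], [1.0, 0.0]])

    # Customization example: deepening the jaw shifts only the two inner handles.
    deep_jaw = jawline + np.array([[0.0, 0.0], [0.0, -0.15], [0.0, -0.15], [0.0, 0.0]])

    t = np.linspace(0.0, 1.0, 50)
    neutral_curve = cubic_bezier(*jawline, t)   # (50, 2) sampled neutral contour
    deep_curve = cubic_bezier(*deep_jaw, t)     # (50, 2) sampled customized contour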

    Tone and phonation in Southeast Asian languages

    A developmental perspective on the importance of context in emotion perception

    Emotion perception is a context-sensitive process (Barrett et al., 2011). However, the developmental origins of the importance of context in emotion perception have remained unspecified to date. The major goal of this dissertation was to extend Barrett et al.’s (2011) framework of the importance of context in emotion perception into the developmental domain.

    Facial Expression Rendering in Medical Training Simulators: Current Status and Future Directions

    Recent technological advances in robotic sensing and actuation methods have prompted the development of a range of new medical training simulators with multiple feedback modalities. Learning to interpret a patient’s facial expressions during medical examinations or procedures has been one of the key focus areas in medical training. This paper reviews the facial expression rendering systems in medical training simulators that have been reported to date. Facial expression rendering approaches in other domains are also summarized so that knowledge from those works can inform the development of systems for medical training simulators. Classifications and comparisons of medical training simulators with facial expression rendering are presented, and important design features, merits, and limitations are outlined. Medical educators, students, and developers are identified as the three key stakeholders involved with these systems, and their considerations and needs are presented. Physical-virtual (hybrid) approaches provide multimodal feedback, render facial expressions accurately, and can simulate patients of different age, gender, and ethnicity groups, which makes them more versatile than purely virtual or purely physical systems. The overall findings of this review and the proposed future directions are beneficial to researchers interested in initiating or developing such facial expression rendering systems in medical training simulators. This work was supported by the Robopatient project funded by EPSRC Grant No. EP/T00519X/