
    Presence of life-like robot expressions influences children's enjoyment of human-robot interactions in the field

    Emotions, and emotional expression, have a broad influence on the interactions we have with others and are thus a key factor to consider in developing social robots. As part of a collaborative EU project, this study examined the impact of life-like affective facial expressions, in the humanoid robot Zeno, on children's behavior and attitudes towards the robot. Results indicate that robot expressions have mixed effects depending on the gender of the participant. Male participants showed a positive affective response, and indicated greater liking towards the robot, when it made positive and negative affective facial expressions during an interactive game, compared to the same robot with a neutral expression. Female participants showed no marked difference across the two conditions. This is the first study to demonstrate an effect of life-like emotional expression on children's behavior in the field. We discuss the broader implications of these findings in terms of gender differences in HRI, noting the importance of the gender appearance of the robot (in this case, male) and in relation to the overall strategy of the project to advance the understanding of how interactions with expressive robots could lead to task-appropriate symbiotic relationships.

    Identification and Evaluation of the Face System of a Child Android Robot Affetto for Surface Motion Design

    The faces of android robots are among the most important interfaces for communicating with humans quickly and effectively. Because they need to match the expressive capabilities of the human face, they are complex mechanical systems that inevitably contain non-linear and hysteretic elements derived from their non-rigid components. Identifying the input-output response properties of this complex system is necessary to design surface deformations accurately and precisely. However, to date, android faces have been used without careful system identification and thus remain black boxes. In this study, the static responses of three-dimensional displacements were investigated for 116 facial surface points against a discrete trapezoidal input provided to each actuator in the face of the child-type android robot Affetto. The results show that the response curves can be modeled with hysteretic sigmoid functions, and that the response properties of the face actuators, including sensitivity, hysteresis, and dyssynchrony, differ considerably. The paper further proposes a design methodology for surface motion patterns based on the obtained response models. The design results indicate that the identified response properties make design outcomes predictable, and that the proposed methodology can cancel the differences among the actuators' response curves. The proposed identification and quantitative evaluation method can be applied to advanced android face studies in place of conventional qualitative evaluation methodologies.
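
    To make the modeling idea concrete, here is a minimal sketch, not the authors' code, of fitting separate sigmoid curves to the rising and falling branches of a hysteretic actuator response; the data and parameter values are synthetic, illustrative assumptions.

```python
# Minimal sketch (not the authors' code): fitting separate sigmoid models to
# the rising and falling branches of a hysteretic actuator response, in the
# spirit of the Affetto identification study. Data and parameters are
# synthetic, illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(u, amplitude, gain, center):
    """Static sigmoid response: surface displacement vs. actuator input u."""
    return amplitude / (1.0 + np.exp(-gain * (u - center)))

# Synthetic measurements for one facial point while the actuator input ramps
# up and back down (one trapezoidal sweep). Hysteresis shows up as a
# horizontal shift between the two branches.
rng = np.random.default_rng(0)
u_up = np.linspace(0.0, 1.0, 50)
u_down = u_up[::-1]
d_up = sigmoid(u_up, 5.0, 12.0, 0.45) + rng.normal(0.0, 0.05, u_up.size)
d_down = sigmoid(u_down, 5.0, 12.0, 0.35) + rng.normal(0.0, 0.05, u_down.size)

# Fit each branch separately; the gap between the fitted centers quantifies
# the hysteresis width for this actuator/point pair.
p_up, _ = curve_fit(sigmoid, u_up, d_up, p0=[5.0, 10.0, 0.5])
p_down, _ = curve_fit(sigmoid, u_down, d_down, p0=[5.0, 10.0, 0.5])
print(f"hysteresis width ~= {abs(p_up[2] - p_down[2]):.3f} (input units)")
```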

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Beyond accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Mirroring and recognizing emotions through facial expressions for a Robokind platform

    Integrated master's dissertation in Industrial Electronics and Computers Engineering. Facial expressions play an important role during human social interaction, enabling communicative cues, ascertaining the level of interest or signalling the desire to take a speaking turn. They also give continuous feedback indicating that the information conveyed has been understood. However, certain individuals have difficulties in social interaction, in particular with verbal and non-verbal communication (e.g. emotions and gestures). Autism Spectrum Disorders (ASD) are a special case of such social impairments. Individuals affected by ASD are characterized by repetitive patterns of behaviour, restricted activities or interests, and impairments in social communication. The use of robots has already been shown to encourage the promotion of social interaction and skills in children with ASD. Following this trend, in this work a robotic platform is used as a mediator in social interaction activities with children with special needs. The main purpose of this dissertation is to develop a system capable of automatically detecting emotions through facial expressions and interfacing it with a robotic platform in order to allow social interaction with children with special needs. The proposed experimental setup uses the Intel RealSense 3D camera and the Zeno R50 Robokind robotic platform. This setup comprises two subsystems: a Mirroring Emotion System (MES) and an Emotion Recognition System (ERS). The first subsystem (MES) is capable of synthesizing human emotions through facial expressions, on-line. The other subsystem (ERS) is able to recognize human emotions through facial features in real time. MES extracts the user's facial Action Units (AUs) and sends the data to the robot, allowing on-line imitation. ERS uses the Support Vector Machine (SVM) technique to automatically classify the emotion expressed by the user in real time. Finally, the proposed subsystems, MES and ERS, were evaluated in a controlled laboratory environment in order to check the integration and operation of the systems. Then, both subsystems were tested in a school environment in different configurations. The results of these preliminary tests made it possible to detect some constraints of the system, as well as to validate its adequacy in an intervention setting.
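
    As a concrete illustration of the ERS idea, below is a minimal sketch, assuming scikit-learn and a hypothetical set of AU intensity features; the AU choices, training data, and labels are invented for illustration and are not the dissertation's code or dataset.

```python
# Minimal sketch (illustrative, not the dissertation's code): an SVM that
# classifies emotion from facial Action Unit (AU) intensities, as in the ERS
# subsystem. AU choices, training data, and labels are invented.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row holds intensities of a few AUs, e.g. AU6 (cheek raiser),
# AU12 (lip corner puller), AU4 (brow lowerer), AU15 (lip corner depressor).
X_train = np.array([
    [0.8, 0.9, 0.1, 0.0],  # smile-like pattern
    [0.7, 0.8, 0.0, 0.1],
    [0.1, 0.0, 0.9, 0.7],  # frown-like pattern
    [0.0, 0.1, 0.8, 0.8],
])
y_train = ["happy", "happy", "sad", "sad"]

# RBF-kernel SVM with feature scaling; in the real system the features would
# be AUs extracted per frame from the RealSense camera stream.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X_train, y_train)

frame_aus = np.array([[0.75, 0.85, 0.05, 0.05]])  # AUs from one video frame
print(clf.predict(frame_aus))  # -> ['happy']
```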

    The effects of robot facial emotional expressions and gender on child-robot interaction in a field study

    Emotions, and emotional expression, have a broad influence on social interactions and are thus a key factor to consider in developing social robots. This study examined the impact of life-like affective facial expressions, in the humanoid robot Zeno, on children’s behaviour and attitudes towards the robot. Results indicate that robot expressions have mixed effects depending on participant gender. Male participants interacting with a responsive, facially expressive robot showed a positive affective response and indicated greater liking towards the robot, compared to those interacting with the same robot maintaining a neutral expression. Female participants showed no marked difference across the conditions. We discuss the broader implications of these findings in terms of gender differences in human–robot interaction, noting the importance of gender appearance in robots (in this case, male) and in relation to advancing the understanding of how interactions with expressive robots could lead to task-appropriate symbiotic relationships.

    Facial emotion expressions in human-robot interaction: A survey

    Facial expressions are an ideal means of communicating one's emotions or intentions to others. This overview focuses on human facial expression recognition as well as robotic facial expression generation. For human facial expression recognition, both recognition on predefined datasets and recognition in real time are covered. For robotic facial expression generation, both hand-coded and automated methods are covered, i.e., methods in which a robot's facial expressions are generated by moving its features (eyes, mouth) either through hand coding or automatically using machine learning techniques. There are already plenty of studies that achieve high accuracy for emotion expression recognition on predefined datasets, but the accuracy of facial expression recognition in real time is comparatively lower. As for expression generation in robots, while most robots are capable of making basic facial expressions, there are not many studies that enable robots to do so automatically. (Pre-print version; accepted in the International Journal of Social Robotics.)
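
    To make the hand-coded/automated distinction concrete, here is a minimal sketch of the hand-coded generation approach the survey describes, assuming a hypothetical robot API (`robot.set_joint`) and invented joint names and pose values; an automated method would instead learn such targets from data.

```python
# Minimal sketch of the "hand-coded" generation approach described above:
# each emotion maps to fixed target positions for the robot's facial degrees
# of freedom. The joint names, values, and robot API are hypothetical.
from typing import Dict

# Normalized targets in [0, 1] for a few hypothetical facial actuators.
EXPRESSIONS: Dict[str, Dict[str, float]] = {
    "happy":    {"mouth_corners": 0.9, "eyelids": 0.7, "brows": 0.6},
    "sad":      {"mouth_corners": 0.1, "eyelids": 0.4, "brows": 0.2},
    "surprise": {"mouth_corners": 0.5, "eyelids": 1.0, "brows": 1.0},
}

def set_expression(robot, emotion: str) -> None:
    """Drive each facial actuator to the pose stored for `emotion`.

    `robot.set_joint(name, value)` stands in for whatever motor API a given
    platform exposes. An automated method would instead learn these targets
    from data (e.g. human AU recordings) rather than hand-code them.
    """
    for joint, value in EXPRESSIONS[emotion].items():
        robot.set_joint(joint, value)
```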

    Sustaining Emotional Communication when Interacting with an Android Robot


    Human emotions toward stimuli in the uncanny valley: laddering and index construction

    Indiana University-Purdue University Indianapolis (IUPUI). Human-looking computer interfaces, including humanoid robots and animated humans, may elicit eerie feelings in their users. This effect, often called the uncanny valley, emphasizes our heightened ability to distinguish between the human and the merely humanlike using both perceptual and cognitive approaches. Although reactions to uncanny characters are captured more accurately with emotional descriptors (e.g., eerie and creepy) than with cognitive descriptors (e.g., strange), and although previous studies suggest the psychological processes underlying the uncanny valley are more perceptual and emotional than cognitive, the deep roots of the concept of humanness imply the application of category boundaries and cognitive dissonance in distinguishing among robots, androids, and humans. First, laddering interviews (N = 30) revealed firm boundaries among participants’ concepts of animated, robotic, and human. Participants associated human traits like soul, imperfect, or intended exclusively with humans, and they simultaneously devalued the autonomous accomplishments of robots (e.g., simple task, limited ability, or controlled). Jerky movement and humanlike appearance were associated with robots, even though the presented robotic stimuli were humanlike. The facial expressions perceived in robots as improper were perceived in animated characters as mismatched. Second, association model testing indicated that independent evaluation based on the developed indices is a viable quantitative technique for the laddering interview. Third, several candidate items for the eeriness index drawn from the interviews were validated in a large representative survey (N = 1,311). The improved eeriness index is nearly orthogonal to perceived humanness (r = .04). The improved indices facilitate plotting relations among rated characters of varying human likeness, enhancing perspectives on humanlike robot design and animation creation.
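
    As a small illustration of what "nearly orthogonal indices" means operationally, here is a sketch, with invented ratings, of scoring two semantic-differential indices and computing their Pearson correlation; the item counts and scales are assumptions, not the study's materials.

```python
# Illustrative sketch (invented ratings, not the study's data): score two
# semantic-differential indices and check their correlation; r near 0 means
# the indices measure nearly orthogonal constructs.
import numpy as np

rng = np.random.default_rng(1)
n_raters, n_items = 200, 8

# Hypothetical 7-point item ratings for each index.
eeriness_items = rng.integers(1, 8, size=(n_raters, n_items))
humanness_items = rng.integers(1, 8, size=(n_raters, n_items))

# Index score = mean across that index's items for each rater.
eeriness = eeriness_items.mean(axis=1)
humanness = humanness_items.mean(axis=1)

# Pearson correlation between the two index scores.
r = np.corrcoef(eeriness, humanness)[0, 1]
print(f"r = {r:.2f}")
```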