
    Emotional Postures for the Humanoid-Robot Nao

    This paper presents the development of emotional postures for the humanoid robot Nao. The approach adapts postures developed for a virtual human body model to the physical Nao robot. The paper describes the association between the joints of the human body model and the joints of the Nao robot and explains the transformation of postures. The lack of one-to-one correspondence between the joints of the physical robot and the joints of the human body model was a major challenge in this work, and the implementation of the postures on the robot was further constrained by its physical structure and mass distribution. Postures for the three emotions of anger, sadness, and happiness are studied. Thirty-two postures are generated for each emotion, and the best five postures for each emotion are selected based on the votes of twenty-five external observers. The distribution of the votes indicates that many of the implemented postures do not convey the intended emotions. The emotional content of the selected best five postures is then tested with the votes of forty observers; the intended emotions received the highest recognition rate for each group of selected postures. This study can be considered the last step of a general process for developing emotional postures for robots: the process starts with qualitative descriptions of human postures, continues with encoding those descriptions in quantitative terms, and ends with adapting the quantitative values to a specific robot. The present study demonstrates this last step.
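
    Illustratively, the joint-association and adaptation step described above can be pictured as a lookup from virtual-human joints to Nao joints plus clamping to the robot's joint limits. The mapping, limit values, and function in the following sketch are hypothetical stand-ins, not the paper's actual tables:

```python
# Minimal sketch of adapting a virtual-human posture to Nao.
# Joint correspondences and limits below are illustrative; Nao joint
# names follow Aldebaran's convention, limits are approximate radians.

# Map virtual-human model joints to their closest Nao equivalents.
HUMAN_TO_NAO = {
    "neck_pitch":      "HeadPitch",
    "l_shoulder_flex": "LShoulderPitch",
    "l_shoulder_abd":  "LShoulderRoll",
    "l_elbow_flex":    "LElbowRoll",
    # joints with no Nao counterpart (spine, fingers) are simply dropped
}

# Approximate Nao joint limits in radians (check the robot's docs).
NAO_LIMITS = {
    "HeadPitch":      (-0.67, 0.51),
    "LShoulderPitch": (-2.09, 2.09),
    "LShoulderRoll":  (-0.31, 1.33),
    "LElbowRoll":     (-1.54, -0.03),
}

def adapt_posture(human_pose: dict) -> dict:
    """Transform a virtual-human posture (joint -> radians) into a
    Nao posture, dropping unmapped joints and clamping to limits."""
    nao_pose = {}
    for human_joint, angle in human_pose.items():
        nao_joint = HUMAN_TO_NAO.get(human_joint)
        if nao_joint is None:
            continue  # no corresponding Nao joint
        lo, hi = NAO_LIMITS[nao_joint]
        nao_pose[nao_joint] = max(lo, min(hi, angle))
    return nao_pose

# Example: a slumped "sadness" posture candidate.
print(adapt_posture({"neck_pitch": 0.8, "l_shoulder_flex": 1.2}))
```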

    A systematic comparison of affective robot expression modalities


    Children interpretation of emotional body language displayed by a robot

    Previous results show that adults are able to interpret different key poses displayed by the robot, and that changing the head position affects the expressiveness of the key poses in a consistent way: moving the head down decreases arousal (the level of energy), valence (positive or negative), and stance (approaching or avoiding), whereas moving the head up increases all three dimensions [1]. Changing the head position during an interaction should therefore send intuitive signals. The ALIZ-E target group is children between the ages of 8 and 11, and existing results suggest that they are able to interpret human emotional body language [2, 3]. Based on these results, an experiment was conducted to test whether the findings of [1] also apply to children; if so, body postures and head position could be used to convey emotions during an interaction.
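
    The reported head-position effect lends itself to a simple pose-modulation rule. The sketch below assumes a dictionary pose format and invented pitch offsets; only the direction of the effect (head up intensifies, head down subdues) comes from the abstract:

```python
# Sketch of the head-position effect reported in [1]: moving the head
# up raises perceived arousal, valence and stance; moving it down
# lowers them. The offsets are illustrative assumptions, not values
# from the study.

HEAD_PITCH_UP   = -0.3   # radians; negative pitch looks upward on Nao
HEAD_PITCH_DOWN = +0.3

def modulate_key_pose(key_pose: dict, intensify: bool) -> dict:
    """Return a copy of a key pose with the head moved up (to increase
    arousal/valence/stance) or down (to decrease them)."""
    pose = dict(key_pose)
    offset = HEAD_PITCH_UP if intensify else HEAD_PITCH_DOWN
    pose["HeadPitch"] = pose.get("HeadPitch", 0.0) + offset
    return pose

happy_pose = {"HeadPitch": 0.0, "LShoulderPitch": -1.0, "RShoulderPitch": -1.0}
print(modulate_key_pose(happy_pose, intensify=True))   # more energetic
print(modulate_key_pose(happy_pose, intensify=False))  # more subdued
```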

    Robot NAO used in therapy: Advanced design and evaluation

    Master's thesis, Master's Degree in Intelligent Systems (code SIE043), academic year 2013-2014. Following on from our previous work in the Final Research Project, we introduce a therapeutic application of social robotics to improve positive mood in patients with fibromyalgia. Work on therapeutic robotics, positive psychology, emotional intelligence, social learning, and mood induction procedures (MIPs) is reviewed. Hardware and software requirements and the system's development are explained in detail, and conclusions about the clinical utility of these robots are discussed. Experiments with real fibromyalgia patients are currently under way; their methodology and procedures are described in the future lines section of this work.

    Towards a framework for socially interactive robots

    In recent decades, research in the field of social robotics has grown considerably. Different types of robots are being developed, and their roles within society are gradually expanding. Robots endowed with social skills are intended for a variety of applications: as interactive teachers and educational assistants, to support diabetes management in children, to help elderly people with special needs, as interactive actors in the theatre, or even as assistants in hotels and shopping centres. The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning, and computer vision. The work presented here aims to add a new layer to that previous development: the human-robot interaction layer, which focuses on the social capabilities a robot should display when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), exhibiting distinctive personality and character, and learning social competencies.
    In this doctoral thesis we try to contribute to the basic questions that arise when we think about social robots: (1) How do humans communicate with (or operate) social robots? and (2) How should social robots act with us? Along those lines, the work was carried out in two phases. In the first, we focused on exploring, from a practical point of view, several ways that humans use to communicate naturally with robots. We developed three natural user interfaces intended to make interaction with social robots more natural, and tested them in two applications: guide robots and a humanoid-robot control system for entertainment purposes. Working on those applications allowed us to endow our robots with some basic abilities, such as navigation, robot-to-robot communication, and speech recognition and understanding.
    In the second phase, we focused on identifying and developing the basic behavioural modules that this kind of robot needs in order to be socially believable and trustworthy while acting as a social agent. A framework for socially interactive robots was developed that allows robots to express different types of emotions and to display natural, human-like body language according to the task at hand and the environmental conditions.
    The different development stages of our social robots were validated through public performances. Exposing our robots to the public in those performances has become an essential tool for qualitatively measuring the social acceptance of the prototypes we are developing. In the same way that robots need a physical body to interact with the environment and become intelligent, social robots need to engage socially in the real tasks for which they were developed in order to improve their sociability.
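
    The framework itself is described only at a high level, but the task- and environment-dependent selection of expression it mentions might look roughly like the following sketch; all class names, fields, and rules here are hypothetical:

```python
# Rough sketch of the kind of interaction layer the thesis describes:
# a module that picks an emotional expression and accompanying body
# language from the current task and environmental conditions.
# Everything below is invented for illustration.

from dataclasses import dataclass

@dataclass
class Context:
    task: str            # e.g. "guide_tour", "entertainment_show"
    noise_level: float   # environmental condition, 0..1
    crowd_size: int

class InteractionLayer:
    """Maps (task, environment) to an emotion and a gesture set."""

    def select_expression(self, ctx: Context) -> tuple:
        if ctx.task == "guide_tour":
            emotion = "friendly"
            gestures = ["gaze_at_user", "pointing"]
        else:
            emotion = "joyful"
            gestures = ["wave", "dance_step"]
        # In a noisy, crowded venue, favor larger, more legible gestures.
        if ctx.noise_level > 0.7 or ctx.crowd_size > 20:
            gestures.append("wide_arm_sweep")
        return emotion, gestures

layer = InteractionLayer()
print(layer.select_expression(Context("guide_tour", 0.8, 30)))
```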

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent, such as through facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Emotion Transfer from Frontline Social Robots to Human Customers During Service Encounters: Testing an Artificial Emotional Contagion Model

    This research examines mood transitions during human-robot interactions (HRI) compared with human-human interactions (HHI) during service encounters. Based on emotional contagion and social identity theory, we argue that emotion transmission within HRI (e.g., between a frontline service robot and a human customer) may occur through the customer's imitation of the robot's verbal and bodily expressions, and may be stronger for negative than for positive emotions. The customer's positive attitude and anxiety toward robots are further examined as contingencies that strengthen or weaken the emotion transmission during HRI. We have already identified the five most important emotions during service encounters (a critical incident study with 131 frontline employees). The corresponding output behavior was programmed on a Nao robot and validated (ratings from 234 students). In the next step, we manipulate the emotional expressions of a frontline social robot and a customer within an experimental study.
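
    The hypothesized contagion dynamics can be made concrete with a toy update rule. Everything numeric below is invented for illustration, including the directions in which attitude and anxiety moderate the transfer; the study itself tests these effects experimentally:

```python
# Toy sketch of artificial emotional contagion: the customer's mood
# drifts toward the robot's displayed emotion, with a stronger pull
# for negative emotions, and the transfer is moderated by attitude
# and anxiety toward robots. Coefficients and moderator directions
# are assumptions made for this example only.

def mood_update(customer_valence: float,
                robot_valence: float,
                attitude: float,   # 0..1, positive attitude toward robots
                anxiety: float     # 0..1, anxiety toward robots
                ) -> float:
    base_gain = 0.2
    # Hypothesis from the abstract: negative displays transfer more strongly.
    if robot_valence < 0:
        base_gain *= 1.5
    # Assumed moderators: attitude strengthens, anxiety weakens the transfer.
    gain = base_gain * (0.5 + 0.5 * attitude) * (1.0 - 0.5 * anxiety)
    return customer_valence + gain * (robot_valence - customer_valence)

# An angry robot (valence -0.8) meeting a neutral customer:
print(mood_update(0.0, -0.8, attitude=0.6, anxiety=0.2))
```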

    Enhance the Language Ability of Humanoid Robot NAO through Deep Learning to Interact with Autistic Children

    Autism spectrum disorder (ASD) is a life-long neurological disability for which no cure has yet been found. ASD begins early in childhood and lasts throughout a person's life. Through early intervention, many actions can be taken to improve the quality of life of children. Robots are one of the best choices for accompanying children with autism. However, for most robots the dialogue system uses traditional techniques to produce responses, and the robot cannot produce meaningful answers when a conversation has not been recorded in its database. The main contribution of our work is the incorporation of a conversation model into an actual robot system for supporting children with autism. We present the use of a neural network model as the generative conversational agent, aimed at generating meaningful and coherent dialogue responses given the dialogue history. The proposed model shares an embedding layer between the encoding and decoding processes, and it differs from the canonical Seq2Seq model, in which the encoder output is used only to set up the initial state of the decoder, in order to avoid favoring short and unconditional responses with high prior probability. To improve sensitivity to context, we changed the input method of the model to better adapt to the utterances of children with autism, and we adopted transfer learning so that the proposed model learns the characteristics of dialogue with autistic children, which also addresses the insufficient corpus of such dialogue. Experiments showed that the proposed method was superior to the canonical Seq2Seq model and a GAN-based dialogue model on both automatic evaluation metrics and human evaluation, pushing BLEU precision to 0.23, the greedy matching score to 0.69, the embedding average score to 0.82, the vector extrema score to 0.55, the skip-thought score to 0.65, the KL divergence score to 5.73, and the EMD score to 12.21.
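
    A minimal PyTorch sketch of the two architectural ideas the abstract names: a single embedding shared between encoder and decoder, and an encoder context that conditions every decoder step rather than only the decoder's initial state. Sizes and wiring are illustrative assumptions, not the authors' implementation:

```python
# Sketch: Seq2Seq with a shared embedding layer, where the encoder's
# final state is concatenated to the decoder input at every step
# instead of only seeding the decoder's initial hidden state.

import torch
import torch.nn as nn

class SharedEmbeddingSeq2Seq(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 128, hid: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # shared by both sides
        self.encoder = nn.GRU(emb_dim, hid, batch_first=True)
        # Decoder input = token embedding concatenated with encoder context.
        self.decoder = nn.GRU(emb_dim + hid, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        _, h = self.encoder(self.embed(src))           # h: (1, B, hid)
        context = h.transpose(0, 1)                    # (B, 1, hid)
        context = context.expand(-1, tgt.size(1), -1)  # repeat per step
        dec_in = torch.cat([self.embed(tgt), context], dim=-1)
        dec_out, _ = self.decoder(dec_in, h)
        return self.out(dec_out)                       # (B, T, vocab)

model = SharedEmbeddingSeq2Seq(vocab_size=8000)
src = torch.randint(0, 8000, (2, 12))   # a toy batch of child utterances
tgt = torch.randint(0, 8000, (2, 9))    # toy response tokens
print(model(src, tgt).shape)            # torch.Size([2, 9, 8000])
```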