
    Non-formal Therapy and Learning Potentials through Human Gesture Synchronised to Robotic Gesture


    Muecas: a multi-sensor robotic head for affective human robot interaction and imitation

    This paper presents a multi-sensor humanoid robotic head for human-robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third-party platforms and encourages the development of imitation and goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a desired final state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that detects, recognizes, classifies and generates facial expressions in real time using FACS. This system has been implemented using the robotics framework RoboComp, which provides hardware-independent access to the sensors in the head.
    Finally, the paper presents experimental results showing the real-time operation of the whole system, including recognition and imitation of human facial expressions. Work funded by: Ministerio de Ciencia e Innovación, project TIN2012-38079-C03-1; Gobierno de Extremadura, project GR10144.
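    Because Muecas exposes a FACS-level control interface, a third-party platform only has to translate action-unit (AU) intensities into joint targets. The sketch below illustrates that idea; the AU-to-joint table, joint names and value ranges are illustrative assumptions, not the actual Muecas/RoboComp API.

        # Minimal sketch of FACS-driven control for a robotic head.
        # The AU-to-joint mapping and joint names are illustrative
        # assumptions, not the actual Muecas/RoboComp interface.

        # Map a few FACS action units to (joint, gain) pairs.
        AU_TO_JOINTS = {
            1:  [("brow_left", 0.8), ("brow_right", 0.8)],    # inner brow raiser
            12: [("mouth_left", 0.6), ("mouth_right", 0.6)],  # lip corner puller (smile)
            26: [("jaw", 1.0)],                               # jaw drop
        }

        def aus_to_joint_targets(au_intensities):
            """Convert AU intensities (0..1) into normalized joint targets (0..1)."""
            targets = {}
            for au, intensity in au_intensities.items():
                for joint, gain in AU_TO_JOINTS.get(au, []):
                    # If several AUs drive the same joint, keep the strongest command.
                    value = max(0.0, min(1.0, gain * intensity))
                    targets[joint] = max(targets.get(joint, 0.0), value)
            return targets

        # Example: a moderate smile with a slightly open jaw.
        print(aus_to_joint_targets({12: 0.7, 26: 0.3}))
        # -> {'mouth_left': 0.42, 'mouth_right': 0.42, 'jaw': 0.3}

    A mapping of this shape is what lets imitation and goal-based systems remain hardware-agnostic: both emit AU intensities, and only this last layer knows about the head's joints.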

    Human-Robot Communication through Life-Sustaining Physiological Phenomena and the Design of a Body-Emotion Model

    In this dissertation, we focus on physiological phenomena of robots as the expressive modality of their inner states and discuss the effectiveness of a robot expressing physiological phenomena, which are indispensable for living. We designed a body-emotion model describing the relationship between a) emotion as the inner state of the robot and b) physiological phenomena as physical changes, and we discuss communication between humans and robots through involuntary physiological expression based on this model.
    In recent years, various robots for mental health care and communication support in medical and nursing care have been developed. The purpose of these systems is to enable communication between a robot and patients through an active approach by the robot using sound and body movement. In contrast to conventional approaches, our research is based on involuntary emotional expression through the physiological phenomena of the robot. Physiological phenomena, including breathing, heartbeat and body temperature, are essential functions for life activities, and they are closely related to the inner state of humans because they are caused by the emotional reaction of the limbic system transmitted via the autonomic nervous system. In human-robot communication through physical contact, we consider physiological phenomena to be one of the most important nonverbal modalities of the inner state, precisely because they are involuntary expressions.
    First, focusing on robots' expression of physiological phenomena, we proposed the body-emotion model (BEM), which concerns the relationship between the inner state of robots and their involuntary physical reactions. We proposed a stuffed-toy robot system, BREAR, which has a mechanical structure to express breathing, heartbeat, temperature and bodily movement. The experimental results showed that a heartbeat, breathing and body temperature can express the robot's living state and that breathing speed is highly related to the robot's emotional arousal. We reviewed the experimental results and emotion-generation mechanisms and discussed the design of the robot based on BEM. Based on our verification results, we determined that a BEM design involving perception of the external situation, matching against memory, a change of the autonomic nervous parameter, and representation of the physiological phenomena, grounded in the relationship between the autonomic nervous system and emotional arousal, is effective.
    Second, we discussed indirect communication between humans and robots through physiological phenomena (breathing, heartbeat and body temperature) that express the robot's emotions. We set up a situation with joint attention from the robot and user on emotional content and evaluated whether the user's emotional response to the content and the user's impression of the relationship between the user and the robot were changed by the physiological expressions of the robot. The results suggest that the physiological expression of the robot makes the user's own emotions during the experience more excited or suppressed, and that the robot's expression increases impressions of closeness and sensitivity.
    Last, we discussed the future perspective of human-robot communication through physiological phenomena. Regarding the representation of a robot's sense of life, the user's recognition that the robot is alive is thought to improve not only the moral effect on understanding the finiteness of life but also attachment to the robot in long-term communication. Regarding an emotional expression mechanism based on life, the robot is expected to display a complicated internal state close to that of humans by combining intentionally expressed emotions with involuntary emotional expressions. If a robot can combine realistic voluntary expressions, such as facial expressions and body movements, with genuinely involuntary expressions, including cases where its deliberate display and its true inner state diverge, it can be said to have a more complicated internal state than that of a pet. By using a robot that expresses a living state through physiological phenomena, the effect of mental care can be expected to exceed that of animal therapy, and we expect such robots to provide care and welfare support in place of human beings.
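    The BEM pipeline described above (perceive the situation, match it against memory, update an autonomic-nervous parameter, render it as physiological signals) can be pictured with a small sketch. All constants and mapping functions below are illustrative assumptions, not the dissertation's actual model.

        # Minimal sketch of a BEM-style pipeline, assuming a single arousal
        # parameter drives the physiological outputs. Constants and mappings
        # are illustrative assumptions, not the dissertation's actual model.

        BASE_BREATH_RPM = 12.0   # resting breaths per minute (assumed)
        BASE_HEART_BPM = 70.0    # resting heartbeats per minute (assumed)

        def appraise(stimulus, memory):
            """Match a perceived stimulus against memory -> arousal in [0, 1]."""
            return max(0.0, min(1.0, memory.get(stimulus, 0.0)))

        def autonomic_response(arousal):
            """Map arousal to physiological parameters; breathing speed is the
            signal most strongly tied to arousal in the reported experiments."""
            return {
                "breath_rpm": BASE_BREATH_RPM * (1.0 + 1.5 * arousal),
                "heart_bpm": BASE_HEART_BPM * (1.0 + 0.8 * arousal),
                "body_temp_c": 36.5 + 0.5 * arousal,
            }

        # Example: a stimulus the robot "remembers" as strongly arousing.
        memory = {"petting": 0.3, "loud_noise": 0.9}
        print(autonomic_response(appraise("loud_noise", memory)))
        # -> {'breath_rpm': 28.2, 'heart_bpm': 120.4, 'body_temp_c': 36.95}

    The point of the involuntary channel is that these outputs are driven directly by the arousal parameter, not by any deliberate expressive choice of the robot.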

    Acoustic Echo Cancellation for Human-Robot Communications

    This master's thesis presents a new, efficient method of acoustic echo cancellation targeted at speech recognition for robots. The proposed algorithm features a new double-talk detector (DTD), an enhanced initialization and a new noise estimation method. The DTD algorithm is based on the normalized cross-correlation method, uses noise power estimation to be more robust in noisy environments, and reacts more accurately to double-talk. The new initialization method switches between two different DTD algorithms to prevent problems during filter convergence: the simple yet robust Geigel DTD is used while the adaptive filter converges, after which the program switches to the newly developed DTD. Finally, the new noise estimation algorithm relies on the output auto-correlation to correctly estimate the noise. To improve speech recognition performance, center clipping is applied to the output of the echo canceler to further remove the residual echo. White noise is also added to the output signal to make the signal power more stable, which helps the speech recognition engine. Evaluation of the proposed algorithm on a large set of sequences has shown that the new algorithm can increase the word recognition rate by up to 80%.
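    The Geigel detector used during convergence is simple enough to sketch: it declares double-talk whenever the microphone sample is large relative to the recent far-end peak. The implementation below is a generic textbook version with an assumed threshold and window, not the thesis's actual code.

        # Minimal sketch of a Geigel double-talk detector (a generic textbook
        # version, not the thesis's implementation). Double-talk is declared
        # when the microphone sample exceeds a fraction of the far-end peak.
        from collections import deque

        class GeigelDTD:
            def __init__(self, window=256, threshold=0.5):
                self.far_history = deque(maxlen=window)  # recent far-end samples
                self.threshold = threshold               # typical value ~0.5

            def update(self, far_sample, mic_sample):
                """Return True if double-talk is detected for this sample pair."""
                self.far_history.append(abs(far_sample))
                far_peak = max(self.far_history)
                # Near-end speech is assumed present when the microphone level
                # cannot be explained by the echo of the far-end signal alone.
                return abs(mic_sample) > self.threshold * far_peak

        # Example: freeze adaptive-filter updates while double-talk is detected.
        dtd = GeigelDTD()
        for far, mic in [(0.8, 0.3), (0.7, 0.6)]:
            if dtd.update(far, mic):
                pass  # skip filter adaptation for this sample
            else:
                pass  # adapt the echo-cancelling filter as usual

    Freezing adaptation during double-talk matters because near-end speech would otherwise corrupt the adaptive filter's estimate of the echo path.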

    Studies on user control in ambient intelligent systems

    People have a deeply rooted need to experience control and to be effective in interactions with their environments. Today, we are surrounded by intelligent systems that take decisions and perform actions for us. This should make life easier, but there is a risk that users experience less control and reject the system. The central question in this thesis is whether we can design intelligent systems that have a degree of autonomy while users maintain a sense of control. We try to achieve this by giving the intelligent system an 'expressive interface': the part that provides information to the user about the internal state, intentions and actions of the system. We examine this question in both the home and the work environment. We find the notion of a 'system personality' useful as a guiding principle for designing interactions with intelligent systems, for domestic robots as well as in building automation. Although the desired system personality varies per application, in both domains a recognizable system personality can be designed through expressive interfaces using motion, light, sound and social cues. The various studies show that the level of automation and the expressive interface can influence the perceived system personality, the perceived level of control, and the user's satisfaction with the system. This thesis shows the potential of the expressive interface as an instrument to help users understand what is going on inside the system and to experience control, which might be essential for the successful adoption of the intelligent systems of the future.