
    Muecas: a multi-sensor robotic head for affective human robot interaction and imitation

    This paper presents a multi-sensor humanoid robotic head for human-robot interaction. The design of the robotic head, Muecas, is based on ongoing research into the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural-language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom in a human-like configuration, including eyes, eyebrows, mouth and neck, and was designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided, along with the design of the most complex controllers. Muecas can be controlled directly through FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third-party platforms and encourages the development of imitation and goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a desired final state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time operation of the whole system, including recognition and imitation of human facial expressions. Work funded by: Ministerio de Ciencia e Innovación, project TIN2012-38079-C03-1; Gobierno de Extremadura, project GR10144. Peer reviewed.
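As a rough illustration of the FACS-driven control described above, the sketch below maps a few action units (AU intensities in [0, 1]) onto normalized joint targets for a multi-DOF head. The AU-to-joint table, gains and joint names are hypothetical assumptions, not taken from the Muecas paper.

```python
# Hypothetical sketch: driving a FACS-controllable head.
# Action-unit mappings and joint names below are illustrative only.

AU_TO_JOINTS = {
    "AU1":  [("brow_left_inner", 1.0), ("brow_right_inner", 1.0)],    # inner brow raiser
    "AU4":  [("brow_left_inner", -0.8), ("brow_right_inner", -0.8)],  # brow lowerer
    "AU12": [("mouth_left_corner", 0.9), ("mouth_right_corner", 0.9)],  # lip corner puller
}

def aus_to_joint_targets(aus):
    """Convert {AU name: intensity} into joint target offsets, summing overlaps."""
    targets = {}
    for au, intensity in aus.items():
        for joint, gain in AU_TO_JOINTS.get(au, []):
            targets[joint] = targets.get(joint, 0.0) + gain * intensity
    # Clamp to the normalized joint range [-1, 1].
    return {j: max(-1.0, min(1.0, v)) for j, v in targets.items()}

# A partial smile: lip corner puller at 70% intensity.
smile = aus_to_joint_targets({"AU12": 0.7})
```

Because the head exposes a FACS interface, a third-party expression synthesizer only needs to emit AU intensities; the mapping to the 12 physical degrees of freedom stays inside the head's controller.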

    Human-centred design methods: developing scenarios for robot assisted play informed by user panels and field trials

    Original article available at http://www.sciencedirect.com/. Copyright Elsevier. This article describes the user-centred development of play scenarios for robot-assisted play, as part of the multidisciplinary IROMEC project, which develops a novel robotic toy for children with special needs. The project investigates how robotic toys can become social mediators, encouraging children with special needs to discover a range of play styles, from solitary to collaborative play (with peers, carers/teachers, parents, etc.). This article explains the developmental process of constructing relevant play scenarios for children with different special needs. Results are presented from consultation with a panel of experts (therapists, teachers, parents) who advised on the play needs of the various target user groups and who helped investigate how robotic toys could be used as a play tool to assist in the children's development. Examples from experimental investigations are provided which have informed the development of scenarios throughout the design process. We conclude by pointing out the potential benefit of this work to a variety of research projects and applications involving human-robot interaction. Peer reviewed.

    Allocentric emotional affordances in HRI: the multimodal binding

    The concept of affordance perception is one of the distinctive traits of human cognition, and its application to robots can dramatically improve the quality of human-robot interaction (HRI). In this paper we explore and discuss the idea of "emotional affordances" by proposing a viable model for implementation in HRI, one that considers allocentric and multimodal perception. We consider two-way affordances: a perceived object triggering an emotion, and a perceived human emotional expression triggering an action. To keep the implementation generic, the proposed model includes a library that can be customised depending on the specific robot and application scenario. We present the AAA (Affordance-Appraisal-Arousal) model, which incorporates Plutchik's Wheel of Emotions, and we outline some numerical examples of how it can be used in different scenarios.
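The paper's internals of the AAA model are not reproduced here, but the two-way affordance library it describes can be sketched as a customisable lookup: perceived objects trigger robot emotions (drawn from Plutchik's eight primary emotions), and perceived human emotions trigger robot actions. Every library entry below is an illustrative assumption, not content from the paper.

```python
# Minimal sketch of a customisable "2-way" emotional-affordance library.
# All entries are hypothetical examples for a particular robot/scenario.

PLUTCHIK_PRIMARY = {"joy", "trust", "fear", "surprise",
                    "sadness", "disgust", "anger", "anticipation"}

# Way 1: a perceived object triggers an emotion in the robot.
OBJECT_TO_EMOTION = {"toy": "joy", "obstacle": "fear"}

# Way 2: a perceived human emotion expression triggers a robot action.
EMOTION_TO_ACTION = {"sadness": "approach_and_comfort", "joy": "mirror_smile"}

def appraise(percept, kind):
    """Return the triggered emotion or action, or None if not in the library."""
    if kind == "object":
        emotion = OBJECT_TO_EMOTION.get(percept)
        # Restrict robot emotions to Plutchik's primary set.
        assert emotion is None or emotion in PLUTCHIK_PRIMARY
        return emotion
    if kind == "human_emotion":
        return EMOTION_TO_ACTION.get(percept)
    return None
```

Keeping both mappings in a plain data table is what makes the model generic: swapping the robot or the application scenario only means loading a different library, not changing the appraisal logic.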

    A motion system for social and animated robots

    This paper presents an innovative motion system used to control the motions and animations of a social robot. The social robot Probo is used to study Human-Robot Interactions (HRI), with a special focus on Robot Assisted Therapy (RAT). When used for therapy, it is important that a social robot be able to create an "illusion of life" so as to become a believable character that can communicate with humans. The design of the motion system presented in this paper is based on insights from the animation industry. It combines operator-controlled animations with low-level autonomous reactions such as attention and emotional state. The motion system has a Combination Engine, which combines motion commands triggered by a human operator with motions originating from the different units of the robot's cognitive control architecture. This results in an interactive robot that seems alive and has a certain degree of "likeability". The Godspeed Questionnaire Series is used to evaluate the animacy and likeability of the robot in China, Romania and Belgium.
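In the spirit of the Combination Engine described above, a minimal sketch of blending operator-triggered motion commands with autonomous reactions might look as follows. The joint names and the fixed blending weight are assumptions for illustration, not the paper's actual scheme.

```python
# Sketch of a combination engine: operator commands are blended with
# autonomous motions (attention, emotional state). Illustrative only.

def combine(operator_pose, autonomous_pose, operator_weight=0.7):
    """Blend two joint-space poses; operator commands dominate where present."""
    joints = set(operator_pose) | set(autonomous_pose)
    blended = {}
    for j in joints:
        op = operator_pose.get(j)
        auto = autonomous_pose.get(j, 0.0)
        if op is None:
            blended[j] = auto  # no operator command: fully autonomous motion
        else:
            blended[j] = operator_weight * op + (1 - operator_weight) * auto
    return blended
```

Letting autonomous reactions pass through untouched on joints the operator is not commanding is one simple way to keep the "illusion of life" alive between operator-triggered animations.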

    Behavioural attentiveness patterns analysis – detecting distraction behaviours

    The capacity to remain focused on a task can be crucial in some circumstances. In general, this ability is intrinsic to human social interaction and is naturally used in any social context. Nevertheless, some individuals have difficulty remaining concentrated on an activity, resulting in a short attention span. Children with Autism Spectrum Disorder (ASD) are a notable example. ASD is a group of complex developmental disorders of the brain; individuals affected by it are characterized by repetitive patterns of behaviour, restricted activities or interests, and impairments in social communication. The use of robots has already been shown to encourage the development of the social interaction skills lacking in children with ASD. However, most of these systems are controlled remotely and cannot adapt automatically to the situation, and even those that are more autonomous still cannot perceive whether or not the user is paying attention to the robot's instructions and actions. Following this trend, this dissertation is part of a research project that has been under development for some years, in which the robot ZECA (Zeno Engaging Children with Autism) from Hanson Robotics is used to promote interaction with children with ASD, helping them to recognize emotions and to acquire new knowledge, in order to promote social interaction and communication with others. The main purpose of this dissertation is to determine whether the user is distracted during an activity. In the future, the objective is to interface this system with ZECA so that it can adapt its behaviour according to the user's affective state during an emotion-imitation activity. To recognize human distraction behaviours and capture the user's attention, several patterns of distraction, as well as systems to detect them automatically, have been developed. One of the most widely used methods for detecting distraction patterns is based on measuring head pose and eye gaze. This dissertation proposes a system based on a Red Green Blue (RGB) camera that is capable of detecting the distraction patterns (head pose, eye gaze, blink frequency and the user's position relative to the camera) during an activity, and then classifying the user's state using a machine-learning algorithm. Finally, the proposed system is evaluated in a controlled laboratory environment to verify whether it is capable of detecting the patterns of distraction. The results of these preliminary tests allowed some system constraints to be detected and validated the system's adequacy for later use in an intervention setting.
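As a simplified stand-in for the pipeline described above, the sketch below labels a time window as attentive or distracted from the head-pose, gaze and blink features an RGB-camera front end would supply. A rule-based classifier substitutes here for the dissertation's trained machine-learning model, and all thresholds are illustrative assumptions.

```python
# Simplified stand-in for a distraction classifier: thresholds on the
# features named above (head pose, eye gaze, blink rate). Illustrative only;
# the actual system trains a machine-learning model on such features.

def classify_attention(head_yaw_deg, gaze_offset_deg, blinks_per_min):
    """Label one observation window as 'attentive' or 'distracted'."""
    if abs(head_yaw_deg) > 30:     # head turned away from the task area
        return "distracted"
    if abs(gaze_offset_deg) > 20:  # gaze away from the region of interest
        return "distracted"
    if blinks_per_min > 40:        # unusually high blink rate
        return "distracted"
    return "attentive"
```

In the real system the same feature vector would be fed to a trained classifier instead of fixed thresholds, which is what lets the pipeline adapt to individual children rather than one hand-tuned cutoff.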