3,831 research outputs found

    Methodology and themes of human-robot interaction: a growing research field

    This article discusses the challenges of Human-Robot Interaction, a highly inter- and multidisciplinary area. Themes that are important in current research in this lively and growing field are identified, and selected work relevant to these themes is discussed.

    Modulating the Non-Verbal Social Signals of a Humanoid Robot

    In this demonstration we present a repertoire of social signals generated by the humanoid robot Pepper in the context of the EU-funded project MuMMER. The aim of this research is to provide the robot with the expressive capabilities required to interact with people in real-world public spaces such as shopping malls, and controlling the non-verbal behaviour of such a robot is key to engaging with humans effectively. We propose an approach to modulating the robot's non-verbal social signals by systematically varying the amplitude and speed of its joint motions and gathering user evaluations of the resulting gestures. We anticipate that people's perception of the robot's behaviour will be influenced by these modulations.
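    A minimal sketch of this modulation scheme (illustrative Python; the keyframe format, joint names, and neutral pose are assumptions, not the MuMMER interface):

```python
def modulate(gesture, neutral, amplitude=1.0, speed=1.0):
    """Scale a joint-space gesture: amplitude stretches each pose away
    from the neutral pose, speed compresses the keyframe timeline."""
    modulated = []
    for t, pose in gesture:             # gesture: list of (time_s, {joint: angle})
        scaled = {j: neutral[j] + amplitude * (a - neutral[j])
                  for j, a in pose.items()}
        modulated.append((t / speed, scaled))
    return modulated

# Example: a larger, quicker wave for one condition of the user evaluation.
neutral = {"RShoulderRoll": 0.0, "RElbowRoll": 0.5}
wave = [(0.0, {"RShoulderRoll": -1.2, "RElbowRoll": 0.5}),
        (0.5, {"RShoulderRoll": -1.2, "RElbowRoll": 1.4})]
big_fast_wave = modulate(wave, neutral, amplitude=1.3, speed=1.5)
```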

    Video prototyping of dog-inspired non-verbal affective communication for an appearance constrained robot

    This paper presents results from a video human-robot interaction (VHRI) study in which participants watched an appearance-constrained Pioneer robot use dog-inspired affective cues to communicate affinity and relationship with its owner and a guest, using proxemics, body movement and orientation, and camera orientation. The findings suggest that even with the limited modalities for non-verbal expression offered by a Pioneer robot, which does not have a dog-like appearance, these cues were effective for non-verbal affective communication.

    Child–robot relationship formation: A narrative review of empirical research

    This narrative review aimed to elucidate which robot-related characteristics predict relationship formation, in terms of closeness and trust, between typically developing children and social robots. Moreover, we wanted to know to what extent relationship formation can be explained by children's experiential and cognitive states during interaction with a robot. We reviewed 86 journal articles and conference proceedings published between 2000 and 2017. In terms of predictors, robots' responsiveness and role, as well as strategic and emotional interaction between robot and child, increased closeness between the child and the robot. Findings about whether robot features predict children's trust in robots were inconsistent. In terms of children's experiential and cognitive states during interaction with a robot, robot characteristics and interaction styles were associated with two experiential states: engagement and enjoyment/liking. The literature hardly addressed the impact of experiential and cognitive states on closeness and trust. Comparisons of children's interactions with robots, adults, and objects showed that robots are perceived as neither animate nor inanimate, and that they are entities with whom children will likely form social relationships. Younger children experienced more enjoyment, were less sensitive to a robot's interaction style, and were more prone to anthropomorphic tendencies and effects than older children. Tailoring a robot's sex to that of a child mainly appealed to boys.

    Show, Attend and Interact: Perceivable Human-Robot Social Interaction through Neural Attention Q-Network

    For safe, natural, and effective human-robot social interaction, it is essential to develop a system that allows a robot to demonstrate perceivable responsive behaviours in the face of complex human behaviours. We introduce the Multimodal Deep Attention Recurrent Q-Network (MDARQN), with which the robot exhibits human-like social interaction skills after 14 days of interacting with people in an uncontrolled real-world setting. On each of the 14 days, the system gathered the robot's interaction experiences with people through trial and error, and then trained the MDARQN on these experiences using an end-to-end reinforcement learning approach. The results of this interaction-based learning indicate that the robot learned to respond to complex human behaviours in a perceivable and socially acceptable manner.
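    As a concrete illustration of such an architecture, here is a minimal PyTorch sketch of a multimodal recurrent Q-network with attention; the layer sizes, the two pre-extracted input modalities, and the discrete action set are assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class MultimodalRecurrentQNet(nn.Module):
    """Sketch of an MDARQN-style network: per-modality encoders, an
    attention-weighted fusion, a recurrent core, and a Q-value head."""
    def __init__(self, vision_dim=512, audio_dim=128, hidden=256, n_actions=4):
        super().__init__()
        self.vision_fc = nn.Linear(vision_dim, hidden)  # assumed pre-extracted features
        self.audio_fc = nn.Linear(audio_dim, hidden)
        self.attn = nn.Linear(hidden, 1)                # scores each modality embedding
        self.rnn = nn.LSTMCell(hidden, hidden)          # summarizes the interaction so far
        self.q_head = nn.Linear(hidden, n_actions)      # e.g. wait, look, wave, handshake

    def forward(self, vision_feat, audio_feat, state):
        h, c = state
        mods = torch.stack([torch.relu(self.vision_fc(vision_feat)),
                            torch.relu(self.audio_fc(audio_feat))], dim=1)
        weights = torch.softmax(self.attn(mods), dim=1)  # (batch, 2, 1)
        fused = (weights * mods).sum(dim=1)              # attention-weighted fusion
        h, c = self.rnn(fused, (h, c))
        return self.q_head(h), (h, c)                    # Q-values per social action
```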

    Human-centred design methods : developing scenarios for robot assisted play informed by user panels and field trials

    This article describes the user-centred development of play scenarios for robot-assisted play, as part of the multidisciplinary IROMEC project, which develops a novel robotic toy for children with special needs. The project investigates how robotic toys can become social mediators, encouraging children with special needs to discover a range of play styles, from solitary to collaborative play (with peers, carers/teachers, parents, etc.). This article explains the developmental process of constructing relevant play scenarios for children with different special needs. Results are presented from consultation with a panel of experts (therapists, teachers, parents) who advised on the play needs of the various target user groups and who helped investigate how robotic toys could be used as a play tool to assist in the children's development. Examples from experimental investigations are provided which have informed the development of scenarios throughout the design process. We conclude by pointing out the potential benefit of this work to a variety of research projects and applications involving human-robot interaction.

    Detecting emotions during a memory training assisted by a social robot for individuals with Mild Cognitive Impairment (MCI)

    Attention towards robot-assisted therapies (RAT) has grown steadily in recent years, particularly for patients with dementia. However, rehabilitation practice using humanoid robots for individuals with Mild Cognitive Impairment (MCI) is still a novel method for which the adherence mechanisms, indications, and outcomes remain unclear. Affective computing offers a wide range of technological opportunities for employing emotions to improve human-computer interaction. The present study therefore addresses the effectiveness of a system in automatically decoding facial expressions from video-recorded sessions of a two-month robot-assisted memory training programme involving twenty-one participants. We explored the robot's potential to engage participants in the intervention and its effects on their emotional state. Our analysis revealed that the system is able to recognize facial expressions from robot-assisted group therapy sessions while handling partially occluded faces. Results indicated reliable facial-expression recognition for the proposed software, adding new evidence on the factors involved in Human-Robot Interaction (HRI). The use of a humanoid robot as a mediating tool appeared to promote participants' engagement in the training program. Our findings showed positive emotional responses for females, and different tasks affected emotional involvement differently. Further studies should investigate the training components and robot responsiveness.
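    A minimal sketch of the session-level aggregation such a system might perform (illustrative; `classify_frame` stands in for a hypothetical frame-level expression model, and the confidence threshold is an assumption):

```python
from collections import Counter

def session_emotions(frames, classify_frame, min_conf=0.6):
    """Aggregate per-frame predictions over a recorded session, skipping
    frames where occlusion leaves the model insufficiently confident."""
    counts = Counter()
    for frame in frames:
        label, conf = classify_frame(frame)   # hypothetical model: (label, confidence)
        if conf >= min_conf:                  # drop heavily occluded / uncertain frames
            counts[label] += 1
    total = sum(counts.values()) or 1
    return {label: n / total for label, n in counts.items()}

# e.g. session_emotions(video_frames, model) -> {"happy": 0.72, "neutral": 0.28}
```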

    Human-Robot Interaction architecture for interactive and lively social robots

    Society is experiencing demographic changes that can result in an imbalance between the working-age and non-working-age populations. One of the solutions considered to mitigate this problem is the introduction of robots in multiple sectors, including the service sector. For this to be a viable solution, however, robots need, among other abilities, to be able to interact with humans successfully. This thesis seeks to endow a social robot with the abilities required for natural human-robot interaction. The main objective is to contribute to the body of knowledge in the area of Human-Robot Interaction with a new, platform-independent, modular approach that focuses on giving roboticists the tools required to develop applications that involve interactions with humans. In particular, this thesis addresses three problems: (i) modelling interactions between a robot and a user; (ii) endowing the robot with the expressive capabilities required for successful communication; and (iii) giving the robot a lively appearance. The approach to dialogue modelling presented in this thesis models dialogues as a sequence of atomic interaction units called Communicative Acts, or CAs. These can be parametrized at runtime to achieve different communicative goals, and are equipped with mechanisms for handling some of the uncertainties that can arise during interaction. Two dimensions have been used to identify the required CAs: initiative (held by the robot or the user) and intention (to obtain information or to convey it). These basic CAs can be combined hierarchically to create more complex, reusable structures, as in the sketch below. This approach simplifies the creation of new interactions by allowing developers to focus exclusively on designing the flow of the dialogue, without having to re-implement functionalities that are common to all dialogues (such as error handling).
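    A minimal sketch of the CA idea (illustrative Python, not the thesis implementation; the class names, parameters, and the example exchange are assumptions):

```python
from dataclasses import dataclass, field
from enum import Enum

class Initiative(Enum):
    ROBOT = "robot"
    USER = "user"

class Intention(Enum):
    GIVE_INFO = "give"
    OBTAIN_INFO = "obtain"

@dataclass
class CA:
    """Atomic Communicative Act, parametrized at runtime."""
    initiative: Initiative
    intention: Intention
    params: dict = field(default_factory=dict)

    def run(self):
        print(f"[{self.initiative.value}/{self.intention.value}] {self.params}")

@dataclass
class CompositeCA:
    """Hierarchical combination of CAs (or other composites)."""
    children: list

    def run(self):
        for child in self.children:
            child.run()

# A reusable exchange built from two atomic CAs: the robot asks (obtain),
# then replies (give); uncertainty handling would live inside each CA.
greet = CompositeCA([
    CA(Initiative.ROBOT, Intention.OBTAIN_INFO, {"prompt": "What is your name?"}),
    CA(Initiative.ROBOT, Intention.GIVE_INFO, {"utterance": "Nice to meet you!"}),
])
greet.run()
```

    Because a composite behaves like any other CA from the caller's point of view, a library of such exchanges can be reused across applications.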
    The robot's expressiveness is based on a library of predefined multimodal gestures, or expressions, modelled as state machines. The module managing expressiveness receives requests to perform gestures, schedules their execution to avoid conflicts, loads them, and ensures that they execute without problems. The approach can also generate expressions at runtime from a list of unimodal actions (an utterance, the motion of a limb, etc.). One of its key features is the integration of a series of modulation techniques that can be used to modify the robot's expressions at runtime. This allows the robot to adapt its expressions to the particularities of a given situation (which also increases the variability of its expressiveness) and to display different internal states with the same expressions. Considering that being recognized as a living being is a prerequisite for engaging in social encounters, the perception of a social robot as a living entity is a key requirement for fostering human-robot interaction. This dissertation proposes two approaches. The first method generates actions on the robot's different interfaces at certain intervals; the frequency and intensity of these actions are defined by a signal that represents the robot's pulse, which can be adapted to the context of the interaction or the robot's internal state (see the sketch below). The second method enhances the robot's utterances by predicting the appropriate non-verbal expressions to accompany them, according to the content of the robot's message and its communicative intention. A deep learning model receives the transcription of the robot's utterance, predicts which expressions should accompany it, and synchronizes them so that each selected gesture starts at the appropriate time. The model combines a Long Short-Term Memory network-based encoder with a Conditional Random Field to generate the sequence of gestures that accompany the robot's utterance. All the elements presented above form the core of a modular Human-Robot Interaction architecture that has been integrated on multiple platforms and tested under different conditions.
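    A minimal sketch of the first liveliness method (illustrative; the pulse waveform, sampling period, and interface names are assumptions, not the thesis code):

```python
import math
import random

def pulse(t, bpm=60.0):
    """Robot 'heartbeat' in [0, 1]; a higher bpm suggests a more aroused state."""
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * (bpm / 60.0) * t))

def idle_actions(duration_s, bpm, interfaces=("eyes", "head", "arms")):
    """Emit (time, interface, intensity) idle events; both event density
    and intensity track the pulse signal."""
    events, t = [], 0.0
    while t < duration_s:
        p = pulse(t, bpm)
        if random.random() < p:        # actions cluster around pulse peaks
            events.append((round(t, 2), random.choice(interfaces), round(p, 2)))
        t += 0.25                      # sample the pulse every 250 ms
    return events

print(idle_actions(5.0, bpm=90))       # a calmer robot might use, e.g., bpm=40
```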

    Emotion and mood blending in embodied artificial agents: expressing affective states in the mini social robot

    Robots devised for assisting and interacting with humans are becoming fundamental in many applications, including healthcare, education, and entertainment. For these robots, the capacity to exhibit affective states plays a crucial role in creating emotional bonds with the user. In this work, we present an affective architecture, grounded in biological foundations, that shapes the affective state of the Mini social robot in terms of mood and emotion blending. The affective state depends upon the perception of stimuli in the environment, which influences how the robot behaves and affectively communicates with other peers. According to research in neuroscience, mood typically rules our affective state in the long run, while emotions do so in the short term, although the two processes can overlap. Consequently, the model presented in this manuscript blends emotion and mood to express the robot's internal state to users. The primary novelty of our affective model is thus the expression of (i) mood, (ii) momentary emotional reactions to stimuli, and (iii) the decay that mood and emotion undergo with time. The system evaluation explored whether users can correctly perceive the mood and emotions that the robot is expressing. In an online survey, users evaluated the robot's expressions showing different moods and emotions. The results reveal that users could correctly perceive the robot's mood and emotion, although emotions were more easily recognized, probably because they are more intense affective states that arise mainly as reactions to stimuli. To conclude the manuscript, a case study shows how our model modulates Mini's expressiveness depending on its affective state during a human-robot interaction scenario.
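    One way such blending and decay could be formulated, as a minimal sketch (the blending weights, decay constant, and value ranges are assumptions, not the authors' model):

```python
import math

class AffectiveState:
    def __init__(self, mood=0.0):
        self.mood = mood      # long-term baseline, in [-1, 1]
        self.emotion = 0.0    # short-term reaction, decays toward 0
        self.tau = 3.0        # emotion decay constant, in seconds

    def react(self, valence):
        """A stimulus (valence in [-1, 1]) triggers a momentary emotional
        reaction and slightly nudges the underlying mood."""
        self.emotion = valence
        self.mood = max(-1.0, min(1.0, self.mood + 0.1 * valence))

    def step(self, dt):
        """Exponential decay of the emotional reaction over time."""
        self.emotion *= math.exp(-dt / self.tau)

    def expressed(self):
        """Blend for display: the emotion dominates while it is strong,
        and the mood shows through as it fades."""
        w = abs(self.emotion)
        return w * self.emotion + (1.0 - w) * self.mood

s = AffectiveState(mood=0.3)
s.react(-0.8)                  # negative stimulus
s.step(dt=1.0)
print(round(s.expressed(), 2))
```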