
    Exploring the Design Space of Extra-Linguistic Expression for Robots

    In this paper, we explore the new design space of extra-linguistic cues inspired by graphical tropes used in graphic novels and animation to enhance the expressiveness of social robots. To achieve this, we identified a set of cues that can be used to generate expressions, including smoke/steam/fog, water droplets, and bubbles. We prototyped devices that can generate these fluid expressions for a robot and conducted design sessions in which eight designers explored the use and utility of the cues in conveying the robot's internal states in various design scenarios. Our analysis of the 22 designs, the associated design justifications, and the interviews with designers revealed patterns in how each cue was used, how the cues were combined with nonverbal cues, and where the participants drew their inspiration from. These findings informed the design of an integrated module called EmoPack, which can be used to augment the expressive capabilities of any robot platform.

    Machine Learning Driven Emotional Musical Prosody for Human-Robot Interaction

    This dissertation presents a method for non-anthropomorphic human-robot interaction using a newly developed concept entitled Emotional Musical Prosody (EMP). EMP consists of short expressive musical phrases capable of conveying emotions, which can be embedded in robots to accompany mechanical gestures. The main objective of EMP is to improve human engagement with, and trust in, robots while avoiding the uncanny valley. We contend that music, one of the most emotionally meaningful human experiences, can serve as an effective medium to support human-robot engagement and trust. EMP allows for the development of personable, emotion-driven agents, capable of giving subtle cues to collaborators while presenting a sense of autonomy. We present four research areas aimed at developing and understanding the potential role of EMP in human-robot interaction. The first research area focuses on collecting and labeling a new EMP dataset from vocalists, and on using this dataset to generate prosodic emotional phrases through deep learning methods. Through extensive listening tests, the collected dataset and generated phrases were validated with a high level of accuracy by a large subject pool. The second research effort focuses on understanding the effect of EMP in human-robot interaction with industrial and humanoid robots. Here, significant results were found for improved trust, perceived intelligence, and likeability of EMP-enabled robotic arms, but not for humanoid robots. We also found significant results for improved trust in a social robot, as well as perceived intelligence, creativity, and likeability in a robotic musician. The third and fourth research areas shift to broader use cases and potential methods for using EMP in HRI. The third research area explores the effect of robotic EMP on different personality types, focusing on extraversion and neuroticism.
For robots, personality traits offer a unique way to implement custom responses, individualized to human collaborators. We discovered that humans prefer robots with emotional responses based on high extraversion and low neuroticism, with some correlation with the human collaborator's own personality traits. The fourth and final research question focused on scaling up EMP to support interaction between groups of robots and humans. Here, we found that the improvements in trust and likeability carried over from single robots to groups of industrial arms. Overall, the thesis suggests that EMP is useful for improving trust and likeability for industrial robots, social robots, and robotic musicians, but not for humanoid robots. The thesis bears future implications for HRI designers, showing the extensive potential of careful audio design and the wide range of effects audio can have on HRI.

    Affective Communication for Socially Assistive Robots (SARs) for Children with Autism Spectrum Disorder: A Systematic Review

    Research on affective communication for socially assistive robots has been conducted to enable physical robots to perceive, express, and respond emotionally. However, the use of affective computing in social robots has been limited, particularly when social robots are designed for children, and especially for those with autism spectrum disorder (ASD). Social robots are based on cognitive-affective models, which allow them to communicate with people following social behaviors and rules. However, interactions between a child and a robot may change or differ from those with an adult, or when the child has an emotional deficit. In this study, we systematically reviewed studies related to computational models of emotions for children with ASD. We used the Scopus, WoS, Springer, and IEEE Xplore databases to answer different research questions related to the definition, interaction, and design of computational models supported by theoretical psychology approaches from 1997 to 2021. Our review found 46 articles; not all the studies considered children or those with ASD. This research was funded by VRIEA-PUCV, grant number 039.358/202

    The Design Of A Community-Informed Socially Interactive Humanoid Robot And End-Effectors For Novel Edge-Rolling

    This dissertation discusses my work in building an HRI platform called Quori, and my once-separate, now-integrated work on a manipulation method that can enable robots like Quori, or any more capable robot, to move large circular cylindrical objects. Quori is a novel, affordable, socially interactive humanoid robot platform for facilitating non-contact human-robot interaction (HRI) research. The design of the system is motivated by feedback sampled from the HRI research community. The overall design maintains a balance of affordability and functionality. Ten Quori platforms have been awarded to a diverse group of researchers from across the United States to facilitate HRI research and build a community database from a common platform. This dissertation concludes with a demonstration of Quori transporting a large cylinder that Quori has neither the power to lift nor the range of motion to dexterously manipulate. Quori achieves this otherwise insurmountable task through a novel robotic manipulation technique called edge-rolling. Edge-rolling refers to transporting a cylindrical object by rolling it on its circular edge, as a human worker might maneuver a gas cylinder on the ground. Robotic edge-rolling is achieved by controlling the object to roll on the bottom edge in contact with the ground while sliding on the surface of the robot's end-effector. It can thus be regarded as a form of dexterous, in-hand robotic manipulation with nonprehensile grasps. This work mainly addresses the problem of grasp planning for edge-rolling by studying how to design appropriately shaped end-effectors with zero internal mobility, and how to find feasible grasps for stably rolling the object with these simple end-effectors.

    Design for Child-Robot Play: The implications of Design Research within the field of Human-Robot Interaction studies for Children

    This thesis investigates the intersections of three disciplines: Design Research, Human-Robot Interaction studies, and Child Studies. In particular, this doctoral research is focused on two research questions, namely: what is (or might be) the role of design research in HRI? And how can acceptable and desirable child-robot play applications be designed? The first chapter introduces an overview of the mutual interest between robotics and design that is at the basis of the research. On the one hand, the interest of design toward robotics is documented through some exemplary projects from artists and designers who speculate on the condition of human-robot coexistence. Vice versa, the interest of robotics toward design is documented by referring to tracks of robotics conferences, scientific workshops, and robotics journals that have focused on the design-robotics relationship. Finally, a brief description of the background conditions that characterized this doctoral research is introduced, such as the fact that it is research funded by a company. The second chapter provides an overview of the state of the art at the intersections of the three disciplines. First, a definition of Design Research is provided, together with its main trends and open issues. Then the review focuses on the contribution of Design Research to the HRI field, which can be summed up as actions focused on three aspects: artefacts, stakeholders, and contexts. This is followed by a focus on the role of Design Research within the context of children's studies, in which it is possible to identify two main design-child relationships: design as a method for developing children's learning experiences, and children as part of the design process for developing novel interactive systems. The third chapter introduces the Research through Design (RtD) approach and its relevance for conducting design research in HRI.
The proposed methodology, based on this approach, is particularly characterized by the presence of design explorations as study methods. These, in turn, are developed through a common project methodology, also reported in this chapter. The fourth chapter is dedicated to the analysis of the scenario in which child-robot interaction takes place. This was aimed at understanding what edutainment robotics for children is, its common features, how it relates to existing children's play types, and where the interaction takes place. The chapter also provides a focus on the relationship between children and technology on a more general level, through which two themes and related design opportunities were identified: physically active play and objects-to-think-with. These were addressed, respectively, in the two design explorations presented in this thesis: Phygital Play and Shybo. The Phygital Play project consists of an exploration of natural interaction modalities with robots, through mixed reality, for fostering children's active behaviours. To this end, a game platform was developed to allow children to play with or against a robot through body movement. Shybo, instead, is a low-anthropomorphic robot for playful learning activities with children that can be carried out in educational contexts. The robot, which reacts to properties of the physical environment, is designed to support different kinds of experiences. Then, chapter eight is dedicated to the research outcomes, which were defined through a process of reflection. The contribution of the research was analysed and documented by focusing on three main levels, namely artefact, knowledge, and theory. The artefact level corresponds to the situated implementations developed through the projects. The knowledge level consists of a set of actionable principles that emerged from the results and lessons learned from the projects.
At the theory level, a theoretical framework was proposed with the aim of informing the future design of child-robot play applications. The last chapter provides a final overview of the doctoral research, a series of limitations regarding the research, its process, and its outcomes, and some indications for future research.

    Robo-ethics design approach for cultural heritage: Case study - Robotics for museum purpose

    The thesis presents the study behind the design process and the realization of a robotic solution for museum purposes called Virgil. The research started with a literature review on museum management and a critical analysis of significant digital experiences in the museum field. It then continued by analysing the museum and its relation with the territory and cultural heritage. From this preliminary analysis stage, a significant issue related to museum management emerged: nowadays, many museum areas are not accessible to visitors because of issues related to security or architectural barriers. Making these areas explorable is one of the important topics in the cultural debate around the visiting experience. This first stage provided the knowledge to develop the outlines that led to the realization of an efficient service design, subsequently realized following robot-ethical design values. One of the pillars of robot-ethical design is the necessity of involving all the stakeholders in the early project phases; for this reason, the second stage of the research was the study of the empathic relations between museum and visitors. In this phase, facilitating factors of this relation were defined and transformed into guidelines for the product-system performance. To carry out this stage, it was necessary to create a relationship among all the stakeholders of the project: Politecnico di Torino, TIM (Telecom Italia Mobile) JOL CRAB research laboratory, and Terre dei Savoia, the association in charge of Racconigi Castle, the context scenario of the research. The third stage of the research provided for the realization of a prototype of the robot; in this stage, a telepresence robot piloted by the museum guide was used to show, in real time, the inaccessible areas of the museum, enriched with multimedia content. This stage concluded with the final user test; from the analysis of the test-session feedback, many people wanted to drive the robot themselves.
In response to this user feedback, an interactive game was developed. The game is based both on the robot's ability to be driven by visitors and on its capacity to be used as a platform for digital storytelling. To be effective, the whole experience was designed and tested with the support of high-school students, one of the categories least interested in the traditional museum visit. This experience aims to demonstrate that the conscious and ethical use of a robotic device is effectively competitive, in terms of performance, with other digital-visit solutions, because it allows a more interactive digital experience in addition to the satisfaction of a physical visit to the museum.

    Human-Robot Interaction architecture for interactive and lively social robots

    Mención Internacional en el título de doctor.
Society is experiencing a series of demographic changes that can result in an imbalance between the working-age and non-working-age populations. One of the solutions considered to mitigate this problem is the introduction of robots in multiple sectors, including the service sector. For this to be a viable solution, however, robots need, among other abilities, to be able to interact with humans successfully. In the context of applying social robots to elderly care, this thesis seeks to endow a social robot with the abilities required for natural human-robot interaction. The main objective is to contribute to the body of knowledge in the area of Human-Robot Interaction with a new, platform-independent, modular approach that focuses on giving roboticists the tools required to develop applications that involve interactions with humans.
In particular, this thesis focuses on three problems that need to be addressed: (i) modelling interactions between a robot and a user; (ii) endowing the robot with the expressive capabilities required for successful communication; and (iii) giving the robot a lively appearance. The approach to dialogue modelling presented in this thesis proposes to model dialogues as a sequence of atomic interaction units, called Communicative Acts, or CAs. They can be parametrized at runtime to achieve different communicative goals, and are endowed with mechanisms oriented to resolving some of the uncertainties that arise during interaction. Two dimensions have been used to identify the required CAs: initiative (the robot's or the user's) and intention (either to retrieve information or to convey it). These basic CAs can be combined hierarchically to create more complex, reusable structures. This approach simplifies the creation of new interactions by allowing developers to focus exclusively on designing the flow of the dialogue, without having to re-implement functionalities that are common to all dialogues (such as error handling). The expressiveness of the robot is based on a library of predefined multimodal gestures, or expressions, modelled as state machines. The module managing expressiveness receives requests to perform gestures, schedules their execution to avoid any conflicts that might arise, loads them, and ensures that their execution completes without problems. The proposed approach can also generate expressions at runtime from a list of unimodal actions (an utterance, the motion of a limb, etc.). One of the key features of the proposed expressiveness management approach is the integration of a series of modulation techniques that can be used to modify the robot's expressions at runtime.
This allows the robot to adapt its expressions to the particularities of a given situation (which also increases the variability of the robot's expressiveness) and to display different internal states with the same expressions. Considering that being recognized as a living being is a prerequisite for engaging in social encounters, the perception of a social robot as a living entity is a key requirement for fostering human-robot interaction. This dissertation proposes two approaches. The first method generates actions for the robot's different interfaces at certain intervals. The frequency and intensity of these actions are defined by a signal that represents the robot's pulse, which can be adapted to the context of the interaction or to the robot's internal state. The second method enhances the robot's utterances by predicting the appropriate non-verbal expressions that should accompany them, according to the content of the robot's message as well as its communicative intention. A deep learning model receives the transcription of the robot's utterance, predicts which expressions should accompany it, and synchronizes them so that each selected gesture starts at the appropriate time. The model combines a Long Short-Term Memory network-based encoder with a Conditional Random Field to generate the sequence of gestures that accompany the robot's utterance. All the elements presented above form the core of a modular Human-Robot Interaction architecture that has been integrated into multiple platforms and tested under different conditions.
    Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática por la Universidad Carlos III de Madrid. Presidente: Fernando Torres Medina. Secretario: Concepción Alicia Monje Micharet. Vocal: Amirabdollahian Farshi.
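    The Communicative Act abstraction described in this abstract, atomic units defined by initiative and intention, parametrized at runtime and composed hierarchically, can be illustrated with a minimal Python sketch. Note that the class and method names below (CommunicativeAct, CompositeCA, run) are hypothetical illustrations of the idea, not the thesis's actual API:

```python
# Minimal sketch of Communicative Acts (CAs): atomic dialogue units defined
# by two dimensions (initiative: robot/user; intention: give/retrieve info),
# parametrized at runtime and composed hierarchically into reusable dialogues.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CommunicativeAct:
    initiative: str          # "robot" or "user"
    intention: str           # "give" or "retrieve"
    content: str = ""        # runtime parameter: what to say or ask

    def run(self) -> str:
        # A full CA would also handle timeouts, misrecognitions, and retries,
        # which is where the uncertainty-handling mechanisms would live.
        if self.intention == "give":
            return f"robot says: {self.content}"
        return f"robot asks: {self.content}"

@dataclass
class CompositeCA:
    """Hierarchical combination of CAs into a larger, reusable structure."""
    children: List[object] = field(default_factory=list)

    def run(self) -> List[str]:
        out = []
        for child in self.children:
            result = child.run()
            out.extend(result if isinstance(result, list) else [result])
        return out

# Example: a reusable "greet, then ask a question" dialogue fragment.
greet = CommunicativeAct("robot", "give", "Hello!")
ask_name = CommunicativeAct("robot", "retrieve", "What is your name?")
dialogue = CompositeCA([greet, ask_name])
print(dialogue.run())
```

    Because error handling and turn management would sit inside each CA rather than in the dialogue definition, a developer composing `dialogue` only specifies the flow, which is the simplification the abstract describes.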

    What do Collaborations with the Arts Have to Say About Human-Robot Interaction?

    This is a collection of papers presented at the workshop "What Do Collaborations with the Arts Have to Say About HRI?", held at the 2010 Human-Robot Interaction Conference in Osaka, Japan.