9 research outputs found

    Designing robots with the context in mind -- One design does not fit all

    Robots' visual qualities (VQs) impact people's perception of their characteristics and affect users' behaviors and attitudes toward the robot. Recent years point toward a growing need for Socially Assistive Robots (SARs) in various contexts and functions, interacting with various users. Since SAR types have functional differences, the user experience must vary by the context of use, functionality, user characteristics, and environmental conditions. Still, SAR manufacturers often design and deploy the same robotic embodiment for diverse contexts. We argue that the visual design of SARs requires a more scientific approach considering their multiple evolving roles in future society. In this work, we define four contextual layers: the domain in which the SAR exists, the physical environment, its intended users, and the robot's role. Via an online questionnaire, we collected potential users' expectations regarding the desired characteristics and visual qualities of four different SARs: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. Results indicated that users' expectations differ regarding the robot's desired characteristics and the anticipated visual qualities for each context and use case. Comment: Accepted to the 15th International Workshop on Human-Friendly Robotics.

    Human-Robot Interactions in the Workplace – Key Challenges and Concerns

    Theoretical background: The use of robots/AI in the workplace has grown rapidly in recent years. Not only has the number of robots increased, but the quality of their functions and applications has improved as well. Therefore, many questions of a practical, scientific and moral nature have arisen. The expanding use of robots has drawn scientists' attention to interactions between humans and robots. As a result, a new multidisciplinary research area, Human-Robot Interactions (HRI), is growing. Representatives of HRI try to answer questions such as: How may anthropomorphic features of robots affect interactions between robots and employees? How are robots supposed to look and behave to make interactions more pleasant for employees? Can human cooperation with humanoid robots lead to the formation of socio-mechanical bonds? Purpose of the article: The paper aims to identify determinants of human-robot interactions in the workplace and to identify key research problems in this area. Research methods: A systematic review of the literature served the above-mentioned purpose. The Web of Science was chosen as the basic database, and the list of publications from the Web of Science was supplemented with other publications related to the topic. Main findings: Several factors determine the perception and quality of HRI in the workplace. In particular, trust, anthropomorphic features of the robot, and organizational assignment may decide whether humans accept the use of a non-human agent and HRI. The concept of social interaction with robots is still at an initial stage. The adopted research paradigm also plays an important role, and it seems that the classical assumptions of organizational sociology will not stand the test of time. Researchers and practitioners are facing new challenges; in particular, there are ontological questions that are not easy to answer unanimously. Can we treat a robot as a mechanical device, or rather as a member of a newly created community?

    Designing interface agents: Beyond realism, resolution, and the uncanny valley

    Previous attempts at designing interface agents have been concerned mainly with producing highly realistic-looking animations with emotions that are clearly recognizable. We argue that the choice of visual representation requires consideration of purpose-related psychological processes (i.e., theory of mind) in users. In an evaluation study, four synthetic characters ranging in appearance from non-human to very human (blob, cat, cartoon, human) were evaluated with respect to dispositional traits, mental states, as well as emotions. Results showed that the type of synthetic character strongly influenced what judgment was made. Whilst the blob and cat characters were well liked, attributions of intelligence, mind and complex emotions were reserved more for the human-like counterparts. The findings suggest that, independently of questions of realism and clarity of emotional signs, the design of interface agents should be based on the attributions the type of character elicits and the function the character is to serve in a particular application.

    Exploring Human Teachers' Interpretations of Trainee Robots' Nonverbal Behaviour and Errors

    In the near future, socially intelligent robots that can learn new tasks from humans may become widely available and help people more and more. To play this role successfully, intelligent robots should not only interact effectively with humans while they are being taught, but humans should also be able to trust these robots after teaching them how to perform tasks. When human students learn, they usually provide nonverbal cues to display their understanding of and interest in the material. For example, they sometimes nod, make eye contact or show meaningful facial expressions. Likewise, a humanoid robot's nonverbal social cues may enhance the learning process, provided the cues are legible to human teachers. To inform the design of such nonverbal interaction techniques for intelligent robots, our first study investigates humans' interpretations of nonverbal cues provided by a trainee robot. Through an online experiment (with 167 participants), we examine how different gaze patterns and arm movements with various speeds and different kinds of pauses, displayed by a student robot practising a physical task, affect teachers' perceptions of the robot's attributes. We show that a robot can appear different in terms of its confidence, proficiency, eagerness to learn, etc., when those nonverbal factors are adjusted systematically. Human students sometimes make mistakes while practising a task, but teachers may be forgiving of them. Intelligent robots are machines and may therefore behave erroneously in certain situations. Our second study examines whether human teachers overlook a robot's small mistakes in a recently taught task, provided the robot has already shown significant improvement. By means of an online rating experiment (with 173 participants), we first determine how severe a robot's errors in a household task (i.e., preparing food) are perceived to be. We then use that information to design and conduct another experiment (with 139 participants) in which participants are given the experience of teaching trainee robots. According to our results, teachers' perceptions improve as the robots get better at performing the task. We also show that while bigger errors have a greater negative impact on human teachers' trust than smaller ones, even a small error can significantly damage trust in a trainee robot. This effect is also correlated with participants' personality traits. The present work contributes by extending HRI knowledge of human teachers' understanding of robots in a specific teaching scenario, where teachers observe behaviours whose primary goal is accomplishing a physical task.

    Collaborative Artificial Intelligence Development for Social Robots

    The main aim of this doctoral thesis was to investigate how to involve a community in the collaborative development of artificial intelligence (AI) for a social robot. The work was initiated by the author's personal interest in developing the Sony AIBO robots, which are no longer available on the retail market, although user communities with a special interest in these robots remain on the internet. At first, to attract people's attention, the author developed three specific features for the robot: 1) sound event recognition, so the robot can react to environmental audio stimuli, 2) a method to detect the surface underneath the robot, and 3) recognition of the robot's own body states. As this AI development proved very challenging, the author decided to start a community project for artificial intelligence development. Community involvement has a long history in open-source software projects, and some robotics companies have tried to benefit from their user base in product development. An active online community of Sony AIBO owners was approached to investigate which factors engage its members in the creative process. For this purpose, 78 Sony AIBO owners were recruited online to fill in a questionnaire, and their data were analyzed with respect to age, gender, culture, length of ownership, user contribution, and model preference. The results revealed the motives for owning these robots for many years and how these heavy users perceived their social robots after a long period in the robot acceptance phase. For example, female participants tended to have a more emotional relationship with their robots than male participants, whose long-term engagement was more technically oriented. The answers to the questionnaire were also analyzed to explore user expectations and discover the key needs of this user group. The results revealed that the most-wanted skills were interaction with humans and autonomous operation. Integration with AI agents and Internet services was important, but long-term memory and learning capabilities were less relevant for the participants. The diverse preferences for robot skills led to a prioritized recommendation list that complements the design guidelines for social robots in the literature. In sum, the findings of this thesis show that developing AI features for an outdated robot is possible but takes a lot of time and shared community effort. To involve a specific community, one first needs to build trust by working with and for the community; trust in the long-term endurance of the development project was also found to be a precondition for community commitment. The findings can be applied to similar collaborative AI developments in the future. The dissertation makes significant contributions to robotics. First, long-term robot usage had not previously been studied on a scale of years; the longest human-robot interaction studies analyzed test subjects for only a few months. In this work, a questionnaire investigated robot owners with 1 to 10+ years of ownership and their attitudes towards robot acceptance, and the survey results helped to understand viable strategies for engaging users over a long period. Second, innovative ways to involve online communities in robotics development were explored. Past approaches introduced community ideas and opinions into product design and innovation iterations. The community in this dissertation tested the developed AI engine, provided input on further development directions, created content for the actual AI, and gave feedback about product quality. These contributions advance the social robotics field.
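
    As an illustration of the first of the three features mentioned above, the sketch below shows one way sound event recognition could be prototyped: spectral band-energy features plus a nearest-centroid classifier. This is a minimal, hypothetical example, not the implementation described in the thesis; the clip labels and parameters are invented.

```python
# Hypothetical sketch of sound event recognition: hand-rolled spectral
# band-energy features plus a nearest-centroid classifier. NOT the thesis
# implementation; labels and parameters are invented examples.
import numpy as np

def band_energies(signal, n_bands=8):
    """Log of the mean power in n_bands equal-width frequency bands."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(power, n_bands)
    return np.log1p(np.array([band.mean() for band in bands]))

class NearestCentroidSoundClassifier:
    """Assigns a clip to the label whose mean feature vector is closest."""

    def fit(self, clips, labels):
        feats = np.stack([band_energies(clip) for clip in clips])
        labels = np.asarray(labels)
        self.classes_ = sorted(set(labels))
        self.centroids_ = np.stack(
            [feats[labels == c].mean(axis=0) for c in self.classes_]
        )
        return self

    def predict(self, clip):
        dists = np.linalg.norm(self.centroids_ - band_energies(clip), axis=1)
        return self.classes_[int(np.argmin(dists))]

# Toy usage with synthetic 0.5 s clips at 16 kHz: a low hum vs. white noise.
rng = np.random.default_rng(0)
t = np.arange(8000) / 16000.0
hums = [np.sin(2 * np.pi * 120 * t) + 0.05 * rng.normal(size=t.size) for _ in range(5)]
noises = [rng.normal(size=t.size) for _ in range(5)]
clf = NearestCentroidSoundClassifier().fit(hums + noises, ["hum"] * 5 + ["noise"] * 5)
print(clf.predict(np.sin(2 * np.pi * 120 * t)))  # expected: "hum"
```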

    Sustaining Emotional Communication when Interacting with an Android Robot


    Modelling the User's Emotional Profile in Spoken Human-Machine Interactions (Modélisation du profil émotionnel de l'utilisateur dans les interactions parlées Humain-Machine)

    This thesis analyses and formalises the emotional aspect of human-machine interaction. Beyond isolated detection of paralinguistic events (emotions, disfluencies, etc.), our aim is to provide the system with a dynamic emotional and interactional profile of the user that is enriched throughout the interaction. This profile allows the machine to adapt its response strategies to the speaker and can also help manage long-term relationships. The profile is built from a multi-level processing of the emotional and interactional cues extracted from speech with the LIMSI emotion detection tools. Low-level cues (F0 variations, energy, etc.) are interpreted in terms of the type of emotion expressed, its strength, and the talkativeness of the speaker. These mid-level cues are then processed by the system to determine, over the interaction sessions, the emotional and interactional profile of the user. The profile is made up of six dimensions: optimism, extroversion, emotional stability, self-confidence, affinity and dominance (based on the OCEAN personality model and interpersonal circumplex theories). The information derived from this profile could also provide a measure of the speaker's engagement. The social behaviour of the system is adapted according to the profile, the current task state, and the current robot behaviour. Fuzzy-logic rules drive the creation and updating of the profile and the automatic selection of the robot's behaviour; these rules are implemented on a decision engine developed by a partner in the ROMEO project. The system was implemented on the humanoid robot NAO. The central issue dealt with in this thesis is the reliable interpretation of the paralinguistic cues extracted from speech into a relevant emotional representation of the user; multimodal cues could further reinforce the robustness of the profile. To study the different parts of the emotional interaction loop between the user and the system, we took part in the design of several systems with different degrees of autonomy: a pre-scripted Wizard-of-Oz system, a semi-automated system, and a fully autonomous emotional interaction system. These systems allowed us to collect emotional data in robotic interaction contexts while controlling several emotion elicitation parameters. The thesis presents the results of these data collections and proposes evaluation protocols for human-robot interaction using systems with various degrees of autonomy.
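
    To make the profile mechanism described above concrete, here is a hypothetical sketch of how a six-dimension profile (the dimensions named in the abstract) might be updated from mid-level cues. The thesis implements such rules in fuzzy logic on a partner's decision engine; the exponentially smoothed update below is a simplified stand-in, and the cue names and weights are assumptions.

```python
# Illustrative sketch only: a six-dimension user profile updated from
# hypothetical mid-level cues (emotion valence, strength, talkativeness).
# The thesis uses fuzzy-logic rules on a decision engine; this smoothed
# update is a simplified stand-in, not that implementation.
DIMENSIONS = ("optimism", "extroversion", "emotional_stability",
              "self_confidence", "affinity", "dominance")

def update_profile(profile, valence, strength, talkativeness, alpha=0.2):
    """Blend one interaction's evidence into the running profile.

    All values live in [0, 1]; 0.5 is neutral. alpha controls how quickly
    the profile follows new evidence.
    """
    evidence = dict.fromkeys(DIMENSIONS, 0.5)        # neutral by default
    if valence == "positive":
        evidence["optimism"] = 0.5 + 0.5 * strength
        evidence["affinity"] = 0.5 + 0.3 * strength
    elif valence == "negative":
        evidence["optimism"] = 0.5 - 0.5 * strength
        evidence["emotional_stability"] = 0.5 - 0.3 * strength
    evidence["extroversion"] = talkativeness
    evidence["dominance"] = 0.5 + 0.2 * (talkativeness - 0.5)
    evidence["self_confidence"] = 0.5 + 0.2 * (talkativeness - 0.5)
    return {d: (1 - alpha) * profile[d] + alpha * evidence[d] for d in DIMENSIONS}

# Start from a neutral profile and fold in two detected speaking turns.
profile = dict.fromkeys(DIMENSIONS, 0.5)
profile = update_profile(profile, valence="positive", strength=0.8, talkativeness=0.7)
profile = update_profile(profile, valence="negative", strength=0.4, talkativeness=0.3)
print(profile)
```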

    Uncannily Human - Experimental Investigation of the Uncanny Valley Phenomenon

    Since its introduction into scientific discourse in 1970 (Mori, 1970; Mori et al., 2012), the uncanny valley has been one of the most discussed and referenced theories in robotics. Although the theory was postulated more than 40 years ago, it has barely been tested empirically. Only in the last seven years have scientists from robotics and other disciplines begun to investigate the uncanny valley more systematically, and many questions remain open. Some of them are addressed in this research project in the course of four consecutive studies. The project focuses on the systematic investigation of how static and dynamic characteristics of robots, such as appearance and movement, determine evaluations of and behaviour towards robots. The work applied a multi-methodological approach, and the effects observed with a variety of methods and measurement instruments were examined with regard to their relevance for the assumed uncanny valley. In addition, previously proposed explanations for the uncanny valley effect were tested empirically. The first study used qualitative interviews, in which participants were shown pictures and videos of humanoid and android robots, to explore participants' evaluations of very human-like robots, their attitudes towards these robots, and their emotional reactions to them. Results showed that emotional reactions, where present, were highly individual. The robots' appearance was very important to the participants, because certain design characteristics were equated with certain abilities; a human-like appearance without corresponding functionality was evaluated rather negatively, and human standards of attractiveness were applied to the android robots. The analysis also demonstrated the importance of the robots' movements and of the context in which each robot was presented. First evidence was found for the assumption that people experience uncertainty when categorizing android robots as either robot or human, and participants felt uncomfortable at the thought of being replaced by robots. The second study examined the influence of robotic movement, one of the important factors in the uncanny valley hypothesis. In a quasi-experimental observational field study, passers-by were confronted with the android robot Geminoid HI-1, which either remained still or displayed movement behaviour. The interactions were analyzed with regard to the participants' nonverbal behaviour (e.g. attention paid to the robot, interpersonal distance to the robot). Results show that the participants' behaviour was influenced by the behaviour the robot displayed: for instance, when the robot displayed movement behaviour, the interactions were longer, and participants established more eye contact and tested the robot's capabilities. The robot's behaviour also served as a cue for correctly categorizing the robot as such. The aspect of appearance was examined systematically in the third study in order to identify design characteristics that determine how people perceive robots. In a web-based survey, standardized pictures of 40 different mechanoid, humanoid and android robots were evaluated. A cluster analysis revealed six clusters of robots that were rated significantly differently on six dimensions, and possible relationships between design characteristics and the evaluation of the clusters were outlined and discussed. Moreover, the explanatory power of the uncanny valley graph was examined: following Mori's proposal, the uncanny valley effect should correspond to a cubic function, and the data should therefore be best explained by a cubic fit. The results, however, showed a better model fit for linear or quadratic relationships. The last study systematically tested perception-oriented and evolutionary-biological explanations for the uncanny valley. In this multi-methodological study, self-report and behavioural data were combined with functional magnetic resonance imaging to examine whether the effects observed in self-report and behaviour occur due to a) additional processing during the perception of human and robotic faces, b) automatically elicited processes of social cognition, or c) oversensitivity of the so-called behavioral immune system. The results support the perception-oriented explanations for the uncanny valley effect: the behavioural effects seem to be driven by neural processes during face perception, and there are indicators of categorical perception of robots and humans. Evolutionary-biological explanations, which assume that uncanny-valley-related reactions are due to an oversensitivity of the behavioral immune system, were not supported by this work. Altogether, this dissertation explored the characteristics of robots that are relevant for the uncanny valley hypothesis. Uncanny-valley-related responses were examined using a variety of measures, such as self-report, behaviour, and brain activation, allowing conclusions about the influence of the choice of measurement on the detection of such responses. Most importantly, explanations for the uncanny valley were tested systematically, and support was found for cognition- and perception-oriented explanations.
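
    The model comparison described for the third study (a cubic fit, as Mori's graph would suggest, versus linear or quadratic fits of evaluation against human-likeness) can be illustrated with a short, self-contained sketch. The data below are synthetic placeholders, not the study's ratings; only the comparison procedure, polynomial fits ranked by adjusted R^2, is shown.

```python
# Illustrative sketch of the model comparison described above: fitting
# likability ratings against human-likeness with linear, quadratic, and
# cubic polynomials and ranking the fits by adjusted R^2. The ratings
# below are synthetic placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(42)
human_likeness = np.linspace(0.0, 1.0, 40)            # 40 hypothetical robots
likability = 0.6 * human_likeness + 0.1 * rng.normal(size=human_likeness.size)

def adjusted_r2(x, y, degree):
    """Adjusted R^2 of a least-squares polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    n, k = len(y), degree
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

for degree, name in [(1, "linear"), (2, "quadratic"), (3, "cubic")]:
    print(f"{name:9s} adjusted R^2 = {adjusted_r2(human_likeness, likability, degree):.3f}")
```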

    A survey on robot appearances
