Transferability of HRI Research: Potential and Challenges
With the advancement of robotics and artificial intelligence, applications for
robotics are flourishing. Human-robot interaction (HRI) is an important area of
robotics as it allows robots to work closer to humans (with them or for them).
One crucial factor for the success of HRI research is transferability, which
refers to the ability of research outputs to be adopted by industry and provide
benefits to society. In this paper, we explore the potential and challenges of
transferability in HRI research. Firstly, we examine the current state of HRI
research and identify various types of contributions that could lead to
successful outcomes. Secondly, we discuss the potential benefits for each type
of contribution and identify factors that could facilitate industry adoption of
HRI research. However, we also recognize that there are several challenges
associated with transferability, such as the diversity of well-defined
job/skill-sets required from HRI practitioners, the lack of industry-led
research, and the lack of standardization in HRI research methods. We discuss
these challenges and propose potential solutions to bridge the gap between
industry expectations and academic research in HRI.
Comment: AAAI Spring Symposium 202
Des Styles pour une Personnalisation de l'Interaction Homme-Robot
Companion robots are personal robots intended to accompany the user in everyday activities. We focus on children, accompanied by a robot in different situations: schoolwork, comfort, play, protection, ... The acceptability of these companion robots in daily life has been questioned in many works; it is tied to the robot's physical appearance, its usefulness, and its ease of use, but also to other criteria that remain to be studied. One of the challenges in designing such robots is to endow them with genuine social skills of perception, reasoning, and action during their interactions with the user. Research in human-robot interaction is therefore increasingly turning to work in Affective Computing to design more social personal robots, emotion being a central dimension of interaction. Other dimensions, such as the companion's trustworthiness, legitimacy, and credibility, are also important for acceptability. Previous work has proposed the idea of a "versatile" companion able to take on multiple roles and to adapt to the user's needs and to the context of the situation. The Companion Theory presented here raises the question of how differences between individuals influence the quality of the interaction and the construction of a relationship between the companion and the user. This work on robot personalization aims to create value, in particular for parents who want a companion robot for their child, in line with their own expectations
Exploring Data Agency and Autonomous Agents as Embodied Data Visualizations
In the light of recent advances in embodied data visualizations, we aim to
shed light on agency in the context of data visualization. To do so, we
introduce Data Agency and Data-Agent Interplay as potential terms and research
focus. Furthermore, we exemplify the former in the context of human-robot
interaction, and identify future challenges and research questions.
Comment: 2 pages, 1 figure, Presented as poster at 2023 IEEE Visualization Conference (VIS
Speech-Gesture GAN: Gesture Generation for Robots and Embodied Agents
Embodied agents, in the form of virtual agents or social robots, are rapidly
becoming more widespread. In human-human interactions, humans use nonverbal
behaviours to convey their attitudes, feelings, and intentions. Therefore, this
capability is also required for embodied agents in order to enhance the quality
and effectiveness of their interactions with humans. In this paper, we propose
a novel framework that can generate sequences of joint angles from the speech
text and speech audio utterances. Based on a conditional Generative Adversarial
Network (GAN), our proposed neural network model learns the relationships
between the co-speech gestures and both semantic and acoustic features from the
speech input. In order to train our neural network model, we employ a public
dataset containing co-speech gestures with corresponding speech audio
utterances, which were captured from a single male native English speaker. The
results from both objective and subjective evaluations demonstrate the efficacy
of our gesture-generation framework for robots and embodied agents.
Comment: RO-MAN'23, 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 2023, Busan, South Kore
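The pipeline this abstract describes, a generator conditioned on both semantic and acoustic speech features that emits a sequence of joint angles, can be illustrated with a minimal sketch. The feature dimensions, layer sizes, and the plain MLP generator below are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS = 12              # assumed number of robot joints
SEM_DIM, AC_DIM = 16, 8    # assumed semantic / acoustic feature sizes
NOISE_DIM, HIDDEN = 4, 32  # noise input and hidden width (illustrative)

# Illustrative generator weights; a trained conditional GAN would learn these.
W1 = rng.standard_normal((SEM_DIM + AC_DIM + NOISE_DIM, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, N_JOINTS)) * 0.1

def generate_pose(semantic, acoustic, noise):
    """Conditional generator: speech features + noise -> one frame of joint angles."""
    x = np.concatenate([semantic, acoustic, noise])
    h = np.tanh(x @ W1)              # hidden layer
    return np.tanh(h @ W2) * np.pi   # joint angles bounded to [-pi, pi]

# One gesture frame per speech frame, yielding a co-speech motion sequence.
frames = [generate_pose(rng.standard_normal(SEM_DIM),
                        rng.standard_normal(AC_DIM),
                        rng.standard_normal(NOISE_DIM))
          for _ in range(50)]
sequence = np.stack(frames)
print(sequence.shape)  # (50, 12)
```

In an adversarial setup, a discriminator would receive the same speech conditioning alongside either real or generated joint-angle sequences, pushing the generator toward gestures consistent with the speech input.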
Envisioning social drones in education
Education is one of the major application fields in social Human-Robot Interaction. Several forms of social robots have been explored to engage and assist students in the classroom, from full-bodied humanoid robots to tabletop robot companions, but flying robots have so far been left unexplored in this context. In this paper, we present seven online remote workshops conducted with 20 participants to investigate Education as an application area in the Human-Drone Interaction domain, focusing on what roles a social drone could fulfill in a classroom, how it would interact with students, teachers, and its environment, what it could look like, and how it would specifically differ from other types of social robots used in education. In the workshops we used online collaboration tools, supported by a sketch artist, to help envision a social drone in a classroom. The results revealed several design implications for the roles and capabilities of a social drone, along with promising research directions for the development and design of drones in the novel area of education
Une architecture cognitive et affective orientée interaction
National audience. Robots are finding new applications in our everyday lives and interact ever more closely with their human users. However, despite a long research tradition, existing cognitive architectures often remain too generic and insufficiently tailored to the specific needs of social Human-Robot Interaction, such as handling emotions, language, and social norms. In this article, we present CAIO, an interaction-oriented Cognitive and Affective architecture. It enables robots to reason about mental states (including emotions) and to act physically, emotionally, and verbally
A Cognitive and Affective Architecture for Social Human-Robot Interaction
International audience. Robots appear in ever more applications in our daily lives, where they interact increasingly closely with human users. Despite a long history of research, existing cognitive architectures are still too generic and hence not tailored enough to meet the specific needs of social HRI; in particular, interaction-oriented architectures must handle emotions, language, social norms, and more. In this paper, we present an overview of a Cognitive and Affective Interaction-Oriented architecture for social human-robot interaction, abbreviated CAIO. The architecture parallels the BDI (Belief, Desire, Intention) model from Bratman's philosophy of action, and integrates complex emotions and planning techniques. It aims to contribute to cognitive architectures for HRI by enabling the robot to reason about the mental states (including emotions) of its interlocutors, and to act physically, emotionally, and verbally
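The BDI-style deliberation loop the CAIO abstract alludes to, extended with an emotion appraisal step, can be sketched as follows. The class names, the toy appraisal rule, and the string-based actions are all illustrative assumptions, not the CAIO implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Minimal BDI-style agent with an emotion appraisal step."""
    beliefs: dict = field(default_factory=dict)
    desires: list = field(default_factory=list)
    intentions: list = field(default_factory=list)
    emotion: str = "neutral"

    def perceive(self, percept):
        # Update beliefs from the environment.
        self.beliefs.update(percept)

    def appraise(self):
        # Toy appraisal: a desire believed unachieved triggers a negative emotion.
        blocked = [d for d in self.desires if self.beliefs.get(d) is False]
        self.emotion = "frustration" if blocked else "contentment"

    def deliberate(self):
        # Adopt as intentions the desires not yet satisfied.
        self.intentions = [d for d in self.desires if not self.beliefs.get(d)]

    def act(self):
        # Emotional expression accompanies the chosen (verbal/physical) action.
        if self.intentions:
            return f"[{self.emotion}] working on: {self.intentions[0]}"
        return f"[{self.emotion}] idle"

robot = Agent(desires=["greet_user"])
robot.perceive({"greet_user": False})
robot.appraise()
robot.deliberate()
print(robot.act())  # [frustration] working on: greet_user
```

The point of the sketch is the ordering: perception updates beliefs, appraisal derives an emotional state from beliefs and desires, and deliberation selects intentions before acting, so the emotional reaction can color the chosen action.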
Permanent Magnet-Assisted Omnidirectional Ball Drive
We present an omnidirectional ball wheel drive design that uses a permanent magnet as the drive roller to generate the contact force. Our design combines simplicity, low cost, and compactness, making it particularly interesting for novel human-mobile robot interaction scenarios in which users physically interact with many palm-sized robots. We first detail our design and explain its key parameters. Then, we present our implementation and compare it with an omniwheel drive built under identical conditions and at similar cost. Finally, we elaborate on the main advantages and drawbacks of our design
The Grenoble System for the Social Touch Challenge at ICMI 2015
International audience. New technologies, and robotics in particular, are moving towards more natural user interfaces. Work has been done on several interaction modalities, such as sight (visual computing) and audio (speech and audio recognition), but other modalities remain less researched. Touch is one of the least studied modalities in HRI, yet it could be valuable for naturalistic interaction. Touch signals, however, can vary in semantics; it is therefore necessary to recognize touch gestures in order to make human-robot interaction even more natural. We propose a method to recognize touch gestures. The method was developed on the CoST corpus and then applied directly to the HAART dataset as our participation in the Social Touch Challenge at ICMI 2015. Our touch gesture recognition process is detailed in this article to make it reproducible by other research teams. Besides describing the feature set, we manually filtered the training corpus to produce two datasets. For the challenge, we submitted six different systems: a Support Vector Machine and a Random Forest classifier for the HAART dataset, and, for the CoST dataset, the same classifiers tested in two conditions, using either the full or the filtered training data. As reported by the organizers, our systems achieved the best correct rates in this year's challenge (70.91% on HAART, 61.34% on CoST). Our performance is slightly better than that of the other participants but remains below previously reported state-of-the-art results
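The classification setup described here, hand-crafted features extracted from pressure-sensor sequences and fed to an SVM and a Random Forest, can be sketched with scikit-learn. The synthetic data, the feature choice (simple pressure statistics), and the hyperparameters below are illustrative assumptions, not the Grenoble system itself:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def touch_features(frames):
    """Summarize a (time, rows, cols) pressure sequence into a flat feature vector."""
    return np.array([frames.mean(), frames.max(), frames.var(),
                     frames.sum(axis=(1, 2)).argmax()])

# Synthetic stand-in for a labeled touch-gesture corpus (3 hypothetical classes).
X, y = [], []
for label in range(3):
    for _ in range(30):
        frames = rng.random((20, 8, 8)) * (label + 1)  # class-dependent intensity
        X.append(touch_features(frames))
        y.append(label)
X, y = np.array(X), np.array(y)

# Shuffle, then hold out the last 15 samples for testing.
idx = rng.permutation(len(X))
X, y = X[idx], y[idx]

svm = SVC().fit(X[:75], y[:75])
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:75], y[:75])
print(svm.score(X[75:], y[75:]), rf.score(X[75:], y[75:]))
```

Training the same feature pipeline and classifiers on one corpus (CoST) and applying them to another (HAART) then amounts to swapping the dataset loader while keeping `touch_features` and the classifiers fixed.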