Multimodal Signal Processing and Learning Aspects of Human-Robot Interaction for an Assistive Bathing Robot
We explore new aspects of assistive living through smart human-robot interaction (HRI) involving automatic recognition and online validation of speech and gestures in a natural interface, providing social features for HRI. We introduce a complete framework and resources for a real-life scenario in which elderly subjects are supported by an assistive bathing robot, addressing health and hygiene care issues. We contribute a new dataset, a suite of tools used for data acquisition, and a state-of-the-art pipeline for multimodal learning within the framework of the I-Support bathing robot, with emphasis on audio and RGB-D visual streams. We address privacy issues by evaluating the depth visual stream along with the RGB stream, using Kinect sensors. The audio-gestural recognition task on this new dataset yields up to 84.5%, while the online validation of the I-Support system with elderly users achieves up to 84% when the two modalities are fused. The results are promising enough to support further research in the area of multimodal recognition for assistive social HRI, considering the difficulties of the specific task. Upon acceptance of the paper, part of the data will be made publicly available.
Study of the Importance of Adequacy to Robot Verbal and Non-Verbal Communication in Human-Robot Interaction
The Robadom project aims at creating a homecare robot that helps and assists people in their daily life, either by doing tasks for the human or by managing day-to-day organization. A robot can take on this kind of role only if it is accepted by humans. Before considering the robot's appearance, we decided to evaluate the importance of the relation between verbal and nonverbal communication during a human-robot interaction, in order to determine the situations in which the robot is accepted. We conducted two experiments to study this acceptance. The first experiment studied the importance of the robot's nonverbal behavior in relation to its verbal behavior. The second experiment studied the capability of a robot to provide a correct human-robot interaction.
Comment: the 43rd Symposium on Robotics - ISR 2012, Taipei, Taiwan, Province of China (2012)
Speech-Gesture Mapping and Engagement Evaluation in Human Robot Interaction
A robot needs contextual awareness, effective speech production and
complementing non-verbal gestures for successful communication in society. In this paper, we present an end-to-end system that aims to enhance the effectiveness of non-verbal gestures. To achieve this, we identified prominently used gestures in performances by TED speakers, mapped them to their corresponding speech context, and modulated the speech based upon the attention of the listener. The proposed method uses a Convolutional Pose Machine [4] to detect human gestures. Dominant gestures of TED speakers were used to learn the gesture-to-speech mapping, and their speeches were used to train the model. We also evaluated the engagement of the robot with people by conducting a social survey. The effectiveness of the performance was monitored by the robot, which self-improvised its speech pattern based on the attention level of the audience, calculated using visual feedback from the camera. The effectiveness of the interaction, as well as the decisions made during improvisation, was further evaluated based on head-pose detection and an interaction survey.
Comment: 8 pages, 9 figures, Under review in IRC 201
How Do You Like Me in This: User Embodiment Preferences for Companion Agents
We investigate the relationship between the embodiment of an artificial companion and users' perception of and interaction with it. In a Wizard of Oz study, 42 users interacted with one of two embodiments, a physical robot or a virtual agent on a screen, through a role-play of secretarial tasks in an office, with the companion providing essential assistance. Findings showed that participants in both condition groups, when given the choice, would prefer to interact with the robot companion, mainly for its greater physical or social presence. Subjects also found the robot less annoying and talked to it more naturally. However, this preference for the robotic embodiment is not reflected in the users' actual ratings of the companion or their interaction with it. We reflect on this contradiction and conclude that in a task-based context a user focuses much more on a companion's behaviour than on its embodiment. This underlines the feasibility of our efforts to create companions that migrate between embodiments while maintaining a consistent identity from the user's point of view.
Creating Interaction Scenarios With a New Graphical User Interface
The field of human-centered computing has seen major progress in recent years. It is widely accepted that this field is multidisciplinary and that the human is at the core of the system. This highlights two matters of concern: multidisciplinarity and the human. The first reveals that each discipline plays an important role in the overall research and that collaboration between all of them is needed. The second reflects a growing body of research that aims to increase human involvement by giving users a decisive role in the human-machine interaction. This paper addresses both of these concerns and presents MICE (Machines Interaction Control in their Environment), a system in which the human is the one who makes the decisions to manage the interaction with the machines. In an ambient context, the human can decide on the objects' actions by creating interaction scenarios with a new visual programming language: scenL.
Comment: 5th International Workshop on Intelligent Interfaces for Human-Computer Interaction, Palermo, Italy (2012)
Getting to know Pepper: Effects of people's awareness of a robot's capabilities on their trust in the robot
© 2018 Association for Computing Machinery
This work investigates how human awareness of a social robot's capabilities is related to trusting this robot to handle different tasks. We present a user study that relates knowledge at different quality levels to participants' ratings of trust. Secondary school pupils were asked to rate their trust in the robot after three types of exposure: a video demonstration, a live interaction, and a programming task. The study revealed that the pupils' trust is positively affected across different domains after each session, indicating that the more awareness human users have of a robot, the more they trust it.
Towards Safe and Trustworthy Social Robots: Ethical Challenges and Practical Issues
Maha Salem, Gabriella Lakatos, Farshid Amirabdollahian, K. Dautenhahn, "Towards Safe and Trustworthy Social Robots: Ethical Challenges and Practical Issues", paper presented at the 7th International Conference on Social Robotics, Paris, France, 26-30 October, 2015.
As robots are increasingly developed to assist humans socially with everyday tasks in home and healthcare settings, questions regarding the robot's safety and trustworthiness need to be addressed. The present work investigates the practical and ethical challenges in designing and evaluating social robots that aim to be perceived as safe and can win their human users' trust. With particular focus on collaborative scenarios in which humans are required to accept information provided by the robot and follow its suggestions, trust plays a crucial role and is strongly linked to persuasiveness. Accordingly, human-robot trust can directly affect people's willingness to cooperate with the robot, while under- or overreliance may have severe or even dangerous consequences. Problematically, investigating trust and human perceptions of safety in HRI experiments proves challenging in light of numerous ethical concerns and risks, which this paper aims to highlight and discuss based on experiences from HRI practice.
Peer reviewed