Extending Human-Robot Relationships Based in Music With Virtual Presence
Social relationships between humans and robots require both long-term engagement and a feeling of believability or social presence toward the robot. It is our contention that music can provide the extended engagement that other open-ended interaction studies have failed to achieve, and that, in combination with the engaging musical interaction, the addition of simulated social behaviors is necessary to trigger this sense of believability or social presence. Building on previous studies with our robot drummer Mortimer showing that including social behaviors can increase engagement and social presence, we present the results of a longitudinal study investigating the effect of extending weekly collocated musical improvisation sessions by making Mortimer an active member of the participants' virtual social network. Although we found the effects of extending the relationship into the virtual world to be less pronounced than the results we have previously obtained by adding social modalities to human-robot musical interaction, interesting questions are raised about the interpretation of our automated behavioral metrics across different contexts. Further, we found repeated results of increasingly uninterrupted playing and notable differences between responses to online posts by Mortimer and posts by participants' human friends.
Robomorphism: Examining the effects of telepresence robots on between-student cooperation
The global pandemic has underscored the value of working remotely, including in higher education. This development sparks the growing use of telepresence robots, which allow students with prolonged sickness to interact with other students and their teacher remotely. Although telepresence robots are developed to facilitate virtual inclusion, empirical evidence is lacking on whether these robots actually enable students to cooperate better with their fellow students compared to other technologies, such as videoconferencing. Therefore, the aim of this research is to compare mediated student interaction supported by a telepresence robot with mediated student interaction supported by videoconferencing. To do so, we conducted an experiment (N = 122) in which participants pairwise and remotely worked together on an assignment, either by using a telepresence robot (N = 58) or by using videoconferencing (N = 64). The findings showed that students who used the robot (vs. videoconferencing) experienced stronger feelings of social presence, but also attributed more robotic characteristics to their interaction partner (i.e., robomorphism). Yet, the negative effects of the use of a telepresence robot on cooperation through robomorphism are compensated by the positive effects through social presence. Our study shows that robomorphism is an important concept to consider when studying the effect of robot-mediated human interaction. Designers of telepresence robots should make sure to stimulate social presence, while mitigating possible adverse effects of robomorphism.
Force-based control for human-robot cooperative object manipulation
In Physical Human-Robot Interaction (PHRI), humans and robots share the workspace and physically interact and collaborate to perform a common task. However, robots do not have human levels of intelligence or the capacity to adapt in performing collaborative tasks. Moreover, the presence of humans in the vicinity of the robot requires ensuring their safety, both in terms of software and hardware. One of the aspects related to safety is the stability of the human-robot control system, which can be placed in jeopardy due to several factors such as internal time delays. Another aspect is the mutual understanding between humans and robots to prevent conflicts in performing a task. The kinesthetic transmission of the human intention is, in general, ambiguous when an object is involved, and the robot cannot distinguish the human intention to rotate from the intention to translate (the translation/rotation problem). This thesis examines the aforementioned issues related to PHRI. First, the instability arising due to a time delay is addressed. For this purpose, the time delay in the system is modeled with the exponential function, and the effect of system parameters on the stability of the interaction is examined analytically. The proposed method is compared with the state-of-the-art criteria used to study the stability of PHRI systems with similar setups and high human stiffness. Second, the unknown human grasp position is estimated by exploiting the interaction forces measured by a force/torque sensor at the robot end effector. To address cases where the human interaction torque is non-zero, the unknown parameter vector is augmented to include the human-applied torque. The proposed method is also compared via experimental studies with the conventional method, which assumes a contact point (i.e., that human torque is equal to zero).
Finally, the translation/rotation problem in shared object manipulation is tackled by proposing and developing a new control scheme based on the identification of the ongoing task and the adaptation of the robot's role, i.e., whether it is a passive follower or an active assistant. This scheme allows the human to transport the object independently in all degrees of freedom and also reduces human effort, which is an important factor in PHRI, especially for repetitive tasks. Simulation and experimental results clearly demonstrate that the force required to be applied by the human is significantly reduced once the task is identified.
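The grasp-position estimation described above can be illustrated with a minimal sketch. This is not the thesis's actual estimator; it assumes the simplified contact-point case (zero human-applied torque), in which the measured torque satisfies tau = r x F, so the minimum-norm contact point is r = (F x tau) / |F|^2 (the component of r along the force direction is unobservable from a single wrench).

```python
def cross(a, b):
    """Cross product of two 3-vectors given as sequences."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def estimate_grasp_point(force, torque):
    """Estimate the human grasp position r from the wrench (force, torque)
    measured at the robot end effector, assuming a pure contact force
    (zero human torque), so that torque = r x force.
    Returns the minimum-norm solution r = (force x torque) / |force|^2."""
    f2 = sum(c * c for c in force)
    if f2 < 1e-12:
        raise ValueError("force too small to estimate a grasp point")
    return tuple(c / f2 for c in cross(force, torque))
```

For example, a 10 N force along z applied at (0.2, 0.1, 0) m produces the torque (1, -2, 0) N·m, from which the function recovers the in-plane grasp position.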
Do (and say) as I say: Linguistic adaptation in human-computer dialogs
© Theodora Koulouri, Stanislao Lauria, and Robert D. Macredie. This article has been made available through the Brunel Open Access Publishing Fund. There is strong research evidence showing that people naturally align to each other's vocabulary, sentence structure, and acoustic features in dialog, yet little is known about how the alignment mechanism operates in the interaction between users and computer systems, let alone how it may be exploited to improve the efficiency of the interaction. This article provides an account of lexical alignment in human-computer dialogs, based on empirical data collected in a simulated human-computer interaction scenario. The results indicate that alignment is present, resulting in the gradual reduction and stabilization of the vocabulary-in-use, and that it is also reciprocal. Further, the results suggest that when system and user errors occur, the development of alignment is temporarily disrupted and users tend to introduce novel words to the dialog. The results also indicate that alignment in human-computer interaction may have a strong strategic component and is used as a resource to compensate for less optimal (visually impoverished) interaction conditions. Moreover, lower alignment is associated with less successful interaction, as measured by user perceptions. The article distills the results of the study into design recommendations for human-computer dialog systems and uses them to outline a model of dialog management that supports and exploits alignment through mechanisms for in-use adaptation of the system's grammar and lexicon.
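As a rough illustration of the kind of vocabulary-in-use measure such studies rely on, the sketch below scores each dialog turn by the fraction of its word types already used by the other speaker. The function and its scoring scheme are illustrative assumptions, not the article's actual metrics.

```python
def lexical_alignment_scores(turns):
    """Toy per-turn lexical alignment measure for a multi-party dialog.
    `turns` is a list of (speaker, utterance) pairs.  For each turn whose
    partner has already spoken, the score is the fraction of the turn's
    word types that the other speaker(s) have used before.  Turns with no
    prior partner vocabulary are skipped."""
    seen = {}  # speaker -> set of word types used so far
    scores = []
    for speaker, utterance in turns:
        words = set(utterance.lower().split())
        others = set().union(*[v for s, v in seen.items() if s != speaker])
        if others and words:
            scores.append(len(words & others) / len(words))
        seen.setdefault(speaker, set()).update(words)
    return scores
```

Rising scores over successive turns would indicate the gradual convergence on a shared vocabulary that the article describes; a drop after an error turn would mirror the reported disruption of alignment.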
A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction
Picking up objects requested by a human user is a common task in human-robot
interaction. When multiple objects match the user's verbal description, the
robot needs to clarify which object the user is referring to before executing
the action. Previous research has focused on perceiving user's multimodal
behaviour to complement verbal commands or minimising the number of follow up
questions to reduce task time. In this paper, we propose a system for reference
disambiguation based on visualisation and compare three methods to disambiguate
natural language instructions. In a controlled experiment with a YuMi robot, we
investigated real-time augmentations of the workspace in three conditions --
mixed reality, augmented reality, and a monitor as the baseline -- using
objective measures such as time and accuracy, and subjective measures like
engagement, immersion, and display interference. Significant differences were
found in accuracy and engagement between the conditions, but no differences
were found in task time. Despite the higher error rates in the mixed reality
condition, participants found that modality more engaging than the other two,
but overall showed preference for the augmented reality condition over the
monitor and mixed reality conditions.
How Do You Like Me in This: User Embodiment Preferences for Companion Agents
We investigate the relationship between the embodiment of an artificial companion and user perception of and interaction with it. In a Wizard of Oz study, 42 users interacted with one of two embodiments: a physical robot or a virtual agent on a screen, through a role-play of secretarial tasks in an office, with the companion providing essential assistance. Findings showed that participants in both condition groups, when given the choice, would prefer to interact with the robot companion, mainly for its greater physical or social presence. Subjects also found the robot less annoying and talked to it more naturally. However, this preference for the robotic embodiment is not reflected in the users' actual rating of the companion or their interaction with it. We reflect on this contradiction and conclude that in a task-based context a user focuses much more on a companion's behaviour than its embodiment. This underlines the feasibility of our efforts in creating companions that migrate between embodiments while maintaining a consistent identity from the user's point of view.
The perception of emotion in artificial agents
Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent, such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.
Explorations in engagement for humans and robots
This paper explores the concept of engagement, the process by which
individuals in an interaction start, maintain and end their perceived
connection to one another. The paper reports on one aspect of engagement among
human interactors--the effect of tracking faces during an interaction. It also
describes the architecture of a robot that can participate in conversational,
collaborative interactions with engagement gestures. Finally, the paper reports
on findings of experiments with human participants who interacted with a robot
when it either performed or did not perform engagement gestures. Results of the
human-robot studies indicate that people become engaged with robots: they
direct their attention to the robot more often in interactions where engagement
gestures are present, and they find interactions more appropriate when
engagement gestures are present than when they are not.