Do (and say) as I say: Linguistic adaptation in human-computer dialogs
© Theodora Koulouri, Stanislao Lauria, and Robert D. Macredie. This article has been made available through the Brunel Open Access Publishing Fund.

There is strong research evidence showing that people naturally align to each other's vocabulary, sentence structure, and acoustic features in dialog, yet little is known about how the alignment mechanism operates in the interaction between users and computer systems, let alone how it may be exploited to improve the efficiency of the interaction. This article provides an account of lexical alignment in human–computer dialogs, based on empirical data collected in a simulated human–computer interaction scenario. The results indicate that alignment is present, resulting in the gradual reduction and stabilization of the vocabulary-in-use, and that it is also reciprocal. Further, the results suggest that when system and user errors occur, the development of alignment is temporarily disrupted and users tend to introduce novel words to the dialog. The results also indicate that alignment in human–computer interaction may have a strong strategic component and is used as a resource to compensate for less optimal (visually impoverished) interaction conditions. Moreover, lower alignment is associated with less successful interaction, as measured by user perceptions. The article distills the results of the study into design recommendations for human–computer dialog systems and uses them to outline a model of dialog management that supports and exploits alignment through mechanisms for in-use adaptation of the system's grammar and lexicon.
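The in-use lexical adaptation the abstract describes can be sketched as a lexicon that tracks which surface forms the user actually employs and echoes them in generation. This is a minimal illustrative sketch, not the authors' system; the class and method names (`AligningLexicon`, `observe_user`, `prefer`) are hypothetical.

```python
# Minimal sketch of in-use lexical adaptation: the system records the user's
# vocabulary and prefers the user's own terms when generating output.
# All names here are hypothetical, not taken from the paper.

from collections import Counter


class AligningLexicon:
    def __init__(self, synonyms):
        # synonyms: dict mapping a concept to its interchangeable surface forms,
        # with the system's default form listed first.
        self.synonyms = synonyms
        self.user_counts = Counter()

    def observe_user(self, utterance):
        # Record which surface forms the user actually employs.
        for token in utterance.lower().split():
            self.user_counts[token] += 1

    def prefer(self, concept):
        # Generate with the form the user has aligned to; fall back to the
        # default (first-listed) form for concepts the user has not named.
        forms = self.synonyms[concept]
        return max(forms, key=lambda f: (self.user_counts[f], -forms.index(f)))


lex = AligningLexicon({"move": ["go", "move", "proceed"]})
lex.observe_user("please proceed to the kitchen")
print(lex.prefer("move"))  # the system now echoes the user's term: "proceed"
```

A fuller system would also need the disruption handling the study observed: after an error turn, novel user words should be added to the lexicon rather than mapped onto existing forms.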
Challenges in Collaborative HRI for Remote Robot Teams
Collaboration between human supervisors and remote teams of robots is highly challenging, particularly in high-stakes, distant, hazardous locations, such as off-shore energy platforms. For these teams of robots to be truly beneficial, they need to be trusted to operate autonomously, performing tasks such as inspection and emergency response, thus reducing the number of personnel placed in harm's way. As remote robots are generally trusted less than robots in close proximity, we present a solution to instil trust in the operator through a 'mediator robot' that can exhibit social skills, alongside sophisticated visualisation techniques. In this position paper, we present general challenges and then take a closer look at one challenge in particular, discussing an initial study that investigates the relationship between the level of control the supervisor hands over to the mediator robot and how this affects their trust. We show that the supervisor is more likely to have higher overall trust if their initial experience involves handing over control of the emergency situation to the robotic assistant. We discuss this result, as well as other challenges and interaction techniques for human-robot collaboration.

Comment: 9 pages. Peer-reviewed position paper accepted in the CHI 2019 Workshop: The Challenges of Working on Social Robots that Collaborate with People (SIRCHI2019), ACM CHI Conference on Human Factors in Computing Systems, May 2019, Glasgow, UK.
Exploring miscommunication and collaborative behaviour in human-robot interaction
This paper presents the first step in designing a speech-enabled robot that is capable of natural management of miscommunication. It describes the methods and results of two WOz studies, in which dyads of naïve participants interacted in a collaborative task. The first WOz study explored human miscommunication management. The second study investigated how shared visual space and monitoring shape the processes of feedback and communication in task-oriented interactions. The results provide insights for the development of human-inspired and robust natural language interfaces in robots.
A corpus-based analysis of route instructions in human-robot interaction
This paper investigates how users employ spatial descriptions to navigate a speech-enabled robot. We created a simulated environment in which users gave route instructions in a dialogic real-time interaction with a robot, which was operated by naïve participants. The robot's monitoring ability was also manipulated across two experimental conditions. The results provide evidence that the content of the instructions and the strategies of the users vary depending on the conditions and demands of the interaction. As expected, the route instructions were frequently underspecified and arbitrary. The findings of this study elucidate the complexity of interpreting spatial language in HRI. However, they also point to the need to endow mobile robots with richer dialogue resources to compensate for the uncertainties arising from language as well as from the environment.
A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction
Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands, or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods of disambiguating natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions -- mixed reality, augmented reality, and a monitor as the baseline -- using objective measures such as time and accuracy, and subjective measures such as engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but no differences were found in task time. Despite the higher error rates in the mixed reality condition, participants found that modality more engaging than the other two, but overall preferred the augmented reality condition over the monitor and mixed reality conditions.
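The disambiguation step this abstract describes can be sketched as filtering scene objects by the attributes mentioned in the request and, when more than one candidate remains, returning the ambiguous set for a clarification turn (which the study rendered on a monitor, in AR, or in MR). This is a hypothetical sketch under assumed attribute-dictionary representations, not the paper's implementation.

```python
# Hypothetical sketch of verbal reference disambiguation: keep the objects
# whose attributes include everything the request mentions, then either act
# or ask for clarification. Data representation is assumed, not from the paper.

def disambiguate(request_attrs, objects):
    """Return ('execute', obj) when the request picks out a unique object,
    else ('clarify', candidates) so the matching set can be visualised."""
    candidates = [obj for obj in objects
                  if request_attrs.items() <= obj.items()]
    if len(candidates) == 1:
        return ("execute", candidates[0])
    return ("clarify", candidates)


scene = [
    {"shape": "cube", "colour": "red"},
    {"shape": "cube", "colour": "blue"},
    {"shape": "ball", "colour": "red"},
]
print(disambiguate({"shape": "cube", "colour": "blue"}, scene)[0])  # execute
print(disambiguate({"shape": "cube"}, scene)[0])                    # clarify
```

The design choice here mirrors the paper's framing: rather than minimising follow-up questions, the ambiguous candidate set itself becomes the payload to visualise in whichever display condition is in use.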
Challenges for an Ontology of Artificial Intelligence
Of primary importance in formulating a response to the increasing prevalence and power of artificial intelligence (AI) applications in society are questions of ontology: What "are" these systems? How are they to be regarded? How does an algorithm come to be regarded as an agent? We discuss three factors which hinder discussion and obscure attempts to form a clear ontology of AI: (1) the various and evolving definitions of AI, (2) the tendency for pre-existing technologies to be assimilated and regarded as "normal," and (3) the tendency of human beings to anthropomorphize. This list is not intended to be exhaustive, nor is it seen to preclude entirely a clear ontology; however, these challenges are a necessary set of topics for consideration. Each of these factors presents a "moving target" for discussion, which poses a challenge for both technical specialists and non-practitioners of AI systems development (e.g., philosophers and theologians) to speak meaningfully, given that the corpus of AI structures and capabilities evolves at a rapid pace. Finally, we present avenues for moving forward, including opportunities for collaborative synthesis by scholars in philosophy and science.