A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction
Picking up objects requested by a human user is a common task in human-robot
interaction. When multiple objects match the user's verbal description, the
robot needs to clarify which object the user is referring to before executing
the action. Previous research has focused on perceiving the user's multimodal
behaviour to complement verbal commands or on minimising the number of follow-up
questions to reduce task time. In this paper, we propose a system for reference
disambiguation based on visualisation and compare three methods to disambiguate
natural language instructions. In a controlled experiment with a YuMi robot, we
investigated real-time augmentations of the workspace in three conditions --
mixed reality, augmented reality, and a monitor as the baseline -- using
objective measures such as time and accuracy, and subjective measures like
engagement, immersion, and display interference. Significant differences were
found in accuracy and engagement between the conditions, but no differences
were found in task time. Despite the higher error rates in the mixed reality
condition, participants found that modality more engaging than the other two,
but overall showed a preference for the augmented reality condition over the
monitor and mixed reality conditions.
FAQchat as an Information Retrieval System
A chatbot is a conversational agent that interacts with users through natural language. In this paper, we describe a new way to access information using a chatbot. The FAQ in the School of Computing at the University of Leeds has been used to retrain the ALICE chatbot system, producing FAQchat. The results returned from FAQchat are similar to those generated by search engines such as Google. For evaluation, a comparison was made between FAQchat and Google. The main objective is to demonstrate that FAQchat is a viable alternative to Google and can be used as a tool to access FAQ databases.
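The retrieval idea behind an FAQ chatbot of this kind can be sketched as simple lexical matching between the user's query and stored FAQ questions. This is a minimal illustrative sketch, not the actual ALICE/AIML pattern-matching used by FAQchat; the FAQ entries and the overlap-count scoring are assumptions.

```python
# Minimal keyword-overlap FAQ retrieval sketch (illustrative only;
# FAQchat itself is built on ALICE/AIML pattern matching).

def tokenize(text):
    """Lowercase a string and split it into a set of word tokens."""
    return set(text.lower().split())

def retrieve(query, faq):
    """Return the answer whose stored question shares the most words
    with the query, or None if nothing overlaps at all."""
    q_tokens = tokenize(query)
    best_answer, best_score = None, 0
    for question, answer in faq.items():
        score = len(q_tokens & tokenize(question))
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer

# Hypothetical FAQ entries, invented for this example.
faq = {
    "how do i reset my password": "Use the self-service portal.",
    "where is the computing lab": "Level 2 of the main building.",
}

print(retrieve("reset password", faq))  # prints "Use the self-service portal."
```

A real system would add stemming, stop-word removal, and tf-idf weighting; the point here is only that FAQ access reduces to ranking stored question-answer pairs against the user's words.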
Ideation as an intellectual information acquisition and use context: Investigating game designers' information-based ideation behavior
Human Information Behavior (HIB) research commonly examines behavior in the context of why information is acquired and how it will be used, but usually at the level of the work or everyday-life tasks the information will support. HIB has not been examined in detail at the broader contextual level of intellectual purpose (i.e. the higher-order conceptual tasks the information was acquired to support). Examination at this level can enhance holistic understanding of HIB as a "means to an intellectual end" and inform the design of digital information environments that support information interaction for specific intellectual purposes. We investigate information-based ideation (IBI) as a specific intellectual information acquisition and use context by conducting Critical Incident-style interviews with ten game designers, focusing on how they interact with information to generate and develop creative design ideas. Our findings give rise to a framework of their ideation-focused HIB, which systems designers can leverage to reason about how best to support certain behaviors to drive design ideation. These findings emphasize the importance of intellectual purpose as a driver for acquisition and desired outcome of use.
Crowdsourcing a Word-Emotion Association Lexicon
Even though considerable attention has been given to the polarity of words
(positive and negative) and the creation of large polarity lexicons, research
in emotion analysis has had to rely on limited and small emotion lexicons. In
this paper we show how the combined strength and wisdom of the crowds can be
used to generate a large, high-quality, word-emotion and word-polarity
association lexicon quickly and inexpensively. We enumerate the challenges in
emotion annotation in a crowdsourcing scenario and propose solutions to address
them. Most notably, in addition to questions about emotions associated with
terms, we show how the inclusion of a word choice question can discourage
malicious data entry, help identify instances where the annotator may not be
familiar with the target term (allowing us to reject such annotations), and
help obtain annotations at sense level (rather than at word level). We
conducted experiments on how to formulate the emotion-annotation questions, and
show that asking if a term is associated with an emotion leads to markedly
higher inter-annotator agreement than that obtained by asking if a term evokes
an emotion.
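The quality-control idea described above can be sketched in a few lines: discard any assignment where the annotator fails the word-choice check question, then aggregate the surviving emotion judgements by majority vote. The field names and example data below are assumptions for illustration, not the paper's actual annotation format.

```python
from collections import Counter

def aggregate(assignments):
    """Majority-vote aggregation over crowd annotations.

    Each assignment is a dict with keys:
      'term'           - the annotated word
      'word_choice_ok' - did the annotator pass the word-choice question?
      'associated'     - is the term associated with the target emotion?

    Assignments that fail the word-choice check are discarded, since the
    annotator may be unfamiliar with the term or entering data maliciously.
    Returns {term: True/False} by majority vote over the valid assignments.
    """
    votes = {}
    for a in assignments:
        if not a["word_choice_ok"]:
            continue  # reject: likely unfamiliar term or malicious entry
        votes.setdefault(a["term"], Counter())[a["associated"]] += 1
    return {term: counts.most_common(1)[0][0] for term, counts in votes.items()}

# Invented example data: two valid votes, one rejected assignment.
data = [
    {"term": "shout", "word_choice_ok": True,  "associated": True},
    {"term": "shout", "word_choice_ok": True,  "associated": True},
    {"term": "shout", "word_choice_ok": False, "associated": False},  # discarded
]
print(aggregate(data))  # prints {'shout': True}
```

The same filter also supports the sense-level annotation the abstract mentions: because the word-choice question pins down which sense the annotator has in mind, votes can be keyed by (term, sense) rather than by term alone.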