An Annotated Corpus of Reference Resolution for Interpreting Common Grounding
Common grounding is the process of creating, repairing and updating mutual
understandings, which is a fundamental aspect of natural language conversation.
However, interpreting the process of common grounding is a challenging task,
especially under continuous and partially-observable context where complex
ambiguity, uncertainty, partial understandings and misunderstandings are
introduced. Interpretation becomes even more challenging when we deal with
dialogue systems which still have limited capability of natural language
understanding and generation. To address this problem, we consider reference
resolution as the central subtask of common grounding and propose a new
resource to study its intermediate process. Based on a simple and general
annotation schema, we collected a total of 40,172 referring expressions in
5,191 dialogues curated from an existing corpus, along with multiple judgements
of referent interpretations. We show that our annotation is highly reliable,
captures the complexity of common grounding through a natural degree of
reasonable disagreements, and allows for more detailed and quantitative
analyses of common grounding strategies. Finally, we demonstrate the advantages
of our annotation for interpreting, analyzing and improving common grounding in
baseline dialogue systems.
Comment: 9 pages, 7 figures, 6 tables, Accepted by AAAI 202
Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery
State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in controlling and collaborating on manipulation task behaviors. However, this remains a significant challenge given that many WIMP-style tools require superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has nonetheless garnered momentum in recent years; robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, as autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six-degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries, (2) task specification with an unordered list of goal predicates, and (3) guidance of task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction.
Empirical results from four user studies show that our interface was strongly preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
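The task-specification idea above (an unordered list of goal predicates) can be sketched as follows. This is a minimal illustration, not the paper's actual schema: the predicate names, block labels, and location labels are all invented for the example.

```python
# Sketch: unordered goal-predicate task specification for a pick-and-place
# task in a Blocks-World-style environment. All identifiers here
# (predicate names, "block_a", "location_1", ...) are illustrative
# assumptions, not taken from the paper.

def goals_satisfied(goal_predicates, world_state):
    """A task is complete when every goal predicate holds in the current
    world state. Because the goal set is unordered, any execution order
    that reaches this state is acceptable."""
    return goal_predicates <= world_state

# Goal: block_a ends up on location_1 and block_b on location_2.
goals = {("on", "block_a", "location_1"),
         ("on", "block_b", "location_2")}

# State after the robot happened to place block_b first, then block_a.
state = {("on", "block_b", "location_2"),
         ("on", "block_a", "location_1"),
         ("clear", "block_a")}

print(goals_satisfied(goals, state))  # → True
```

Representing goals as a set rather than a sequence is what lets a non-technical user specify *what* must hold without committing to *how* or in which order the planner achieves it.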
NLSC: Unrestricted Natural Language-based Service Composition through Sentence Embeddings
Current approaches for service composition (assemblies of atomic services)
require developers to use: (a) domain-specific semantics to formalize services
that restrict the vocabulary for their descriptions, and (b) translation
mechanisms for service retrieval to convert unstructured user requests to
strongly-typed semantic representations. In our work, we argue that the effort of
developing service descriptions, request translations, and matching mechanisms
could be reduced using unrestricted natural language, allowing both: (1)
end-users to intuitively express their needs using natural language, and (2)
service developers to develop services without relying on syntactic/semantic
description languages. Although there are some natural language-based service
composition approaches, they restrict service retrieval to syntactic/semantic
matching. With recent developments in machine learning and natural language
processing, we motivate the use of sentence embeddings by leveraging richer
semantic representations of sentences for service description, matching and
retrieval. Experimental results show that service composition development
effort may be reduced by more than 44% while keeping a high precision/recall
when matching high-level user requests with low-level service method
invocations.
Comment: This paper will appear at SCC'19 (IEEE International Conference on
Services Computing) on July 1
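The matching step described above, scoring a free-form user request against service descriptions by embedding similarity, can be sketched as below. A real system would use a pre-trained sentence encoder; here a simple bag-of-words vector stands in so the example stays self-contained, and the service names and descriptions are invented for illustration.

```python
# Sketch: matching an unrestricted natural-language request to the most
# similar service method description via vector similarity. The embed()
# stand-in uses token counts; in practice one would substitute a
# pre-trained sentence encoder. Service names/descriptions are assumptions.

import math
from collections import Counter

def embed(sentence):
    # Stand-in for a sentence encoder: a sparse token-count vector.
    return Counter(sentence.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_service(request, services):
    # Rank candidate service descriptions by similarity to the request
    # and return the name of the best match.
    req = embed(request)
    return max(services, key=lambda name: cosine(req, embed(services[name])))

services = {
    "email.send": "send an email message to a recipient",
    "calendar.add": "add an event to the calendar",
}

print(best_service("please send a message to Bob", services))  # → email.send
```

The design point is that both sides of the match are plain sentences: service developers write ordinary descriptions and end-users write ordinary requests, with no shared domain-specific vocabulary required.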