Abstract Meaning Representation for Human-Robot Dialogue
In this research, we begin to tackle the challenge of natural language understanding (NLU) in the context of the development of a robot dialogue system. We explore the adequacy of Abstract Meaning Representation (AMR) as a conduit for NLU. First, we consider the feasibility of using existing AMR parsers for automatically creating meaning representations for robot-directed transcribed speech data. We evaluate the quality of the output of two parsers on this data against a manually annotated gold-standard data set. Second, we evaluate the semantic coverage and distinctions made in AMR overall: how well does it capture the meaning and distinctions needed in our collaborative human-robot dialogue domain? We find that AMR has gaps that align with linguistic information critical for effective human-robot collaboration in search and navigation tasks, and we present task-specific modifications to AMR to address these deficiencies.
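AMR graphs such as those produced by the parsers evaluated above are conventionally written in PENMAN notation. As a rough illustration of what such a meaning representation looks like as data, the sketch below reads a simplified PENMAN string into (source, role, target) triples. It is a toy reader written for this overview, not one of the parsers from the paper, and it ignores much of real AMR syntax (string constants, wiki links, alignments).

```python
import re

def parse_penman(s):
    """Parse a simplified PENMAN string such as
    '(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))'
    into a list of (source, role, target) triples."""
    tokens = re.findall(r'\(|\)|/|:[A-Za-z0-9-]+|[^\s()/]+', s)
    pos = 0

    def node():
        nonlocal pos
        assert tokens[pos] == '(';  pos += 1
        var = tokens[pos];          pos += 1   # node variable, e.g. 'w'
        assert tokens[pos] == '/';  pos += 1
        triples = [(var, ':instance', tokens[pos])]  # concept, e.g. 'want-01'
        pos += 1
        while tokens[pos] != ')':
            role = tokens[pos];     pos += 1   # role label, e.g. ':ARG0'
            if tokens[pos] == '(':             # nested node
                child, sub = node()
                triples.append((var, role, child))
                triples.extend(sub)
            else:                              # bare variable: a reentrancy
                triples.append((var, role, tokens[pos]))
                pos += 1
        pos += 1  # consume ')'
        return var, triples

    return node()[1]
```

Run on the example string, this yields six triples, including `('g', ':ARG0', 'b')`: the reentrant variable `b` shows how AMR represents one entity filling roles in two events.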
Thirty Musts for Meaning Banking
Meaning banking--creating a semantically annotated corpus for the purpose of semantic parsing or generation--is a challenging task. It is quite simple to come up with a complex meaning representation, but it is hard to design a simple meaning representation that captures many nuances of meaning. This paper lists some lessons learned in nearly ten years of meaning annotation during the development of the Groningen Meaning Bank (Bos et al., 2017) and the Parallel Meaning Bank (Abzianidze et al., 2017). The paper's format is rather unconventional: there is no explicit related work, no methodology section, no results, and no discussion (and the current snippet is not an abstract but actually an introductory preface). Instead, its structure is inspired by the work of Traum (2000) and Bender (2013). The list starts with a brief overview of the existing meaning banks (Section 1), and the rest of the items are roughly divided into three groups: corpus collection (Sections 2-3), annotation methods (Sections 4-11), and design of meaning representations (Sections 12-30). We hope this overview will give inspiration and guidance in creating improved meaning banks in the future.
Comment: https://www.aclweb.org/anthology/W19-3302
Widely Interpretable Semantic Representation: Frameless Meaning Representation for Broader Applicability
This paper presents a novel semantic representation, WISeR, that overcomes challenges for Abstract Meaning Representation (AMR). Despite its strengths, AMR is not easily applied to languages or domains without predefined semantic frames, and its use of numbered arguments results in semantic role labels which are not directly interpretable and are semantically overloaded for parsers. We examine the numbered arguments of predicates in AMR and convert them to thematic roles that do not require reference to semantic frames. We create a new corpus of 1K English dialogue sentences annotated in both WISeR and AMR. WISeR shows stronger inter-annotator agreement for beginner and experienced annotators, with beginners becoming proficient in WISeR annotation more quickly. Finally, we train a state-of-the-art parser on the AMR 3.0 corpus and on a WISeR corpus converted from AMR 3.0. The parser is evaluated on these corpora and on our dialogue corpus. The WISeR model exhibits higher accuracy than its AMR counterpart across the board, demonstrating that WISeR is easier for parsers to learn.
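The core conversion the abstract describes, replacing numbered arguments with thematic roles, can be sketched as a relabelling over graph triples. The role mapping below is invented for illustration; WISeR's actual role inventory and its per-frame mappings are defined in the paper, not here.

```python
# Hypothetical ARGn -> thematic-role mapping, keyed by predicate concept.
# These particular pairings are illustrative guesses, not WISeR's
# published inventory.
ROLE_MAP = {
    'want-01': {':ARG0': ':experiencer', ':ARG1': ':theme'},
    'go-02':   {':ARG0': ':agent'},
}

def relabel(triples, role_map=ROLE_MAP):
    """Replace numbered AMR roles with thematic roles, looking up the
    mapping by the concept of each triple's source node; roles with no
    mapping are left unchanged."""
    concept = {s: t for s, r, t in triples if r == ':instance'}
    return [(s, role_map.get(concept.get(s), {}).get(r, r), t)
            for s, r, t in triples]
```

The point of the conversion is visible in the output: `:experiencer` is meaningful on its own, whereas `:ARG0` is only interpretable relative to the `want-01` frame definition.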
How much of UCCA can be predicted from AMR?
In this paper, we consider two of the currently popular semantic frameworks: Abstract Meaning Representation (AMR), a more abstract framework, and Universal Conceptual Cognitive Annotation (UCCA), an anchored framework. We use a corpus-based approach to build two graph rewriting systems, a deterministic and a non-deterministic one, from the former to the latter framework. We present their evaluation and a number of ambiguities that we discovered while building our rules. Finally, we provide a discussion and some future work directions in relation to comparing semantic frameworks of different flavors.
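To make the deterministic versus non-deterministic distinction concrete, here is a toy rewriting sketch over labelled edges. The AMR-to-UCCA correspondences below are placeholders invented for illustration; the paper's systems use corpus-derived graph rewriting rules, not simple label maps.

```python
from itertools import product

# Toy correspondence table: each AMR role maps to one or more candidate
# UCCA edge categories (A = Participant, T = Time, D = Adverbial).
# These pairings are placeholders, not the paper's rules.
RULES = {':ARG0': ['A'], ':ARG1': ['A'], ':time': ['T', 'D']}

def rewrite_deterministic(edges):
    """Keep only the first candidate per role: exactly one output graph."""
    return [(s, RULES.get(r, [r])[0], t) for s, r, t in edges]

def rewrite_nondeterministic(edges):
    """Enumerate every labelling consistent with the rules: an ambiguous
    role such as ':time' multiplies the number of output graphs."""
    options = [[(s, c, t) for c in RULES.get(r, [r])] for s, r, t in edges]
    return [list(p) for p in product(*options)]
```

The ambiguity the authors report corresponds to roles like `:time` here: a deterministic system must commit to one category, while the non-deterministic one keeps all candidates for later disambiguation.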
Decompositional Semantics for Events, Participants, and Scripts in Text
This thesis presents a sequence of practical and conceptual developments in decompositional meaning representations for events, participants, and scripts in text under the framework of Universal Decompositional Semantics (UDS) (White et al., 2016a). Part I of the thesis focuses on the semantic representation of individual events and their participants. Chapter 3 examines the feasibility of deriving semantic representations of events from dependency syntax; we demonstrate that predicate-argument structure may be extracted from syntax, but other desirable semantic attributes are not directly discernible. Accordingly, we present in Chapters 4 and 5 state-of-the-art models for predicting these semantic attributes from text. Chapter 4 presents a model for predicting semantic proto-role labels (SPRL), attributes of participants in events based on Dowty’s seminal theory of thematic proto-roles (Dowty, 1991). In Chapter 5 we present a model of event factuality prediction (EFP), the task of determining whether an event mentioned in text happened (according to the meaning of the text). Both chapters include extensive experiments on multi-task learning for improving performance on each semantic prediction task. Taken together, Chapters 3, 4, and 5 represent the development of individual components of a UDS parsing pipeline.
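As a rough sketch of the idea behind SPRL: Dowty characterizes proto-agents and proto-patients by clusters of entailed properties, and SPRL annotates each argument for such properties. The property lists and the majority-count heuristic below are a simplification for illustration; SPRL uses graded, per-property annotations and the thesis predicts them with learned models, not this rule.

```python
# Simplified subsets of Dowty-style proto-role properties.
PROTO_AGENT = {'volition', 'sentience', 'causation', 'movement'}
PROTO_PATIENT = {'change_of_state', 'causally_affected', 'incremental_theme'}

def proto_role(properties):
    """Toy heuristic: classify an argument by which property cluster it
    overlaps more (SPRL instead predicts each property separately)."""
    a = len(properties & PROTO_AGENT)
    p = len(properties & PROTO_PATIENT)
    if a > p:
        return 'proto-agent'
    if p > a:
        return 'proto-patient'
    return 'unclear'
```

For instance, an argument annotated as volitional and sentient (a typical subject of "eat") comes out as a proto-agent, while one annotated only with change of state comes out as a proto-patient.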
In Part II of the thesis, we shift to modeling sequences of events, or scripts (Schank and Abelson, 1977). Chapter 7 presents a case study in script induction, using a collection of restaurant narratives from an online blog to learn the canonical “Restaurant Script.” In Chapter 8, we introduce a simple discriminative neural model for script induction based on narrative chains (Chambers and Jurafsky, 2008) that outperforms prior methods. Because much existing work on narrative chains employs semantically impoverished representations of events, Chapter 9 draws on the contributions of Part I to learn narrative chains with semantically rich, decompositional event representations. Finally, in Chapter 10, we observe that corpus-based approaches to script induction resemble the task of language modeling. We explore the broader question of the relationship between language modeling and the acquisition of common-sense knowledge, and introduce an approach that combines language modeling and light human supervision to construct datasets for common-sense inference.
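The observation that script induction resembles language modeling can be illustrated with the simplest possible count-based model over narrative-chain events. The chain below is invented example data; Chambers and Jurafsky extract such (verb, dependency) chains from parsed corpora, and the thesis uses neural models rather than raw bigram counts.

```python
from collections import Counter

# An invented restaurant-script chain: (verb, protagonist-dependency)
# events in narrative order, all sharing one protagonist.
chain = [('enter', 'subj'), ('order', 'subj'), ('eat', 'subj'), ('pay', 'subj')]

def event_bigrams(events):
    """Count adjacent event pairs -- the script analogue of a bigram
    language model over words."""
    return Counter(zip(events, events[1:]))

def score_next(counts, prev, candidate):
    """Score a candidate next event by how often it followed `prev`."""
    return counts[(prev, candidate)]
```

Under these counts, ('eat', 'subj') scores higher after ('order', 'subj') than ('pay', 'subj') does, which is exactly the next-event prediction that a language model performs over word sequences.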