Universal Discourse Representation Structure Parsing
We consider the task of cross-lingual semantic parsing in the style of Discourse Representation Theory (DRT), where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide learning in other languages. We introduce Universal Discourse Representation Theory (UDRT), a variant of DRT that explicitly anchors semantic representations to tokens in the linguistic input. We develop a semantic parsing framework based on the Transformer architecture and utilize it to obtain semantic resources in multiple languages following two learning schemes. The many-to-one approach translates non-English text to English and then runs a relatively accurate English parser on the translated text, while the one-to-many approach translates gold-standard English to non-English text and trains multiple parsers (one per language) on the translations. Experimental results on the Parallel Meaning Bank show that our proposal outperforms strong baselines by a wide margin and can be used to construct (silver-standard) meaning banks for 99 languages.
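The two transfer schemes can be sketched as pipelines. This is an illustrative sketch only: `translate`, `english_parser`, and `train_parser` are hypothetical stand-ins, not the paper's actual components.

```python
# Hypothetical sketch of the two cross-lingual transfer schemes.
# translate, english_parser, and train_parser are illustrative stand-ins.

def many_to_one(sentence, src_lang, translate, english_parser):
    """Translate non-English input to English, then run an English parser."""
    english = translate(sentence, src=src_lang, tgt="en")
    return english_parser(english)

def one_to_many(gold_english, tgt_lang, translate, train_parser):
    """Pair gold English meaning representations with translations,
    then train a language-specific parser on the silver data."""
    silver = [(translate(text, src="en", tgt=tgt_lang), drs)
              for text, drs in gold_english]
    return train_parser(silver)
```

The key design difference: many-to-one needs only one (English) parser at test time but depends on translation quality at inference, while one-to-many pays the translation cost once, at training time.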
The First Shared Task on Discourse Representation Structure Parsing
The paper presents the IWCS 2019 shared task on semantic parsing where the
goal is to produce Discourse Representation Structures (DRSs) for English
sentences. DRSs originate from Discourse Representation Theory and represent
scoped meaning representations that capture the semantics of negation, modals,
quantification, and presupposition triggers. Additionally, concepts and
event-participants in DRSs are described with WordNet synsets and the thematic
roles from VerbNet. To measure similarity between two DRSs, they are
represented in a clausal form, i.e. as a set of tuples. Participant systems
were expected to produce DRSs in this clausal form. The rich lexical
information, explicit scope marking, large number of shared variables among
clauses, and highly constrained format of valid DRSs together make DRS parsing
a challenging NLP task. The results of the shared task showed improvements over
the existing state-of-the-art parser.
Comment: International Conference on Computational Semantics (IWCS 2019)
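The clausal-form comparison described above can be sketched as simple set overlap. This is a simplification: the real matching-based evaluation also searches over variable mappings between the two DRSs, which is ignored here (variables are assumed pre-aligned).

```python
# Simplified sketch: a DRS in clausal form is a set of tuples, and two
# DRSs are compared by micro F1 over shared clauses. Variable alignment,
# which the real evaluation searches for, is assumed given here.

def clause_f1(gold, system):
    """F1 over clause tuples shared by two clausal-form DRSs."""
    gold, system = set(gold), set(system)
    overlap = len(gold & system)
    if not overlap:
        return 0.0
    precision = overlap / len(system)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# Toy clausal-form DRSs (box label, operator, arguments):
gold = {("b1", "REF", "x1"), ("b1", "time", "x1", "t1"), ("b1", "NOT", "b2")}
system = {("b1", "REF", "x1"), ("b1", "NOT", "b2")}
```

Here the system recovers two of three gold clauses with no spurious ones, so precision is 1.0, recall is 2/3, and F1 is 0.8.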
DRS at MRP 2020: Dressing up Discourse Representation Structures as Graphs
Discourse Representation Theory (DRT) is a formal account for representing
the meaning of natural language discourse. Meaning in DRT is modeled via a
Discourse Representation Structure (DRS), a meaning representation with a
model-theoretic interpretation, which is usually depicted as nested boxes. In
contrast, a directed labeled graph is a common data structure used to encode
semantics of natural language texts. The paper describes the procedure of
dressing up DRSs as directed labeled graphs to include DRT as a new framework
in the 2020 shared task on Cross-Framework and Cross-Lingual Meaning
Representation Parsing. Since one of the goals of the shared task is to
encourage unified models for several semantic graph frameworks, the conversion
procedure was biased towards making the DRT graph framework somewhat similar to
other graph-based meaning representation frameworks.
Comment: 10 pages, 4 figures, 4 tables, CoNLL 2020 Shared Task
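The general shape of such a conversion can be sketched as turning clausal-form tuples into labeled edges. This is an illustration of the idea only, not the shared task's actual conversion procedure; the node and edge naming scheme here is a hypothetical simplification.

```python
# Illustrative sketch of "dressing up" a DRS as a directed labeled graph:
# boxes and discourse referents become nodes, and each clause contributes
# labeled edges. Not the shared task's actual conversion rules.

def drs_to_graph(clauses):
    """Turn (box, label, arg[, arg2]) clauses into (source, label, target) edges."""
    edges = []
    for clause in clauses:
        if len(clause) == 3:
            box, label, arg = clause
            edges.append((box, label, arg))        # box introduces arg
        else:
            box, label, arg1, arg2 = clause
            edges.append((arg1, label, arg2))      # relation between referents
            edges.append((box, "in", arg1))        # membership of arg1 in box
    return edges
```

A graph encoding like this lets DRT reuse parsers and evaluation tooling built for other graph-based frameworks, which is exactly the bias toward similarity the abstract mentions.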
A Deep Sequential Model for Discourse Parsing on Multi-Party Dialogues
Discourse structures are beneficial for various NLP tasks such as dialogue
understanding, question answering, sentiment analysis, and so on. This paper
presents a deep sequential model for parsing discourse dependency structures of
multi-party dialogues. The proposed model aims to construct a discourse
dependency tree by predicting dependency relations and constructing the
discourse structure jointly and alternately. It makes a sequential scan of the
Elementary Discourse Units (EDUs) in a dialogue. For each EDU, the model
decides to which previous EDU the current one should link and what the
corresponding relation type is. The predicted link and relation type are then
used to build the discourse structure incrementally with a structured encoder.
During link prediction and relation classification, the model utilizes not only
local information that represents the concerned EDUs, but also global
information that encodes the EDU sequence and the discourse structure that is
already built at the current step. Experiments show that the proposed model
outperforms all the state-of-the-art baselines.
Comment: Accepted to AAAI 2019
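The sequential scan described above can be sketched as a greedy loop: each EDU after the first is attached to whichever previous EDU scores highest. `score_link` is a hypothetical stand-in for the paper's neural scorer, and this sketch omits the relation-type classification and the structured encoder.

```python
# Minimal sketch of sequential discourse dependency parsing: for each EDU,
# pick the best-scoring previous EDU as its head. score_link stands in for
# the neural scorer; relation typing and the structured encoder are omitted.

def parse_dialogue(edus, score_link):
    """Greedily build a discourse dependency tree over a list of EDUs.
    Returns (head_index, dependent_index) links; EDU 0 is the root."""
    links = []
    for i in range(1, len(edus)):
        # score every previous EDU as a candidate head; the partial
        # structure built so far (links) is available to the scorer
        head = max(range(i), key=lambda j: score_link(edus[j], edus[i], links))
        links.append((head, i))
    return links
```

Because each EDU links only to an earlier one, the result is always a tree rooted at the first EDU, which matches the incremental, left-to-right construction the abstract describes.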
Neural Discourse Structure for Text Categorization
We show that discourse structure, as defined by Rhetorical Structure Theory
and provided by an existing discourse parser, benefits text categorization. Our
approach uses a recursive neural network and a newly proposed attention
mechanism to compute a representation of the text that focuses on salient
content, from the perspective of both RST and the task. Experiments consider
variants of the approach and illustrate its strengths and weaknesses.
Comment: ACL 2017 camera ready version