What you say and how you say it: joint modeling of topics and discourse in microblog conversations
This paper presents an unsupervised framework for jointly modeling topic content and discourse behavior in microblog conversations. Concretely, we propose a neural model to discover word clusters indicating what a conversation concerns (i.e., topics) and those reflecting how participants voice their opinions (i.e., discourse). Extensive experiments show that our model can yield both coherent topics and meaningful discourse behavior. Further study shows that our topic and discourse representations can benefit the classification of microblog messages, especially when they are jointly trained with the classifier.
Evaluating the State-of-the-Art of End-to-End Natural Language Generation: The E2E NLG Challenge
This paper provides a comprehensive analysis of the first shared task on
End-to-End Natural Language Generation (NLG) and identifies avenues for future
research based on the results. This shared task aimed to assess whether recent
end-to-end NLG systems can generate more complex output by learning from
datasets containing higher lexical richness, syntactic complexity and diverse
discourse phenomena. Introducing novel automatic and human metrics, we compare
62 systems submitted by 17 institutions, covering a wide range of approaches,
including machine learning architectures -- with the majority implementing
sequence-to-sequence models (seq2seq) -- as well as systems based on
grammatical rules and templates. Seq2seq-based systems have demonstrated a
great potential for NLG in the challenge. We find that seq2seq systems
generally score high in terms of word-overlap metrics and human evaluations of
naturalness -- with the winning SLUG system (Juraska et al., 2018) being
seq2seq-based. However, vanilla seq2seq models often fail to correctly express
a given meaning representation if they lack a strong semantic control mechanism
applied during decoding. Moreover, seq2seq models can be outperformed by
hand-engineered systems in terms of overall quality, as well as complexity,
length and diversity of outputs. This research has influenced, inspired and
motivated a number of recent studies outwith the original competition, which we
also summarise as part of this paper. Comment: Computer Speech and Language, final accepted manuscript (in press).
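The semantic-control problem described above can be illustrated with a simple slot-coverage reranker. This is a hypothetical sketch, not the challenge systems' actual mechanism: all names and the scoring heuristic are illustrative. Candidates that fail to realize every slot value of the input meaning representation are ranked below candidates that cover them all.

```python
# Hypothetical sketch: rerank NLG candidates by how much of the input
# meaning representation (MR) they express. String matching stands in
# for the learned semantic-control mechanisms used by real systems.

def slot_coverage(mr, candidate):
    """Fraction of MR slot values that appear in the candidate text."""
    values = list(mr.values())
    hits = sum(1 for v in values if v.lower() in candidate.lower())
    return hits / len(values)

def rerank(mr, candidates):
    """Sort candidates by coverage (higher first), breaking ties by brevity."""
    return sorted(candidates, key=lambda c: (-slot_coverage(mr, c), len(c)))

mr = {"name": "The Vaults", "food": "Italian", "area": "riverside"}
candidates = [
    "The Vaults serves Italian food.",                       # drops the 'area' slot
    "The Vaults serves Italian food in the riverside area.", # covers all slots
]
best = rerank(mr, candidates)[0]
```

A vanilla decoder with no such check can happily emit the first candidate even though it silently drops a slot; coverage-aware reranking surfaces the faithful one.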
DART: A Lightweight Quality-Suggestive Data-to-Text Annotation Tool
We present a lightweight annotation tool, the Data AnnotatoR Tool (DART), for
the general task of labeling structured data with textual descriptions. The
tool is implemented as an interactive application that reduces human efforts in
annotating large quantities of structured data, e.g. in the format of a table
or tree structure. By using a backend sequence-to-sequence model, our system
iteratively analyzes the annotated labels in order to better sample unlabeled
data. In a simulation experiment performed on annotating large quantities of
structured data, DART has been shown to reduce the total number of annotations needed by combining active learning with automatic suggestion of relevant labels. Comment: Accepted to COLING 2020 (selected as an outstanding paper).
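The iterative sampling loop described above can be sketched as uncertainty-based active learning. This is a minimal illustration under assumed names: the uncertainty scores here are a toy stand-in for the confidence of DART's backend sequence-to-sequence model, and `select_for_annotation` is a hypothetical helper, not part of the tool.

```python
# Hypothetical sketch of one active-learning round: ask the annotator to
# label the unlabeled items the model is least confident about, so each
# round of annotation is spent where it helps the model most.

def select_for_annotation(pool, uncertainty, budget):
    """Return the `budget` most uncertain items from the unlabeled pool."""
    return sorted(pool, key=lambda item: uncertainty[item], reverse=True)[:budget]

pool = ["row_1", "row_2", "row_3", "row_4"]
uncertainty = {"row_1": 0.1, "row_2": 0.9, "row_3": 0.5, "row_4": 0.7}

to_label = select_for_annotation(pool, uncertainty, budget=2)
# row_2 and row_4 are sent to the annotator; confident items are skipped
```

After the selected items are labeled, the model is retrained and the uncertainty scores are recomputed, which is what lets the tool "better sample unlabeled data" on each iteration.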