26 research outputs found
Joint Modeling of Content and Discourse Relations in Dialogues
We present a joint modeling approach to identify salient discussion points in
spoken meetings as well as to label the discourse relations between speaker
turns. We also discuss a variation of our model in which discourse relations
are treated as latent variables. Experimental results on two popular meeting
corpora show that our joint model can outperform state-of-the-art approaches
for both phrase-based content selection and discourse relation prediction
tasks. We also evaluate our model on predicting the consistency among team
members' understanding of their group decisions. Classifiers trained with
features constructed from our model achieve significantly better predictive
performance than the state-of-the-art.
Comment: Accepted by ACL 2017. 11 pages.
SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization
This paper introduces the SAMSum Corpus, a new dataset with abstractive
dialogue summaries. We investigate the challenges it poses for automated
summarization by testing several models and comparing their results with those
obtained on a corpus of news articles. We show that model-generated summaries
of dialogues achieve higher ROUGE scores than the model-generated summaries of
news -- in contrast with human evaluators' judgement. This suggests that a
challenging task of abstractive dialogue summarization requires dedicated
models and non-standard quality measures. To our knowledge, our study is the
first attempt to introduce a high-quality chat-dialogue corpus, manually
annotated with abstractive summaries, which can be used by the research
community for further studies.
Comment: Attachment contains the described dataset archived in 7z format.
Please see the attached readme and licence. Update of the previous version:
changed formats of train/val/test files in corpus.7z
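The ROUGE comparison reported above can be illustrated with a minimal unigram-overlap (ROUGE-1) scorer. This is only a sketch: the paper's reported scores come from the full ROUGE toolkit (with stemming and ROUGE-2/L variants), and the example sentences here are invented.

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: clipped unigram overlap between reference and candidate."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # each word counted at most min(ref, cand) times
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

reference = "amanda baked cookies and will bring jerry some tomorrow"
candidate = "amanda will bring cookies to jerry tomorrow"
print(round(rouge1_f1(reference, candidate), 3))  # high lexical overlap scores well
```

As the abstract notes, such n-gram overlap can rate a dialogue summary highly even when human judges disagree, which motivates non-standard quality measures.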
Unsupervised Summarization for Chat Logs with Topic-Oriented Ranking and Context-Aware Auto-Encoders
Automatic chat summarization can help people quickly grasp important
information from numerous chat messages. Unlike conventional documents, chat
logs usually have fragmented and evolving topics. In addition, these logs
contain many elliptical and interrogative sentences, which make chat
summarization highly context-dependent. In this work, we propose a novel
unsupervised framework called RankAE to perform chat summarization without
employing manually labeled data. RankAE consists of a topic-oriented ranking
strategy that selects topic utterances according to centrality and diversity
simultaneously, as well as a denoising auto-encoder that is carefully designed
to generate succinct but context-informative summaries based on the selected
utterances. To evaluate the proposed method, we collect a large-scale dataset
of chat logs from a customer service environment and build an annotated set
only for model evaluation. Experimental results show that RankAE significantly
outperforms other unsupervised methods and is able to generate high-quality
summaries in terms of relevance and topic coverage.
Comment: Accepted by AAAI 2021. 9 pages.
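Selecting topic utterances "according to centrality and diversity simultaneously" can be sketched with a generic MMR-style greedy loop over term-frequency vectors. RankAE's actual ranking strategy is more elaborate (topic-oriented, paired with a denoising auto-encoder); the chat lines and the trade-off weight `lam` below are invented for illustration.

```python
from collections import Counter
from math import sqrt

def tf_vector(text):
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a if w in b)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def select_utterances(utterances, k=2, lam=0.7):
    """Greedy selection balancing centrality (similarity to the whole chat)
    against diversity (dissimilarity to already-selected utterances)."""
    vecs = [tf_vector(u) for u in utterances]
    whole_chat = tf_vector(" ".join(utterances))
    selected = []
    while len(selected) < min(k, len(utterances)):
        best, best_score = None, float("-inf")
        for i, v in enumerate(vecs):
            if i in selected:
                continue
            centrality = cosine(v, whole_chat)
            redundancy = max((cosine(v, vecs[j]) for j in selected), default=0.0)
            score = lam * centrality - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return [utterances[i] for i in selected]

chat = [
    "my order has not arrived yet",
    "the order was placed two weeks ago",
    "could you check the delivery status",
    "thanks for your help",
]
print(select_utterances(chat, k=2))
```

The redundancy penalty is what keeps the second pick from repeating the first central utterance's topic.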
Abstractive Multi-Document Summarization via Phrase Selection and Merging
We propose an abstraction-based multi-document summarization framework that
can construct new sentences by exploring more fine-grained syntactic units than
sentences, namely, noun/verb phrases. Different from existing abstraction-based
approaches, our method first constructs a pool of concepts and facts
represented by phrases from the input documents. Then new sentences are
generated by selecting and merging informative phrases to maximize the salience
of phrases and meanwhile satisfy the sentence construction constraints. We
employ integer linear optimization for conducting phrase selection and merging
simultaneously in order to achieve the global optimal solution for a summary.
Experimental results on the benchmark data set TAC 2011 show that our framework
outperforms state-of-the-art models under the automated pyramid evaluation
metric and achieves reasonably good results on manual linguistic quality
evaluation.
Comment: 11 pages, 1 figure, accepted as a full paper at ACL 201
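The selection objective behind the ILP formulation (maximize phrase salience subject to summary-construction constraints) can be shown on a toy instance. The paper solves phrase selection and merging jointly with integer linear optimization; this sketch does exhaustive search over a tiny invented phrase pool with only a length-budget constraint, which finds the same optimum at small scale.

```python
from itertools import combinations

def select_phrases(phrases, max_words):
    """Pick the subset of (phrase, salience) pairs maximizing total salience
    while keeping the total word count within the summary length budget."""
    best_subset, best_score = (), 0.0
    for r in range(1, len(phrases) + 1):
        for subset in combinations(phrases, r):
            words = sum(len(p.split()) for p, _ in subset)
            score = sum(s for _, s in subset)
            if words <= max_words and score > best_score:
                best_subset, best_score = subset, score
    return [p for p, _ in best_subset], best_score

# Hypothetical phrase pool with made-up salience weights.
phrases = [
    ("the earthquake", 3.0),
    ("struck the coastal city", 2.5),
    ("rescue teams", 2.0),
    ("arrived within hours", 1.5),
]
selected, score = select_phrases(phrases, max_words=7)
print(selected, score)
```

An ILP solver scales this same objective to realistic pools where enumeration is infeasible, and encodes the merging constraints (e.g. pairing noun and verb phrases into grammatical sentences) as additional linear constraints.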
Are we summarizing the right way? A survey of dialogue summarization data sets
Dialogue summarization is a long-standing task in the field of NLP, and several data sets with dialogues and associated human-written summaries of different styles exist. However, it is unclear which type of summary is most appropriate for which type of dialogue. For this reason, we apply a linguistic model of dialogue types to derive matching summary items and NLP tasks. This allows us to map existing dialogue summarization data sets into this model and identify gaps and potential directions for future work. As part of this process, we also provide an extensive overview of existing dialogue summarization data sets.
Controllable Abstractive Dialogue Summarization with Sketch Supervision
In this paper, we aim to improve abstractive dialogue summarization quality and, at the same time, enable granularity control. Our model has two primary components: 1) a two-stage generation strategy that first produces a preliminary summary sketch serving as the basis for the final summary; this sketch provides a weakly supervised signal in the form of pseudo-labeled interrogative-pronoun categories and key phrases extracted with a constituency parser; and 2) a simple strategy for controlling the granularity of the final summary, whereby our model can automatically determine or control the number of generated summary sentences for a given dialogue by predicting and highlighting different text spans from the source text. Our model achieves state-of-the-art performance on SAMSum, the largest dialogue summarization corpus, with a ROUGE-L score of up to 50.79. In addition, we conduct a case study and show competitive human evaluation results and controllability relative to human-annotated summaries.
Sentence Embedding Approach using LSTM Auto-encoder for Discussion Threads Summarization
Online discussion forums are repositories of valuable information where users interact, articulate their ideas and opinions, and share experiences about numerous topics. These forums are internet-based online communities where users can ask for help and find solutions to problems. A new user of an online discussion forum can be exhausted by reading the large number of irrelevant replies in a discussion. An automated discussion thread summarization (DTS) system is therefore needed to provide a clear view of the entire discussion of a query. Most previous approaches to automated DTS use the continuous bag-of-words (CBOW) model for sentence embedding, which is poor at capturing the overall meaning of a sentence and unable to grasp word dependencies. To overcome these limitations, we introduce the LSTM auto-encoder as a sentence embedding technique to improve the performance of DTS. Empirical results on two standard experimental datasets, measured by average precision, recall, and F-measure with respect to ROUGE-1 and ROUGE-2, demonstrate the effectiveness and efficiency of the proposed approach, which outperforms the state-of-the-art CBOW model on sentence embedding and boosts the performance of the automated DTS model.
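The CBOW limitation this abstract targets, inability to grasp word dependency, can be demonstrated with a minimal count-based stand-in. Real CBOW averages learned word vectors rather than counting tokens, but it shares the same order-blindness shown here; the example sentences are invented.

```python
from collections import Counter

def bow_embedding(sentence):
    """Bag-of-words 'embedding': word counts only, no order information.
    Averaging per-word vectors (as CBOW does) discards order the same way."""
    return Counter(sentence.lower().split())

s1 = "the user answered the developer"
s2 = "the developer answered the user"

# Same multiset of words, opposite meanings: a bag-of-words representation
# cannot distinguish them. An LSTM auto-encoder reads words in sequence,
# so its hidden state can encode who answered whom.
print(bow_embedding(s1) == bow_embedding(s2))
```

This collision is exactly the failure mode that motivates replacing CBOW with a sequence-aware encoder for ranking forum replies.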