Generating Abstractive Summaries from Meeting Transcripts
Meeting summaries are important because they convey the essential content of a
discussion in concise form. Reading and understanding entire transcripts is
time consuming, and readers are typically interested only in the important
points of a discussion, so summaries play a key role. In this work, we address
the task of meeting summarization. Automatic summarization systems for meeting
conversations developed so far have been primarily extractive, yielding
summaries that are hard to read: the extracted utterances contain disfluencies
that degrade quality. To make summaries more readable, we propose an
approach to generating abstractive summaries by fusing important content from
several utterances. We first separate meeting transcripts into various topic
segments, and then identify the important utterances in each segment using a
supervised learning approach. The important utterances are then combined
together to generate a one-sentence summary. In the text generation step, the
dependency parses of the utterances in each segment are combined together to
create a directed graph. The most informative and well-formed sub-graph
obtained by integer linear programming (ILP) is selected to generate a
one-sentence summary for each topic segment. The ILP formulation reduces
disfluencies by leveraging grammatical relations that are more prominent in
non-conversational text, and therefore generates summaries comparable to
human-written abstractive summaries. Experimental results show that our method
generates more informative summaries than the baselines. In addition,
readability assessments by human judges, as well as log-likelihood estimates
obtained from the dependency parser, show that our generated summaries are
readable and well-formed.

Comment: 10 pages. Proceedings of the 2015 ACM Symposium on Document
Engineering (DocEng 2015).
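The fusion step described above merges the dependency parses of a segment's utterances into one directed graph and selects the most informative well-formed subgraph under an ILP. A minimal sketch of that idea, using hypothetical toy edges and a brute-force search as a stand-in for the actual ILP solver:

```python
from itertools import combinations

# Toy dependency edges (head, dependent) from two utterances in one topic
# segment; hypothetical example data, not taken from the paper.
utt1 = [("decided", "team"), ("decided", "design"), ("design", "remote"),
        ("design", "the")]
utt2 = [("decided", "team"), ("decided", "design"), ("design", "remote"),
        ("decided", "quickly")]

def merge(parses):
    """Merge dependency edges across utterances into one directed graph;
    an edge's weight is the number of utterances it appears in."""
    weights = {}
    for parse in parses:
        for edge in set(parse):
            weights[edge] = weights.get(edge, 0) + 1
    return weights

def best_subgraph(weights, root, max_words):
    """Brute-force stand-in for the paper's ILP: among edge sets connected
    to `root`, return the one maximizing total weight while covering at
    most `max_words` distinct words."""
    edges = list(weights)
    best, best_score = [], -1
    for r in range(1, len(edges) + 1):
        for subset in combinations(edges, r):
            nodes, remaining = {root}, list(subset)
            while remaining:  # attach edges whose head is already reachable
                attachable = [e for e in remaining if e[0] in nodes]
                if not attachable:
                    break  # subset is disconnected from the root
                for head, dep in attachable:
                    nodes.add(dep)
                    remaining.remove((head, dep))
            if remaining or len(nodes) > max_words:
                continue
            score = sum(weights[e] for e in subset)
            if score > best_score:
                best, best_score = list(subset), score
    return best, best_score

weights = merge([utt1, utt2])
subgraph, score = best_subgraph(weights, root="decided", max_words=4)
```

With a 4-word budget the search keeps the edges shared by both utterances (linearizing to something like "team decided design remote") and drops the single-utterance edges, which is the intuition behind weighting grammatical relations by prominence.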
Energy-based Self-attentive Learning of Abstractive Communities for Spoken Language Understanding
Abstractive community detection is an important spoken language understanding
task, whose goal is to group utterances in a conversation according to whether
they can be jointly summarized by a common abstractive sentence. This paper
provides a novel approach to this task. We first introduce a neural contextual
utterance encoder featuring three types of self-attention mechanisms. We then
train it using the siamese and triplet energy-based meta-architectures.
Experiments on the AMI corpus show that our system outperforms multiple
energy-based and non-energy based baselines from the state-of-the-art. Code and
data are publicly available.
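The siamese and triplet energy-based training mentioned above learns utterance embeddings whose pairwise energy is low for utterances in the same abstractive community and high otherwise. A toy sketch of the triplet objective (the encoder here is a deterministic bag-of-words hash, our assumption for illustration; the paper's encoder is a neural network with self-attention):

```python
import math

def encode(utterance, dim=8):
    """Toy stand-in for a self-attentive utterance encoder: a
    deterministic hashed bag-of-words embedding, L2-normalized."""
    vec = [0.0] * dim
    for tok in utterance.lower().split():
        vec[sum(ord(c) for c in tok) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def energy(u, v):
    """Energy of a pair = squared Euclidean distance of the embeddings."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet energy objective: a same-community pair should have lower
    energy than a different-community pair by at least `margin`."""
    return max(0.0, energy(anchor, positive) - energy(anchor, negative) + margin)
```

Minimizing this loss over mined triplets pulls same-community utterances together in embedding space, after which communities can be recovered by clustering on the learned energies.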
COMPENDIUM: a text summarisation tool for generating summaries of multiple purposes, domains, and genres
In this paper, we present a text summarisation tool, compendium, capable of generating the most common types of summaries. Regarding the input, single- and multi-document summaries can be produced; as for the output, the summaries can be extractive or abstractive-oriented; and finally, concerning their purpose, the summaries can be generic, query-focused, or sentiment-based. The proposed architecture for compendium is divided into several stages, distinguishing between core and additional stages. The former constitute the backbone of the tool and are common to the generation of any type of summary, whereas the latter enhance the tool's capabilities. The main contributions of compendium with respect to state-of-the-art summarisation systems are that (i) it specifically tackles the problem of redundancy by means of textual entailment; (ii) it combines statistical and cognitive-based techniques for determining relevant content; and (iii) it proposes an abstractive-oriented approach to the challenge of abstractive summarisation. The evaluation, performed in different domains and textual genres comprising traditional texts as well as texts extracted from the Web 2.0, shows that compendium is very competitive and appropriate for use as a summary generation tool.

This research has been supported by the project “Desarrollo de Técnicas Inteligentes e Interactivas de Minería de Textos” (PROMETEO/2009/119) and project ACOMP/2011/001 from the Valencian Government, as well as by the Spanish Government (grant no. TIN2009-13391-C04-01).
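Contribution (i) above uses textual entailment to remove redundancy: a candidate sentence is dropped when an already-selected sentence entails it. A crude sketch of that filtering loop, with a lexical-overlap proxy standing in for compendium's real entailment component (both the proxy and the threshold are our assumptions):

```python
def entails(premise, hypothesis, threshold=0.8):
    """Crude lexical-overlap proxy for textual entailment: the hypothesis
    is 'entailed' when most of its words appear in the premise. A real
    system would use a trained textual-entailment model instead."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return bool(h) and len(p & h) / len(h) >= threshold

def drop_redundant(sentences):
    """Keep a sentence only if no already-kept sentence entails it."""
    kept = []
    for sentence in sentences:
        if not any(entails(k, sentence) for k in kept):
            kept.append(sentence)
    return kept
```

For example, "the meeting covered the budget" is filtered out once "the meeting covered the budget in detail" has been kept, while an unrelated sentence about sales survives.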
Topic-Centric Unsupervised Multi-Document Summarization of Scientific and News Articles
Recent advances in natural language processing have enabled automation of a
wide range of tasks, including machine translation, named entity recognition,
and sentiment analysis. Automated summarization of documents, or groups of
documents, however, has remained elusive, with many efforts limited to
extraction of keywords, key phrases, or key sentences. Accurate abstractive
summarization has yet to be achieved due to the inherent difficulty of the
problem, and limited availability of training data. In this paper, we propose a
topic-centric unsupervised multi-document summarization framework to generate
extractive and abstractive summaries for groups of scientific articles across
20 Fields of Study (FoS) in Microsoft Academic Graph (MAG) and news articles
from DUC-2004 Task 2. The proposed algorithm generates an abstractive summary
by developing salient language unit selection and text generation techniques.
Our approach matches the state-of-the-art when evaluated on automated
extractive evaluation metrics and performs better for abstractive summarization
on five human evaluation metrics (entailment, coherence, conciseness,
readability, and grammar). Two co-author linguists who evaluated our results
achieved an inter-annotator kappa score of 0.68. We plan to publicly release
MAG-20, a human-validated gold-standard dataset of topic-clustered research
articles and their summaries, to promote research in abstractive summarization.

Comment: 6 pages, 6 figures, 8 tables. Accepted at IEEE Big Data 2020
(https://bigdataieee.org/BigData2020/AcceptedPapers.html).
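The inter-annotator agreement reported above (kappa = 0.68) is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal implementation, shown on made-up toy ratings rather than the paper's data:

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters labelling the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each rater's label frequencies."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    p_e = sum((rater1.count(l) / n) * (rater2.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)
```

On five toy items where the raters disagree once, observed agreement is 0.8 and chance agreement 0.48, giving kappa of about 0.62; values in the 0.6 to 0.8 range are conventionally read as substantial agreement.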