Key Phrase Extraction of Lightly Filtered Broadcast News
This paper explores the impact of light filtering on automatic key phrase
extraction (AKE) applied to Broadcast News (BN). Key phrases are words and
expressions that best characterize the content of a document. Key phrases are
often used to index the document or as features in further processing. This
makes improvements in AKE accuracy particularly important. We hypothesized that
filtering out marginally relevant sentences from a document would improve AKE
accuracy. Our experiments confirmed this hypothesis. Elimination of as little
as 10% of the document sentences led to a 2% improvement in AKE precision and
recall. Our AKE system is built on the MAUI toolkit, which follows a supervised
learning approach. We trained and tested our AKE method on a gold standard of 8 BN
programs containing 110 manually annotated news stories. The experiments were
conducted within a Multimedia Monitoring Solution (MMS) system for TV and radio
news/programs, running daily, and monitoring 12 TV and 4 radio channels.
Comment: In 15th International Conference on Text, Speech and Dialogue (TSD 2012)
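The filtering-then-extraction pipeline described above can be sketched in simplified form. This is a minimal illustration, not the authors' MAUI-based system: `light_filter` scores sentences by overlap with the document's overall term distribution and drops the lowest-scoring fraction, and `keyphrases` is a frequency-based stand-in for MAUI's supervised ranker. The stop-word list and function names are illustrative assumptions.

```python
import re
from collections import Counter

STOP = {"the", "a", "an", "of", "to", "in", "and", "is", "that", "for"}

def tokenize(text):
    """Lowercase word tokens with a tiny stop-word list."""
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP]

def light_filter(sentences, drop_ratio=0.10):
    """Drop the least relevant fraction of sentences, scored by the
    average document-level frequency of their content words."""
    doc_tf = Counter(w for s in sentences for w in tokenize(s))
    def score(s):
        toks = tokenize(s)
        return sum(doc_tf[w] for w in toks) / (len(toks) or 1)
    ranked = sorted(sentences, key=score)          # ascending relevance
    n_drop = int(len(sentences) * drop_ratio)
    kept = set(ranked[n_drop:]) if n_drop else set(sentences)
    return [s for s in sentences if s in kept]     # preserve original order

def keyphrases(sentences, k=5):
    """Frequency-based stand-in for a supervised keyphrase ranker."""
    tf = Counter(w for s in sentences for w in tokenize(s))
    return [w for w, _ in tf.most_common(k)]
```

On a ten-sentence story, `drop_ratio=0.10` removes exactly one marginal sentence before extraction, mirroring the 10% elimination tested in the paper.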
A Multilingual Study of Multi-Sentence Compression using Word Vertex-Labeled Graphs and Integer Linear Programming
Multi-Sentence Compression (MSC) aims to generate a short sentence with the
key information from a cluster of similar sentences. MSC enables summarization
and question-answering systems to generate outputs combining fully formed
sentences from one or several documents. This paper describes an Integer Linear
Programming method for MSC using a vertex-labeled graph to select different
keywords, with the goal of generating more informative sentences while
maintaining their grammaticality. Our system outperforms the state of the art
in evaluations conducted on news datasets in three languages: French, Portuguese,
and Spanish. We conducted both automatic and manual evaluations to
determine the informativeness and the grammaticality of compressions for each
dataset. In additional tests, which take advantage of the fact that the length
of compressions can be modulated, we still improve ROUGE scores with shorter
output sentences.
Comment: Preprint version
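A word-graph view of MSC can be sketched as follows. Note this is not the paper's ILP with keyword selection: it is the simpler shortest-path baseline from earlier word-graph MSC work, with identical surface words merged into one node (a strong simplification of the vertex-labeled graph) and edges weighted by inverse bigram frequency, so wording shared across the cluster is cheap to traverse.

```python
import heapq
from collections import defaultdict

def build_word_graph(sentences):
    """Merge a cluster of similar sentences into one graph; each edge
    weight is the inverse of its bigram count across the cluster."""
    counts = defaultdict(int)
    for sent in sentences:
        toks = ["<s>"] + sent.lower().split() + ["</s>"]
        for a, b in zip(toks, toks[1:]):
            counts[(a, b)] += 1
    graph = defaultdict(list)
    for (a, b), c in counts.items():
        graph[a].append((b, 1.0 / c))
    return graph

def shortest_compression(graph):
    """Dijkstra from <s> to </s>: the cheapest path prefers bigrams
    the cluster agrees on, yielding a single fused sentence."""
    dist, prev = {"<s>": 0.0}, {}
    heap = [(0.0, "<s>")]
    while heap:
        d, u = heapq.heappop(heap)
        if u == "</s>":
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    words, node = [], prev["</s>"]
    while node != "<s>":
        words.append(node)
        node = prev[node]
    return " ".join(reversed(words))
```

The ILP formulation in the paper replaces this shortest-path objective with explicit keyword-coverage constraints, which is what lets it trade length against informativeness.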
Topic-Centric Unsupervised Multi-Document Summarization of Scientific and News Articles
Recent advances in natural language processing have enabled automation of a
wide range of tasks, including machine translation, named entity recognition,
and sentiment analysis. Automated summarization of documents, or groups of
documents, however, has remained elusive, with many efforts limited to
extraction of keywords, key phrases, or key sentences. Accurate abstractive
summarization has yet to be achieved due to the inherent difficulty of the
problem, and limited availability of training data. In this paper, we propose a
topic-centric unsupervised multi-document summarization framework to generate
extractive and abstractive summaries for groups of scientific articles across
20 Fields of Study (FoS) in Microsoft Academic Graph (MAG) and news articles
from DUC-2004 Task 2. The proposed algorithm generates an abstractive summary
by developing salient language unit selection and text generation techniques.
Our approach matches the state-of-the-art when evaluated on automated
extractive evaluation metrics and performs better for abstractive summarization
on five human evaluation metrics (entailment, coherence, conciseness,
readability, and grammar). We achieve a kappa score of 0.68 between two
co-author linguists who evaluated our results. We plan to publicly share
MAG-20, a human-validated gold standard dataset of topic-clustered research
articles and their summaries to promote research in abstractive summarization.
Comment: 6 pages, 6 figures, 8 tables. Accepted at IEEE Big Data 2020
(https://bigdataieee.org/BigData2020/AcceptedPapers.html)
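The extractive side of such a pipeline can be sketched with a centroid-style baseline: rank every sentence in the document cluster by the cluster-wide frequency of its content words and keep the top k in original order. This is an illustrative unsupervised stand-in, not the paper's salient-language-unit selection, and the abstractive generation stage is out of scope here.

```python
import re
from collections import Counter

def extractive_summary(docs, k=2):
    """Centroid-style extractive baseline over a cluster of documents:
    sentences whose words are frequent across the whole cluster are
    assumed to carry the shared topic."""
    sentences = [s.strip() for d in docs for s in d.split(".") if s.strip()]
    tf = Counter(w for s in sentences for w in re.findall(r"[a-z]+", s.lower()))
    def score(s):
        toks = re.findall(r"[a-z]+", s.lower())
        return sum(tf[w] for w in toks) / (len(toks) or 1)
    top = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in top]   # preserve original order
```

A topic-centric system would first cluster documents by field of study and then run selection like this within each cluster.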
Toward abstractive multi-document summarization using submodular function-based framework, sentence compression and merging
Automatic multi-document summarization is the process of generating a summary that contains the most important information from multiple documents. In this thesis, we design an automatic multi-document summarization system using different abstraction-based methods and submodularity. Our proposed model treats summarization as a budgeted submodular function maximization problem. The model integrates three important measures of a summary, namely importance, coverage, and non-redundancy, and we design a submodular function for each of them. In addition, we integrate sentence compression and sentence merging. When evaluated on the DUC 2004 dataset, our generic summarizer outperformed the state-of-the-art summarization systems in terms of ROUGE-1 recall and F1-measure. For query-focused summarization, we used the DUC 2007 dataset, where our system achieves statistically similar results to several well-established methods in terms of the ROUGE-2 measure.
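The standard algorithm for budgeted submodular maximization is a greedy loop that repeatedly adds the sentence with the best marginal-gain-to-cost ratio until the word budget is exhausted. The sketch below uses a toy coverage gain in place of the thesis's combined importance/coverage/non-redundancy objective; the function names are illustrative.

```python
def greedy_submodular_summary(sentences, gain, budget):
    """Greedy maximization of a monotone submodular objective under a
    word-count budget; gain(summary, s) returns the marginal gain of
    adding sentence s to the current summary."""
    summary, used = [], 0
    remaining = list(sentences)
    while remaining:
        best, best_ratio = None, 0.0
        for s in remaining:
            cost = len(s.split())
            if used + cost > budget:
                continue                      # would break the budget
            ratio = gain(summary, s) / cost   # cost-scaled marginal gain
            if ratio > best_ratio:
                best, best_ratio = s, ratio
        if best is None:
            break                             # nothing affordable or useful
        summary.append(best)
        used += len(best.split())
        remaining.remove(best)
    return summary

def word_coverage_gain(summary, s):
    """Toy submodular gain: number of new words the sentence covers."""
    covered = {w for t in summary for w in t.lower().split()}
    return len(set(s.lower().split()) - covered)
```

Because coverage gain is monotone and submodular, redundant sentences get zero marginal gain and are never selected, which is exactly the non-redundancy behavior the thesis builds into its objective.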
Sentence Compressor
Nowadays, the internet has become the main source of information. Most people rely on the internet to find information for research and assignments. They try to find the right articles, journals, or web pages related to their task. In order to choose the right materials, they have to go through every article, journal, and web page to find the important points. However, it is very time-consuming to go through every long article. This information explosion has led to a constant state of information overload. As a solution, a desktop application named Sentence Compressor was developed to compress long articles. This project aims to develop a desktop application that shortens long sentences without changing their original meaning. An Integer Linear Programming (ILP) technique is used to solve the sentence compression problem. Bilingual Evaluation Understudy (BLEU) is used to measure the quality of the produced output. Five articles were randomly selected for the experiment, and the BLEU scores of articles compressed by Sentence Compressor and articles compressed by humans were compared. A system performance evaluation was also conducted to measure the usefulness of the application. More than 65% of the respondents agreed that Sentence Compressor is useful for information searching.
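The BLEU evaluation mentioned above can be sketched as a sentence-level score with uniform n-gram weights, add-one smoothing, and a brevity penalty. This is a simplified stand-in for standard toolkits, not the project's exact evaluation code.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of smoothed n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        c_ngrams = Counter(tuple(cand[i:i+n]) for i in range(len(cand) - n + 1))
        r_ngrams = Counter(tuple(ref[i:i+n]) for i in range(len(ref) - n + 1))
        overlap = sum(min(c, r_ngrams[g]) for g, c in c_ngrams.items())
        total = max(sum(c_ngrams.values()), 1)
        # add-one smoothing so one missing n-gram order does not
        # zero out the whole score
        log_precisions.append(math.log((overlap + 1) / (total + 1)))
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

A candidate identical to its reference scores 1.0, and shorter or divergent compressions are penalized by both the precision terms and the brevity penalty.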
An Efficient Approach for Multi-Sentence Compression
Multi-Sentence Compression (MSC) is of great value to many real-world applications, such as guided microblog summarization, opinion summarization, and newswire summarization. Recently, word graph-based approaches have been proposed and have become popular in MSC. Their key assumption is that redundancy among a set of related sentences provides a reliable way to generate informative and grammatical sentences. In this paper, we propose an effective approach to enhance word graph-based MSC and tackle the issue that most state-of-the-art MSC approaches are confronted with, i.e., improving both informativity and grammaticality at the same time. Our approach consists of three main components: (1) a merging method based on Multiword Expressions (MWE); (2) a mapping strategy based on synonymy between words; (3) a re-ranking step that identifies the best compression candidates using a POS-based language model (POS-LM). We demonstrate the effectiveness of this novel approach using a dataset made of clusters of English newswire sentences. The observed improvements in informativity and grammaticality of the generated compressions show up to a 44% error reduction over state-of-the-art MSC systems.
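The re-ranking component can be sketched with a tiny language model. The paper scores POS-tag sequences; the stand-in below scores surface-word bigrams with add-one smoothing instead, which is a deliberate simplification, and all names here are illustrative.

```python
import math
from collections import Counter

def train_bigram_lm(corpus_sentences):
    """Train an add-one-smoothed bigram LM and return a scorer that
    gives the length-normalized log-probability of a sentence."""
    unigrams, bigrams = Counter(), Counter()
    for s in corpus_sentences:
        toks = ["<s>"] + s.lower().split() + ["</s>"]
        unigrams.update(toks[:-1])          # contexts only
        bigrams.update(zip(toks, toks[1:]))
    vocab = len(unigrams) + 1
    def logprob(sentence):
        toks = ["<s>"] + sentence.lower().split() + ["</s>"]
        lp = 0.0
        for a, b in zip(toks, toks[1:]):
            lp += math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
        return lp / (len(toks) - 1)         # normalize by bigram count
    return logprob

def rerank(candidates, logprob):
    """Order compression candidates from most to least fluent."""
    return sorted(candidates, key=logprob, reverse=True)
```

In the paper's setting, candidates come out of the word graph and the LM runs over POS tags, so the re-ranker rewards grammatical tag sequences rather than memorized word pairs.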