A Neural Attention Model for Abstractive Sentence Summarization
Summarization based on text extraction is inherently limited, but
generation-style abstractive methods have proven challenging to build. In this
work, we propose a fully data-driven approach to abstractive sentence
summarization. Our method utilizes a local attention-based model that generates
each word of the summary conditioned on the input sentence. While the model is
structurally simple, it can easily be trained end-to-end and scales to a large
amount of training data. The model shows significant performance gains on the
DUC-2004 shared task compared with several strong baselines.
Comment: Proceedings of EMNLP 2015
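The core mechanism described above, generating each summary word conditioned on the input sentence via local attention, can be illustrated with a minimal sketch of one attention step. This is not the paper's actual architecture; the function name and the dot-product scoring are illustrative assumptions.

```python
import numpy as np

def attention_step(query, source_embs):
    """One decoding step of a simple attention mechanism (illustrative sketch).

    query: (d,) hidden state of the summary decoder (hypothetical).
    source_embs: (n, d) embeddings of the n input-sentence words.
    Returns softmax attention weights over source words and the
    context vector used to condition the next output word.
    """
    scores = source_embs @ query                     # (n,) dot-product scores
    scores -= scores.max()                           # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over source words
    context = weights @ source_embs                  # (d,) attention-weighted average
    return weights, context
```

In a full model, `context` would be fed into a softmax over the output vocabulary to pick the next summary word, and the whole pipeline trained end-to-end.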
Topic-Centric Unsupervised Multi-Document Summarization of Scientific and News Articles
Recent advances in natural language processing have enabled automation of a
wide range of tasks, including machine translation, named entity recognition,
and sentiment analysis. Automated summarization of documents, or groups of
documents, however, has remained elusive, with many efforts limited to
extraction of keywords, key phrases, or key sentences. Accurate abstractive
summarization has yet to be achieved due to the inherent difficulty of the
problem, and limited availability of training data. In this paper, we propose a
topic-centric unsupervised multi-document summarization framework to generate
extractive and abstractive summaries for groups of scientific articles across
20 Fields of Study (FoS) in Microsoft Academic Graph (MAG) and news articles
from DUC-2004 Task 2. The proposed algorithm generates an abstractive summary
by developing salient language unit selection and text generation techniques.
Our approach matches the state-of-the-art when evaluated on automated
extractive evaluation metrics and performs better for abstractive summarization
on five human evaluation metrics (entailment, coherence, conciseness,
readability, and grammar). We achieve a kappa score of 0.68 between two
co-author linguists who evaluated our results. We plan to publicly share
MAG-20, a human-validated gold standard dataset of topic-clustered research
articles and their summaries to promote research in abstractive summarization.
Comment: 6 pages, 6 figures, 8 tables. Accepted at IEEE Big Data 2020
(https://bigdataieee.org/BigData2020/AcceptedPapers.html)
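The inter-annotator agreement reported above (kappa of 0.68 between two linguists) is standard Cohen's kappa, which can be computed as in this sketch; the function name and label encoding are assumptions, not the authors' code.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators over the same items (sketch).

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from each annotator's
    marginal label distribution.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

A value of 0.68 is conventionally read as substantial agreement, which supports the human-evaluation claims in the abstract.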
Effective Use of Word Order for Text Categorization with Convolutional Neural Networks
A convolutional neural network (CNN) is a neural network that can make use of
the internal structure of data, such as the 2D structure of image data. This
paper studies CNNs for text categorization, exploiting the 1D structure (namely,
word order) of text data for accurate prediction. Instead of using
low-dimensional word vectors as input as is often done, we directly apply CNN
to high-dimensional text data, which leads to directly learning embedding of
small text regions for use in classification. In addition to a straightforward
adaptation of CNN from image to text, a simple but new variation which employs
bag-of-word conversion in the convolution layer is proposed. An extension to
combine multiple convolution layers is also explored for higher accuracy. The
experiments demonstrate the effectiveness of our approach in comparison with
state-of-the-art methods.
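The bag-of-word conversion described above can be sketched as follows: each sliding region of consecutive words is mapped to a high-dimensional count vector over the vocabulary, which a convolution layer can then consume directly instead of dense word embeddings. The function name and interface here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bow_regions(tokens, vocab, region_size):
    """Convert each sliding text region into a bag-of-words vector (sketch).

    tokens: list of words in the document.
    vocab: list of vocabulary words defining the vector dimensions.
    region_size: number of consecutive words per region.
    Returns an (n_regions, |vocab|) matrix of word counts; row r counts
    the words in tokens[r : r + region_size].
    """
    index = {w: i for i, w in enumerate(vocab)}
    n_regions = len(tokens) - region_size + 1
    X = np.zeros((n_regions, len(vocab)))
    for r in range(n_regions):
        for w in tokens[r:r + region_size]:
            if w in index:                 # out-of-vocabulary words are dropped
                X[r, index[w]] += 1
    return X
```

Note that, unlike concatenating one-hot vectors, the bag-of-word form discards word order *within* a region while the convolution over regions still captures order *between* them, which keeps the input dimensionality at |vocab| per region.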