Abstractive Summarization with Efficient Transformer Based Approach
Because of the rapid proliferation of online data, condensing a document while preserving its essential information has become a significant research area. Such data must be summarized so that meaningful knowledge can be recovered in an acceptable time; this task is called text summarization. Summarization comes in two types: extractive and abstractive. In recent years, abstractive text summarization has become increasingly popular. Abstractive Text Summarization (ATS) aims to extract the most vital content from a text corpus and condense it into a shorter text while maintaining its meaning and its semantic and grammatical accuracy. Deep learning architectures have ushered natural language processing (NLP) into a new phase. Many studies have demonstrated the competitive performance of architectures such as the recurrent neural network (RNN), the attention mechanism, and the LSTM, among others. The Transformer, a recently introduced model, relies entirely on the attention mechanism. In this paper, abstractive text summarization is accomplished using a basic Transformer model, a Transformer with a pointer-generator network (PGN) and coverage mechanism, a Fastformer architecture, and a Fastformer with a pointer-generator network (PGN) and coverage mechanism. We compare these architectures after careful and thorough hyperparameter tuning. In the experiments, the standard CNN/DM dataset is used to test these architectures on the task of abstractive summarization.
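Both abstracts rely on the pointer-generator mechanism, which mixes the decoder's vocabulary distribution with a copy distribution derived from attention over the source tokens. A minimal NumPy sketch of that mixing step (the function name and the toy tensors are illustrative, not from either paper):

```python
import numpy as np

def pointer_generator_dist(p_vocab, attention, src_ids, p_gen):
    """Final token distribution of a pointer-generator network:
    p_gen * P_vocab(w) + (1 - p_gen) * sum of attention on source
    positions where word w occurs (See et al.-style copying)."""
    # Generation part: scale the vocabulary distribution by p_gen.
    final = p_gen * p_vocab.copy()
    # Copy part: scatter-add attention mass onto the source token ids
    # (np.add.at accumulates correctly when an id repeats in the source).
    np.add.at(final, src_ids, (1.0 - p_gen) * attention)
    return final

# Toy example: vocabulary of 6 words, source sentence of 3 tokens.
p_vocab = np.array([0.1, 0.2, 0.3, 0.2, 0.1, 0.1])  # decoder softmax
attention = np.array([0.5, 0.3, 0.2])   # attention over source positions
src_ids = np.array([2, 4, 2])           # source tokens mapped to vocab ids
dist = pointer_generator_dist(p_vocab, attention, src_ids, p_gen=0.7)
```

Because both components are probability distributions and the mixing weights sum to one, the result is itself a valid distribution; the coverage mechanism mentioned above additionally penalizes attention that repeatedly lands on the same source positions.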
Generation of Highlights from Research Papers Using Pointer-Generator Networks and SciBERT Embeddings
Nowadays many research articles are prefaced with research highlights to
summarize the main findings of the paper. Highlights not only help researchers
precisely and quickly identify the contributions of a paper, they also enhance
the discoverability of the article via search engines. We aim to automatically
construct research highlights given certain segments of the research paper. We
use a pointer-generator network with coverage mechanism and a contextual
embedding layer at the input that encodes the input tokens into SciBERT
embeddings. We test our model on a benchmark dataset, CSPubSum, and also present
MixSub, a new multi-disciplinary corpus of papers for automatic research
highlight generation. For both CSPubSum and MixSub, we have observed that the
proposed model achieves the best performance compared to related variants and
other models proposed in the literature. On the CSPubSum dataset, our model
achieves the best performance when the input is only the abstract of a paper as
opposed to other segments of the paper. It produces ROUGE-1, ROUGE-2 and
ROUGE-L F1-scores of 38.26, 14.26 and 35.51, respectively, METEOR F1-score of
32.62, and BERTScore F1 of 86.65 which outperform all other baselines. On the
new MixSub dataset, where only the abstract is the input, our proposed model
(when trained on the whole training corpus without distinguishing between the
subject categories) achieves ROUGE-1, ROUGE-2 and ROUGE-L F1-scores of 31.78,
9.76 and 29.3, respectively, METEOR F1-score of 24.00, and BERTScore F1 of
85.25, outperforming other models.
Comment: 18 pages, 7 figures, 7 tables
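The ROUGE-1 F1 scores reported in both abstracts measure unigram overlap between a generated summary and a reference. A simplified sketch of that computation (without the stemming and tokenization of the official ROUGE toolkit, so values will differ slightly from reported scores):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall,
    where overlap counts are clipped by the reference counts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

ROUGE-2 is computed the same way over bigrams, while ROUGE-L uses the longest common subsequence instead of n-gram counts.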