
    Query-Focused Abstractive Summarization using Neural Networks

    Query-focused abstractive document summarization (QFADS) is the process of shortening a document into a summary while keeping the context of the query in mind. We implemented a model with a novel selective mechanism for QFADS; the selective mechanism improves the representation of a long input (passage) sequence. Experiments on the Debatepedia dataset, a recently developed dataset for the query-focused abstractive summarization task, show that our model outperforms the state-of-the-art model on all ROUGE scores. We also proposed three models, each combining a coarse-to-fine approach with a novel selective mechanism, for query-focused abstractive multi-document summarization (QFAMDS). The coarse-to-fine approach reduces the length of the passage input drawn from multiple documents. We conducted experiments on the MS MARCO dataset, a recently developed large-scale dataset by Microsoft for reading comprehension, and report our scores on various evaluation metrics. Natural Sciences and Engineering Research Council (NSERC) of Canada and the University of Lethbridge
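    The selective mechanism described above can be sketched as a gating step over encoder states. This is a minimal illustrative sketch, assuming a formulation in the spirit of selective encoding: each hidden state is filtered element-wise by a sigmoid gate computed from that state and a summary vector. All names, weights, and dimensions here are assumptions for illustration, not the paper's exact model.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def selective_gate(H, s, W_h, W_s, b):
        """H: (T, d) encoder hidden states; s: (d,) passage/query summary vector.
        Returns gated states of the same shape, with each feature scaled
        by a value in (0, 1) so less relevant content is attenuated."""
        gate = sigmoid(H @ W_h + s @ W_s + b)   # (T, d), entries in (0, 1)
        return H * gate                          # element-wise filtering

    # Toy example with random weights (hypothetical dimensions).
    rng = np.random.default_rng(0)
    T, d = 5, 4
    H = rng.normal(size=(T, d))
    s = rng.normal(size=(d,))
    W_h = rng.normal(size=(d, d))
    W_s = rng.normal(size=(d, d))
    b = np.zeros(d)

    H_sel = selective_gate(H, s, W_h, W_s, b)
    print(H_sel.shape)  # (5, 4)
    ```

    Because the gate values lie strictly in (0, 1), the gated representation never amplifies a feature, only attenuates it, which is what lets such a layer act as a soft content filter over a long input sequence.
    
    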

    Unsupervised Semantic Representation Learning of Scientific Literature Based on Graph Attention Mechanism and Maximum Mutual Information

    Since most scientific-literature data are unlabeled, unsupervised graph-based semantic representation learning is crucial. We therefore propose an unsupervised semantic representation learning method for scientific literature based on a graph attention mechanism and maximum mutual information (GAMMI). The graph attention mechanism computes a weighted summation of neighboring node features in which the weights depend entirely on the node features themselves, so a different weight can be applied to each node in the graph and correlations between vertex features are better integrated into the model. In addition, an unsupervised graph contrastive learning strategy is proposed to handle unlabeled data and scale to large graphs. By contrasting the mutual information between positive and negative local node representations in the latent space and the global graph representation, the graph neural network captures both local and global information. Experimental results demonstrate competitive performance on various node classification benchmarks, achieving good results and sometimes even surpassing supervised learning.
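    The attention-weighted summation described above can be sketched as a single-head GAT-style layer: attention logits are computed from pairs of projected node features, softmax-normalized over each node's neighborhood, and used to weight the neighbor sum. This is a sketch under standard GAT assumptions; all names and dimensions are illustrative, and the paper's GAMMI method adds a mutual-information contrastive objective on top of such a layer.

    ```python
    import numpy as np

    def graph_attention(X, A, W, a):
        """X: (N, f) node features; A: (N, N) adjacency with self-loops;
        W: (f, d) projection; a: (2d,) attention parameter vector.
        Returns (N, d) attention-weighted neighbor aggregations."""
        H = X @ W                                   # project features, (N, d)
        d = H.shape[1]
        # pairwise logits e_ij = LeakyReLU(a^T [h_i ; h_j])
        src = H @ a[:d]                             # (N,) contribution of h_i
        dst = H @ a[d:]                             # (N,) contribution of h_j
        e = src[:, None] + dst[None, :]             # (N, N) raw logits
        e = np.where(e > 0, e, 0.2 * e)             # LeakyReLU
        e = np.where(A > 0, e, -np.inf)             # mask non-neighbors
        alpha = np.exp(e - e.max(axis=1, keepdims=True))
        alpha /= alpha.sum(axis=1, keepdims=True)   # softmax over each neighborhood
        return alpha @ H                            # weighted sum of neighbor features

    # Toy 4-node path graph with self-loops (hypothetical sizes).
    rng = np.random.default_rng(1)
    N, f, d = 4, 3, 2
    X = rng.normal(size=(N, f))
    A = np.eye(N) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    W = rng.normal(size=(f, d))
    a = rng.normal(size=(2 * d,))
    out = graph_attention(X, A, W, a)
    print(out.shape)  # (4, 2)
    ```

    Because the weights come from a softmax over learned, feature-dependent logits, each node attends differently to each of its neighbors, which is the property the abstract highlights.
    
    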

    A literature review of abstractive summarization methods

    The paper contains a literature review of automatic abstractive text summarization and considers the classification of abstractive summarization methods. Since the emergence of text summarization in the 1950s, techniques for summary generation have been constantly improving, but because abstractive summarization requires extensive language processing, the greatest progress has been achieved only recently. Given the current fast pace of development in both Natural Language Processing in general and text summarization in particular, it is essential to analyze the progress in these areas. The paper aims to give a general perspective on both state-of-the-art and older approaches, explaining the methods and techniques involved. Additionally, evaluation results from the reviewed papers are presented.