2 research outputs found

    Multi-document summarization based on document clustering and neural sentence fusion

    In this thesis, we have developed a technique for tackling abstractive text summarization tasks that achieves state-of-the-art results. We have proposed a novel method to improve multi-document summarization. The lack of large multi-document human-authored summaries needed to train seq2seq encoder-decoder models, and the inaccuracy of representing multiple long documents as a fixed-size vector, inspired us to design complementary models for two different tasks: sentence clustering and neural sentence fusion. In this thesis, we minimize the risk of producing incorrect facts by encoding a related set of sentences as the input to the encoder. We applied our complementary models to implement a full abstractive multi-document summarization system that simultaneously considers importance, coverage, and diversity under a desired length limit. We conduct extensive experiments for all the proposed models, which bring significant improvements over the state-of-the-art methods across different evaluation metrics.
    Natural Sciences and Engineering Research Council (NSERC) of Canada and the University of Lethbridge
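
    As an illustrative sketch only (not the thesis implementation), the cluster-then-fuse idea described above can be outlined as follows. The function names (cluster_sentences, fuse_cluster, summarize), the TF-IDF and k-means features, and the placeholder fusion step are assumptions standing in for the trained sentence-clustering and seq2seq sentence-fusion models:

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    def cluster_sentences(sentences, n_clusters=5):
        # Group related sentences from all documents so each cluster can be
        # fused on its own, instead of encoding every document into one vector.
        n_clusters = min(n_clusters, len(sentences))
        vectors = TfidfVectorizer().fit_transform(sentences)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)
        clusters = {}
        for sent, label in zip(sentences, labels):
            clusters.setdefault(label, []).append(sent)
        return list(clusters.values())

    def fuse_cluster(cluster):
        # Placeholder for neural sentence fusion: in the thesis, a trained
        # seq2seq encoder-decoder would generate one sentence per cluster.
        return cluster[0]

    def summarize(documents, n_clusters=5):
        # Naive sentence splitting for illustration only.
        sentences = [s for doc in documents for s in doc.split(". ") if s]
        return " ".join(fuse_cluster(c) for c in cluster_sentences(sentences, n_clusters))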

    Query-based summarization using reinforcement learning and transformer model

    The query-based summarization problem is an interesting problem in the text summarization field. Meanwhile, reinforcement learning, a technique popular in robotics, has become applicable to text summarization in the last couple of years (Narayan et al., 2018). The lack of significant work using reinforcement learning to solve the query-based summarization problem inspired us to use this technique. While doing so, we also introduce a different approach to sentence ranking and clustering to avoid redundancy in summaries. We propose an unsupervised extractive summarization method, which provides state-of-the-art results on some metrics. We develop two abstractive multi-document summarization models using the reinforcement learning technique and the transformer model (Vaswani et al., 2017). We consider the importance of information coverage and diversity under a fixed sentence limit for our summarization models. We have done several experiments for our proposed models, which bring significant results across different evaluation metrics.
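
    As a hedged sketch of the ranking-with-redundancy-avoidance idea mentioned above (not the thesis code), the following greedily selects query-relevant, non-redundant sentences under a fixed sentence limit. The function name select_sentences, the TF-IDF features, and the lambda_ weighting are illustrative assumptions:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def select_sentences(query, sentences, limit=3, lambda_=0.7):
        # Fit TF-IDF on the sentences plus the query so both share one vocabulary.
        vec = TfidfVectorizer().fit(sentences + [query])
        S = vec.transform(sentences)
        q = vec.transform([query])
        relevance = cosine_similarity(S, q).ravel()   # importance w.r.t. the query
        pairwise = cosine_similarity(S)               # used to penalize redundancy
        selected = []
        while len(selected) < min(limit, len(sentences)):
            best, best_score = None, float("-inf")
            for i in range(len(sentences)):
                if i in selected:
                    continue
                # Redundancy = similarity to the most similar already-selected sentence.
                redundancy = max((pairwise[i][j] for j in selected), default=0.0)
                score = lambda_ * relevance[i] - (1 - lambda_) * redundancy
                if score > best_score:
                    best, best_score = i, score
            selected.append(best)
        return [sentences[i] for i in selected]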