
    Controlling Output Length in Neural Encoder-Decoders

    Neural encoder-decoder models have shown great success in many sequence generation tasks. However, previous work has not investigated situations in which we would like to control the length of encoder-decoder outputs. This capability is crucial for applications such as text summarization, in which we have to generate concise summaries with a desired length. In this paper, we propose methods for controlling the output sequence length of neural encoder-decoder models: two decoding-based methods and two learning-based methods. Results show that our learning-based methods can control length without degrading summary quality in a summarization task. Comment: 11 pages. To appear in EMNLP 2016.
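    The decoding-based side of this idea can be illustrated with a short sketch. This is a minimal illustration, assuming a generic per-step function next_token_logits(prefix) and a hypothetical EOS_ID; it shows length-range control by masking or permitting the end-of-sequence token, not the paper's exact decoding or learning-based methods.

    import numpy as np

    EOS_ID = 2  # hypothetical end-of-sequence token id

    def decode_with_length_control(next_token_logits, min_len, max_len):
        """Greedy decoding constrained to produce between min_len and max_len tokens."""
        output = []
        while len(output) < max_len:
            logits = np.asarray(next_token_logits(output), dtype=float)
            if len(output) < min_len:
                logits[EOS_ID] = -np.inf   # too short: forbid stopping here
            token = int(np.argmax(logits))
            if token == EOS_ID:
                break                      # model chose to stop within the allowed range
            output.append(token)
        return output                      # reaching max_len acts as a hard stop

    # Toy usage: a fake "model" that always prefers EOS; the constraint still
    # forces exactly min_len tokens before stopping is allowed.
    def fake_model(prefix):
        logits = np.zeros(10)
        logits[EOS_ID] = 5.0
        logits[3] = 1.0
        return logits

    print(decode_with_length_control(fake_model, min_len=5, max_len=8))  # -> [3, 3, 3, 3, 3]

    The paper's learning-based alternatives instead build length awareness into training, rather than constraining decoding after the fact as above.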

    A literature review of abstractive summarization methods

    The paper contains a literature review of automatic abstractive text summarization and considers a classification of abstractive summarization methods. Since the emergence of text summarization in the 1950s, techniques for summary generation have been steadily improving, but because abstractive summarization requires extensive language processing, the greatest progress has been achieved only recently. Given the current fast pace of development of both Natural Language Processing in general and Text Summarization in particular, it is essential to analyze the progress in these areas. The paper aims to give a general perspective on both state-of-the-art and older approaches while explaining the underlying methods. Additionally, evaluation results from the reviewed papers are presented.

    Windowing Models for Abstractive Summarization of Long Texts

    Neural summarization models suffer from a fixed-size input limitation: if the text length exceeds the model's maximal number of input tokens, some document content (possibly summary-relevant) gets truncated. Independently summarizing windows of maximal input size prevents information flow between windows and leads to incoherent summaries. We propose windowing models for neural abstractive summarization of (arbitrarily) long texts. We extend the sequence-to-sequence model augmented with a pointer-generator network by (1) allowing the encoder to slide over different windows of the input document and (2) sharing the decoder and retaining its state across different input windows. We explore two windowing variants: Static Windowing precomputes the number of tokens the decoder should generate from each window (based on training corpus statistics); in Dynamic Windowing the decoder learns to emit a token that signals the encoder's shift to the next input window. Empirical results show our models to be effective in their intended use case: summarizing long texts whose relevant content is not confined to the very beginning of the document.
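    The static-windowing variant can be sketched in a few lines. This is a minimal sketch under stated assumptions: the document is split into encoder-sized token windows and a per-window summary-token budget is distributed over them; the window_weights argument is a hypothetical stand-in for the training-corpus statistics mentioned in the abstract, and the shared, state-retaining decoder is omitted.

    def split_into_windows(tokens, window_size, stride=None):
        """Return consecutive (optionally overlapping) token windows."""
        stride = stride or window_size
        return [tokens[i:i + window_size] for i in range(0, len(tokens), stride)]

    def static_token_budgets(windows, summary_len, window_weights=None):
        """Distribute the target summary length across windows.

        window_weights is an assumed per-window importance estimate (uniform by
        default); budgets are rounded and corrected so they sum to summary_len.
        """
        if window_weights is None:
            window_weights = [1.0] * len(windows)
        total = sum(window_weights)
        budgets = [round(summary_len * w / total) for w in window_weights]
        budgets[-1] += summary_len - sum(budgets)  # fix rounding drift
        return budgets

    # Example: a 2500-token document, 1000-token encoder windows, and a
    # 100-token summary budget skewed toward earlier windows.
    doc = list(range(2500))
    windows = split_into_windows(doc, window_size=1000)
    print(static_token_budgets(windows, summary_len=100, window_weights=[0.5, 0.3, 0.2]))  # -> [50, 30, 20]

    In the dynamic variant, by contrast, no budgets are fixed in advance: the decoder itself emits a special token that signals the shift to the next input window.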