Prompting LLMs with content plans to enhance the summarization of scientific articles
This paper presents novel prompting techniques to improve the performance of
automatic summarization systems for scientific articles. Scientific article
summarization is highly challenging due to the length and complexity of these
documents. We conceive, implement, and evaluate prompting techniques that
provide additional contextual information to guide summarization systems.
Specifically, we feed summarizers lists of key terms extracted from the
articles, such as author keywords or automatically generated keywords. We test
these techniques with various summarization models and input texts.
Results show performance gains, especially for smaller models summarizing
sections separately. This suggests that prompting is a promising approach to
overcoming the limitations of less powerful systems. Our findings open a new
research direction: using prompts to aid smaller models.
Comment: 15 pages, 4 figures
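A minimal sketch of the general idea, assuming the key terms are simply prepended to the input text before summarization; the prompt template, the build_prompt helper, the example keywords, and the BART checkpoint are illustrative choices, not the paper's exact setup:

```python
# Minimal sketch of keyword-augmented prompting for summarization.
# The prompt template, the build_prompt helper, the example keywords,
# and the model checkpoint are illustrative assumptions, not the
# exact configuration evaluated in the paper.
from transformers import pipeline

def build_prompt(text: str, keywords: list[str]) -> str:
    """Prepend a list of key terms so the summarizer can use them
    as a lightweight content plan for the section."""
    return "Key terms: " + ", ".join(keywords) + "\n\n" + text

# Author keywords or automatically extracted terms (assumed values).
keywords = ["prompting", "scientific article summarization", "content plans"]

# One section of a (toy) article; real sections would each be
# summarized separately, then combined.
section_text = (
    "Scientific articles are long and structurally complex, which makes "
    "them difficult for smaller summarization models to process in one "
    "pass. Providing key terms as additional context can guide the model "
    "toward the most salient content of each section."
)

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
result = summarizer(
    build_prompt(section_text, keywords),
    max_length=60, min_length=15, do_sample=False,
)
print(result[0]["summary_text"])
```

Because the key terms are simply prepended to the input, the same construction works with any sequence-to-sequence or instruction-following summarizer.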