
    A Survey on Event-based News Narrative Extraction

    Narratives are fundamental to our understanding of the world, providing us with a natural structure for knowledge representation over time. Computational narrative extraction is a subfield of artificial intelligence that makes heavy use of information retrieval and natural language processing techniques. Despite the importance of computational narrative extraction, relatively little scholarly work exists on synthesizing previous research and strategizing future research in the area. This article focuses on extracting news narratives from an event-centric perspective. Extracting narratives from news data has multiple applications in understanding the evolving information landscape. This survey presents an extensive study of research in the area of event-based news narrative extraction. In particular, we screened over 900 articles, which yielded 54 relevant articles. These articles are synthesized and organized by representation model, extraction criteria, and evaluation approaches. Based on the reviewed studies, we identify recent trends, open challenges, and potential research lines. Comment: 37 pages, 3 figures, to be published in the journal ACM CSUR.
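    The extraction pipelines this survey covers typically group news reports into events and then order those events over time. Below is a minimal, hedged sketch of that idea: cluster dated articles by TF-IDF similarity and sort the resulting clusters chronologically. The toy articles, the clustering threshold, and the use of scikit-learn's AgglomerativeClustering are illustrative assumptions, not a method taken from any of the surveyed systems.

```python
# Hedged sketch (not from any surveyed system): group dated news articles into
# events by TF-IDF similarity, then order the events chronologically to form a
# simple narrative timeline. The articles and distance threshold are invented.
from datetime import date
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer

articles = [
    ("2023-01-02", "Storm forces evacuation of coastal city"),
    ("2023-01-03", "Coastal city evacuation continues after storm"),
    ("2023-02-10", "City council approves storm recovery budget"),
]

texts = [text for _, text in articles]
X = TfidfVectorizer(stop_words="english").fit_transform(texts).toarray()

# Articles closer than the (guessed) threshold are treated as the same event.
labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=1.2, linkage="average"
).fit_predict(X)

events = {}
for (day, text), label in zip(articles, labels):
    events.setdefault(label, []).append((date.fromisoformat(day), text))

# Present events in the order of their earliest report.
for label, items in sorted(events.items(), key=lambda kv: min(d for d, _ in kv[1])):
    items.sort()
    print(f"Event {label}: {items[0][0]} -> {items[-1][0]}")
    for day, text in items:
        print(f"  {day}  {text}")
```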

    Shaping Political Discourse using multi-source News Summarization

    Multi-document summarization is the process of automatically generating a concise summary of multiple documents related to the same topic. This summary can help users quickly understand the key information from a large collection of documents. Multi-document summarization systems are more complex than single-document summarization systems due to the need to identify and combine information from multiple sources. In this paper, we have developed a machine learning model that generates a concise summary of a topic from multiple news documents. The model is designed to be unbiased by sampling its input equally from all the different aspects of the topic, even if the majority of the news sources lean one way.
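    The abstract above hinges on sampling the model's input equally across sources or aspects. A minimal sketch of that balancing step follows; the per-source quota, the toy outlets, and the idea of handing the balanced pool to a downstream summarizer are assumptions for illustration, since the paper's actual model is not described here.

```python
# Hedged sketch of the balancing idea described above: build the summarizer's
# input by drawing the same number of sentences from every source, so outlets
# with heavier coverage of one side do not dominate. The quota and the toy
# outlets are placeholder assumptions, not the paper's actual model.
from itertools import chain, zip_longest

sources = {
    "outlet_a": ["The bill passed after a narrow vote.",
                 "Supporters called it a historic reform."],
    "outlet_b": ["Critics said the bill was rushed through.",
                 "Opposition leaders promised a legal challenge.",
                 "Several amendments were rejected earlier in the week."],
}

quota = min(len(sents) for sents in sources.values())  # equal per-source quota
balanced = [sents[:quota] for sents in sources.values()]

# Interleave sources so a (hypothetical) downstream summarizer sees them evenly mixed.
model_input = [s for s in chain.from_iterable(zip_longest(*balanced)) if s]
print("\n".join(model_input))
```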

    Video summarisation: A conceptual framework and survey of the state of the art

    Video summaries provide condensed and succinct representations of the content of a video stream through a combination of still images, video segments, graphical representations and textual descriptors. This paper presents a conceptual framework for video summarisation derived from the research literature and used as a means for surveying that literature. The framework distinguishes between video summarisation techniques (the methods used to process content from a source video stream to achieve a summarisation of that stream) and video summaries (the outputs of those techniques). Video summarisation techniques are considered within three broad categories: internal (analyse information sourced directly from the video stream), external (analyse information not sourced directly from the video stream) and hybrid (analyse a combination of internal and external information). Video summaries are considered as a function of the type of content they are derived from (object, event, perception or feature based) and the functionality offered to the user for their consumption (interactive or static, personalised or generic). It is argued that video summarisation would benefit from greater incorporation of external information, particularly unobtrusively sourced user-based information, in order to overcome longstanding challenges such as the semantic gap and to provide video summaries that have greater relevance to individual users.
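    As an illustration of the framework's "internal" category (techniques that analyse the video stream itself), the sketch below picks keyframes wherever the grey-level histogram changes sharply between consecutive frames, yielding a static still-image summary. The OpenCV calls are standard, but the file name and the shot-change threshold are placeholder assumptions.

```python
# Hedged sketch of an "internal" summarisation technique: keep a frame as a
# keyframe whenever its grey-level histogram differs sharply from the previous
# frame's. The video path and threshold are placeholder assumptions.
import cv2

cap = cv2.VideoCapture("input_video.mp4")  # placeholder path
keyframes, prev_hist, idx = [], None, 0
threshold = 0.4  # minimum histogram distance to count as a shot change (a guess)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    if prev_hist is None or cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
        keyframes.append(idx)  # this frame becomes part of the static summary
    prev_hist, idx = hist, idx + 1

cap.release()
print(f"selected {len(keyframes)} keyframes:", keyframes[:10])
```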

    Survey on Multi-Document Summarization: Systematic Literature Review

    In this era of information technology, abundant information is available on the internet in the form of web pages and documents on any given topic. Finding the most relevant and informative content among this huge number of documents, without spending several hours reading, has become a very challenging task. Various multi-document summarization methods have been developed to overcome this problem; they aim to produce high-quality summaries of document collections with low redundancy. This study conducts a systematic literature review of existing multi-document summarization methods and provides an in-depth analysis of the performance they achieve. The findings show that more effective methods are still required to reach higher accuracy. The study also identifies some open challenges that merit the attention of future researchers in this domain.
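    One standard way such methods keep redundancy low is Maximal Marginal Relevance (MMR), which trades a sentence's relevance to the document set against its similarity to sentences already selected. The sketch below is a generic illustration of MMR over TF-IDF sentence vectors, not a method from any specific study in the review; the lambda weight, summary length, and toy sentences are assumptions.

```python
# Hedged sketch of redundancy-aware extractive summarization via Maximal
# Marginal Relevance (MMR) over TF-IDF sentence vectors. The lambda weight,
# summary length, and toy sentences are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The company reported record quarterly profits.",
    "Quarterly profits reached a record high, the company said.",
    "Analysts expect hiring to slow next year.",
    "The firm also announced a new factory in Ohio.",
]

X = TfidfVectorizer(stop_words="english").fit_transform(sentences)
doc_centroid = np.asarray(X.mean(axis=0))            # collection-level topic signal
relevance = cosine_similarity(X, doc_centroid).ravel()
pairwise = cosine_similarity(X)

selected, lam, k = [], 0.7, 2
while len(selected) < k:
    best, best_score = None, -np.inf
    for i in range(len(sentences)):
        if i in selected:
            continue
        redundancy = max(pairwise[i][j] for j in selected) if selected else 0.0
        score = lam * relevance[i] - (1 - lam) * redundancy  # MMR criterion
        if score > best_score:
            best, best_score = i, score
    selected.append(best)

print([sentences[i] for i in selected])  # near-duplicate sentences are avoided
```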

    Systematic literature review (SLR) automation: a systematic literature review

    Context: A systematic literature review (SLR) is a methodology used to find and aggregate all relevant studies about a specific research question or topic of interest. Most SLR processes are conducted manually; automating them can reduce the workload and time required of human reviewers. Method: We use an SLR as the methodology to survey the literature on technologies used to automate SLR processes. Result: From the collected data we found that much work has been done to automate the study-selection process, but there is no evidence of automation of the planning and reporting processes. Most authors use machine learning classifiers to automate study selection. Our survey also found processes similar to the SLR process for which automatic techniques already exist. Conclusion: Based on these results, we conclude that more research is needed on the planning, reporting, data extraction and synthesis processes of SLR.
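    The study-selection automation mentioned above usually amounts to training a text classifier on already-screened studies and using it to rank the remaining candidates. A minimal hedged sketch with scikit-learn follows; the labelled abstracts are invented for illustration, and real SLR tools train on far larger screened corpora.

```python
# Hedged sketch of automated study selection: a text classifier trained on
# screened abstracts scores new candidates for inclusion. The training
# examples and the single candidate are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_abstracts = [
    "We evaluate machine learning classifiers for screening primary studies.",
    "Active learning reduces the manual effort of study selection in reviews.",
    "This paper describes a new database schema for retail inventory.",
    "We present a mobile game engine optimised for low-power devices.",
]
labels = [1, 1, 0, 0]  # 1 = relevant to the review question, 0 = not relevant

clf = make_pipeline(TfidfVectorizer(stop_words="english"),
                    LogisticRegression(max_iter=1000))
clf.fit(train_abstracts, labels)

candidates = ["A tool that ranks retrieved studies to support systematic reviews."]
for text, p in zip(candidates, clf.predict_proba(candidates)[:, 1]):
    print(f"{p:.2f}  {text}")  # reviewers screen high-probability studies first
```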

    Automatic Summarization

    It has now been 50 years since the publication of Luhn’s seminal paper on automatic summarization. During these years the practical need for automatic summarization has become increasingly urgent, and numerous papers have been published on the topic. As a result, it has become harder to find a single reference that gives an overview of past efforts or a complete view of summarization tasks and the necessary system components. This article attempts to fill this void by providing a comprehensive overview of research in summarization, including the more traditional efforts in sentence extraction as well as the most novel recent approaches for determining important content, for domain- and genre-specific summarization, and for the evaluation of summarization. We also discuss the challenges that remain open, in particular the need for language generation and deeper semantic understanding of language that would be necessary for future advances in the field.
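    As a concrete reminder of the sentence-extraction tradition the article traces back to Luhn, the sketch below scores sentences by the frequency of their content words and keeps the top-scoring ones in document order. The tokenisation, stop-word list, and summary length are simplifications assumed for illustration.

```python
# Hedged, Luhn-style sketch: score each sentence by the average corpus
# frequency of its content words, then keep the top sentences in their
# original order. Tokenisation and stop-word handling are simplified.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "for", "on"}

def summarize(text, n_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS]
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    # Keep the highest-scoring sentences, presented in their original order.
    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in ranked)

doc = ("Automatic summarization has been studied for decades. "
       "Early systems extracted sentences using word frequency. "
       "Frequency-based extraction remains a strong baseline. "
       "Later work added discourse and semantic features.")
print(summarize(doc))
```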