
    Evaluation of Automatic Text Summarization Using Synthetic Facts

    Automatic text summarization has achieved remarkable success with the development of deep neural networks and the availability of standardized benchmark datasets, and it can now generate fluent, human-like summaries. However, the unreliability of existing evaluation metrics hinders its practical use and slows its progress. To address this issue, we propose an automatic, reference-less text summarization evaluation system based on dynamically generated synthetic facts. We hypothesize that if a system guarantees a summary containing all the facts that are 100% known in the synthetic document, it can provide natural interpretability and high feasibility in measuring factual consistency and comprehensiveness. To our knowledge, ours is the first system that measures the overarching quality of text summarization models in terms of factual consistency, comprehensiveness, and compression rate. We validate our system by comparing its correlation with human judgment against existing N-gram overlap-based metrics such as ROUGE and BLEU and a BERT-based evaluation metric, BERTScore. In our experimental evaluation of PEGASUS, BART, and T5, our system outperforms the current evaluation metrics in measuring factual consistency by a noticeable margin and demonstrates statistical significance in measuring comprehensiveness and overall summary quality.
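    The abstract does not spell out the scoring procedure, so the following Python sketch is only a rough illustration of the idea: it assumes synthetic facts are simple subject-relation-object triples rendered into a document, and it approximates fact containment with naive string matching (the actual system presumably uses a learned checker). All names and formulas below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of reference-less evaluation with synthetic facts.
# Assumption (not from the paper): facts are (subject, relation, object)
# triples rendered as sentences, and containment is approximated by
# naive substring matching on the summary.

from dataclasses import dataclass

@dataclass
class Fact:
    subject: str
    relation: str
    obj: str

    def render(self) -> str:
        return f"{self.subject} {self.relation} {self.obj}."

def build_synthetic_document(facts):
    """Concatenate rendered facts, so every fact in the source is known by construction."""
    return " ".join(f.render() for f in facts)

def evaluate_summary(summary: str, facts) -> dict:
    """Score a summary against the known synthetic facts."""
    # Comprehensiveness: how many of the known facts the summary covers.
    covered = [f for f in facts if f.subject in summary and f.obj in summary]
    comprehensiveness = len(covered) / len(facts)
    # Factual consistency (crude proxy): fraction of summary sentences that
    # can be traced back to some known fact.
    sents = [s.strip() for s in summary.split(".") if s.strip()]
    supported = [s for s in sents
                 if any(f.subject in s and f.obj in s for f in facts)]
    consistency = len(supported) / len(sents) if sents else 0.0
    # Compression rate: summary length relative to the synthetic document.
    compression = len(summary.split()) / len(build_synthetic_document(facts).split())
    return {"comprehensiveness": comprehensiveness,
            "factual_consistency": consistency,
            "compression_rate": compression}

facts = [Fact("Ada Lovelace", "wrote notes on", "the Analytical Engine"),
         Fact("the Analytical Engine", "was designed by", "Charles Babbage")]
print(evaluate_summary("Ada Lovelace wrote notes on the Analytical Engine.", facts))
```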

    Bridging Cross-Modal Alignment for OCR-Free Content Retrieval in Scanned Historical Documents

    In this work, we address the limitations of current approaches to document retrieval by incorporating vision-based topic extraction. While previous methods have primarily focused on visual elements or relied on optical character recognition (OCR) for text extraction, we propose a paradigm shift by directly incorporating vision into the topic space. We demonstrate that recognizing every visual element within a document is unnecessary for identifying its underlying topic: visual cues such as icons, writing style, and font can serve as sufficient indicators. By leveraging ranking loss functions and convolutional neural networks (CNNs), we learn complex topological representations that mimic the behavior of text representations. Our approach aims to eliminate the need for OCR and its associated challenges, including efficiency, performance, data hunger, and expensive annotation. Furthermore, we highlight the significance of incorporating vision in historical documentation, where visually antiquated documents contain valuable cues. Our research contributes to the understanding of topic extraction from a vision perspective and offers insights into building annotation-cheap document retrieval systems.
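    To make the alignment idea concrete, here is a minimal PyTorch sketch, not the authors' architecture, in which a small CNN embeds page images into the same space as topic embeddings and is trained with a triplet ranking loss; the image size, embedding dimension, and dummy topic vectors are assumptions.

```python
# Sketch of OCR-free cross-modal alignment: a CNN maps page images into the
# same space as topic embeddings, trained with a triplet ranking loss.
# All dimensions and the random "topic" vectors are illustrative assumptions.

import torch
import torch.nn as nn

class PageEncoder(nn.Module):
    """Small CNN that embeds a grayscale page image (1 x 128 x 128)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, dim)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return nn.functional.normalize(self.proj(h), dim=-1)

encoder = PageEncoder()
loss_fn = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Dummy batch: page images plus pre-computed topic embeddings for the
# correct topic (positive) and a different topic (negative).
pages = torch.randn(8, 1, 128, 128)
pos_topic = nn.functional.normalize(torch.randn(8, 64), dim=-1)
neg_topic = nn.functional.normalize(torch.randn(8, 64), dim=-1)

loss = loss_fn(encoder(pages), pos_topic, neg_topic)
loss.backward()
optimizer.step()
print(float(loss))
```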

    Summarization from Multiple User Generated Videos in Geo-Space

    Ph.D. (Doctor of Philosophy)

    A Genetic Clustering Algorithm for Automatic Text Summarization

    Abstract. Automatic text summarization has become a relevant topic due to information overload. This automation aims to help humans and machines deal with the vast amount of text data (structured and unstructured) available on the web and deep web. This research presents a novel approach for automatic extractive text summarization called SENCLUS. Using a genetic clustering algorithm, SENCLUS groups sentences into clusters that closely represent the text's topics, guided by a fitness function based on coverage and redundancy, and then applies a scoring function to select the most relevant sentences of each topic for the extractive summary, up to the summary length constraint. The approach was validated on the DUC2002 data set for single-document summarization, with summary quality measured automatically using ROUGE. The results show that the approach is competitive with state-of-the-art methods for extractive automatic text summarization.
    Maestría (Master's degree)
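    As a toy illustration of the pipeline described above (not the authors' implementation), the following Python sketch evolves sentence-to-topic assignments with a simple genetic algorithm whose fitness rewards coverage and penalizes redundancy, then picks the sentence closest to each cluster centroid for the summary. The fitness definition, GA settings, and TF-IDF representation are simplifying assumptions.

```python
# Toy sketch of genetic sentence clustering for extractive summarization.
# The fitness (coverage minus centroid redundancy) and GA parameters are
# illustrative assumptions, not the SENCLUS specification.

import random
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def centroids_of(assignment, X, k):
    """Mean TF-IDF vector of each cluster (zeros for empty clusters)."""
    return np.vstack([X[assignment == c].mean(axis=0) if np.any(assignment == c)
                      else np.zeros(X.shape[1]) for c in range(k)])

def fitness(assignment, X, k):
    C = centroids_of(assignment, X, k)
    sims = cosine_similarity(X, C)                       # sentence-to-centroid similarity
    coverage = sims[np.arange(len(assignment)), assignment].mean()
    redundancy = cosine_similarity(C)[~np.eye(k, dtype=bool)].mean() if k > 1 else 0.0
    return coverage - redundancy

def genetic_clustering(X, k=2, pop=30, gens=60, mut=0.1):
    """Evolve sentence-to-cluster assignments; return the fittest one."""
    n = X.shape[0]
    population = [np.random.randint(k, size=n) for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(population, key=lambda a: fitness(a, X, k), reverse=True)[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)                  # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mask = np.random.rand(n) < mut                # random gene mutation
            child[mask] = np.random.randint(k, size=mask.sum())
            children.append(child)
        population = parents + children
    return max(population, key=lambda a: fitness(a, X, k))

sentences = ["Topic models group related sentences.",
             "Genetic algorithms evolve candidate solutions.",
             "Clustering sentences approximates the topics of a text.",
             "Crossover and mutation explore the assignment space.",
             "A scoring function picks one sentence per topic."]
X = TfidfVectorizer().fit_transform(sentences).toarray()
best = genetic_clustering(X, k=2)
C = centroids_of(best, X, 2)
summary = []
for c in range(2):                                        # most central sentence per topic
    members = np.where(best == c)[0]
    if members.size:
        scores = cosine_similarity(X[members], C[c:c + 1]).ravel()
        summary.append(sentences[members[np.argmax(scores)]])
print(summary)
```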

    Editorial—A Survey of Research Questions for Intelligent Information Systems in Education

    Education is an application domain in which many research questions from Intelligent Information Systems may prove their worth. We discuss three themes in this editorial: distributed education and learner modeling, semantic analysis of text, and intelligent information management.
    Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/46463/1/10844_2004_Article_386186.pdf