41 research outputs found

    Final report of Task #5: Current document index system for document retrieval investigation

    In Part I of this report, we describe the work completed during the last fiscal year (October 1, 2002 through September 30, 2003). The single biggest challenge this past year has been to develop and deliver a new software technology to classify Homeland Security Sensitive documents with high precision. Not only was a satisfactory system developed, but an operational version was delivered to CACI in April 2003. The delivered system is called the Homeland Security Classifier (HSC). In Part II we give an overview of the projects ISRI has completed during the first four years of this cooperative agreement (October 1, 1998 through September 30, 2002). Each of the deliverables associated with these projects has been thoroughly described in previous reports.

    Autotag: A tool for creating structured document collections from printed materials

    Today's optical character recognition (OCR) devices ordinarily are not capable of delimiting or marking up specific structural information about a document, such as its title, its authors, and the titles of its sections. Such information appears in the OCR device output, but a human would have to go through the output to locate it. This type of information is highly useful for information retrieval (IR), allowing users much more flexibility in querying a retrieval system. This thesis describes the design, implementation, and evaluation of a software system called Autotag, which automatically marks up structural information in OCR-generated text. It also establishes a mapping between objects in page images and their corresponding ASCII representation. This mapping can then be used to design flexible image-based interfaces for information retrieval applications.

    Effect of OCR errors on short documents

    Presented in this thesis is a study of the effect of OCR errors on short documents. OCR recognizes text images and translates them into ASCII format. When this data is retrieved in response to a query, the retrieval performance depends on the accuracy of the OCR device used. Measures such as recall, precision, and ranking were used to gauge retrieval performance. The information retrieval system used was SMART, which is based on the vector space model. Evaluating these measures showed that average precision and recall are not affected significantly when the OCR collection is compared to its corrected version. However, with more complex weighting schemes, the rankings of relevant documents became more divergent. The effect of an automatic post-processing system on retrieval performance was also studied.
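    The recall and precision measures used in studies like this can be computed from a ranked result list and a set of relevance judgments. A minimal sketch follows; the document IDs and relevance set below are hypothetical, and a real evaluation (e.g., with SMART) would average these measures over many queries.

```python
def precision_recall(ranked_ids, relevant_ids, k):
    """Precision and recall over the top-k documents of one ranked run."""
    retrieved = ranked_ids[:k]
    hits = sum(1 for doc in retrieved if doc in relevant_ids)
    precision = hits / k                  # fraction of retrieved docs that are relevant
    recall = hits / len(relevant_ids)     # fraction of relevant docs that were retrieved
    return precision, recall

# Hypothetical rankings for one query against the OCR and corrected collections.
ocr_ranking = ["d3", "d1", "d7", "d2", "d9"]
truth_ranking = ["d1", "d3", "d2", "d7", "d5"]
relevant = {"d1", "d2", "d3"}

print(precision_recall(ocr_ranking, relevant, 5))    # (0.6, 1.0)
print(precision_recall(truth_ranking, relevant, 5))  # (0.6, 1.0)
```

    Comparing the two runs at equal cutoffs shows how OCR errors can leave average precision and recall unchanged while still reordering individual relevant documents.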

    Feature recognition in OCR text

    This thesis investigates the recognition and extraction from OCR text of special word sequences that represent concepts. Unlike general index terms, concepts can consist of one or more terms that, combined, have higher retrieval value than the terms alone (e.g., acronyms, proper nouns, phrases). An algorithm to recognize acronyms and their definitions is presented, along with an evaluation of the algorithm.
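    One common family of acronym-definition recognizers matches a parenthesized uppercase token against the initials of the words preceding it. The sketch below illustrates that idea only; it is a simplification, not the algorithm from the thesis, which must also cope with OCR noise.

```python
import re

def find_acronyms(text):
    """Map each parenthesized acronym to the preceding words whose
    initials spell it out. A simplified illustrative matcher."""
    found = {}
    for m in re.finditer(r"\(([A-Z]{2,6})\)", text):
        acronym = m.group(1)
        words = re.findall(r"[A-Za-z]+", text[:m.start()])
        candidate = words[-len(acronym):]          # one word per acronym letter
        initials = "".join(w[0].upper() for w in candidate)
        if initials == acronym:
            found[acronym] = " ".join(candidate)
    return found

print(find_acronyms("Errors from optical character recognition (OCR) hurt retrieval."))
# {'OCR': 'optical character recognition'}
```

    Real definitions often skip stopwords or use partial-word matches ("information retrieval (IR)" works here, but "Department of Energy (DOE)" would not), which is why production algorithms allow looser alignments.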

    Thesaurus-aided learning for rule-based categorization of OCR texts

    The question posed in this thesis is whether the effectiveness of the rule-based approach to automatic text categorization on OCR collections can be improved by using domain-specific thesauri. A rule-based categorizer was constructed, consisting of a C++ program called C-KANT that consults documents and creates a program that can be executed by the CLIPS expert system shell. A series of tests using domain-specific thesauri revealed that a query-expansion approach to rule-based automatic text categorization using domain-dependent thesauri does not improve the categorization of OCR texts. Although some improvement to categorization could be made using rules over a mixture of thesauri, the improvements were not significant.

    OCRspell: An interactive spelling correction system for OCR errors in text

    In this thesis we describe a spelling correction system designed specifically for OCR (Optical Character Recognition) generated text that selects candidate words using information gathered from multiple knowledge sources. This system for text correction is based on static and dynamic device mappings, approximate string matching, and n-gram analysis. Our statistically based, Bayesian system incorporates a learning feature that collects confusion information at the collection and document levels. An evaluation of the new system is also presented.
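    A device mapping records which character sequences an OCR device tends to confuse (e.g., "m" read as "rn"). The sketch below shows how such a table can drive candidate generation; the confusion entries and lexicon are hypothetical, and OCRspell itself combines several knowledge sources rather than this single one.

```python
# Assumed confusion table: OCR output substring -> likely intended substring.
CONFUSIONS = {"rn": "m", "l": "i", "0": "o", "vv": "w"}

def candidates(word, lexicon):
    """Generate correction candidates by undoing one confusion at a time,
    keeping only candidates that appear in the lexicon."""
    results = set()
    for wrong, right in CONFUSIONS.items():
        start = 0
        while (i := word.find(wrong, start)) != -1:
            fixed = word[:i] + right + word[i + len(wrong):]
            if fixed in lexicon:
                results.add(fixed)
            start = i + 1
    return results

lexicon = {"modern", "computer", "window"}
print(candidates("rnodern", lexicon))  # 'm' misread as 'rn' -> {'modern'}
```

    A statistical system would then rank surviving candidates, e.g. by how often each confusion has been observed in the current document or collection.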

    Retrieval effectiveness for OCR text using thesauri

    This thesis reports on the effects of automatic query expansion with a subject-specific thesaurus on retrieval effectiveness for a document collection consisting of OCR text. The investigation encompasses several experiments with a modern retrieval engine based on the probabilistic model. Each experiment is performed on two versions of the collection: the first consists of raw OCR output, and the second consists of the ground-truth version of the same collection (retyped from hard copy). It is shown that using the thesaurus as a source for query expansion can significantly improve recall for Boolean queries, for both the OCR and the manually corrected collections. In the case of weighted queries, the expansion has no effect on average precision and recall, although some individual queries benefit from it.
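    Thesaurus-based query expansion replaces each query term with the OR of that term and its thesaurus entries. A minimal sketch, assuming a toy domain thesaurus (the entries below are invented for illustration, not taken from the thesis):

```python
# Hypothetical subject-specific thesaurus: term -> related terms.
THESAURUS = {
    "petroleum": ["crude oil", "fossil fuel"],
    "drilling": ["boring", "well construction"],
}

def expand_query(terms):
    """Expand each query term with its thesaurus entries.
    Under Boolean semantics the expanded terms are ORed together."""
    expanded = []
    for term in terms:
        expanded.append(term)
        expanded.extend(THESAURUS.get(term, []))
    return expanded

print(expand_query(["petroleum", "drilling"]))
```

    For Boolean retrieval this widens the match set, which is why recall improves; with weighted queries the extra terms are down-weighted by the ranking model, consistent with the unchanged average precision reported above.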

    A Mask-Based Enhancement Method for Historical Documents

    This paper proposes a novel method for document enhancement. The method combines two state-of-the-art filters through the construction of a mask. The mask is applied to a TV (Total Variation)-regularized image in which background noise has been reduced. The masked image is then filtered by NL-means (Non-Local Means), which reduces the noise in the text areas located by the mask. The document images to be enhanced are real historical documents from several periods whose backgrounds contain several defects, resulting from scanning, paper aging, and bleed-through. We measure the improvement from this enhancement method through OCR accuracy.
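    The core combination step can be pictured as a per-pixel selection: where the mask flags text, take the NL-means result; elsewhere keep the TV-regularized background. The toy sketch below uses nested lists for tiny grayscale images (the paper's mask construction and the actual TV/NL-means filters are not shown):

```python
def combine_with_mask(tv_img, nlm_img, mask):
    """Per-pixel selection: mask == 1 (text) takes the NL-means pixel,
    mask == 0 (background) keeps the TV-regularized pixel."""
    return [
        [nlm if m else tv for tv, nlm, m in zip(tv_row, nlm_row, m_row)]
        for tv_row, nlm_row, m_row in zip(tv_img, nlm_img, mask)
    ]

tv = [[200, 200], [200, 200]]   # background-smoothed (TV-regularized) image
nlm = [[30, 200], [200, 40]]    # text-denoised (NL-means) image
mask = [[1, 0], [0, 1]]         # 1 = text region located by the mask

print(combine_with_mask(tv, nlm, mask))  # [[30, 200], [200, 40]]
```

    In practice both filters and the mask operate on full-resolution arrays (e.g., with NumPy or scikit-image), but the selection logic is the same.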

    Pre-Processing of Degraded Printed Documents by Non-Local Means and Total Variation

    In this study we compare two image restoration approaches for the pre-processing of printed documents: the Non-Local Means filter and a total variation minimization approach. We apply these two approaches to printed document sets from various periods, and we evaluate their effectiveness through character recognition performance using an open-source OCR engine. Our results show that for each document set, one or both pre-processing methods improve character recognition accuracy over recognition without pre-processing. Higher accuracies are obtained with Non-Local Means when characters have a low level of degradation, since they can be restored from similar neighboring parts of non-degraded characters. The total variation approach is more effective when characters are highly degraded and can only be restored through modeling rather than through neighboring data.

    Image-based interactive tools for entering ground truth data

    We report on the design and implementation of an interactive system for capturing the logical description of textual documents. The system is equipped with a graphical user interface that guides the user in defining components of the document's logical structure while viewing document page images. The system is expected to enable faster and more accurate collection of ground-truth document data.