11 research outputs found

    A tool for facilitating OCR postediting in historical documents

    Optical character recognition (OCR) for historical documents is a complex procedure subject to a unique set of material issues, including inconsistencies in typefaces and low-quality scanning. Consequently, even the most sophisticated OCR engines produce errors. This paper reports on a tool built for post-editing the output of Tesseract, more specifically for correcting common errors in digitized historical documents. The proposed tool suggests alternatives for word forms not found in a specified vocabulary. The assumed error is replaced in the post-edited text by a presumably correct alternative, chosen according to the scores of a Language Model (LM). The tool is tested on a chapter of the book An Essay Towards Regulating the Trade and Employing the Poor of this Kingdom. As demonstrated below, the tool successfully corrects a number of common errors. Though sometimes unreliable, it is also transparent and open to human intervention.
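    A minimal sketch of the correction loop this abstract describes, assuming a plain word list and a pluggable scoring function: out-of-vocabulary tokens are flagged, near matches from the vocabulary are proposed, and whichever of the original token or its alternatives scores best under the language model in context is kept. The lm_score callable, the similarity cutoff, and the context window are illustrative assumptions, not details taken from the paper.

        # Sketch only: flag tokens missing from a reference vocabulary, propose
        # in-vocabulary alternatives, keep the LM-preferred replacement.
        from difflib import get_close_matches

        def postedit(tokens, vocab, lm_score, window=2):
            """tokens: list of OCR words; vocab: set of known word forms;
            lm_score: callable mapping a list of words to a (log-)probability."""
            corrected = list(tokens)
            for i, tok in enumerate(tokens):
                if tok.lower() in vocab:
                    continue                      # known word form, leave untouched
                candidates = get_close_matches(tok.lower(), list(vocab), n=5, cutoff=0.8)
                if not candidates:
                    continue                      # nothing plausible, keep OCR output
                left = corrected[max(0, i - window):i]
                right = corrected[i + 1:i + 1 + window]
                # Keep whichever of {original, candidates} the LM prefers in context,
                # so the correction stays transparent and a human can override it.
                corrected[i] = max([tok] + candidates,
                                   key=lambda c: lm_score(left + [c] + right))
            return corrected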

    Generating a training corpus for OCR post-correction using encoder-decoder model

    In this paper we present a novel approach to the automatic correction of OCR-induced orthographic errors in a given text. While current systems depend heavily on large training corpora or external information, such as domain-specific lexicons or confidence scores from the OCR process, our system only requires a small amount of relatively clean training data from a representative corpus to learn a character-based statistical language model using Bidirectional Long Short-Term Memory Networks (biLSTMs). We demonstrate the versatility and adaptability of our system on different text corpora with varying degrees of textual noise, including a real-life OCR corpus in the medical domain.
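    A rough sketch, in PyTorch, of the kind of character-level bidirectional LSTM language model the abstract refers to; the layer sizes and the framing of correction as per-character prediction over the noisy input are assumptions made for illustration, not the authors' configuration.

        # Character-level biLSTM that reads noisy OCR characters and predicts a
        # (clean) character at each position; sizes are illustrative assumptions.
        import torch.nn as nn

        class CharBiLSTM(nn.Module):
            def __init__(self, n_chars, emb_dim=64, hidden=256, layers=2):
                super().__init__()
                self.embed = nn.Embedding(n_chars, emb_dim)
                self.lstm = nn.LSTM(emb_dim, hidden, num_layers=layers,
                                    bidirectional=True, batch_first=True)
                self.out = nn.Linear(2 * hidden, n_chars)   # one prediction per position

            def forward(self, char_ids):                     # (batch, seq_len)
                h, _ = self.lstm(self.embed(char_ids))       # (batch, seq_len, 2*hidden)
                return self.out(h)                           # logits over the character set

        # Training would minimise cross-entropy between these logits and the clean
        # ground-truth characters aligned with the noisy OCR input.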

    Image Enhancement for Scanned Historical Documents in the Presence of Multiple Degradations

    Historical documents are treasured sources of information but typically suffer from problems with quality and degradation. Scanned images of historical documents suffer from difficulties due to paper quality and poor image capture, producing images with low contrast, smeared ink, bleed-through and uneven illumination. This PhD thesis proposes a novel adaptive histogram matching method to remove these artefacts from scanned images of historical documents. The adaptive histogram matching is modelled to create an ideal histogram by dividing the histogram at its Otsu level and applying Gaussian distributions to each segment, with iterative output refinement applied to individual images. The pre-processing techniques of contrast stretching, Wiener filtering, and bilateral filtering are used before the proposed adaptive histogram matching approach to maximise the dynamic range and reduce noise. The goal is to better represent document images, improving readability and providing better source images for Optical Character Recognition (OCR). Unlike other enhancement methods designed for single artefacts, the proposed method enhances multiple artefacts (low contrast, smeared ink, bleed-through and uneven illumination). In addition to developing an algorithm for historical document enhancement, the research also contributes a new dataset of scanned historical newspapers (an annotated subset of the Europeana Newspapers (ENP) dataset) on which the enhancement technique is tested and which can also be used for further research. Experimental results show that the proposed method significantly reduces background noise and improves image quality on multiple artefacts compared to other enhancement methods. Several performance criteria are utilised to evaluate the proposed method’s efficiency, including Signal-to-Noise Ratio (SNR), Mean Opinion Score (MOS), and the Visual Document Image Quality Assessment Metric (VDQAM). Additional assessment criteria to measure post-processing binarisation quality are also discussed, with enhanced results based on Peak Signal-to-Noise Ratio (PSNR), Negative Rate Metric (NRM) and F-measure. Keywords: Image Enhancement, Historical Documents, OCR, Digitisation, Adaptive Histogram Matching
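    A loose sketch of the enhancement pipeline as summarised above: contrast stretching, Wiener filtering and bilateral filtering as pre-processing, followed by matching the image histogram to an "ideal" bimodal target built from Gaussians placed on either side of the Otsu level. The Gaussian spreads, weights and filter parameters are illustrative assumptions rather than the thesis' tuned values.

        # Sketch under stated assumptions; not the thesis' exact algorithm.
        import numpy as np
        import cv2
        from scipy.signal import wiener
        from skimage import exposure, filters

        def ideal_histogram(img, sigma_bg=20.0, sigma_fg=15.0, fg_weight=0.3):
            t = filters.threshold_otsu(img)                   # split at the Otsu level
            x = np.arange(256)
            bg = np.exp(-0.5 * ((x - (t + 255) / 2) / sigma_bg) ** 2)   # bright paper
            fg = np.exp(-0.5 * ((x - t / 2) / sigma_fg) ** 2)           # dark ink
            h = (1 - fg_weight) * bg + fg_weight * fg
            return h / h.sum()

        def match_to_histogram(img, target_hist):
            src_hist, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
            src_cdf, tgt_cdf = np.cumsum(src_hist), np.cumsum(target_hist)
            lut = np.interp(src_cdf, tgt_cdf, np.arange(256)).astype(np.uint8)
            return lut[img]

        def enhance(gray):                                    # gray: uint8 document image
            stretched = exposure.rescale_intensity(gray)      # contrast stretching
            denoised = wiener(stretched.astype(np.float64), (5, 5))
            denoised = np.clip(denoised, 0, 255).astype(np.uint8)
            smooth = cv2.bilateralFilter(denoised, d=9, sigmaColor=75, sigmaSpace=75)
            return match_to_histogram(smooth, ideal_histogram(smooth))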

    Advanced document data extraction techniques to improve supply chain performance

    In this thesis, a novel machine learning technique to extract text-based information from scanned images has been developed. This information extraction is performed in the context of scanned invoices and bills used in financial transactions. These financial transactions contain a considerable amount of data that must be extracted, refined, and stored digitally before it can be used for analysis. Converting this data into a digital format is often a time-consuming process. Automation and data optimisation show promise as methods for reducing the time required and the cost of Supply Chain Management (SCM) processes, especially Supplier Invoice Management (SIM), Financial Supply Chain Management (FSCM) and Supply Chain procurement processes. This thesis uses a cross-disciplinary approach involving Computer Science and Operational Management to explore the benefit of automated invoice data extraction in business and its impact on SCM. The study adopts a multimethod approach based on empirical research, surveys, and interviews performed on selected companies.
    The expert system developed in this thesis focuses on two distinct areas of research: Text/Object Detection and Text Extraction. For Text/Object Detection, the Faster R-CNN model was analysed. While this model yields outstanding results in terms of object detection, it is limited by poor performance when image quality is low. The Generative Adversarial Network (GAN) model is proposed in response to this limitation. The GAN model consists of a generator network implemented with the help of the Faster R-CNN model and a discriminator that relies on PatchGAN. The output of the GAN model is text data with bounding boxes. For text extraction from the bounding boxes, a novel data extraction framework was designed, consisting of various processes including XML processing in the case of an existing OCR engine, bounding box pre-processing, text clean-up, OCR error correction, spell checking, type checking, pattern-based matching, and finally a learning mechanism for automating future data extraction. Whichever fields the system can extract successfully are provided in key-value format.
    The efficiency of the proposed system was validated using existing datasets such as SROIE and VATI. Real-time data was validated using invoices collected by two companies that provide invoice automation services in various countries. Currently, these scanned invoices are sent to an OCR system such as OmniPage, Tesseract, or ABBYY FRE to extract text blocks, and a rule-based engine is then used to extract relevant data. While this methodology is robust, the companies surveyed were not satisfied with its accuracy and sought new, optimised solutions. To confirm the results, the engines were used to return XML-based files with the identified text and metadata. The output XML data was then fed into the new system for information extraction. This system uses the existing OCR engine together with a novel, self-adaptive, learning-based OCR engine based on the GAN model for better text identification. Experiments were conducted on various invoice formats to further test and refine its extraction capabilities. For cost optimisation and the analysis of spend classification, additional data were provided by another company in London that holds expertise in reducing its clients' procurement costs. This data was fed into the system to obtain a deeper level of spend classification and categorisation, which helped the company to reduce its reliance on human effort and allowed for greater efficiency compared with performing similar tasks manually using Excel sheets and Business Intelligence (BI) tools.
    The intention behind the development of this novel methodology was twofold: first, to test and develop a novel solution that does not depend on any specific OCR technology; second, to increase the information extraction accuracy over that of existing methodologies. Finally, the thesis evaluates the real-world need for the system and the impact it would have on SCM. The newly developed method is generic and can extract text from any given invoice, making it a valuable tool for optimising SCM. In addition, the system uses a template-matching approach to ensure the quality of the extracted information.
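    A small illustrative sketch of the pattern-based matching stage of the extraction framework described above, which maps cleaned-up text blocks (with their bounding boxes) to invoice fields in key-value form; the field patterns and the clean-up step are assumptions made for illustration, not the rule set used in the thesis.

        # Sketch only: map OCR text blocks to invoice fields via regex patterns.
        import re

        FIELD_PATTERNS = {
            "invoice_number": re.compile(r"invoice\s*(?:no|number|#)\s*[:.]?\s*([A-Z0-9-]+)", re.I),
            "invoice_date":   re.compile(r"date\s*[:.]?\s*(\d{1,2}[/.-]\d{1,2}[/.-]\d{2,4})", re.I),
            "total_amount":   re.compile(r"total\s*(?:due|amount)?\s*[:.]?\s*([£$€]?\s*[\d,]+\.\d{2})", re.I),
        }

        def clean_text(text):
            # Very small stand-in for the clean-up / OCR error-correction stages:
            # normalise whitespace; a real pipeline would also apply spell checking,
            # confusion-pair correction and type checks as described above.
            return re.sub(r"\s+", " ", text).strip()

        def extract_fields(blocks):
            """blocks: list of dicts {'bbox': (x0, y0, x1, y1), 'text': str}."""
            result = {}
            for block in blocks:
                text = clean_text(block["text"])
                for field, pattern in FIELD_PATTERNS.items():
                    match = pattern.search(text)
                    if match and field not in result:        # first hit wins
                        result[field] = match.group(1).strip()
            return result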

    Capturing, Eliciting, and Prioritizing (CEP) Non-Functional Requirements Metadata during the Early Stages of Agile Software Development

    Agile software engineering has been a popular methodology for developing software rapidly and efficiently. However, the Agile methodology often favors Functional Requirements (FRs) due to the nature of agile software development, and strongly neglects Non-Functional Requirements (NFRs). Neglecting NFRs has negative impacts on software products, resulting in poor quality and higher cost to fix problems in later stages of software development. This research developed the CEP “Capture Elicit Prioritize” methodology to effectively gather NFR metadata from software requirement artifacts such as documents and images. The artifacts included the Optical Character Recognition (OCR) artifact, which gathered metadata from images, as well as the Database Artifact, NFR Locator Plus, NFR Priority Artifact, and Visualization Artifact. The NFR metadata gathered reduced false positives and allowed NFRs to be included in the early stages of software requirements gathering along with FRs. Furthermore, NFRs were prioritized using existing FR methodologies, which is important to stakeholders as well as software engineers in delivering quality software. This research built on prior studies by specifically focusing on NFRs during the early stages of agile software development. Validation of the CEP methodology was accomplished using the 26 requirements of the European Union (EU) eProcurement System. The NORMAP methodology was used as a baseline; in addition, the NERV methodology baseline results were used for comparison. The research results show that the CEP methodology successfully identified NFRs in 56 out of 57 requirement sentences that contained NFRs, compared to 50 for the baseline and 55 for the NERV methodology. The CEP methodology elicited 98.24% of these requirements, compared to 87.71% for the NORMAP baseline, an improvement of 10.53%; the NERV methodology result was 96.49%, so CEP represents an improvement of 1.75%. The CEP methodology successfully elicited 86 out of 88 NFRs, compared to 75 for the baseline NORMAP methodology and 82 for the NERV methodology. The NFR count elicitation success rate for the CEP methodology was 97.73%, compared to 85.24% for the NORMAP methodology, an improvement of 12.49%; compared to the NERV methodology's 93.18%, CEP shows an improvement of 4.55%. The CEP methodology utilized the associated NFR Metadata (NFRM), figures, and images and linked them to the related requirements to improve over the NORMAP and NERV methodologies. There were 29 baseline NFRs found in the associated figures/images (NFRM), and 129 NFRs appeared both in the requirement sentence and in the associated figures/images (NFRM). Another goal of this study was to improve the prioritization of NFRs compared to prior studies. This research provided effective techniques to prioritize NFRs during the early stages of agile software development and examined the impacts that NFRs have on the software development process. The CEP methodology effectively prioritized NFRs by utilizing the αβγ-framework in a similar way to FRs. The sub-process of the αβγ-framework was modified in a way that provided a very attractive feature for agile team members: parts of the αβγ-framework can be replaced to suit the team's specific needs in prioritizing NFRs. The top five requirements based on NFR prioritization were 12.3, 24.5, 15.3, 7.5, and 7.1. The prioritization of NFRs fits the agile software development cycle and allows agile developers and team members to plan accordingly to accommodate time and budget constraints.
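    As a quick check of the elicitation figures quoted above, the rates can be reproduced as simple ratios of elicited items to totals; small rounding differences from the reported percentages are expected.

        # Recompute elicitation rates and improvements from the counts in the abstract.
        counts = {
            "sentences": {"total": 57, "CEP": 56, "NORMAP": 50, "NERV": 55},
            "nfrs":      {"total": 88, "CEP": 86, "NORMAP": 75, "NERV": 82},
        }
        for level, c in counts.items():
            rates = {m: 100 * c[m] / c["total"] for m in ("CEP", "NORMAP", "NERV")}
            print(level,
                  {m: f"{r:.2f}%" for m, r in rates.items()},
                  f"CEP vs NORMAP: +{rates['CEP'] - rates['NORMAP']:.2f} pts",
                  f"CEP vs NERV: +{rates['CEP'] - rates['NERV']:.2f} pts")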

    Contextual and assisted interpretation of digitized archive collections (application to 18th-century sales registers)

    Fonds, also called historical document collections, are large sets of digitized documents that are difficult to interpret automatically: usual approaches require a great deal of design work, yet still produce many errors that have to be corrected after processing. To cope with these limitations, our work aimed at improving the interpretation process by making use of information extracted from the fonds or provided by human operators, while keeping page-by-page processing. We proposed a simple extension of the page description language which makes it possible to automatically generate information exchanges between the interpretation process and its environment. A global iterative mechanism progressively brings contextual information to this process and improves interpretation. Experiments and the application of these new tools to the processing of documents from the 18th century showed that our proposals were easy to integrate into an existing system, that the system's design remains simple, and that the required manual corrections were reduced.
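    A loose sketch of the global iterative mechanism summarised above: pages are interpreted one by one, each pass can emit queries to the environment (the rest of the fonds or a human operator), and the answers are fed back as context for the next pass. The interpret_page and answer_queries interfaces are assumptions made for illustration; the thesis defines such exchanges through its page-description extension.

        # Iterative, page-by-page interpretation with contextual feedback (sketch).
        def interpret_collection(pages, interpret_page, answer_queries, max_passes=3):
            """interpret_page(page, context) -> (result, queries);
            answer_queries(queries, results) -> dict of new contextual information."""
            context, results = {}, {}
            for _ in range(max_passes):
                pending = []
                for page_id, page in pages.items():
                    result, queries = interpret_page(page, context)   # page-by-page pass
                    results[page_id] = result
                    pending.extend(queries)
                if not pending:
                    break                                             # nothing left to resolve
                # Contextual information extracted from the collection (or provided by a
                # human operator) is merged in and used on the next iteration.
                context.update(answer_queries(pending, results))
            return results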

    Qualität in der Inhaltserschließung

    This edited volume deals with issues relating to the quality of subject cataloging in the digital age, where heterogeneous articles from different processes meet, and attempts to define important quality standards. Topics range from metadata and the cataloging policies of the German National Library, the GND, and the head offices of the German library association, to the presentation of a range of different projects, such as QURATOR and SoNAR.
