226 research outputs found

    Transcription enhancement of a digitised multi-lingual pamphlet collection: a case study and guide for similar projects

    UCL Library Services holds an extensive collection of over 9,000 Jewish pamphlets, many of them extremely rare. Over the past five years, UCL has embarked on a project to widen access to this collection through an extensive programme of cataloguing, conservation and digitisation. With the cataloguing complete and the most fragile items conserved, the focus is now on making these texts available to global audiences via the UCL Digital Collections website. The pamphlets were ranked for rarity, significance and fragility, and the highest-scoring were selected for digitisation. Unique identifiers allocated at the point of cataloguing were used to track individual pamphlets through the stages of the project. This guide details the text-enhancement methods used, highlighting particular issues relating to Hebrew scripts and early-printed texts. Initial attempts to enable images of these pamphlets to be searched digitally relied on the Optical Character Recognition (OCR) embedded within the software used to create the PDF files. Whilst satisfactory for texts chiefly in Roman script, it provided no reliable means of searching the extensive corpus of texts in Hebrew. Generous advice offered by the National Library of Israel led to our adoption of ABBYY FineReader software as a means of enhancing the transcriptions embedded within the PDF files. Following image capture, JPEG files were used to create a multi-page PDF file for each pamphlet. Pre-processing in ABBYY FineReader consisted of setting the language and colour mode, detecting page orientation, selecting and refining the areas of text to be read, and reading the text to produce a transcription. The resultant files were stored in folders according to the language of the text. The software highlighted spelling errors and doubtful readings, and a verification tool allowed transcribers to correct these as required. However, some erroneous or doubtful readings were nevertheless genuine words and so were not highlighted; it was therefore essential to proofread the text, particularly for early-printed scripts. Transcribers maintained logs of common errors; problems with Hebrew vocalisations and with cursive and Gothic scripts were also noted. During initial quality checks of the transcriptions, many text searches were unsuccessful due to previously unidentified spaces occurring within words, generally linked to the font size being too small. Maintaining logs of the font sizes used led to the adoption of a minimum of Arial 8 or Times New Roman 10 in transcribed text, and the methodology was revised to include a preliminary quality check of one page. We concluded that it was difficult to develop a standardised procedure applicable to all texts, given the variance in language, script and typography; for Hebrew script, however, the font Arial gave the most successful accuracy ratings, with a minimum text size of 17 and a minimum title size of 25. ABBYY file preparation took a minimum of 1.5 hours per pamphlet, transcription correction took an average of 10.4 minutes per page, and the final quality check took 30 minutes per pamphlet; on average, the work on each pamphlet took a minimum of 6 hours to complete. As a result of the project, average accuracy ratings improved from 60% to 89%, the greatest improvement being for pre-1800 and Hebrew-script publications. We are therefore inclined to focus future transcription-enhancement activity on these types of publication for the remainder of our Jewish Pamphlet Collections.
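    The page-image-to-PDF step and the language-based filing described above lend themselves to scripting. The sketch below is a minimal illustration assuming the Pillow imaging library; the folder layout, file-naming pattern and `build_pamphlet_pdf` helper are hypothetical, not the project's actual tooling.

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

def build_pamphlet_pdf(jpeg_dir: Path, out_dir: Path, pamphlet_id: str, language: str) -> Path:
    """Combine the captured JPEG page images of one pamphlet into a single
    multi-page PDF and file it in a folder named after the text's language.
    `pamphlet_id` stands in for the unique identifier allocated at cataloguing."""
    pages = sorted(jpeg_dir.glob(f"{pamphlet_id}_*.jpg"))
    if not pages:
        raise FileNotFoundError(f"no page images found for {pamphlet_id}")

    # Pillow writes multi-page PDFs via save_all/append_images; convert to RGB first.
    images = [Image.open(p).convert("RGB") for p in pages]
    target = out_dir / language
    target.mkdir(parents=True, exist_ok=True)
    pdf_path = target / f"{pamphlet_id}.pdf"
    images[0].save(pdf_path, save_all=True, append_images=images[1:])
    return pdf_path

# Example call with hypothetical paths and identifier:
# build_pamphlet_pdf(Path("capture/JP0421"), Path("pdfs"), "JP0421", "Hebrew")
```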

    Assessment of OCR Quality and Font Identification in Historical Documents

    Mass digitization of historical documents is a challenging problem for optical character recognition (OCR) tools. Issues include noisy backgrounds and faded text due to aging, border/marginal noise, bleed-through, skewing, warping, as well as irregular fonts and page layouts. As a result, OCR tools often produce a large number of spurious bounding boxes (BBs) in addition to those that correspond to words in the document. To improve the OCR output, in this thesis we develop machine-learning methods to assess the quality of historical documents and to label/tag documents (with their page problems) in the EEBO/ECCO collections (45 million pages available through the Early Modern OCR Project at Texas A&M University). We present an iterative classification algorithm to automatically label BBs (i.e., as text or noise) based on their spatial distribution and geometry. The approach uses a rule-based classifier to generate initial text/noise labels for each BB, followed by an iterative classifier that refines the initial labels by incorporating information local to each BB: its spatial location, shape and size. When evaluated on a dataset containing over 72,000 manually labeled BBs from 159 historical documents, the algorithm can classify BBs with 0.95 precision and 0.96 recall. Further evaluation on a collection of 6,775 documents with ground-truth transcriptions shows that the algorithm can also be used to predict document quality (0.7 correlation) and to improve OCR transcriptions in 85% of the cases. This thesis also aims at generating font metadata for historical documents. Knowledge of the font can aid an OCR system in producing very accurate text transcriptions, but obtaining font information for 45 million documents is a daunting task. We present an active-learning-based font identification system that can classify document images into fonts. In active learning, a learner queries the human for labels on the examples it finds most informative. We capture the characteristics of the fonts using word-image features related to character width, angled strokes, and Zernike moments. To extract page-level features, we use a bag-of-words feature (BoF) model. A font classification model trained using BoF and active learning requires only 443 labeled instances to achieve 89.3% test accuracy.
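    The two-stage labelling idea, rule-based seed labels refined iteratively from neighbouring boxes, can be sketched as follows. The rules, thresholds and `iterative_classify` helper are invented for illustration and are not the thesis's actual feature set or classifier.

```python
from dataclasses import dataclass

@dataclass
class BB:
    """A word-candidate bounding box from the OCR output."""
    x: float
    y: float
    w: float
    h: float
    label: str = "unknown"  # becomes "text" or "noise"

def seed_label(bb: BB) -> str:
    # Rule-based initial guess: word-like aspect ratio and plausible height.
    aspect = bb.w / max(bb.h, 1e-6)
    return "text" if 0.2 < aspect < 20 and 5 < bb.h < 80 else "noise"

def neighbours(bb: BB, boxes: list[BB], radius: float = 60.0) -> list[BB]:
    return [o for o in boxes
            if o is not bb and abs(o.x - bb.x) < radius and abs(o.y - bb.y) < radius]

def iterative_classify(boxes: list[BB], rounds: int = 5) -> None:
    for bb in boxes:
        bb.label = seed_label(bb)
    for _ in range(rounds):
        changed = False
        for bb in boxes:
            near = neighbours(bb, boxes)
            if not near:
                continue
            # Boxes surrounded mostly by text of similar height are relabelled as text.
            text_near = [o for o in near if o.label == "text"]
            vote = len(text_near) / len(near)
            similar = any(abs(o.h - bb.h) < 0.3 * bb.h for o in text_near)
            new_label = "text" if vote > 0.5 and similar else "noise"
            if new_label != bb.label:
                bb.label = new_label
                changed = True
        if not changed:
            break
```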

    Text retrieval from early printed books


    Efficient and effective OCR engine training

    We present an efficient and effective approach to training OCR engines using the Aletheia document analysis system. All components required for training are seamlessly integrated into Aletheia: training data preparation, the OCR engine’s training processes themselves, text recognition, and quantitative evaluation of the trained engine. Such a comprehensive training and evaluation system, guided through a GUI, allows for iterative incremental training to achieve the best results. The widely used Tesseract OCR engine is used as a case study to demonstrate the efficiency and effectiveness of the proposed approach. Experimental results are presented validating the training approach with two different historical datasets, representative of recent significant digitisation projects. The impact of different training strategies and training data requirements is presented in detail.
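    Quantitative evaluation of a trained engine ultimately reduces to an edit-distance comparison between recognised text and ground truth. The character error rate sketch below is a generic illustration, not Aletheia's own evaluation code.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of character insertions, deletions and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def character_error_rate(recognised: str, ground_truth: str) -> float:
    """CER = edit distance divided by the length of the ground-truth text."""
    return levenshtein(recognised, ground_truth) / max(len(ground_truth), 1)

# e.g. character_error_rate("Tbe quick brown fox", "The quick brown fox") ≈ 0.053
```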

    Automatische Qualitätsverbesserung von Fraktur-Volltexten aus der Retrodigitalisierung am Beispiel der Zeitschrift Die Grenzboten (Automatic quality improvement of retro-digitised Fraktur full texts, exemplified by the journal Die Grenzboten)

    The humanities are gradually being provided with more computer-based tools and research infrastructures from the digital humanities, for which the existence and continued creation of good-quality full text is an indispensable prerequisite. The demand for high-quality full text from retro-digitisation projects is therefore constantly rising. OCR full text computed from Fraktur (Gothic) typefaces is of considerably worse quality than that computed from Antiqua, so uncorrected and unstructured OCR full text of Fraktur is often of little value for scholarly work. Since full text needs to be produced on the order of several million pages, the process should be efficient with respect to effort and cost; we therefore present a largely automated approach to the post-correction of OCR full text. The approach developed at the Staats- und Universitätsbibliothek Bremen (SuUB) is a deliberately simple one: a list of historical, dialectal or domain-specific word forms, one of the prerequisites of the approach, is comparatively easy to compile. An efficient algorithm matches roughly 1.7 million word forms against almost 80 million words from the journal Die Grenzboten, and its parametrization, i.e. its adaptation to the specific properties of a given full-text project, is comprehensible and easy to follow. The achievable results depend strongly on the initial quality of the full text, on the size and quality of the list of historical word forms, and on the error model applied; certain types of errors, for example, can only be corrected by an approach that takes context into account. Furthermore, in cooperation with the company ProjectComputing, based in Canberra, Australia, the cloud service overProof was extended with the ability to post-correct German-language Fraktur text. The paper concludes with an outlook on future needs and possibilities.
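    To illustrate the word-list matching idea, the sketch below generates correction candidates from a handful of substitutions typical of Fraktur OCR output and accepts a candidate only if it occurs in the historical word-form list. The confusion table, candidate limit and `correct` helper are simplified placeholders, not the SuUB algorithm or its error model.

```python
from itertools import product

# A few substitutions often seen in OCR output of Fraktur and long-s prints
# (placeholder error model; the real one is project-specific).
CONFUSIONS = {
    "f": ["f", "s"],   # long s (ſ) read as f
    "n": ["n", "u"],
    "u": ["u", "n"],
    "c": ["c", "e"],
    "e": ["e", "c"],
}

def candidates(token: str, limit: int = 200):
    """Enumerate variants of `token` under the confusion table."""
    options = [CONFUSIONS.get(ch, [ch]) for ch in token]
    for i, combo in enumerate(product(*options)):
        if i >= limit:
            break
        yield "".join(combo)

def correct(token: str, wordforms: set[str]) -> str:
    """Return a correction if exactly one candidate is a known historical
    word form; otherwise keep the token unchanged (context would be needed)."""
    if token in wordforms:
        return token
    hits = {c for c in candidates(token) if c in wordforms}
    return hits.pop() if len(hits) == 1 else token

# wordforms = set(open("historical_wordforms.txt", encoding="utf-8").read().split())
# correct("Verfuch", wordforms)  # -> "Versuch" if that word form is listed
```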

    An OCR Post-correction Approach using Deep Learning for Processing Medical Reports

    According to a recent Deloitte study, the COVID-19 pandemic continues to place a huge strain on the global health care sector. COVID-19 has also catalysed digital transformation across the sector to improve operational efficiencies. As a result, the amount of digitally stored patient data, such as discharge letters, scan images, test results or free-text entries by doctors, has grown significantly. In 2020, 2,314 exabytes of medical data were generated globally. This medical data does not conform to a generic structure and is mostly in the form of unstructured, digitally generated or scanned paper documents stored as part of a patient’s medical reports. This unstructured data is digitised using an Optical Character Recognition (OCR) process. A key challenge here is that the accuracy of the OCR process varies due to the inability of current OCR engines to correctly transcribe scanned or handwritten documents in which text may be skewed, obscured or illegible. This is compounded by the fact that the processed text comprises specific medical terminology that does not necessarily form part of general-language lexicons. The proposed work uses a deep-neural-network-based self-supervised pre-training technique, Robustly Optimized Bidirectional Encoder Representations from Transformers (RoBERTa), that can learn to predict hidden (masked) sections of text to fill in the gaps of non-transcribable parts of the documents being processed. Evaluating the proposed method on domain-specific datasets, which include real medical documents, shows a significantly reduced word error rate, demonstrating the effectiveness of the approach.
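    The masked-prediction step can be sketched with the Hugging Face transformers fill-mask pipeline. Here `roberta-base` stands in for the domain-adapted model described in the paper, and the `[GAP]` marker and example sentence are invented for illustration.

```python
from transformers import pipeline  # pip install transformers

# Stand-in model; the paper's system would use a RoBERTa model adapted to
# medical text rather than the generic roberta-base checkpoint.
fill = pipeline("fill-mask", model="roberta-base")

def restore_gap(text_with_gap: str, top_k: int = 5):
    """Replace one non-transcribable span (marked [GAP]) with the model's
    most likely completions, returned as (token, score) pairs."""
    masked = text_with_gap.replace("[GAP]", fill.tokenizer.mask_token)
    return [(p["token_str"].strip(), round(p["score"], 3))
            for p in fill(masked, top_k=top_k)]

# Hypothetical OCR output with one illegible token:
# restore_gap("The patient was discharged with a prescription for [GAP] 500 mg.")
```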

    Measuring the Correctness of Double-Keying: Error Classification and Quality Control in a Large Corpus of TEI-Annotated Historical Text

    Among mass digitization methods, double-keying is considered to be the one with the lowest error rate. This method requires two independent transcriptions of a text by two different operators. It is particularly well suited to historical texts, which often exhibit deficiencies such as poor master copies or other difficulties such as spelling variation or complex text structures. Providers of data entry services using the double-keying method generally advertise very high accuracy rates (around 99.95% to 99.98%). These advertised percentages are generally estimated on the basis of small samples, and little if anything is said about the actual amount of text or the text genres that have been proofread, about error types, proofreaders, etc. In order to obtain significant data on this problem it is necessary to analyze a large amount of text representing a balanced sample of different text types, to distinguish the structural XML/TEI level from the typographical level, and to differentiate between various types of errors, which may originate from different sources and may not be equally severe. This paper presents an extensive and complex approach to the analysis and correction of double-keying errors which has been applied by the DFG-funded project "Deutsches Textarchiv" (German Text Archive, hereafter DTA) in order to evaluate and, ideally, to increase the transcription and annotation accuracy of double-keyed DTA texts. Statistical analyses of the results gained from proofreading a large quantity of text are presented, which verify the common accuracy rates for the double-keying method.
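    Comparing two independent keyings of the same page is essentially a diff problem. The sketch below uses Python's difflib to count character-level disagreements and is only a crude stand-in for the much finer-grained error classification described in the paper.

```python
import difflib
from collections import Counter

def keying_disagreements(key_a: str, key_b: str) -> Counter:
    """Count character-level differences between two independent transcriptions.
    'replace' ~ substitutions; 'delete'/'insert' ~ text present in only one keying."""
    ops = difflib.SequenceMatcher(None, key_a, key_b, autojunk=False).get_opcodes()
    counts = Counter()
    for tag, i1, i2, j1, j2 in ops:
        if tag != "equal":
            counts[tag] += max(i2 - i1, j2 - j1)
    return counts

def disagreement_rate(key_a: str, key_b: str) -> float:
    diffs = sum(keying_disagreements(key_a, key_b).values())
    return diffs / max(len(key_a), len(key_b), 1)

# Two keyings that differ in a single character:
# disagreement_rate("Vnd da sie daselbst waren", "Und da sie daselbst waren")  # 0.04
```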

    Adaptive Methods for Robust Document Image Understanding

    A vast amount of digital document material is continuously being produced as part of major digitization efforts around the world. In this context, generic and efficient automatic solutions for document image understanding represent a stringent necessity. We propose a generic framework for document image understanding systems, usable for practically any document type available in digital form. Following the introduced workflow, we shift our attention to each of the following processing stages in turn: quality assurance, image enhancement, color reduction and binarization, skew and orientation detection, page segmentation, and logical layout analysis. We review the state of the art in each area, identify current deficiencies, point out promising directions and give specific guidelines for future investigation. We address some of the identified issues by means of novel algorithmic solutions, putting special focus on generality, computational efficiency and the exploitation of all available sources of information. More specifically, we introduce the following original methods: fully automatic detection of color reference targets in digitized material, accurate foreground extraction from color historical documents, font enhancement for hot-metal typeset prints, a theoretically optimal solution to the document binarization problem from both the computational-complexity and threshold-selection points of view, layout-independent skew and orientation detection, a robust and versatile page segmentation method, a semi-automatic front-page detection algorithm, and a complete framework for article segmentation in periodical publications. The proposed methods are experimentally evaluated on large datasets consisting of real-life, heterogeneous document scans. The obtained results show that a document understanding system combining these modules is able to robustly process a wide variety of documents with good overall accuracy.
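    As a generic illustration of two of the stages listed above, binarization and skew detection, rather than of the thesis's own algorithms, the following OpenCV sketch applies Otsu thresholding and estimates skew from the minimum-area rectangle around the foreground pixels.

```python
import cv2
import numpy as np

def binarize_and_estimate_skew(image_path: str):
    """Otsu binarization plus a rough skew estimate; a simple stand-in for the
    dedicated binarization and skew-detection modules described above."""
    grey = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if grey is None:
        raise FileNotFoundError(image_path)

    # Global Otsu threshold; text becomes white (255) on black for analysis.
    _, binary = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Fit a minimum-area rectangle around all foreground pixels to estimate skew.
    coords = np.column_stack(np.where(binary > 0))          # (row, col) pairs
    angle = cv2.minAreaRect(coords[:, ::-1].astype(np.float32))[-1]
    if angle > 45:  # minAreaRect reports angles in (0, 90]; map to (-45, 45]
        angle -= 90
    return binary, angle

# binary, skew_deg = binarize_and_estimate_skew("page_0001.png")
```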

    How to digitize objects?
