Profiling of OCR'ed Historical Texts Revisited
In the absence of ground truth it is not possible to automatically determine
the exact spectrum and occurrences of OCR errors in an OCR'ed text. Yet, for
interactive postcorrection of OCR'ed historical printings it is extremely
useful to have a statistical profile available that provides an estimate of
error classes with associated frequencies, and that points to conjectured
errors and suspicious tokens. The method introduced in Reffle (2013) computes
such a profile, combining lexica, pattern sets and advanced matching techniques
in a specialized Expectation Maximization (EM) procedure. Here we improve this
method in three respects: First, the method in Reffle (2013) is not adaptive:
user feedback obtained by actual postcorrection steps cannot be used to compute
refined profiles. We introduce a variant of the method that is open for
adaptivity, taking correction steps of the user into account. This leads to
higher precision with respect to recognition of erroneous OCR tokens. Second,
during postcorrection often new historical patterns are found. We show that
adding new historical patterns to the linguistic background resources leads to
a second kind of improvement, enabling even higher precision by telling
historical spellings apart from OCR errors. Third, the method in Reffle (2013)
does not make any active use of tokens that cannot be interpreted in the
underlying channel model. We show that adding these uninterpretable tokens to
the set of conjectured errors leads to a significant improvement of the recall
for error detection, at the same time improving precision.
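The profiling idea can be illustrated with a deliberately simplified EM-style sketch. This is not Reffle's actual procedure; the candidate structure and pattern names are invented for illustration. Each OCR token is explained by candidate (correction, pattern) pairs, and pattern frequencies are re-estimated from the weighted explanations:

```python
# Hypothetical, simplified EM-style error-profile estimate (illustrative,
# not the method of Reffle 2013): pattern frequencies are re-estimated
# from each token's weighted candidate explanations.
from collections import defaultdict

def estimate_profile(tokens, candidates, iterations=10):
    """tokens: list of OCR tokens.
    candidates: dict token -> list of (correction, pattern) explanations.
    Returns estimated relative frequencies of the error patterns."""
    freq = defaultdict(lambda: 1.0)  # uniform start over patterns
    for _ in range(iterations):
        counts = defaultdict(float)
        for tok in tokens:
            expls = candidates.get(tok, [])
            if not expls:
                continue  # uninterpretable token: ignored in this sketch
            total = sum(freq[p] for _, p in expls)
            for _, p in expls:  # E-step: responsibility of each pattern
                counts[p] += freq[p] / total
        norm = sum(counts.values()) or 1.0
        freq = defaultdict(lambda: 1e-9,
                           {p: c / norm for p, c in counts.items()})  # M-step
    return dict(freq)

# Toy example: "tbe" is explained only by the (invented) pattern h->b.
tokens = ["tbe", "the", "tbe"]
cands = {"tbe": [("the", "h->b")], "the": [("the", "identity")]}
profile = estimate_profile(tokens, cands)
```

Tokens without any explanation are skipped here, which is exactly the behavior the third improvement above revises by treating uninterpretable tokens as conjectured errors.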
Semantics-Based Content Extraction in Typewritten Historical Documents
This paper presents a flexible approach to extracting content from scanned historical documents using semantic information. The final electronic document is the result of a "digital historical document lifecycle" process, where the expert knowledge of the historian/archivist user is incorporated at different stages. Results show that such a conversion strategy, aided by (expert) user-specified semantic information and able to process individual parts of the document in a specialised way, produces results superior, in a variety of significant ways, to those of document analysis and understanding techniques devised for contemporary documents.
Learning Surrogate Models of Document Image Quality Metrics for Automated Document Image Processing
Computation of document image quality metrics often depends upon the
availability of a ground truth image corresponding to the document. This limits
the applicability of quality metrics in applications such as hyperparameter
optimization of image processing algorithms that operate on-the-fly on unseen
documents. This work proposes the use of surrogate models to learn the behavior
of a given document quality metric on existing datasets where ground truth
images are available. The trained surrogate model can later be used to predict
the metric value on previously unseen document images without requiring access
to ground truth images. The surrogate model is empirically evaluated on the
Document Image Binarization Competition (DIBCO) and the Handwritten Document
Image Binarization Competition (H-DIBCO) datasets.
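The surrogate idea can be sketched with a linear model fit by least squares: the feature names below are made up for illustration (they are not the paper's features), but the workflow is the same, learn the metric where ground truth exists, then predict it on unseen pages.

```python
# Minimal sketch of a surrogate quality model (illustrative features,
# not those of the paper): fit a linear map from ground-truth-free image
# statistics to a quality metric, then predict the metric on new pages.
import numpy as np

def fit_surrogate(features, metric_values):
    """Least-squares linear surrogate. features: (n, d), metric: (n,)."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # bias term
    w, *_ = np.linalg.lstsq(X, metric_values, rcond=None)
    return w

def predict(w, features):
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return X @ w

rng = np.random.default_rng(0)
X = rng.random((50, 3))              # e.g. contrast, noise, skew scores
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.1   # synthetic "metric"
w = fit_surrogate(X, y)
pred = predict(w, X)                 # metric estimate, no ground truth
```

In practice the surrogate would be a more expressive regressor, but the train-on-labeled, predict-on-unseen split is the core of the approach.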
Text Line Segmentation of Historical Documents: a Survey
There is a huge amount of historical documents in libraries and in various
National Archives that have not been exploited electronically. Although
automatic reading of complete pages remains, in most cases, a long-term
objective, tasks such as word spotting, text/image alignment, authentication
and extraction of specific fields are in use today. For all these tasks, a
major step is document segmentation into text lines. Because of the low quality
and the complexity of these documents (background noise, artifacts due to
aging, interfering lines), automatic text line segmentation remains an open
research field. The objective of this paper is to present a survey of existing
methods, developed during the last decade and dedicated to documents of
historical interest.
Comment: 25 pages, submitted version. To appear in International Journal on
Document Analysis and Recognition; online version available at
http://www.springerlink.com/content/k2813176280456k3
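One classical baseline covered by such surveys is the horizontal projection profile: rows with ink above a threshold form text lines, and the valleys between them are line boundaries. A minimal sketch on a binary image (the threshold and toy image are illustrative):

```python
# Classical projection-profile line segmentation, as a minimal sketch:
# sum ink per row, then report maximal runs of rows above a threshold.
import numpy as np

def segment_lines(binary, min_ink=1):
    """binary: 2D array with 1 = ink. Returns (start_row, end_row) pairs."""
    profile = binary.sum(axis=1)           # ink count per row
    lines, start = [], None
    for r, ink in enumerate(profile):
        if ink >= min_ink and start is None:
            start = r                      # entering a text line
        elif ink < min_ink and start is not None:
            lines.append((start, r - 1))   # leaving it
            start = None
    if start is not None:
        lines.append((start, len(profile) - 1))
    return lines

img = np.zeros((10, 20), dtype=int)
img[1:3] = 1   # first "line" of ink
img[6:8] = 1   # second "line"
assert segment_lines(img) == [(1, 2), (6, 7)]
```

Projection profiles break down on skewed, curved, or touching lines, which is precisely why, for degraded historical documents, the problem remains open and survey-worthy.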
Rerunning OCR: A Machine Learning Approach to Quality Assessment and Enhancement Prediction
Iterating with new and improved OCR solutions requires deciding which
candidates are the right targets for reprocessing. This especially
applies when the underlying data collection is of considerable size and rather
diverse in terms of fonts, languages, periods of publication and consequently
OCR quality. This article captures the efforts of the National Library of
Luxembourg to support those targeting decisions. They are crucial in order to
guarantee low computational overhead and reduced quality degradation risks,
combined with a more quantifiable OCR improvement. In particular, this work
explains the methodology of the library with respect to text block level
quality assessment. Through extension of this technique, a regression model,
that is able to take into account the enhancement potential of a new OCR
engine, is also presented. They both mark promising approaches, especially for
cultural institutions dealing with historical data of lower quality.
Comment: Journal of Data Mining and Digital Humanities; major revision
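The targeting idea can be sketched with a crude lexicon-based quality score per text block: the scoring rule and the 0.8 threshold below are invented for illustration and are not the library's actual model, which additionally regresses the enhancement potential of the new engine.

```python
# Hypothetical sketch of reprocessing-candidate selection: score each
# text block by the share of lexicon tokens, re-OCR only low scorers.
# Scoring rule and threshold are illustrative, not the NLL's model.
def block_quality(tokens, lexicon):
    """Fraction of tokens recognized by the lexicon (0..1)."""
    if not tokens:
        return 1.0
    return sum(t.lower() in lexicon for t in tokens) / len(tokens)

def select_for_rerun(blocks, lexicon, threshold=0.8):
    """Return indices of text blocks worth re-OCRing."""
    return [i for i, b in enumerate(blocks)
            if block_quality(b, lexicon) < threshold]

lex = {"the", "old", "gazette", "reports"}
blocks = [["The", "old", "gazette"],      # clean block
          ["Tbe", "o1d", "gaz3tte"]]      # degraded block
assert select_for_rerun(blocks, lex) == [1]
```

Restricting reprocessing to low-scoring blocks is what keeps the computational overhead and the quality-degradation risk low.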
Assessment of OCR Quality and Font Identification in Historical Documents
Mass digitization of historical documents is a challenging problem for optical character recognition (OCR) tools. Issues include noisy backgrounds and faded text due to aging, border/marginal noise, bleed-through, skewing, warping, as well as irregular fonts and page layouts. As a result, OCR tools often produce a large number of spurious bounding boxes (BBs) in addition to those that correspond to words in the document. To improve the OCR output, in this thesis we develop machine-learning methods to assess the quality of historical documents and to tag documents with their page-level problems in the EEBO/ECCO collections (45 million pages available through the Early Modern OCR Project at Texas A&M University).
We present an iterative classification algorithm to automatically label BBs (i.e., as text or noise) based on their spatial distribution and geometry. The approach uses a rule-based classifier to generate initial text/noise labels for each BB, followed by an iterative classifier that refines the initial labels by incorporating local information about each BB: its spatial location, shape, and size. When evaluated on a dataset containing over 72,000 manually-labeled BBs from 159 historical documents, the algorithm can classify BBs with 0.95 precision and 0.96 recall. Further evaluation on a collection of 6,775 documents with ground-truth transcriptions shows that the algorithm can also be used to predict document quality (0.7 correlation) and improve OCR transcriptions in 85% of the cases.
This thesis also aims at generating font metadata for historical documents. Knowledge of the font can help an OCR system produce very accurate text transcriptions, but obtaining font information for 45 million documents is a daunting task. We present an active-learning-based font identification system that can classify document images into fonts. In active learning, a learner queries the human for labels on the examples it finds most informative. We capture the characteristics of the fonts using word-image features related to character width, angled strokes, and Zernike moments. To extract page-level features, we use a bag-of-features (BoF) model. A font classification model trained using BoF and active learning requires only 443 labeled instances to achieve 89.3% test accuracy.
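The two-stage labeling idea can be sketched as follows; the area-based rule, the neighbourhood radius, and the majority vote are all invented simplifications of the thesis's classifiers, chosen only to show the rule-then-refine structure:

```python
# Illustrative two-stage BB labeling (simplified stand-in for the
# thesis's classifiers): a size rule gives initial text/noise labels,
# then an iterative pass relabels each box by majority vote of the
# labels of nearby boxes.
import math

def initial_labels(boxes, min_area=50, max_area=5000):
    # box = (x, y, w, h); plausible word-sized boxes are labeled text
    return ["text" if min_area <= w * h <= max_area else "noise"
            for (x, y, w, h) in boxes]

def refine(boxes, labels, radius=100.0, rounds=3):
    centers = [(x + w / 2, y + h / 2) for (x, y, w, h) in boxes]
    for _ in range(rounds):
        new = []
        for i, c in enumerate(centers):
            near = [labels[j] for j in range(len(boxes))
                    if j != i and math.dist(c, centers[j]) <= radius]
            if not near:
                new.append(labels[i])      # isolated box: keep rule label
            else:
                votes = sum(lab == "text" for lab in near)
                new.append("text" if votes > len(near) / 2 else "noise")
        labels = new
    return labels

# A small punctuation-sized box among word boxes is rescued as text;
# a large isolated smudge stays noise.
boxes = [(0, 0, 40, 10), (50, 0, 40, 10), (100, 0, 40, 10),
         (60, 20, 4, 4), (400, 400, 200, 200)]
labels = refine(boxes, initial_labels(boxes))
```

The refinement step is what fixes rule mislabels such as the tiny fourth box, mirroring how local context corrects the initial classifier in the thesis.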
OCR Quality Affects Perceived Usefulness of Historical Newspaper Clippings. A User Study
Effects of Optical Character Recognition (OCR) quality on historical information retrieval have so far been studied in data-oriented scenarios regarding the effectiveness of retrieval results. Such studies have either focused on the effects of artificially degraded OCR quality (see, e.g., [1-2]) or utilized test collections containing texts based on authentic low-quality OCR data (see, e.g., [3]). In this paper the effects of OCR quality are studied in a user-oriented information retrieval setting. Thirty-two users subjectively evaluated query results for six topics each (out of 30 topics), based on pre-formulated queries, in a simulated work task setting. To the best of our knowledge, our simulated work task experiment is the first to show empirically that users' subjective relevance assessments of retrieved documents are affected by a change in the quality of optically read text. Users of historical newspaper collections have so far commented on the effects of OCR'ed data quality mainly in impressionistic ways, and controlled user environments for studying the effects of OCR quality on users' relevance assessments of retrieval results have so far been missing. To remedy this, the National Library of Finland (NLF) set up an experimental query environment for the contents of one Finnish historical newspaper, Uusi Suometar 1869-1918, in order to compare users' evaluations of search results of two different OCR qualities for digitized newspaper articles. The query interface could present the same underlying document to the user in two alternative forms, based on either the lower or the higher OCR quality, and the choice was randomized. The users did not know about the quality differences in the article texts they evaluated.
The main result of the study is that improved optical character recognition quality significantly affects the perceived usefulness of historical newspaper articles: the mean evaluation score for the improved OCR results was 7.94% higher than that for the old OCR results.