Encoding models for scholarly literature
We examine the issue of digital formats for document encoding, archiving and
publishing, through the specific example of "born-digital" scholarly journal
articles. We will begin by looking at the traditional workflow of journal
editing and publication, and how these practices have made the transition into
the online domain. We will examine the range of different file formats in which
electronic articles are currently stored and published. We will argue strongly
that, despite the prevalence of binary and proprietary formats such as PDF and
MS Word, XML is a far superior encoding choice for journal articles. Next, we
look at the range of XML document structures (DTDs, Schemas) which are in
common use for encoding journal articles, and consider some of their strengths
and weaknesses. We will suggest that, despite the existence of specialized
schemas intended specifically for journal articles (such as NLM), and more
broadly used publication-oriented schemas such as DocBook, there are strong
arguments in favour of developing a subset or customization of the Text
Encoding Initiative (TEI) schema for the purpose of journal-article encoding;
TEI is already in use in a number of journal publication projects, and the
scale and precision of the TEI tagset makes it particularly appropriate for
encoding scholarly articles. We will outline the document structure of a
TEI-encoded journal article, and look in detail at suggested markup patterns
for specific features of journal articles.
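To make the proposal concrete, the following is a minimal sketch of a TEI-encoded journal-article skeleton built with Python's standard library. It is not the paper's actual customization, and the header is not complete enough to be schema-valid (a full fileDesc also requires publicationStmt and sourceDesc); the elements used (teiHeader, fileDesc, titleStmt, text, body, div, head, p) are standard TEI.

```python
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"
ET.register_namespace("", TEI_NS)

def tei(tag):
    return f"{{{TEI_NS}}}{tag}"

# Minimal TEI skeleton for a journal article: metadata in teiHeader,
# article content in text/body, sections as nested <div> elements.
root = ET.Element(tei("TEI"))
header = ET.SubElement(root, tei("teiHeader"))
file_desc = ET.SubElement(header, tei("fileDesc"))
title_stmt = ET.SubElement(file_desc, tei("titleStmt"))
ET.SubElement(title_stmt, tei("title")).text = "Encoding models for scholarly literature"

text = ET.SubElement(root, tei("text"))
body = ET.SubElement(text, tei("body"))
section = ET.SubElement(body, tei("div"), {"type": "section"})
ET.SubElement(section, tei("head")).text = "Introduction"
ET.SubElement(section, tei("p")).text = "Article prose goes here."

ET.dump(root)  # print the serialized XML
```

In a real encoding, section divs would nest to match the article's structure, and the header would carry the full bibliographic and publication metadata the paper discusses.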
A Comparative Study of Two State-of-the-Art Feature Selection Algorithms for Texture-Based Pixel-Labeling Task of Ancient Documents
Recently, texture features have been widely used for historical document image analysis. However, few studies have focused exclusively on feature selection algorithms for this task, even though feature selection has become important in data mining and machine learning: it reduces data dimensionality and improves the performance of downstream algorithms such as pixel classifiers. In this paper we therefore present a comparative study of two conventional feature selection algorithms, the genetic algorithm and the ReliefF algorithm, using a classical pixel-labeling scheme based on analyzing and selecting texture features. The two assessed feature selection algorithms were applied to a training set of the HBR dataset in order to determine the most frequently selected texture features of each analyzed texture-based feature set. The evaluated feature sets consist of numerous state-of-the-art texture features (Tamura, local binary patterns, gray-level run-length matrix, auto-correlation function, gray-level co-occurrence matrix, Gabor filters, three-level Haar wavelet transform, three-level wavelet transform using the 3-tap Daubechies filter, and three-level wavelet transform using the 4-tap Daubechies filter). In our experiments, a public corpus of historical document images provided in the context of the historical book recognition contest (HBR2013 dataset: PRImA, Salford, UK) was used. Qualitative and numerical results provide a set of comprehensive guidelines on the strengths and weaknesses of each assessed feature selection algorithm according to the texture feature set used.
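The abstract does not detail either algorithm's internals. As a rough illustration of the ReliefF side of the comparison, here is a minimal NumPy sketch of ReliefF-style feature weighting over a feature matrix X (rows = pixels, columns = texture features) with labels y; the sampling size, neighbour count and L1 distance are assumptions, not the paper's configuration.

```python
import numpy as np

def relieff_scores(X, y, n_samples=100, k=5, seed=0):
    """Minimal ReliefF-style feature weighting (illustrative sketch).

    Features that separate an instance from its nearest neighbours of
    the *other* class (misses) while agreeing with neighbours of the
    *same* class (hits) receive higher weights.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    Xs = (X - X.min(axis=0)) / span            # scale features to [0, 1]
    w = np.zeros(d)
    for i in rng.choice(n, size=min(n_samples, n), replace=False):
        dist = np.abs(Xs - Xs[i]).sum(axis=1)  # L1 distance to every instance
        dist[i] = np.inf                       # exclude the instance itself
        same = y == y[i]
        hits = np.argsort(np.where(same, dist, np.inf))[:k]
        misses = np.argsort(np.where(~same, dist, np.inf))[:k]
        w += np.abs(Xs[misses] - Xs[i]).mean(axis=0)
        w -= np.abs(Xs[hits] - Xs[i]).mean(axis=0)
    return w  # higher weight = more discriminative texture feature
```

The genetic algorithm, by contrast, evolves binary feature-subset masks scored by classification performance; both methods ultimately produce a ranking or subset of the texture features.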
Indiscapes: Instance Segmentation Networks for Layout Parsing of Historical Indic Manuscripts
Historical palm-leaf manuscript and early paper documents from Indian
subcontinent form an important part of the world's literary and cultural
heritage. Despite their importance, large-scale annotated Indic manuscript
image datasets do not exist. To address this deficiency, we introduce
Indiscapes, the first ever dataset with multi-regional layout annotations for
historical Indic manuscripts. To address the challenge of large diversity in
scripts and presence of dense, irregular layout elements (e.g. text lines,
pictures, multiple documents per image), we adapt a Fully Convolutional Deep
Neural Network architecture for fully automatic, instance-level spatial layout
parsing of manuscript images. We demonstrate the effectiveness of proposed
architecture on images from the Indiscapes dataset. For annotation flexibility
and keeping the non-technical nature of domain experts in mind, we also
contribute a custom, web-based GUI annotation tool and a dashboard-style
analytics portal. Overall, our contributions set the stage for enabling
downstream applications such as OCR and word-spotting in historical Indic
manuscripts at scale.

Presented orally at the International Conference on Document Analysis and
Recognition (ICDAR) 2019. For the dataset, pre-trained networks and
additional details, visit the project page at http://ihdia.iiit.ac.in
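The paper adapts its own fully convolutional architecture; purely as a generic illustration of instance-level layout parsing, the sketch below runs torchvision's off-the-shelf Mask R-CNN, whose per-instance boxes, labels and masks mirror the kind of output a layout parser produces. The pretrained weights here are for natural images, not manuscripts, and the file name is a placeholder.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.io import read_image

# Off-the-shelf instance segmentation model (pretrained on COCO, not on
# manuscripts) standing in for the paper's adapted architecture.
model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

img = read_image("manuscript_page.jpg").float() / 255.0  # (C, H, W) in [0, 1]
with torch.no_grad():
    (pred,) = model([img])  # one prediction dict per input image

keep = pred["scores"] > 0.5
for box, label, mask in zip(pred["boxes"][keep], pred["labels"][keep],
                            pred["masks"][keep]):
    # Each detected instance has a box, a class id and a soft pixel mask;
    # for layout parsing the classes would be text line, picture, etc.
    print(label.item(), box.tolist(), mask.shape)
```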
Historical Document Analysis
Scanned documents are a rich source of information that can be processed by
a document analysis system. Such a system covers the areas of Machine
Learning, Computer Vision and Natural Language Processing. In this thesis, these
areas are covered with a focus on common and state-of-the-art approaches applicable
to historical document analysis, which remains challenging due to several difficulties
such as handwritten text. Finally, the current research results and the aims of the
future doctoral thesis are presented.
Digital Image Access & Retrieval
The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.
Translation Alignment Applied to Historical Languages: methods, evaluation, applications, and visualization
Translation alignment is an essential task in Digital Humanities and Natural
Language Processing: it aims to link words and phrases in the source text
with their equivalents in the translation. In addition to its importance in
teaching and learning historical languages, translation alignment builds
bridges between ancient and modern languages through which various linguistic
annotations can be transferred. This thesis focuses on word-level translation
alignment applied to historical languages in general and Ancient Greek and
Latin in particular. As the title indicates, the thesis addresses four
interdisciplinary aspects of translation alignment.
The starting point was developing Ugarit, an interactive annotation tool for
performing manual alignment, with the aim of gathering training data for an
automatic alignment model. This effort resulted in more than 190k accurate
translation pairs that I later used for supervised training. Ugarit has been
used by many researchers and scholars, as well as in classrooms at several
institutions for teaching and learning ancient languages, resulting in a
large, diverse, crowd-sourced aligned parallel corpus that allowed us to
conduct experiments and qualitative analyses to detect recurring patterns in
annotators' alignment practice and the generated translation pairs.
Further, I employed recent advances in NLP and language modeling to develop
an automatic alignment model for historical low-resource languages,
experimenting with various training objectives and proposing a training
strategy for historical languages that combines supervised and unsupervised
training with mono- and multilingual texts. I then integrated this alignment
model into other development workflows to project cross-lingual annotations
and induce bilingual dictionaries from parallel corpora.
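The summary does not spell out the model itself; as a sketch of the general embedding-based approach to word-level alignment, the following links tokens whose contextual embeddings are mutual nearest neighbours under cosine similarity. The embeddings would come from a multilingual language model; here they are simply NumPy arrays.

```python
import numpy as np

def mutual_argmax_alignment(src_emb, tgt_emb):
    """Align tokens whose embeddings are mutual nearest neighbours.

    src_emb: (m, d) array of source-token embeddings
    tgt_emb: (n, d) array of target-token embeddings
    Returns a list of (src_index, tgt_index) pairs.
    """
    s = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    t = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = s @ t.T                # cosine similarity matrix
    fwd = sim.argmax(axis=1)     # best target for each source token
    bwd = sim.argmax(axis=0)     # best source for each target token
    # keep only links where both directions agree (mutual argmax)
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
```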
Evaluation is essential to assess the quality of any model. To ensure best
practice, I reviewed the current evaluation procedure, identified its
limitations, and proposed two new evaluation metrics. Moreover, I introduced
a visual analytics framework to explore and inspect alignment gold-standard
datasets and to support quantitative and qualitative evaluation of
translation alignment models. In addition, I designed and implemented visual
analytics tools and reading environments for parallel texts and proposed
various visualization approaches to support different alignment-related tasks,
employing the latest advances in information visualization and best practice.
Overall, this thesis presents a comprehensive study that includes manual and
automatic alignment techniques, evaluation methods and visual analytics
tools that aim to advance the field of translation alignment for historical
languages.
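The two proposed metrics are not described in this summary; the standard baseline they respond to is the Alignment Error Rate (AER), which scores predicted links A against gold sure links S and possible links P as 1 - (|A∩S| + |A∩P|) / (|A| + |S|). A minimal sketch:

```python
def alignment_error_rate(predicted, sure, possible):
    """Standard AER: lower is better; 0.0 means perfect agreement.

    predicted, sure, possible: sets of (src_index, tgt_index) pairs,
    with sure being a subset of possible in the gold standard.
    """
    a = set(predicted)
    a_s = len(a & set(sure))       # predictions matching sure links
    a_p = len(a & set(possible))   # predictions matching possible links
    return 1.0 - (a_s + a_p) / (len(a) + len(sure))

# Example: two sure links, one extra possible link, three predictions.
print(alignment_error_rate({(0, 0), (1, 2), (2, 1)},
                           sure={(0, 0), (1, 2)},
                           possible={(0, 0), (1, 2), (2, 1)}))  # -> 0.0
```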
You Actually Look Twice At it (YALTAi): using an object detection approach instead of region segmentation within the Kraken engine
Layout Analysis (the identification of zones and their classification) is,
along with line segmentation, the first step in Optical Character Recognition
and similar tasks. The ability to distinguish the main body of text from
marginal text or running titles makes the difference between extracting the
full text of a digitized book and producing noisy output. We show that most
segmenters focus on pixel classification and that polygonization of this
output has not been used as a target in the recent competitions on historical
documents (ICDAR 2017 and onwards), despite being the focus in the early
2010s. We propose to shift the task, for efficiency, from
pixel-classification-based polygonization to object detection using isothetic
rectangles. We compare the output of Kraken and YOLOv5 in terms of
segmentation and show that the latter severely outperforms the former on
small datasets (1110 samples and below). We release two datasets for training
and evaluation on historical documents, as well as a new package, YALTAi,
which injects YOLOv5 into the segmentation pipeline of Kraken 4.1.
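The released YALTAi package handles the Kraken integration itself; purely as an illustration of the object-detection half, here is a minimal torch.hub sketch in which the checkpoint file name, page image and class mapping are hypothetical placeholders, not actual YALTAi artifacts.

```python
import torch

# Load a YOLOv5 model via torch.hub; "layout_weights.pt" stands in for a
# checkpoint fine-tuned on page-layout classes (main text, margin, title...).
model = torch.hub.load("ultralytics/yolov5", "custom", path="layout_weights.pt")

results = model("page_scan.jpg")          # run detection on a page image
for *xyxy, conf, cls in results.xyxy[0].tolist():
    # Each detection is an axis-aligned (isothetic) rectangle with a
    # confidence score and a layout-zone class id.
    x1, y1, x2, y2 = map(int, xyxy)
    print(f"zone class={int(cls)} conf={conf:.2f} box=({x1},{y1},{x2},{y2})")
```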