Boosting Handwriting Text Recognition in Small Databases with Transfer Learning
In this paper we deal with the offline handwriting text recognition (HTR) problem with reduced training datasets. Recent HTR solutions based on artificial neural networks achieve remarkable results on reference databases. These deep neural networks are composed of convolutional layers (CNN) and long short-term memory recurrent units (LSTM). In addition, connectionist temporal classification (CTC) is the key to avoiding segmentation at the character level, greatly facilitating the labeling task. One of the main drawbacks of CNN-LSTM-CTC (CLC) solutions is that they need a considerable amount of transcribed text for every type of calligraphy, typically on the order of a few thousand lines. Furthermore, in some scenarios the text to transcribe is not that long, e.g. in the Washington database. The CLC typically overfits with this reduced number of training samples. Our proposal is based on transfer learning (TL) of the parameters learned on a bigger database. We first investigate, for a reduced and fixed number of training samples (350 lines), how the learning from a large database, IAM, can be transferred to the learning of the CLC on a reduced database, Washington. We focus on which layers of the network need not be re-trained. We conclude that the best solution is to re-train all the CLC parameters, initialized to the values obtained after training the CLC on the larger database. We also investigate results when the training size is further reduced. The differences in CER are most remarkable when training with just 350 lines: a CER of 3.3% is achieved with TL, whereas training from scratch yields a CER of 18.2%. As a byproduct, the learning times are considerably reduced. Similarly good results are obtained on the Parzival database when trained with this reduced number of lines and this new approach. Comment: ICFHR 2018 Conference
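The CTC decoding step mentioned in the abstract can be illustrated with a minimal best-path (greedy) decoder: collapse consecutive repeated labels, then drop blanks. The frame labels and the "-" blank symbol below are illustrative assumptions, not taken from the paper:

```python
BLANK = "-"  # assumed stand-in for the CTC blank class

def ctc_greedy_decode(frame_labels):
    """Best-path CTC decoding: collapse repeats, then remove blanks."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev:          # collapse consecutive repeats
            if lab != BLANK:     # drop blank frames
                out.append(lab)
        prev = lab
    return "".join(out)

# A repeated character survives collapsing only if a blank separates it:
print(ctc_greedy_decode(list("-hhee-l-ll-oo-")))  # hello
```

This is why CTC avoids character-level segmentation: the network emits one label per frame, and the decoder recovers the transcript without knowing character boundaries.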
Machine Learning for handwriting text recognition in historical documents
Olmos
ABSTRACT
In this thesis, we focus on the handwriting text recognition task over historical
documents that are difficult to read for anyone who is not an expert in ancient
languages and writing styles.
We aim to take advantage of, and improve upon, the neural network architectures
and techniques that other authors have proposed for handwriting text recognition
in modern handwritten documents. These models perform this task very precisely
when a large amount of data is available. However, the low availability of labeled
data is a widespread problem in historical documents. The type of writing is
singular, and it is quite expensive to hire an expert to transcribe a large number
of pages.
After investigating and analyzing the state-of-the-art, we propose the efficient
application of methods such as transfer learning and data augmentation. We also
contribute an algorithm for purging mislabeled samples that affect the learning of
models. Finally, we develop a variational autoencoder method for generating
synthetic samples of handwritten text images for data augmentation.
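The idea of purging mislabeled samples can be sketched as a generic loss-based filter: score each training sample with the current model's loss and drop the highest-loss fraction as suspected label noise. This is a minimal illustration with made-up losses, not the thesis's actual algorithm:

```python
# Hedged sketch of a loss-based purge (NOT the thesis's exact method):
# samples whose loss is anomalously high are treated as likely mislabeled.

def purge_mislabeled(samples, loss_fn, drop_fraction=0.1):
    """Return the samples kept after dropping the worst `drop_fraction`."""
    scored = sorted(samples, key=loss_fn)                 # low loss first
    keep = max(1, int(len(scored) * (1.0 - drop_fraction)))
    return scored[:keep]

# Toy example with precomputed losses; sample 'd' is the outlier.
losses = {"a": 0.1, "b": 0.2, "c": 0.15, "d": 5.0}
kept = purge_mislabeled(list(losses), losses.get, drop_fraction=0.25)
print(sorted(kept))  # ['a', 'b', 'c']
```

In practice the per-sample loss would come from a model trained on the noisy data; the drop fraction is a tunable assumption.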
Experiments are performed on various historical handwritten text databases to
validate the performance of the proposed algorithms. The included analyses focus
on the evolution of the character and word error rates (CER and WER) as we
increase the training dataset size.
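The character error rate tracked in these analyses is the edit (Levenshtein) distance between reference and hypothesis transcripts, normalized by the reference length. A minimal sketch:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance via a two-row dynamic program."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution
        prev = cur
    return prev[-1]

def cer(ref, hyp):
    """Character error rate: edit operations per reference character."""
    return edit_distance(ref, hyp) / len(ref)

print(round(cer("handwriting", "handwritten"), 3))  # 0.273
```

WER is the same computation applied to word tokens instead of characters.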
One of the most important results is our participation in a contest for the
transcription of historical handwritten text. The organizers provided us with a
dataset of documents to train the model; then just a few labeled pages of 5 new
documents were handed over to further adjust the solution. Finally, the
transcription of unlabeled images was requested to evaluate the algorithm. Our
method ranked second in this contest.
Unsupervised Adaptation for Synthetic-to-Real Handwritten Word Recognition
Handwritten Text Recognition (HTR) is still a challenging problem because it
must deal with two important difficulties: the variability among writing
styles, and the scarcity of labelled data. To alleviate such problems,
synthetic data generation and data augmentation are typically used to train HTR
systems. However, training with such data produces encouraging but still
inaccurate transcriptions on real words. In this paper, we propose an
unsupervised writer adaptation approach that is able to automatically adjust a
generic handwritten word recognizer, fully trained with synthetic fonts,
towards a new incoming writer. We have experimentally validated our proposal
using five different datasets, covering several challenges: (i) the document
source: modern and historic samples, which may involve paper degradation
problems; (ii) different handwriting styles: single and multiple writer
collections; and (iii) language, which involves different character
combinations. Across these challenging collections, we show that our system is
able to maintain its performance, thus, it provides a practical and generic
approach to deal with new document collections without requiring any expensive
and tedious manual annotation step. Comment: Accepted to WACV 202
Cutting the Error by Half: Investigation of Very Deep CNN and Advanced Training Strategies for Document Image Classification
We present an exhaustive investigation of recent Deep Learning architectures,
algorithms, and strategies for the task of document image classification to
finally reduce the error by more than half. Existing approaches, such as the
DeepDocClassifier, apply standard Convolutional Network architectures with
transfer learning from the object recognition domain. The contribution of the
paper is threefold: First, it investigates recently introduced very deep neural
network architectures (GoogLeNet, VGG, ResNet) using transfer learning (from
real images). Second, it proposes transfer learning from a huge set of document
images, i.e. 400,000 documents. Third, it analyzes the impact of the amount of
training data (document images) and other parameters on classification
performance. We use two datasets, the Tobacco-3482 and the large-scale RVL-CDIP
dataset. We achieve an accuracy of 91.13% for the Tobacco-3482 dataset while
earlier approaches reach only 77.6%. Thus, a relative error reduction of more
than 60% is achieved. For the large dataset RVL-CDIP, an accuracy of 90.97% is
achieved, corresponding to a relative error reduction of 11.5%.
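The reported relative error reductions follow directly from the accuracies: the error rate is one minus the accuracy, and the reduction is measured relative to the old error. A sketch of the arithmetic using the Tobacco-3482 figures above:

```python
def relative_error_reduction(acc_old, acc_new):
    """Relative reduction of the error rate when accuracy improves."""
    err_old, err_new = 1.0 - acc_old, 1.0 - acc_new
    return (err_old - err_new) / err_old

# Tobacco-3482: 77.6% -> 91.13% accuracy, i.e. error 22.4% -> 8.87%.
print(round(100 * relative_error_reduction(0.776, 0.9113), 1))  # ~60.4
```

This reproduces the "more than 60%" relative reduction claimed for Tobacco-3482.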
Learning to Read L'Infinito: Handwritten Text Recognition with Synthetic Training Data
Deep learning-based approaches to Handwritten Text Recognition (HTR) have shown remarkable results on publicly available large datasets, both modern and historical. However, it is often the case that historical manuscripts are preserved in small collections, most of the time with unique characteristics in terms of paper support, author handwriting style, and language. State-of-the-art HTR approaches struggle to obtain good performance on such small manuscript collections, for which few training samples are available. In this paper, we focus on HTR on small historical datasets and propose a new historical dataset, which we call Leopardi, with the typical characteristics of small manuscript collections, consisting of letters by the poet Giacomo Leopardi, and devise strategies to deal with the training data scarcity scenario. In particular, we explore the use of carefully designed but cost-effective synthetic data for pre-training HTR models to be applied to small single-author manuscripts. Extensive experiments validate the suitability of the proposed approach, and both the Leopardi dataset and synthetic data will be made available to foster further research in this direction.
Weighting Knowledge Sources to Facilitate User Input
Generally, the present disclosure is directed to weighting one or more knowledge sources used to determine user input. In particular, in some implementations, the systems and methods of the present disclosure can include or otherwise leverage one or more machine-learned models to predict a weight for a knowledge source based on user input data and/or output of one or more knowledge sources
Comparison of CNN-Learned vs. Handcrafted Features for Detection of Parkinson's Disease Dysgraphia in a Multilingual Dataset
Parkinson's disease dysgraphia (PDYS), one of the earliest signs of Parkinson's disease (PD), has been researched as a promising biomarker of PD and as the target of a noninvasive and inexpensive approach to monitoring the progress of the disease. However, although several approaches to supportive PDYS diagnosis have been proposed (mainly based on handcrafted features (HF) extracted from online handwriting or the utilization of deep neural networks), it remains unclear which approach provides the highest discrimination power and how these approaches can be transferred between different datasets and languages. This study aims to compare classification performance based on two types of features: features automatically extracted by a pretrained convolutional neural network (CNN) and HF designed by human experts. Both approaches are evaluated on a multilingual dataset collected from 143 PD patients and 151 healthy controls in the Czech Republic, United States, Colombia, and Hungary. The subjects performed the spiral drawing task (SDT; a language-independent task) and the sentence writing task (SWT; a language-dependent task). Models based on logistic regression and gradient boosting were trained in several scenarios, specifically single language (SL), leave one language out (LOLO), and all languages combined (ALC). We found that the HF slightly outperformed the CNN-extracted features in all considered evaluation scenarios for the SWT. In detail, the following balanced accuracy (BACC) scores were achieved: SL—0.65 (HF), 0.58 (CNN); LOLO—0.65 (HF), 0.57 (CNN); and ALC—0.69 (HF), 0.66 (CNN). However, in the case of the SDT, features extracted by a CNN provided competitive results: SL—0.66 (HF), 0.62 (CNN); LOLO—0.56 (HF), 0.54 (CNN); and ALC—0.60 (HF), 0.60 (CNN). In summary, regarding the SWT, the HF outperformed the CNN-extracted features by over 6% (mean BACC of 0.66 for HF, and 0.60 for CNN).
In the case of the SDT, both feature sets provided almost identical classification performance (mean BACC of 0.60 for HF, and 0.58 for CNN). Copyright © 2022 Galaz, Drotar, Mekyska, Gazda, Mucha, Zvoncak, Smekal, Faundez-Zanuy, Castrillon, Orozco-Arroyave, Rapcsak, Kincses, Brabenec and Rektorova
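The balanced accuracy (BACC) used throughout is the mean of sensitivity (recall on patients) and specificity (recall on controls), which corrects for the slight class imbalance. A minimal sketch with assumed, not reported, confusion counts over the 143 patients and 151 controls:

```python
def balanced_accuracy(tp, fn, tn, fp):
    """Mean of sensitivity and specificity (the BACC above)."""
    sensitivity = tp / (tp + fn)   # fraction of PD patients detected
    specificity = tn / (tn + fp)   # fraction of healthy controls detected
    return (sensitivity + specificity) / 2

# Illustrative confusion counts (assumed for this sketch, not from the study):
print(balanced_accuracy(tp=90, fn=53, tn=110, fp=41))
```

Unlike plain accuracy, BACC stays at 0.5 for a classifier that always predicts the majority class.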
Adversarial Data Programming: Using GANs to Relax the Bottleneck of Curated Labeled Data
Paucity of large curated hand-labeled training data for every
domain-of-interest forms a major bottleneck in the deployment of machine
learning models in computer vision and other fields. Recent work (Data
Programming) has shown how distant supervision signals in the form of labeling
functions can be used to obtain labels for given data in near-constant time. In
this work, we present Adversarial Data Programming (ADP), an adversarial
methodology to generate data as well as curated, aggregated labels given a set
of weak labeling functions. We validated our method on the
MNIST, Fashion MNIST, CIFAR 10 and SVHN datasets, and it outperformed many
state-of-the-art models. We conducted extensive experiments to study its
usefulness, as well as showed how the proposed ADP framework can be used for
transfer learning as well as multi-task learning, where data from two domains
are generated simultaneously using the framework along with the label
information. Our future work will involve understanding the theoretical
implications of this new framework from a game-theoretic perspective, as well
as exploring the performance of the method on more complex datasets. Comment: CVPR 2018 main conference paper
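The notion of labeling functions can be sketched with the simplest possible aggregation, a majority vote over their outputs. Data Programming and ADP learn richer generative/adversarial aggregations, so this is only a baseline illustration with made-up labelers:

```python
from collections import Counter

def majority_vote(labeling_functions, x, abstain=None):
    """Apply each labeling function to x and return the most common
    non-abstaining label (None if every function abstains)."""
    votes = [lf(x) for lf in labeling_functions]
    votes = [v for v in votes if v is not abstain]
    if not votes:
        return None
    return Counter(votes).most_common(1)[0][0]

# Toy weak labelers over digit strings (purely illustrative):
lfs = [
    lambda x: 1 if "7" in x else None,       # abstains unless it fires
    lambda x: 1 if x.startswith("7") else 0,
    lambda x: 0,                             # always votes 0
]
print(majority_vote(lfs, "742"))  # 1
```

Weighted or learned aggregation replaces the plain vote with per-function accuracies estimated without ground truth, which is the bottleneck the data-programming line of work addresses.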