Text-line extraction from handwritten document images using GAN
Text-line extraction (TLE) from unconstrained handwritten document images is still considered an open research problem. A literature survey reveals that the use of various rule-based methods is commonplace in this regard. But these methods mostly fail when the document images have touching and/or multi-skewed text lines, overlapping words/characters, or non-uniform inter-line spacing. To address this problem, in this paper, we use a deep learning-based method. In doing so, we have, for the first time in the literature, applied Generative Adversarial Networks (GANs), treating TLE as an image-to-image translation task. We use the U-Net architecture for the generator and the PatchGAN architecture for the discriminator, with different combinations of loss functions, namely GAN loss, L1 loss, and L2 loss. Evaluation is done on two datasets: the handwritten Chinese text dataset HIT-MW and the ICDAR 2013 Handwritten Segmentation Contest dataset. After exhaustive experiments, it has been observed that the U-Net architecture with the combination of the said three losses not only produces impressive results but also outperforms some state-of-the-art methods. Partially supported by the CMATER research laboratory of the Computer Science and Engineering Department, Jadavpur University, India; the co-author Ram Sarkar is partially funded by DST grant (EMR/2016/007213).
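The loss combination the abstract describes can be sketched as a weighted sum of an adversarial term and two pixel-wise reconstruction terms. This is a minimal illustrative sketch, not the paper's implementation; the lambda weights are hypothetical placeholders:

```python
def l1_loss(pred, target):
    # Mean absolute pixel error between generated and ground-truth images.
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def l2_loss(pred, target):
    # Mean squared pixel error.
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def combined_generator_loss(gan_loss, pred, target, lam_l1=100.0, lam_l2=100.0):
    # Weighted sum of the adversarial term and the two reconstruction terms;
    # the weights here are illustrative, not the values used in the paper.
    return gan_loss + lam_l1 * l1_loss(pred, target) + lam_l2 * l2_loss(pred, target)
```

In practice the adversarial term would come from a PatchGAN discriminator scoring image patches, while the L1 and L2 terms directly penalize deviation of the generated line-segmentation map from the ground truth.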
Advanced document data extraction techniques to improve supply chain performance
In this thesis, a novel machine learning technique to extract text-based information from scanned images has been developed. This information extraction is performed in the context of scanned invoices and bills used in financial transactions. These financial transactions contain a considerable amount of data that must be extracted, refined, and stored digitally before it can be used for analysis. Converting this data into a digital format is often a time-consuming process. Automation and data optimisation show promise as methods for reducing the time required and the cost of Supply Chain Management (SCM) processes, especially Supplier Invoice Management (SIM), Financial Supply Chain Management (FSCM) and Supply Chain procurement processes. This thesis uses a cross-disciplinary approach involving Computer Science and Operational Management to explore the benefit of automated invoice data extraction in business and its impact on SCM. The study adopts a multimethod approach based on empirical research, surveys, and interviews performed on selected companies. The expert system developed in this thesis focuses on two distinct areas of research: Text/Object Detection and Text Extraction. For Text/Object Detection, the Faster R-CNN model was analysed. While this model yields outstanding results in terms of object detection, it is limited by poor performance when image quality is low. The Generative Adversarial Network (GAN) model is proposed in response to this limitation. The GAN model comprises a generator implemented with the help of the Faster R-CNN model and a discriminator that relies on PatchGAN. The output of the GAN model is text data with bounding boxes.
For text extraction from the bounding box, a novel data extraction framework was designed, consisting of various processes including XML processing in the case of an existing OCR engine, bounding box pre-processing, text clean-up, OCR error correction, spell check, type check, pattern-based matching, and finally, a learning mechanism for automating future data extraction. Whichever fields the system can extract successfully are provided in key-value format. The efficiency of the proposed system was validated using existing datasets such as SROIE and VATI. Real-time data was validated using invoices that were collected by two companies that provide invoice automation services in various countries. Currently, these scanned invoices are sent to an OCR system such as OmniPage, Tesseract, or ABBYY FRE to extract text blocks, and later a rule-based engine is used to extract relevant data. While the system's methodology is robust, the companies surveyed were not satisfied with its accuracy. Thus, they sought out new, optimized solutions. To confirm the results, the engines were used to return XML-based files with text and metadata identified. The output XML data was then fed into this new system for information extraction. This system uses the existing OCR engine and a novel, self-adaptive, learning-based OCR engine. This new engine is based on the GAN model for better text identification. Experiments were conducted on various invoice formats to further test and refine its extraction capabilities. For cost optimisation and the analysis of spend classification, additional data were provided by another company in London that holds expertise in reducing their clients' procurement costs. This data was fed into our system to get a deeper level of spend classification and categorisation.
This helped the company to reduce its reliance on human effort and allowed for greater efficiency in comparison with the process of performing similar tasks manually using Excel sheets and Business Intelligence (BI) tools. The intention behind the development of this novel methodology was threefold. First, to test and develop a novel solution that does not depend on any specific OCR technology. Second, to increase the information extraction accuracy factor over that of existing methodologies. Third, to evaluate the real-world need for the system and the impact it would have on SCM. This newly developed method is generic and can extract text from any given invoice, making it a valuable tool for optimizing SCM. In addition, the system uses a template-matching approach to ensure the quality of the extracted information.
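The pattern-based matching stage of the pipeline above can be illustrated with a small sketch. The field names and regular expressions below are hypothetical stand-ins, not the thesis's actual rules, and a real system would learn and extend them per invoice template:

```python
import re

# Hypothetical field patterns for a few common invoice fields.
FIELD_PATTERNS = {
    "invoice_number": re.compile(r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*(\S+)", re.I),
    "date": re.compile(r"Date\s*[:\-]?\s*(\d{2}[/-]\d{2}[/-]\d{4})", re.I),
    "total": re.compile(r"Total\s*[:\-]?\s*\$?([\d,]+\.\d{2})", re.I),
}

def extract_fields(ocr_text):
    """Return whichever fields matched, in key-value format."""
    result = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.search(ocr_text)
        if match:
            result[field] = match.group(1)
    return result
```

Fields that fail to match are simply omitted, which mirrors the abstract's statement that whichever fields the system can extract successfully are provided in key-value format; downstream steps (spell check, type check, OCR error correction) would then validate and repair the captured values.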
HWD: A Novel Evaluation Score for Styled Handwritten Text Generation
Styled Handwritten Text Generation (Styled HTG) is an important task in
document analysis, aiming to generate text images with the handwriting of given
reference images. In recent years, there has been significant progress in the
development of deep learning models for tackling this task. Being able to
measure the performance of HTG models via a meaningful and representative
criterion is key for fostering the development of this research topic. However,
despite the current adoption of scores for natural image generation evaluation,
assessing the quality of generated handwriting remains challenging. In light of
this, we devise the Handwriting Distance (HWD), tailored for HTG evaluation. In
particular, it works in the feature space of a network specifically trained to
extract handwriting style features from the variable-length input images and
exploits a perceptual distance to compare the subtle geometric features of
handwriting. Through extensive experimental evaluation on different word-level
and line-level datasets of handwritten text images, we demonstrate the
suitability of the proposed HWD as a score for Styled HTG. The pretrained model
used as backbone will be released to ease the adoption of the score, aiming to
provide a valuable tool for evaluating HTG models and thus contributing to
advancing this important research area.
Comment: Accepted at BMVC202
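As a rough illustration of a feature-space score of this kind (the actual HWD backbone network and perceptual distance are defined in the paper), one could compare the centroids of style feature vectors extracted from real and generated images:

```python
import math

def euclidean(u, v):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def feature_space_distance(real_feats, fake_feats):
    # Average the per-image style feature vectors, then compare the centroids.
    # This mirrors the general shape of a feature-space generation score;
    # HWD itself uses features from a network trained on handwriting style.
    def centroid(feats):
        n = len(feats)
        return [sum(col) / n for col in zip(*feats)]
    return euclidean(centroid(real_feats), centroid(fake_feats))
```

A lower value indicates that the generated handwriting occupies roughly the same region of the style feature space as the reference handwriting.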
Machine Learning for handwriting text recognition in historical documents
Olmos
ABSTRACT
In this thesis, we focus on the handwriting text recognition task over historical
documents that are difficult to read for anyone who is not an expert in ancient
languages and writing styles.
We aim to leverage and improve the neural network architectures and techniques
that other authors have proposed for handwriting text recognition in modern
handwritten documents. These models perform this task very precisely when a
large amount of data is available. However, the low availability of labeled
data is a widespread problem in historical documents. The type of writing is
singular, and it is quite expensive to hire an expert to transcribe a large
number of pages.
After investigating and analyzing the state-of-the-art, we propose the efficient
application of methods such as transfer learning and data augmentation. We also
contribute an algorithm for purging mislabeled samples that affect the learning of
models. Finally, we develop a variational autoencoder method for generating
synthetic samples of handwritten text images for data augmentation.
Experiments are performed on various historical handwritten text databases to
validate the performance of the proposed algorithms. The various included
analyses focus on the evolution of the character and word error rate (CER and
WER) as we increase the training dataset.
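Both metrics tracked in these analyses are ratios of edit distance to reference length: CER at the character level, WER at the word level. A minimal sketch:

```python
def levenshtein(ref, hyp):
    # Classic dynamic-programming edit distance over characters or tokens.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    # Character error rate: edits per reference character.
    return levenshtein(reference, hypothesis) / len(reference)

def wer(reference, hypothesis):
    # Word error rate: edits per reference word.
    ref_words, hyp_words = reference.split(), hypothesis.split()
    return levenshtein(ref_words, hyp_words) / len(ref_words)
```

Plotting these two rates against training-set size gives exactly the learning curves the analyses describe.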
One of the most important results was our participation in a contest for the
transcription of historical handwritten text. The organizers provided us with a
dataset of documents to train the model; then just a few labeled pages of five
new documents were handed over to further adjust the solution. Finally, the
transcription of non-labeled images was requested to evaluate the algorithm.
Our method ranked second in this contest.
Exploring OCR Capabilities of GPT-4V(ision) : A Quantitative and In-depth Evaluation
This paper presents a comprehensive evaluation of the Optical Character
Recognition (OCR) capabilities of the recently released GPT-4V(ision), a Large
Multimodal Model (LMM). We assess the model's performance across a range of OCR
tasks, including scene text recognition, handwritten text recognition,
handwritten mathematical expression recognition, table structure recognition,
and information extraction from visually rich documents. The evaluation reveals
that GPT-4V performs well in recognizing and understanding Latin content, but
struggles with multilingual scenarios and complex tasks. Specifically, it
showed limitations when dealing with non-Latin languages and complex tasks such
as handwritten mathematical expression recognition, table structure
recognition, and end-to-end semantic entity recognition and pair extraction
from document images. Based on these observations, we affirm the necessity and
continued research value of specialized OCR models. In general, despite its
versatility in handling diverse OCR tasks, GPT-4V does not outperform existing
state-of-the-art OCR models. How to fully utilize pre-trained general-purpose
LMMs such as GPT-4V for OCR downstream tasks remains an open problem. The study
offers a critical reference for future research in OCR with LMMs. Evaluation
pipeline and results are available at
https://github.com/SCUT-DLVCLab/GPT-4V_OCR