
    Advancements and Challenges in Arabic Optical Character Recognition: A Comprehensive Survey

    Optical character recognition (OCR) is a vital process that involves the extraction of handwritten or printed text from scanned or printed images, converting it into a format that can be understood and processed by machines. This enables further data processing activities such as searching and editing. The automatic extraction of text through OCR plays a crucial role in digitizing documents, enhancing productivity, improving accessibility, and preserving historical records. This paper seeks to offer an exhaustive review of contemporary applications, methodologies, and challenges associated with Arabic OCR. A thorough analysis is conducted on prevailing techniques utilized throughout the OCR process, with a dedicated effort to discern the most efficacious approaches that demonstrate enhanced outcomes. To ensure a thorough evaluation, a meticulous keyword-search methodology is adopted, encompassing a comprehensive analysis of articles relevant to Arabic OCR, including both backward and forward citation reviews. In addition to presenting cutting-edge techniques and methods, this paper critically identifies research gaps within the realm of Arabic OCR. By highlighting these gaps, we shed light on potential areas for future exploration and development, thereby guiding researchers toward promising avenues in the field of Arabic OCR. The outcomes of this study provide valuable insights for researchers, practitioners, and stakeholders involved in Arabic OCR, ultimately fostering advancements in the field and facilitating the creation of more accurate and efficient OCR systems for the Arabic language.

    Handwritten OCR for Indic Scripts: A Comprehensive Overview of Machine Learning and Deep Learning Techniques

    The potential uses of cursive optical character recognition (OCR) in a number of industries, particularly document digitization, archiving, and even language preservation, have attracted considerable interest lately. Within the framework of OCR, the goal of this research is to provide a thorough understanding of both cutting-edge methods and the unique difficulties presented by Indic scripts. A thorough literature search was conducted for this study, covering relevant publications, conference proceedings, and scientific records up to the year 2023. Applying inclusion criteria that restrict attention to studies addressing handwritten OCR for Indic scripts, 53 research publications were selected. The review provides a thorough analysis of the methodologies and approaches employed in the chosen studies. Deep neural networks, conventional feature-based methods, machine learning techniques, and hybrid systems have all been investigated as viable answers to the problem of effectively deciphering Indic scripts, which are notoriously challenging to recognize. To operate, these systems require pre-processing techniques, segmentation schemes, and language models. The outcomes of this methodical examination demonstrate that, although handwritten OCR for Indic scripts has advanced significantly, room still exists for improvement. Future research could focus on developing trustworthy models that can handle a range of writing styles and enhance accuracy on less-studied Indic scripts. The field may advance further with the creation of curated datasets and defined standards.

    Subword Recognition in Historical Arabic Documents using C-GRUs

    Recent years have witnessed an increased tendency to digitize historical manuscripts, which not only ensures the preservation of these collections but also gives researchers and end users direct access to the images. Recognition of Arabic handwriting is challenging due to the highly cursive nature of the script and other challenges associated with historical documents (degradation, etc.). This paper presents an end-to-end system to recognize Arabic handwritten subwords in historical documents. More specifically, we introduce a hybrid CNN-GRU model in which a shallow convolutional network learns robust feature representations while the GRU layers carry out the sequence modelling and generate the transcription of the text. The proposed system is evaluated on two different datasets, IBN SINA and VML-HD, reporting recognition rates of 96.10% and 98.60%, respectively. A comparison with existing techniques evaluated on the same datasets validates the effectiveness of our proposed model in characterizing Arabic subwords.
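
    The abstract does not include code, but the described pipeline (a shallow CNN feeding GRU layers that emit a per-timestep transcription, typically trained with a CTC objective) can be sketched roughly as below. The layer sizes, class count, and the CTC-style output head are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class CNNGRURecognizer(nn.Module):
    """Sketch of a shallow CNN + GRU recognizer for cursive subword images.
    Layer sizes and the CTC-style head are assumptions for illustration."""

    def __init__(self, num_classes: int, img_height: int = 32):
        super().__init__()
        # Shallow convolutional feature extractor.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        feat_height = img_height // 4
        # Bidirectional GRU layers model the horizontal sequence of features.
        self.gru = nn.GRU(64 * feat_height, 128, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, num_classes + 1)  # +1 for the CTC blank symbol

    def forward(self, x):
        # x: (batch, 1, H, W) grayscale subword crops.
        f = self.cnn(x)                      # (batch, C, H', W')
        f = f.permute(0, 3, 1, 2)            # (batch, W', C, H')
        f = f.flatten(2)                     # (batch, W', C*H')
        seq, _ = self.gru(f)                 # (batch, W', 256)
        return self.fc(seq).log_softmax(-1)  # per-timestep class scores

model = CNNGRURecognizer(num_classes=40)
logits = model(torch.randn(2, 1, 32, 128))   # e.g. two 32x128 crops
print(logits.shape)                          # torch.Size([2, 32, 41])
```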

    Integrated multi-layer perceptron neural network and novel feature extraction for handwritten Arabic recognition

    Arabic handwritten script recognition is an active area of study. Such recognition faces several obstacles, including vast open databases, boundless diversity in individuals' penmanship, and freestyle writing. Thus, Arabic handwriting requires effective techniques to achieve better recognition results. Meanwhile, the Multilayer Perceptron (MLP) is one of the most common Artificial Neural Networks (ANNs) and deals with various problems efficiently. Therefore, this study introduces a new technique called Block Density and Location Feature (BDLF) combined with an MLP, namely BDLF-MLP, which extracts novel features from letter images by estimating the letter's pixel density and its location for each equal-sized block in the image. In other words, BDLF-MLP can handle various styles of Arabic handwriting, such as overlapping letters. BDLF-MLP starts with Block Feature Extraction (BFE), dividing the image into sixteen parts. After that, it calculates the density and location of each block (i.e., BDLF) by summing all pixel values inside each block. Finally, it determines the position of the greatest pixel density to obtain better recognition accuracy. A dataset containing 720 images is used to evaluate the efficiency of the proposed technique, and 1440 letters are used for training and testing, divided evenly between the two. The experimental results illustrate that BDLF-MLP outperformed the other algorithms in the literature with an accuracy of 97.26%.
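
    As described, BDLF divides a letter image into sixteen equal blocks and records each block's pixel density together with the position of the densest block. A minimal NumPy sketch of that feature computation is below; the 4x4 grid, binarization convention, and feature ordering are assumptions inferred from the abstract rather than the authors' exact definition.

```python
import numpy as np

def bdlf_features(letter_img: np.ndarray, grid: int = 4) -> np.ndarray:
    """Block Density and Location Feature sketch.

    `letter_img` is a 2-D binary array (1 = ink, 0 = background). The image
    is split into a grid x grid layout (16 blocks for grid=4); each block
    contributes its ink density, and the index of the densest block is
    appended as a location cue. Ordering and normalisation are illustrative."""
    h, w = letter_img.shape
    bh, bw = h // grid, w // grid
    densities = []
    for r in range(grid):
        for c in range(grid):
            block = letter_img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            densities.append(block.sum() / block.size)  # ink density in [0, 1]
    densities = np.array(densities, dtype=float)
    location = np.argmax(densities)       # block with the greatest pixel density
    return np.append(densities, location)

# Example: a synthetic 32x32 "letter" with ink in the lower-right corner.
img = np.zeros((32, 32), dtype=int)
img[20:30, 20:30] = 1
features = bdlf_features(img)
print(features.shape, features[-1])  # (17,) and the densest block index
```

    The resulting feature vectors would then be fed to an MLP classifier for letter recognition.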

    A New Approach to Synthetic Image Evaluation

    This study is dedicated to enhancing the effectiveness of Optical Character Recognition (OCR) systems, with a special emphasis on Arabic handwritten digit recognition. The choice to focus on Arabic handwritten digits is twofold: first, there has been relatively little research conducted in this area compared with its English counterpart; second, the recognition of Arabic handwritten digits presents more challenges due to the inherent similarities between different Arabic digits. OCR systems, engineered to decipher both printed and handwritten text, often face difficulties in accurately identifying low-quality or distorted handwritten text. The quality of the input image and the complexity of the text significantly influence their performance. However, data augmentation strategies can notably improve these systems' performance. These strategies generate new images that closely resemble the original ones, albeit with minor variations, thereby enriching the model's learning and enhancing its adaptability. The research found Conditional Variational Autoencoders (C-VAE) and Conditional Generative Adversarial Networks (C-GAN) to be particularly effective in this context. These two generative models stand out due to their superior image generation and feature extraction capabilities. A significant contribution of the study is the formulation of the Synthetic Image Evaluation Procedure, a systematic approach designed to evaluate and amplify the generative models' image generation abilities. This procedure facilitates the extraction of meaningful features, computation of the Fréchet Inception Distance (FID) score, and supports hyper-parameter optimization and model modifications.
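
    The Fréchet Inception Distance mentioned above is the core metric of the evaluation procedure; a minimal sketch of how FID is typically computed from two sets of extracted features follows. The feature extractor itself (e.g. an Inception-style CNN) is assumed and not part of the snippet, and the toy inputs stand in for real and synthetic digit features.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Standard Fréchet distance between two feature distributions.

    Each argument is an (N, D) array of features extracted from real or
    synthetic images by a fixed network; any consistent embedding works
    for this sketch. Lower scores mean closer distributions."""
    mu_r, mu_f = real_feats.mean(0), fake_feats.mean(0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):   # numerical noise can create tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

# Toy usage with random "features" drawn from the same distribution.
rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(500, 64)), rng.normal(size=(500, 64))))
```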

    A sequential handwriting recognition model based on a dynamically configurable CRNN

    Handwriting recognition refers to recognizing handwritten input, such as characters or digits, from an image. Because most real-life applications of handwriting recognition involve sequential text in various languages, there is a need to develop a dynamic handwriting recognition system. Inspired by neuroevolutionary techniques, this paper proposes a Dynamically Configurable Convolutional Recurrent Neural Network (DC-CRNN) for the handwriting recognition sequence modeling task. The proposed DC-CRNN is based on the Salp Swarm Optimization Algorithm (SSA), which generates the optimal structure and hyperparameters for Convolutional Recurrent Neural Networks (CRNNs). In addition, we investigate two types of encoding techniques used to translate the output of the optimization into a CRNN recognizer. Finally, we propose a novel hybridization of SSA with Late Acceptance Hill-Climbing (LAHC) to improve the exploitation process. We conducted our experiments on two well-known datasets, IAM and IFN/ENIT, which cover the English and Arabic languages. The experimental results show that LAHC significantly improves the SSA search process, and the proposed DC-CRNN therefore outperforms handcrafted CRNN methods.
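
    The Late Acceptance Hill-Climbing component used to refine the SSA search follows a simple, well-known acceptance rule: a candidate is kept if it is no worse than the current solution or no worse than the solution accepted a fixed number of steps earlier. The sketch below shows generic LAHC over an arbitrary cost function; the neighbourhood move, history length, and the CRNN training loop it would wrap are assumptions, not the paper's settings.

```python
import random

def lahc(initial, neighbour, cost, history_len=50, iterations=2000):
    """Generic Late Acceptance Hill-Climbing.

    `cost` is any callable to minimise (in the paper this would be the
    validation error of a candidate CRNN configuration, which is out of
    scope here); `neighbour` proposes a perturbed candidate."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    history = [current_cost] * history_len
    for i in range(iterations):
        candidate = neighbour(current)
        candidate_cost = cost(candidate)
        # Late acceptance: compare against the cost recorded history_len steps ago.
        if candidate_cost <= current_cost or candidate_cost <= history[i % history_len]:
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        history[i % history_len] = current_cost
    return best, best_cost

# Toy usage: minimise a 1-D function standing in for a validation error.
best, err = lahc(
    initial=5.0,
    neighbour=lambda x: x + random.uniform(-0.5, 0.5),
    cost=lambda x: (x - 2.0) ** 2,
)
print(round(best, 3), round(err, 5))
```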

    Hybrid manifold smoothing and label propagation technique for Kannada handwritten character recognition

    Handwritten character recognition is one of the classical problems in the field of image classification. Supervised learning techniques using deep learning models are highly effective when applied to handwritten character recognition; however, they require a large dataset of labeled samples to achieve good accuracy. Recent supervised learning techniques for Kannada handwritten character recognition achieve state-of-the-art accuracy and perform well over a large range of input variations. In this work, a framework is proposed for the Kannada language that incorporates techniques from semi-supervised learning. The framework uses features extracted from a convolutional neural network backbone, regularization to improve the learned features, and label propagation to classify previously unseen characters. An episodic learning setup is used to validate the framework: twenty-four classes are used for pre-training, 12 classes for testing, and 11 classes for validation. Fine-tuning is tested using one example per unseen class and five examples per unseen class. The components of the framework are implemented in Python using the PyTorch library. It is shown that the obtained accuracy of 99.13% makes this framework competitive with the currently available supervised learning counterparts, despite the large reduction in the number of labeled samples available for the novel classes.
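
    The final classification step, propagating labels from a handful of labelled examples to unlabelled ones over backbone-extracted features, can be illustrated with scikit-learn's LabelPropagation. The random features and class counts below stand in for the CNN embeddings and are not the paper's data or exact graph construction.

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Stand-in for features produced by a CNN backbone (one 64-D vector per image).
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(loc=c, size=(30, 64)) for c in range(3)])
true_labels = np.repeat([0, 1, 2], 30)

# Semi-supervised setting: only one labelled example per class, the rest are -1.
labels = np.full(len(true_labels), -1)
for c in range(3):
    labels[np.where(true_labels == c)[0][0]] = c

# Propagate labels through the feature-space graph to the unlabelled samples.
model = LabelPropagation(kernel="rbf", gamma=0.05).fit(features, labels)
accuracy = (model.transduction_ == true_labels).mean()
print(f"transductive accuracy: {accuracy:.2f}")
```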

    Advanced approach for Moroccan administrative documents digitization using pre-trained models CNN-based: character recognition

    In the digital age, efficient digitization of administrative documents is a real challenge, particularly for languages with complex scripts such as those used in Moroccan documents. The subject matter of this article is the digitization of Moroccan administrative documents using pre-trained convolutional neural networks (CNNs) for advanced character recognition. This research aims to address the unique challenges of accurately digitizing various Moroccan scripts and layouts, which are crucial in the digital transformation of administrative processes. Our goal was to develop an efficient and highly accurate character recognition system specifically tailored for Moroccan administrative texts. The tasks involved comprehensive analysis and customization of pre-trained CNN models and rigorous performance testing against a diverse dataset of Moroccan administrative documents. The methodology entailed a detailed evaluation of different CNN architectures trained on a dataset representative of various types of characters used in Moroccan administrative documents. This ensured the adaptability of the models to real-world scenarios, with a focus on accuracy and efficiency in character recognition. The results were remarkable. DenseNet121 achieved a 95.78% accuracy rate on the Alphabet dataset, whereas VGG16 recorded a 99.24% accuracy on the Digits dataset. DenseNet169 demonstrated 94.00% accuracy on the Arabic dataset, 99.9% accuracy on the Tifinagh dataset, and 96.24% accuracy on the French Special Characters dataset. Furthermore, DenseNet169 attained 99.14% accuracy on the Symbols dataset. In addition, ResNet50 achieved 99.90% accuracy on the Character Type dataset, enabling accurate determination of the dataset to which a character belongs. In conclusion, this study signifies a substantial advancement in the field of Moroccan administrative document digitization. The CNN-based approach showcased in this study significantly outperforms traditional character recognition methods. These findings not only contribute to the digital processing and management of documents but also open new avenues for future research in adapting this technology to other languages and document types.
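
    The approach relies on adapting ImageNet-pretrained CNNs to character classes; a minimal transfer-learning sketch with torchvision's DenseNet121 is shown below. The class count, input size, freezing policy, and optimizer settings are illustrative assumptions, not the paper's training recipe.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained DenseNet121 and replace its classifier head
# with one sized for the character classes (28 is an assumed placeholder).
num_classes = 28
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
for param in model.features.parameters():
    param.requires_grad = False            # freeze the convolutional backbone
model.classifier = nn.Linear(model.classifier.in_features, num_classes)

# Only the new head is optimised in this sketch; full fine-tuning is also possible.
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)       # a dummy batch of character crops
targets = torch.randint(0, num_classes, (4,))
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print(float(loss))
```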

    OTS: A One-shot Learning Approach for Text Spotting in Historical Manuscripts

    Historical manuscript processing poses challenges such as limited annotated training data and the emergence of novel classes. To address this, we propose a novel One-shot learning-based Text Spotting (OTS) approach that accurately and reliably spots novel characters with just one annotated support sample. Drawing inspiration from cognitive research, we introduce a spatial alignment module that finds, focuses on, and learns the most discriminative spatial regions in the query image based on one support image. In particular, since the low-resource spotting task often faces the problem of example imbalance, we propose a novel loss function called the torus loss, which makes the embedding space of the distance metric more discriminative. Our approach is highly efficient, requires only a few training samples, and exhibits a remarkable ability to handle novel characters and symbols. To enhance dataset diversity, a new manuscript dataset containing the ancient Dongba hieroglyphs (DBH) is created. We conduct experiments on the publicly available VML-HD, TKH, and NC datasets as well as the newly proposed DBH dataset. The experimental results demonstrate that OTS outperforms state-of-the-art methods in one-shot text spotting. Overall, our proposed method offers promising applications in the field of text spotting in historical manuscripts.
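
    The spatial alignment module and torus loss are specific to OTS and are not reproduced here; the sketch below only illustrates the generic one-shot matching step such methods build on, embedding one support example and several query crops with a shared CNN and scoring them by cosine similarity. The backbone and the detection threshold are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Embedder(nn.Module):
    """Shared CNN that maps a character crop to an embedding vector.
    This is a placeholder backbone, not the OTS architecture."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)   # unit-length embeddings

embed = Embedder()
support = torch.randn(1, 1, 64, 64)   # one annotated example of a novel character
queries = torch.randn(8, 1, 64, 64)   # candidate regions from a manuscript page

# One-shot matching: cosine similarity between the support embedding and each query.
scores = embed(queries) @ embed(support).T        # (8, 1) similarity scores
matches = scores.squeeze(1) > 0.8                 # assumed detection threshold
print(scores.squeeze(1), matches)
```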