9 research outputs found

    Handwritten Text Recognition for Bengali

    © 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    Handwritten text recognition for Bengali is a difficult task because of complex character shapes arising from modified/compound characters, as well as the zone-wise writing styles of different individuals. Most research published so far on Bengali handwriting recognition deals with either isolated character recognition or isolated word recognition; only a few papers have addressed the recognition of continuous handwritten Bengali. In this paper we present research on continuous handwritten Bengali. We follow a classical line-based recognition approach with a system based on hidden Markov models and n-gram language models, trained with automatic methods from annotated data. We investigate both the maximum likelihood approach and the minimum phone error approach for training the optical models. We also study the use of word-based language models and character-based language models. The character-based approach allows us to deal with the out-of-vocabulary word problem at test time when the training set is of limited size. The experiments yielded encouraging results.

    This work has been partially supported through the European Union's H2020 grant READ (Recognition and Enrichment of Archival Documents) (Ref: 674943) and partially supported by MINECO/FEDER, UE under project TIN2015-70924-C2-1-R.

    Sánchez Peiró, JA.; Pal, U. (2016). Handwritten Text Recognition for Bengali. IEEE. https://doi.org/10.1109/ICFHR.2016.010
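The abstract's key point about character-based language models — that any word, seen or unseen, can be scored as a character sequence — can be illustrated with a minimal sketch. This is not the authors' implementation (they use n-gram toolkits over full HMM decoding); it is a toy add-alpha-smoothed character bigram model showing why an out-of-vocabulary word still receives a finite probability:

```python
import math
from collections import Counter

def train_char_bigram(words):
    """Count character bigrams over training words, with boundary markers."""
    counts, context = Counter(), Counter()
    for w in words:
        chars = ["<s>"] + list(w) + ["</s>"]
        for a, b in zip(chars, chars[1:]):
            counts[(a, b)] += 1
            context[a] += 1
    return counts, context

def score(word, counts, context, vocab_size, alpha=1.0):
    """Add-alpha smoothed log-probability of a word, character by character.
    Unlike a word-based LM, an unseen (OOV) word still gets finite mass."""
    chars = ["<s>"] + list(word) + ["</s>"]
    return sum(
        math.log((counts[(a, b)] + alpha) / (context[a] + alpha * vocab_size))
        for a, b in zip(chars, chars[1:])
    )

counts, context = train_char_bigram(["cat", "car", "cart"])
vocab_size = len({c for pair in counts for c in pair})
p_seen = score("cat", counts, context, vocab_size)  # in-vocabulary word
p_oov = score("rat", counts, context, vocab_size)   # OOV word, still scored
```

A word-based model would assign "rat" zero probability outright; the character-based model merely ranks it below words whose bigrams were observed in training.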

    A New Feature Extraction Method for TMNN-Based Arabic Character Classification

    This paper describes a hybrid method for typewritten Arabic character recognition using Toeplitz Matrices and Neural Networks (TMNN), applying a new technique for feature selection and data mining. The proposed algorithm reduces the neural network's input data to only the points that are most significant and essential for classification. Four items represent the distribution percentage of the essential feature points in each part of the extracted character image. Feature points are detected by an algorithm designed for this purpose, which is efficient enough to identify the most significant points satisfying the conditions needed to recognize almost all written fonts of Arabic characters. The number of essential feature points is reduced by at least 88%, so computation and data size are decreased by a large percentage. The authors achieved a recognition rate of 97.61%. The results demonstrate high accuracy, high speed, and powerful classification.
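The "four items" describing the distribution percentage of feature points per image region can be sketched in a few lines. This is an illustrative toy, not the paper's detection algorithm: it assumes the feature points are already marked as foreground pixels in a binary image and simply tallies their percentage per quadrant:

```python
def quadrant_distribution(image):
    """Percentage of foreground (feature) pixels in each quadrant of a
    binary image: [top-left, top-right, bottom-left, bottom-right]."""
    h, w = len(image), len(image[0])
    mh, mw = h // 2, w // 2
    totals = [0, 0, 0, 0]
    for r, row in enumerate(image):
        for c, px in enumerate(row):
            if px:
                q = (0 if r < mh else 2) + (0 if c < mw else 1)
                totals[q] += 1
    n = sum(totals) or 1  # avoid division by zero on a blank image
    return [100.0 * t / n for t in totals]

# Toy 4x4 "character" with one feature point per quadrant.
img = [
    [1, 0, 0, 1],
    [0, 0, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
]
dist = quadrant_distribution(img)
```

Such a four-value summary is what shrinks the classifier input from the raw pixel grid to a handful of region statistics.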

    Recognition of Cursive Arabic Handwritten Text using Embedded Training based on HMMs

    In this paper we present a system for offline recognition of cursive Arabic handwritten text based on Hidden Markov Models (HMMs). The system is analytical, without explicit segmentation, and uses embedded training to build and refine the character models. Feature extraction, preceded by baseline estimation, combines statistical and geometric features to capture both the peculiarities of the text and the pixel distribution characteristics in the word image. These features are modelled using hidden Markov models trained by embedded training. Experiments on images from the benchmark IFN/ENIT database show that the proposed system improves recognition.
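HMM-based handwriting systems like this one convert a word image into a left-to-right sequence of per-column (or per-window) feature frames before training. As a minimal sketch — the column statistics here (ink density and center of gravity) are common choices, not the paper's exact feature set:

```python
def column_features(image):
    """Turn a binary word image into an HMM frame sequence: one
    (foreground density, normalized ink center of gravity) pair per column."""
    h = len(image)
    w = len(image[0])
    frames = []
    for c in range(w):
        col = [image[r][c] for r in range(h)]
        ink = sum(col)
        density = ink / h
        # Mean row index of the ink, scaled to [0, 1]; 0.0 for empty columns.
        cog = (sum(r * v for r, v in enumerate(col)) / ink / (h - 1)) if ink else 0.0
        frames.append((density, cog))
    return frames

# Two-column toy image: ink in the top rows on the left, bottom row on the right.
frames = column_features([[1, 0],
                          [1, 0],
                          [0, 1]])
```

Embedded training then aligns such frame sequences against concatenated character models, so no explicit character segmentation of the image is ever needed.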

    Deep Learning for Scene Text Detection, Recognition, and Understanding

    Detecting and recognizing text in images is a long-standing task in computer vision. The goal is to extract textual information from images and videos, such as recognizing license plates. Despite the great progress made in recent years, the task remains challenging due to the wide range of variation in text appearance. In this thesis, we review the issues that hinder current Optical Character Recognition (OCR) development and explore potential solutions.

    Specifically, we first investigate the phenomenon of unfair comparisons between different OCR algorithms, caused by the lack of a consistent evaluation framework. The absence of a unified evaluation protocol leads to inconsistent and unreliable results, making it difficult to compare and improve upon existing methods. To tackle this issue, we design a new evaluation framework covering datasets, metrics, and models, enabling consistent and fair comparisons between OCR systems.

    Another issue in the field is the imbalanced distribution of training samples. The sample distribution largely depends on where and how the data was collected, and the resulting data bias may lead to poor performance and low generalizability on under-represented classes. To address this problem, we take driving license plate recognition as an example task and propose a text-to-image model able to synthesize photo-realistic text samples. Using this model, we synthesized more than one million samples to augment the training dataset, significantly improving the generalization capability of OCR models.

    Additionally, this thesis explores text visual question answering (text VQA), a new and emerging research topic in the OCR community. This task challenges OCR models to understand the relationships between text and background and to answer the given questions. We propose to investigate evidence-based text VQA, which involves designing models that can provide reasonable evidence for their predictions, thus improving generalization ability.

    Thesis (Ph.D.) -- University of Adelaide, School of Computer and Mathematical Sciences, 202
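The augmentation strategy for imbalanced classes can be made concrete with a small sketch. This is not the thesis's text-to-image model; it only illustrates the bookkeeping step of deciding how many synthetic samples each under-represented class needs, under the simple assumption that every class is topped up to the size of the largest one:

```python
def augmentation_plan(class_counts):
    """How many synthetic samples each class needs so all classes reach
    the size of the largest one (a simple balancing heuristic)."""
    target = max(class_counts.values())
    return {cls: target - n for cls, n in class_counts.items()}

# Hypothetical counts: common plate types vastly outnumber rare ones.
plan = augmentation_plan({"common": 900, "rare": 80, "very_rare": 20})
```

The generated quotas would then be handed to the synthesis model, concentrating the million-sample generation budget on the classes the collected data under-represents.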