
    Component-based Segmentation of words from handwritten Arabic text

    Efficient preprocessing is essential for the automatic recognition of handwritten documents. This paper presents techniques for segmenting words in handwritten Arabic text. First, connected components (CCs) are extracted and the distances between components are analysed. The statistical distribution of these distances is then used to determine an optimal threshold for word segmentation. In addition, an improved projection-based method is employed for baseline detection. The proposed method was successfully tested on the IFN/ENIT database, which consists of 26,459 Arabic words handwritten by 411 different writers; the results are promising, showing more accurate baseline detection and word segmentation for subsequent recognition.
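    The gap-thresholding idea in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes each connected component has been reduced to a horizontal bounding-box interval, and it picks the inter/intra-word cutoff with a simple 1-D Otsu threshold over the gap distances (the paper derives its threshold from the statistical distribution of distances; the function names here are hypothetical).

    ```python
    import numpy as np

    def otsu_threshold(values, bins=32):
        """1-D Otsu threshold: pick the cut that maximises
        between-class variance of the gap-distance histogram."""
        hist, edges = np.histogram(values, bins=bins)
        centers = (edges[:-1] + edges[1:]) / 2
        best_t, best_var = edges[0], -1.0
        for i in range(1, bins):
            w0, w1 = hist[:i].sum(), hist[i:].sum()
            if w0 == 0 or w1 == 0:
                continue
            m0 = (hist[:i] * centers[:i]).sum() / w0
            m1 = (hist[i:] * centers[i:]).sum() / w1
            var = w0 * w1 * (m0 - m1) ** 2
            if var > best_var:
                best_var, best_t = var, centers[i]
        return best_t

    def segment_words(boxes):
        """Group component intervals (x0, x1) into words: gaps larger
        than the learned threshold are treated as word boundaries."""
        boxes = sorted(boxes)  # left-to-right here; real Arabic runs right-to-left
        gaps = np.array([b[0] - a[1] for a, b in zip(boxes, boxes[1:])])
        t = otsu_threshold(gaps)
        words, current = [], [boxes[0]]
        for gap, box in zip(gaps, boxes[1:]):
            if gap > t:
                words.append(current)
                current = []
            current.append(box)
        words.append(current)
        return words
    ```

    With small intra-word gaps and one large inter-word gap, the threshold lands between the two gap populations and the components split into two words.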

    Unconstrained Scene Text and Video Text Recognition for Arabic Script

    Building robust recognizers for Arabic has always been challenging. We demonstrate the effectiveness of an end-to-end trainable CNN-RNN hybrid architecture in recognizing Arabic text in videos and natural scenes. We outperform the previous state of the art on two publicly available video text datasets, ALIF and ACTIV. For the scene text recognition task, we introduce a new Arabic scene text dataset and establish baseline results. For scripts like Arabic, a major challenge in developing robust recognizers is the lack of a large quantity of annotated data. We overcome this by synthesising millions of Arabic text images from a large vocabulary of Arabic words and phrases. Our implementation builds on the model introduced in [37], which has proven quite effective for English scene text recognition. The model follows a segmentation-free, sequence-to-sequence transcription approach: the network transcribes a sequence of convolutional features from the input image into a sequence of target labels. This removes the need to segment the input image into constituent characters or glyphs, which is often difficult for Arabic script. Further, the ability of RNNs to model contextual dependencies yields superior recognition results.
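    The segmentation-free transcription described above typically ends with a CTC-style decoding step that collapses per-timestep network outputs into a label string. A minimal greedy-decoding sketch, assuming a blank symbol at index 0 — this illustrates the general technique, not the paper's exact pipeline:

    ```python
    import numpy as np

    def ctc_greedy_decode(logits, blank=0):
        """Greedy CTC decoding: take the argmax label at each timestep,
        merge consecutive repeats, then drop the blank symbol."""
        path = logits.argmax(axis=1)  # logits: (timesteps, num_classes)
        out, prev = [], blank
        for p in path:
            if p != prev and p != blank:
                out.append(int(p))
            prev = p
        return out
    ```

    For example, a per-timestep best path of `[blank, 1, 1, blank, 2, 2, 3]` decodes to `[1, 2, 3]`: the repeated 1s and 2s collapse, and the blanks are discarded.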

    Handwritten character recognition using a gradient based feature extraction

    Handwriting recognition is the task of transforming a language represented in its spatial form of graphical marks into its symbolic representation. Offline handwriting recognition has three steps: preprocessing of the image, segmentation of words into characters, and recognition of the characters. In this thesis I implemented two methods for character recognition, the most important step in offline handwriting recognition. The heart of character recognition is the features extracted from the character image: the accuracy of classification depends on the quality of those features. The two methods presented in this thesis use two different types of features. One uses connectivity features among the various segments of a character image; the other uses the gradient at each pixel to construct the feature vectors. Both methods are discussed in detail in the following chapters.
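    The gradient-feature idea above can be sketched as a magnitude-weighted histogram of quantised gradient directions, computed per cell of a grid over the character image. This is a simplified illustration under assumed parameters (a 4x4 grid and 8 direction bins), not the thesis's exact feature definition:

    ```python
    import numpy as np

    def gradient_features(img, grid=(4, 4), n_dirs=8):
        """Per-cell, magnitude-weighted histograms of quantised
        gradient directions, concatenated into one feature vector."""
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.arctan2(gy, gx)                       # in [-pi, pi]
        bins = ((ang + np.pi) / (2 * np.pi) * n_dirs).astype(int) % n_dirs
        H, W = img.shape
        feats = np.zeros(grid[0] * grid[1] * n_dirs)
        for i in range(grid[0]):
            for j in range(grid[1]):
                cell = (slice(i * H // grid[0], (i + 1) * H // grid[0]),
                        slice(j * W // grid[1], (j + 1) * W // grid[1]))
                hist = np.bincount(bins[cell].ravel(),
                                   weights=mag[cell].ravel(),
                                   minlength=n_dirs)
                k = (i * grid[1] + j) * n_dirs
                feats[k:k + n_dirs] = hist
        return feats
    ```

    On a 32x32 character image this yields a 4 x 4 x 8 = 128-dimensional feature vector, which would then be fed to a classifier.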

    Arabic handwriting recognition: Challenges and solutions

    Get PDF
    Optical Character Recognition (OCR) has been an active subject of research since the early days of computer science. Although Arabic characters are used by more than half a billion people, Arabic character recognition has not received enough interest from researchers; little progress has been achieved compared to what has been done for Latin and Chinese scripts. The cursive nature of Arabic makes high-accuracy character recognition more difficult, since even printed Arabic characters are cursive. This paper presents the main challenges (difficulties) researchers are facing and the up-to-date solutions (common methods) used for Arabic text recognition.

    Novel word recognition and word spotting systems for offline Urdu handwriting

    Word recognition for offline Arabic, Farsi and Urdu handwriting is a subject that has attracted much attention in the OCR field. This thesis presents implementations of offline Urdu Handwritten Word Recognition (HWR) and an Urdu word spotting technique. The thesis first introduces the creation of several offline CENPARMI Urdu databases, which were necessary for the offline Urdu HWR experiments. A holistic recognition approach was followed for the Urdu HWR system. In this system, basic pre-processing of the images was performed. In the feature extraction phase, gradient and structural features were extracted from greyscale and binary word images, respectively. The recognition system extracted 592 feature sets, and these features helped improve the recognition results. The system was trained and tested on 57 words. Overall, we achieved a 97% accuracy rate for handwritten word recognition using an SVM classifier.

    Our word spotting technique used the holistic HWR system for recognition. The word spotting system consisted of two processes: the segmentation of handwritten connected components and diacritics from Urdu text lines, and the word spotting algorithm. A small database of handwritten text pages, containing texts from ten native Urdu speakers, was created for testing the system. A rule-based segmentation system was applied to extract handwritten Urdu subwords (connected components) from text lines; we achieved a 92% correct segmentation rate on 372 text lines. In the word spotting algorithm, candidate words were generated from the segmented connected components and sent to the holistic HWR system, which extracted features and tried to recognize each image as one of the 57 words. After classification, each image was sent to the verification/rejection phase, which helped reject the maximum number of unseen (raw data) images. Overall, we achieved a 50% word spotting precision at a 70% recall rate.
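    The verification/rejection phase described above can be illustrated with a simple score-threshold rule: a candidate image is accepted as one of the vocabulary words only if the classifier's top score is high enough, and the choice of threshold is what trades precision against recall. A minimal sketch — the threshold value, the assumption of calibrated scores, and the example words are illustrative, not from the thesis:

    ```python
    import numpy as np

    def spot_word(scores, classes, accept_threshold=0.5):
        """Verification/rejection: return the top-scoring word class if
        its (assumed calibrated) score clears the threshold, otherwise
        reject the candidate as an out-of-vocabulary image."""
        best = int(np.argmax(scores))
        if scores[best] < accept_threshold:
            return None  # rejected: likely not one of the vocabulary words
        return classes[best]
    ```

    Raising `accept_threshold` rejects more out-of-vocabulary candidates (higher precision) at the cost of also rejecting some genuine vocabulary words (lower recall), which matches the reported 50% precision at 70% recall trade-off.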