
    Moment invariant-based features for Jawi character recognition

    Ancient manuscripts written in Malay-Arabic characters, known as "Jawi" characters, are mostly found in the Malay world. Nowadays, many of these manuscripts have been digitized. Unlike Roman letters, Jawi characters have no optical character recognition (OCR) software. This article proposes a new algorithm for Jawi character recognition based on Hu's moments as invariant features, which we call the tree root (TR) algorithm. The TR algorithm allows every Jawi character to have a unique combination of moments. The seven Hu moment values are calculated for all Jawi characters, which consist of 36 isolated, 27 initial, 27 middle, and 35 end characters, for a total of 125 characters. The TR algorithm was then applied to recognize these characters. To assess the TR algorithm, five characters that had been rotated by 90° and 180° and scaled by factors of 0.5 and 2 were used. Overall, the recognition rate of the TR algorithm was 90.4%: 113 out of 125 characters have a unique combination of moment values, while testing on rotated and scaled characters achieved an 82.14% recognition rate. The proposed method showed superior performance compared with Support Vector Machine and Euclidean distance classifiers.
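The rotation- and scale-invariance of Hu's moments that the abstract relies on can be illustrated with a minimal sketch. This is not the TR algorithm itself, only the first two of Hu's seven invariants computed from scale-normalised central moments; the function name and test image are illustrative.

```python
import numpy as np

def hu_first_two(img):
    """First two of Hu's seven moment invariants for a 2-D grayscale
    image (illustrative sketch, not the paper's TR algorithm)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):                      # central moment mu_pq
        return ((x - xc) ** p * (y - yc) ** q * img).sum()

    def eta(p, q):                     # scale-normalised central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```

Because phi1 and phi2 are built from normalised central moments, they are unchanged when the character image is rotated or uniformly scaled, which is what lets each character keep a stable moment signature.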

    Jawi character recognition using the trace transform

    The Trace transform, a generalisation of the Radon transform, allows one to construct image features that are invariant to a chosen group of image transformations. In this paper, we use features generated by the Trace transform that are invariant to affine distortion to discriminate between Jawi characters. The process consists of tracing an image with straight lines, in all possible orientations, along which certain functionals of the image function are calculated. For each combination of functionals we derive a function of the orientation of the tracing lines, known as an object signature. If the functionals used have certain predefined properties, this signature can characterise the character in an affine-invariant way. We demonstrate the usefulness of the derived signature and compare the character recognition results with those obtained using features based on affine moment invariants. Experiments using the Trace transform produced good results for printed and handwritten Jawi character recognition that are invariant to affine distortion.
    Keywords: affine moment invariant; Jawi character recognition; trace transform
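The tracing process the abstract describes — sweeping parallel lines across the image at every orientation and applying a functional along each line — can be sketched crudely in a few lines of numpy. This is a toy Radon-style signature, not the paper's Trace transform; the binning scheme and the choice of max-of-line-sums as the functional are assumptions for illustration.

```python
import numpy as np

def trace_signature(img, n_angles=36):
    """Toy orientation signature: for each angle, group pixels into
    parallel tracing lines and apply a functional (here: the maximum
    of the per-line sums). Illustrative, not the full Trace transform."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    yy, xx = np.mgrid[:h, :w]
    sig = []
    for k in range(n_angles):
        t = np.pi * k / n_angles
        # signed distance of each pixel from the central line at angle t;
        # binning pixels by this distance groups them into tracing lines
        d = np.rint((xx - cx) * np.sin(t) - (yy - cy) * np.cos(t)).astype(int)
        line_sums = np.bincount((d - d.min()).ravel(), weights=img.ravel())
        sig.append(line_sums.max())    # functional applied over all lines
    return np.array(sig)
```

Rotating the character circularly shifts this signature, which is why suitably chosen functionals over it yield rotation-insensitive features.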

    Recognizing Degraded Handwritten Characters

    In this paper, Slavonic manuscripts from the 11th century written in Glagolitic script are investigated. State-of-the-art optical character recognition methods produce poor results for degraded handwritten document images, largely due to a lack of suitable results from basic pre-processing steps such as binarization and image segmentation. Therefore, a new, binarization-free approach is presented that is independent of pre-processing deficiencies. It additionally incorporates local information in order to also recognize fragmented or faded characters. The proposed algorithm consists of two steps: character classification and character localization. First, scale-invariant feature transform (SIFT) features are extracted and classified using support vector machines. On this basis, interest points are clustered according to their spatial information. Then, characters are localized and eventually recognized by a weighted voting scheme over the pre-classified local descriptors. Preliminary results show that the proposed system can handle highly degraded manuscript images with background noise, e.g. stains, tears, and faded characters.
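The localization step — clustering classified interest points by position, then deciding each cluster's character by a weighted vote — can be sketched as follows. The greedy centroid-based clustering and all names here are illustrative assumptions; the paper's actual clustering and weighting are more involved.

```python
from collections import defaultdict

def vote_characters(points, radius=10.0):
    """Toy localisation-by-voting: greedily cluster classified interest
    points by position, then pick each cluster's label by weighted vote.
    `points` is a list of (x, y, label, weight) tuples (hypothetical
    format; the clustering rule is an illustrative simplification)."""
    clusters = []                      # each: [cx, cy, vote_dict, n_points]
    for x, y, label, w in points:
        for c in clusters:
            if (c[0] - x) ** 2 + (c[1] - y) ** 2 <= radius ** 2:
                c[2][label] += w
                # update the running centroid of the cluster
                c[0] = (c[0] * c[3] + x) / (c[3] + 1)
                c[1] = (c[1] * c[3] + y) / (c[3] + 1)
                c[3] += 1
                break
        else:
            clusters.append([x, y, defaultdict(float, {label: w}), 1])
    return [((c[0], c[1]), max(c[2], key=c[2].get)) for c in clusters]
```

Because the decision aggregates many local descriptors, a character can still win its vote even when some of its descriptors are misclassified or missing — which is the point of the approach for fragmented or faded glyphs.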

    Matching Local Invariant Features with Contextual Information: An Experimental Evaluation

    The main advantage of local invariant features is their local character, which yields robustness to occlusion and varying background. Local features have therefore proved to be a powerful tool for finding correspondences between images and have been employed in many applications. However, this local character limits the descriptive capability of feature descriptors, and local features fail to resolve ambiguities that occur when an image shows multiple similar regions. Considering some global information clearly helps to achieve better performance; the question is which information to use and how to use it. Context can be used to enrich the description of the features, or used in the matching step to filter out mismatches. In this paper, we compare recent methods that use context for matching and show that better results are obtained when contextual information is used during the matching process. We evaluate the methods in two applications, wide-baseline matching and object recognition, and find that a relaxation-based approach gives the best results.
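The relaxation idea — iteratively raising the confidence of a putative match when compatible neighbouring matches support it — can be sketched with a deliberately simple compatibility measure. Here "compatible" means "similar displacement vector", which is an illustrative stand-in for the richer contextual compatibilities compared in the paper; the function name and parameters are assumptions.

```python
import numpy as np

def filter_matches(pts1, pts2, n_iter=5, sigma=0.2):
    """Toy relaxation-style mismatch filter: a match gains support when
    other matches move by a similar displacement. `pts1`/`pts2` are
    (n, 2) arrays of corresponding points; returns per-match confidences
    in [0, 1] (illustrative sketch, not the paper's method)."""
    disp = pts2 - pts1                 # displacement vector of each match
    conf = np.ones(len(pts1))
    for _ in range(n_iter):
        support = np.zeros_like(conf)
        for i in range(len(pts1)):
            # Gaussian agreement between match i and every other match
            agree = np.exp(-np.linalg.norm(disp - disp[i], axis=1) ** 2
                           / sigma ** 2)
            support[i] = (conf * agree).sum() - conf[i]   # exclude self
        conf = support / support.max()
    return conf
```

After a few iterations, matches whose displacement disagrees with their context end up with low confidence and can be thresholded away, which is the filtering role context plays in the matching step.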

    SAFE: Scale Aware Feature Encoder for Scene Text Recognition

    In this paper, we address the problem of characters appearing at different scales in scene text recognition. We propose a novel scale aware feature encoder (SAFE) designed specifically for encoding characters at different scales. SAFE is composed of a multi-scale convolutional encoder and a scale attention network. The multi-scale convolutional encoder extracts character features at multiple scales, and the scale attention network is responsible for selecting features from the most relevant scale(s). SAFE has two main advantages over the traditional single-CNN encoder used in current state-of-the-art text recognizers. First, it explicitly tackles the scale problem by extracting scale-invariant features from the characters. This allows the recognizer to put more effort into handling other challenges in scene text recognition, such as those caused by view distortion and poor image quality. Second, it can transfer the learning of feature encoding across different character scales. This is particularly important when the training set has a very unbalanced distribution of character scales, as training with such a dataset will bias the encoder towards extracting features from the predominant scale. To evaluate the effectiveness of SAFE, we design a simple text recognizer named the scale-spatial attention network (S-SAN) that employs SAFE as its feature encoder, and carry out experiments on six public benchmarks. Experimental results demonstrate that S-SAN achieves state-of-the-art (or, in some cases, extremely competitive) performance without any post-processing.
    Comment: ACCV201
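The selection role of the scale attention network — softly weighting features from the most relevant scale(s) — can be sketched as a softmax-weighted combination. The shapes and the external relevance scores here are illustrative assumptions, not SAFE's actual architecture, which computes the attention inside the network.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # stabilised softmax
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scale_attention(feats, scores):
    """Toy scale-attention pooling in the spirit of SAFE: `feats` is an
    (n_scales, d) array of per-scale feature vectors from a multi-scale
    encoder, `scores` is (n_scales,) relevance logits. Returns the
    attention-weighted feature and the weights (illustrative sketch)."""
    w = softmax(scores)
    return w @ feats, w
```

When one scale's score dominates, the pooled feature approaches that scale's feature vector, so small characters are effectively encoded at a fine scale and large characters at a coarse one without retraining separate encoders.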