A study of feature extraction for Arabic calligraphy characters recognition
Optical character recognition (OCR) is one of the most widely used pattern recognition systems. However, research on ancient Arabic writing recognition has suffered from a lack of interest for decades, despite the availability of thousands of historical documents. One of the reasons for this lack of interest is the absence of a standard dataset, which is fundamental for building and evaluating an OCR system. In 2022, we published a database of ancient Arabic words as the only public dataset of characters written in Al-Mojawhar Moroccan calligraphy. Such a database therefore needs to be studied and evaluated. In this paper, we explored the proposed database and investigated the recognition of Al-Mojawhar Arabic characters. We studied feature extraction using the most popular descriptors in Arabic OCR. The studied descriptors were paired with different machine learning classifiers to build recognition models and verify their performance. In order to compare learned and handcrafted features on the proposed dataset, we proposed a deep convolutional neural network for character recognition. Given the complexity of the character shapes, the results obtained were very promising, especially with the convolutional neural network model, which achieved the highest accuracy score.
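As a concrete illustration of one popular handcrafted descriptor family in Arabic OCR, the sketch below computes zoning features (per-cell ink density over a grid). The grid size and the toy character are assumptions for illustration, not the paper's exact descriptor set.

```python
import numpy as np

def zoning_features(img, grid=(4, 4)):
    """Zoning descriptor: split a binary character image into a grid
    and use the ink density of each zone as one feature."""
    h, w = img.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            zone = img[i * h // gh:(i + 1) * h // gh,
                       j * w // gw:(j + 1) * w // gw]
            feats.append(zone.mean())
    return np.array(feats)

# Toy 8x8 "character": a vertical stroke in the left half.
char = np.zeros((8, 8))
char[:, 2] = 1.0
f = zoning_features(char, grid=(4, 4))
print(f.shape)  # (16,)
```

Such a vector would then be fed to a classical classifier (SVM, k-NN, etc.), whereas a CNN learns its features directly from the pixels.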
Multi-Classifier Jawi Handwritten Sub-Word Recognition
The problems and challenges in Jawi handwritten recognition are inherited from Arabic script: its cursive nature, a large variety of writing styles due to its morphological richness, ligatures, overlapping characters, dialects, and the low quality of manuscript images. Word segmentation is difficult because sub-words arise from the spaces that appear within words containing disconnected characters. The performance of previous Jawi handwritten recognition systems is still considered sub-par. There are three main problems with previous approaches. First, the recognizer consists of multiple independent components, so an improvement in one component is not shared across the system. Second, feature extraction based on feature engineering only works on specific subsets of the training data and is less capable of handling broader variants of the testing data. Finally, the classifier uses implicit segmentation where the target class is a sub-word with a limited lexicon. This paper proposes a Deep Learning approach to address the first problem: training is conducted end-to-end from input to class output, so that improving each component improves overall performance. Second, a Convolutional Network is used to learn features, optimizing the data representation through end-to-end training of the parameters from the raw input to the target class. Finally, a multi-classifier that implicitly segments the sub-word into a sequence of characters is proposed. The classifiers consist of one sub-word length classifier and seven character classifiers. This approach is lexicon-free, addressing the absence of lexicon data. Experiments conducted on a standard Jawi handwritten dataset showed an accuracy of up to 92.20% and suggest that the approach is superior to state-of-the-art methods of Jawi handwriting recognition.
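The lexicon-free decoding scheme described above (one length head plus seven per-position character heads) can be sketched as follows; the alphabet, the probability vectors, and the head outputs are placeholders, not the paper's trained model.

```python
import numpy as np

# Hypothetical alphabet and head outputs; in the paper these would come
# from a shared CNN trunk feeding one length head and seven character heads.
ALPHABET = list("abcdefghij")  # placeholder for the Jawi character set

def decode_subword(length_probs, char_probs):
    """Lexicon-free decoding: pick the sub-word length first, then take
    the argmax character from each of the first `length` position heads."""
    length = int(np.argmax(length_probs)) + 1        # heads predict 1..7
    chars = [ALPHABET[int(np.argmax(p))] for p in char_probs[:length]]
    return "".join(chars)

rng = np.random.default_rng(0)
length_probs = np.eye(7)[2]                  # pretend the length head says 3
char_probs = rng.random((7, len(ALPHABET)))  # outputs of 7 position heads
print(decode_subword(length_probs, char_probs))
```

Because the output is assembled character by character, no lexicon lookup is needed, which is the point of the multi-classifier design.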
MCCFNet: multi-channel color fusion network for cognitive classification of traditional Chinese paintings.
The computational modeling and analysis of traditional Chinese painting rely heavily on cognitive classification based on visual perception. This approach is crucial for understanding and identifying artworks created by different artists. However, the effective integration of visual perception into artificial intelligence (AI) models remains largely unexplored. Additionally, the classification research of Chinese painting faces certain challenges, such as insufficient investigation into the specific characteristics of painting images for author classification and recognition. To address these issues, we propose a novel framework called the multi-channel color fusion network (MCCFNet), which aims to extract visual features from diverse color perspectives. By considering multiple color channels, MCCFNet enhances the ability of AI models to capture the intricate details and nuances present in Chinese painting. To improve the performance of the DenseNet model, we introduce a regional weighted pooling (RWP) strategy specifically designed for the DenseNet169 architecture. This strategy enhances the extraction of highly discriminative features. In our experimental evaluation, we comprehensively compared the performance of our proposed MCCFNet model against six state-of-the-art models. The comparison was conducted on a dataset consisting of 2436 traditional Chinese painting (TCP) samples, derived from the works of 10 renowned Chinese artists. The evaluation metrics employed for performance assessment were Top-1 Accuracy and the area under the curve (AUC). The experimental results show that our proposed MCCFNet model significantly outperforms all other benchmarked methods, with the highest classification accuracy of 98.68%. Meanwhile, the classification accuracy of any deep learning model on TCP can be substantially improved by adopting our proposed framework.
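A minimal sketch of the multi-channel idea: extract a feature vector per color channel and concatenate them into one fused representation. Per-channel histograms stand in here for the CNN branch features that MCCFNet actually learns; the image and bin count are toy assumptions.

```python
import numpy as np

def channel_histograms(img, bins=8):
    """Per-channel intensity histograms as a simple stand-in for the
    learned per-channel features; one normalized histogram per channel,
    concatenated into a single fused vector."""
    feats = []
    for c in range(img.shape[-1]):
        hist, _ = np.histogram(img[..., c], bins=bins, range=(0.0, 1.0))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)  # fused multi-channel feature vector

# Toy "painting": random 64x64 RGB values in [0, 1).
painting = np.random.default_rng(1).random((64, 64, 3))
fused = channel_histograms(painting)
print(fused.shape)  # (24,)
```

The fused vector would then feed a classifier head; in MCCFNet the fusion happens over CNN branches rather than histograms.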
Off-line Arabic Handwriting Recognition System Using Fast Wavelet Transform
In this research, an off-line handwriting recognition system for the Arabic
alphabet is introduced. The system contains three main stages: preprocessing,
segmentation, and recognition. In the preprocessing stage, the Radon transform
was used in the design of algorithms for page, line, and word skew correction,
as well as for word slant correction. In the segmentation stage, a Hough
transform approach was used for line extraction. For line-to-word and
word-to-character segmentation, a statistical method based on a mathematical
representation of the binary images of lines and words was used. Unlike most
current handwriting recognition systems, our system simulates the human
mechanism for image recognition, where images are encoded and saved in memory
as groups according to their similarity to each other. Characters are
decomposed into coefficient vectors using the fast wavelet transform; then the
vectors that represent a character in its different possible shapes are saved
as groups, with one representative for each group. Recognition is achieved by
comparing the vector of the character to be recognized with the group
representatives. Experiments showed that the proposed system is able to achieve
the recognition task with 90.26% accuracy. The system needs at most 3.41
seconds to recognize a single character in a text of 15 lines, where each line
has 10 words on average.
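The encode-and-match scheme described above can be sketched with a one-level Haar wavelet transform and nearest-representative matching. The prototypes, labels, and single decomposition level are toy assumptions, not the paper's data or exact transform depth.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform (approximation band only),
    standing in for the fast wavelet transform used to encode characters."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # average row pairs
    a = (a[:, 0::2] + a[:, 1::2]) / 2.0       # average column pairs
    return a

def recognize(char_img, representatives):
    """Match a character's coefficient vector against one stored
    representative per shape group; the nearest representative wins."""
    v = haar2d(char_img).ravel()
    labels = list(representatives)
    dists = [np.linalg.norm(v - representatives[l]) for l in labels]
    return labels[int(np.argmin(dists))]

# Toy groups: two 8x8 prototypes (a vertical and a horizontal stroke).
proto_a = np.zeros((8, 8)); proto_a[:, 2] = 1.0
proto_b = np.zeros((8, 8)); proto_b[3, :] = 1.0
reps = {"alif": haar2d(proto_a).ravel(), "ba": haar2d(proto_b).ravel()}
print(recognize(proto_a, reps))  # "alif"
```

Storing one representative per shape group keeps the comparison step small, which is what makes the reported per-character recognition time feasible.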
A new representation for matching words
Ankara: The Department of Computer Engineering and the Institute of Engineering and Sciences of Bilkent University, 2007. Thesis (Master's) -- Bilkent University, 2007. Includes bibliographical references (leaves 77-82).
Large archives of historical documents are challenging to many researchers all
over the world. However, these archives remain inaccessible since manual indexing
and transcription of such a huge volume is difficult. In addition, electronic
imaging tools and image processing techniques gain importance with the rapid
increase in digitalization of materials in libraries and archives. In this thesis,
a language independent method is proposed for representation of word images,
which leads to retrieval and indexing of documents. While character recognition
methods suffer from preprocessing and overtraining, we make use of another
method, which is based on extracting words from documents and representing
each word image with the features of invariant regions. The bag-of-words approach,
which has been shown to be successful at classifying objects and scenes, is adapted
for matching words. Since curvature points, connection points, and dots are
important visual features for distinguishing two words from each other, we make
use of salient points, which have been shown to be successful at representing
such distinctive areas and are heavily used for matching. The Difference of
Gaussian (DoG) detector, which is able to find scale-invariant regions, and the
Harris Affine detector, which detects affine-invariant regions, are used to
detect such areas, and the detected keypoints are described with Scale Invariant
Feature Transform (SIFT) features.
Then, each word image is represented by a set of visual terms which are obtained
by vector quantization of SIFT descriptors and similar words are matched based
on the similarity of these representations by using different distance measures.
These representations are used both for document retrieval and word spotting.
The experiments are carried out on Arabic, Latin and Ottoman datasets,
which included different writing styles and different writers. The results show
that the proposed method is successful at the retrieval and indexing of
documents even with different scripts and different writers, and since it is
language independent, it can be easily adapted to other languages as well. The
retrieval performance of the system is comparable to the state-of-the-art
methods in this field. In addition, the system is successful at capturing
semantic similarities, which is useful for indexing, and it does not include
any supervision step.
Ataer, Esra. M.S.
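The bag-of-visual-words matching pipeline described in the thesis can be sketched as follows, assuming random descriptor vectors in place of real SIFT output and a random codebook in place of one learned by vector quantization over the corpus.

```python
import numpy as np

rng = np.random.default_rng(2)
codebook = rng.random((16, 128))   # 16 visual terms; SIFT descriptors are 128-D

def bow_histogram(descriptors, codebook):
    """Vector-quantize local descriptors (e.g. SIFT) to their nearest
    visual term and return a normalized term-frequency histogram."""
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=-1)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

word_a = rng.random((30, 128))     # descriptors from one word image
word_b = rng.random((25, 128))
h_a = bow_histogram(word_a, codebook)
h_b = bow_histogram(word_b, codebook)
dist = np.linalg.norm(h_a - h_b)   # smaller distance = more similar words
print(round(float(dist), 4))
```

Any of the distance measures mentioned in the thesis (Euclidean here; others work the same way) can be applied to these fixed-length histograms, which is what makes the representation usable for both retrieval and word spotting.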
Handwritten Text Generation from Visual Archetypes
Generating synthetic images of handwritten text in a writer-specific style is
a challenging task, especially in the case of unseen styles and new words, and
even more so when the latter contain characters that are rarely encountered
during training. While emulating a writer's style has been recently addressed
by generative models, the generalization towards rare characters has been
disregarded. In this work, we devise a Transformer-based model for Few-Shot
styled handwritten text generation and focus on obtaining a robust and
informative representation of both the text and the style. In particular, we
propose a novel representation of the textual content as a sequence of dense
vectors obtained from images of symbols written as standard GNU Unifont glyphs,
which can be considered their visual archetypes. This strategy is more suitable
for generating characters that, despite having been seen rarely during
training, possibly share visual details with the frequently observed ones. As
for the style, we obtain a robust representation of unseen writers' calligraphy
by exploiting specific pre-training on a large synthetic dataset. Quantitative
and qualitative results demonstrate the effectiveness of our proposal in
generating words in unseen styles and with rare characters more faithfully than
existing approaches relying on independent one-hot encodings of the characters.
Comment: Accepted at CVPR202