
    Malayalam Handwritten Character Recognition Using AlexNet Based Architecture

    This research article proposes a new handwritten Malayalam character recognition model built on an AlexNet-based architecture. The Malayalam language contains many characters with similar features, so differentiating them is a challenging task. Many handcrafted feature-extraction methods have previously been used for the classification of Malayalam characters. Convolutional Neural Networks (CNNs) are among the most popular methods in image and language recognition. An AlexNet-based CNN is proposed for feature extraction of basic and compound Malayalam characters, and a Support Vector Machine (SVM) is used for their classification. The 44 primary and 36 compound Malayalam characters are recognised with high accuracy and low time consumption using this model. A dataset of about 180,000 characters is used for training and testing, on which the proposed model achieves an efficiency of 98%. In addition, a dataset of Malayalam characters was developed in this research work and shared on the Internet.
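
    A minimal sketch of the pipeline this abstract describes, assuming a PyTorch and scikit-learn stack: an ImageNet-pretrained AlexNet trunk yields 4096-dimensional features, which an SVM then classifies. The preprocessing, the RBF kernel, and the variable names are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.svm import SVC

# ImageNet-pretrained AlexNet, used as a fixed feature extractor.
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

# Keep everything up to (but not including) the final 1000-way layer: 4096-d features.
extractor = nn.Sequential(
    alexnet.features,
    alexnet.avgpool,
    nn.Flatten(),
    *list(alexnet.classifier.children())[:-1],
)

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # character scans are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def extract_features(pil_images):
    """List of PIL character images -> (N, 4096) NumPy feature matrix."""
    batch = torch.stack([preprocess(im) for im in pil_images])
    with torch.no_grad():
        return extractor(batch).numpy()

# Hypothetical usage on the Malayalam character dataset:
# svm = SVC(kernel="rbf").fit(extract_features(train_images), train_labels)
# predictions = svm.predict(extract_features(test_images))
```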

    A Novel Deep Convolutional Neural Network Architecture Based on Transfer Learning for Handwritten Urdu Character Recognition

    Deep convolutional neural networks (CNNs) have made a huge impact on computer vision and set the state of the art by providing highly accurate classification results. For character recognition, where training images are usually scarce, transfer learning from pre-trained CNNs is often utilized. In this paper, we propose a novel deep convolutional neural network for handwritten Urdu character recognition, built by transfer learning from three pre-trained CNN models. We fine-tuned the layers of these pre-trained CNNs so as to extract features capturing both global and local details of Urdu character structure. The features extracted by the three CNN models are concatenated and passed to two fully connected layers for classification. Experiments are conducted on the UNHD, EMILLE, DBAHCL, and CDB/Farsi datasets, and we achieve 97.18% average recognition accuracy, which outperforms the individual CNNs and numerous conventional classification methods.
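
    The feature-concatenation idea could look roughly like the sketch below. The three backbones chosen here (ResNet-18, VGG-16, DenseNet-121) and the hidden-layer size are assumptions for illustration; the paper fine-tunes its own three pre-trained models.

```python
import torch
import torch.nn as nn
from torchvision import models

class FusionNet(nn.Module):
    """Concatenate features from three pre-trained backbones, then classify."""
    def __init__(self, num_classes):
        super().__init__()
        self.b1 = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.b1.fc = nn.Identity()                                                 # -> 512-d
        self.b2 = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features      # -> 512 maps
        self.b3 = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT).features  # -> 1024 maps
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Two fully connected layers on the concatenated features, as in the abstract.
        self.head = nn.Sequential(
            nn.Linear(512 + 512 + 1024, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        f1 = self.b1(x)                        # (N, 512)
        f2 = self.pool(self.b2(x)).flatten(1)  # (N, 512)
        f3 = self.pool(self.b3(x)).flatten(1)  # (N, 1024)
        return self.head(torch.cat([f1, f2, f3], dim=1))

# model = FusionNet(num_classes=40)  # class count depends on the Urdu character set
```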

    Deep Learning Based Models for Offline Gurmukhi Handwritten Character and Numeral Recognition

    Over the last few years, several researchers have worked on handwritten character recognition and have proposed various techniques to improve recognition performance for Indic and non-Indic scripts. Here, a deep convolutional neural network is proposed that learns deep features for offline Gurmukhi handwritten character and numeral recognition (HCNR). The proposed network trains and tests efficiently and exhibits good recognition performance. Two primary datasets, comprising offline handwritten Gurmukhi characters and Gurmukhi numerals, have been employed in the present work. The testing accuracies achieved using the proposed network are 98.5% for characters and 98.6% for numerals.
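
    The abstract does not specify the network, so the following is only an illustrative compact CNN for small grayscale character images; the depth, filter counts, and the 32 × 32 input size are all assumed.

```python
import torch.nn as nn

def make_hcnr_cnn(num_classes=35):  # e.g. 35 Gurmukhi letters; use 10 for numerals
    """Three conv/pool stages on 32x32 grayscale inputs, then two dense layers."""
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 8x8 -> 4x4
        nn.Flatten(),
        nn.Linear(128 * 4 * 4, 256), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(256, num_classes),
    )
```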

    A new hybrid convolutional neural network and eXtreme gradient boosting classifier for recognizing handwritten Ethiopian characters

    Handwritten character recognition has been studied in depth for many years in the field of pattern recognition, and due to its vast practical applications and financial implications it remains an important research area. In this research, the Handwritten Ethiopian Character Recognition (HECR) dataset has been prepared to train the model. The images in the HECR dataset were written with pens of more than one color, stored in RGB color space, and size-normalized to 28 × 28 pixels. The dataset is a combination of scripts (Fidel in Ethiopia), numerical representations, punctuation, tonal symbols, combining symbols, and special characters. These scripts have been used to write the ancient histories, science, and arts of Ethiopia and Eritrea. In this study, a hybrid model of two super classifiers, a Convolutional Neural Network (CNN) and eXtreme Gradient Boosting (XGBoost), is proposed for classification. In this integrated model, the CNN works as a trainable automatic feature extractor from the raw images, and XGBoost takes the extracted features as input for recognition and classification. The output error rates of the hybrid model and of a CNN with a fully connected layer are compared: error rates of 0.4630 and 0.1612 are achieved in classifying the handwritten test images, respectively. Thus XGBoost as a classifier performs better than the traditional fully connected layer.
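
    One plausible reading of the hybrid, sketched under assumptions: a CNN is first trained end-to-end, then its penultimate activations become the input features for an XGBoost classifier. The 28 × 28 RGB input follows the abstract, but the layer sizes and the 300-class head are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn
import xgboost as xgb

# A small CNN for the 28x28 RGB HECR images; trained end-to-end first (not shown),
# then sliced so its penultimate 128-d activations feed XGBoost.
cnn = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 14
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 7
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 128), nn.ReLU(),   # penultimate feature layer
    nn.Linear(128, 300),                     # logits used only while training the CNN
)

def cnn_features(x):
    """Run the trained CNN up to its final layer; returns (N, 128) features."""
    with torch.no_grad():
        return cnn[:-1](x).numpy()

# Hypothetical usage with tensors/labels from the HECR dataset:
# booster = xgb.XGBClassifier(tree_method="hist")
# booster.fit(cnn_features(train_x), train_y)
# error_rate = 1.0 - booster.score(cnn_features(test_x), test_y)
```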

    Development of Machine Learning Algorithms for the Automatic Description of X-ray Images

    The objective of this work is to implement several machine learning algorithms for the automatic analysis of X-ray images and to compare their results with one another and with existing systems. In the experiments, two types of architectures, four individual architectures, and one ensemble of neural networks were compared. Three image-preprocessing methods, fast Fourier transform, random image inversion, and conversion to a heatmap, were evaluated for their effect on classification results and on the features the networks pick out in the images. Classifiers were trained both from randomly initialized weights and in several variants of transfer learning, and the behaviour of all classifiers was analysed using class activation maps. The most widely used chest X-ray dataset was analysed and its shortcomings formulated, including systematic errors in the dataset and the tendency of neural networks to latch onto insignificant features. Finally, recommendations were formulated for the further improvement of systems for automatic X-ray image analysis.
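
    A rough sketch of the three preprocessing variants named in the abstract (fast Fourier transform, random inversion, heatmap conversion), assuming NumPy/Matplotlib and a grayscale image scaled to [0, 1]; the high-pass filter radius and the colormap choice are assumptions.

```python
import numpy as np
from matplotlib import colormaps

def fft_highpass(img, radius=10):
    """Suppress low spatial frequencies of a [0, 1] grayscale image via the 2-D FFT."""
    f = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    keep = (y - rows // 2) ** 2 + (x - cols // 2) ** 2 > radius ** 2
    return np.abs(np.fft.ifft2(np.fft.ifftshift(f * keep)))

def random_invert(img, p=0.5, rng=np.random.default_rng()):
    """Randomly invert intensities, an augmentation against polarity bias."""
    return 1.0 - img if rng.random() < p else img

def to_heatmap(img):
    """Map grayscale intensities to a 3-channel pseudo-color ('heatmap') image."""
    return colormaps["jet"](img)[..., :3]  # drop the alpha channel
```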

    Data-Efficient Machine Learning with Focus on Transfer Learning

    Machine learning (ML) has attracted a significant amount of attention from the artificial intelligence community. ML has shown state-of-the-art performance in various fields, such as signal processing, healthcare systems, and natural language processing (NLP). However, most conventional ML algorithms suffer from three significant difficulties: 1) insufficient high-quality training data, 2) a costly training process, and 3) domain discrepancy. It is therefore important to develop solutions for these problems so that the future of ML will be more sustainable. Recently, a new concept, data-efficient machine learning (DEML), has been proposed to deal with the current bottlenecks of ML, and transfer learning (TL) has been considered an effective remedy for the three shortcomings of conventional ML. TL is one of the most active areas within DEML, and significant progress has been made in it over the past ten years. In this dissertation, I propose to address the three problems by developing a software-oriented framework and TL algorithms. I first present the first well-defined DEML framework, together with an evaluation system, and show how it can address the challenges in ML. I then give an updated overview of the state of the art and of open challenges in TL, and introduce two novel algorithms for two of the most challenging TL topics: distant-domain TL and cross-modality TL (image-text). A detailed introduction to the algorithms and preliminary results on real-world applications (Covid-19 diagnosis and image classification) are presented. Finally, I discuss current trends in TL algorithms and real-world applications, and close with conclusions and future research directions.
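
    As a concrete instance of the transfer-learning pattern the dissertation builds on, here is a generic fine-tuning sketch: freeze a pre-trained backbone and retrain only a small task head. The ResNet-50 backbone, the two-class head, and the hyperparameters are illustrative, not taken from the text.

```python
import torch
import torch.nn as nn
from torchvision import models

# Freeze an ImageNet-pretrained backbone and retrain only a small task head.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                          # reuse source-domain knowledge as-is
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # e.g. Covid-19 vs. normal (assumed)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    """One gradient step on a batch; only the new head's weights change."""
    optimizer.zero_grad()
    loss = loss_fn(backbone(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```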