
    Malayalam Handwritten Character Recognition using CNN Architecture

    The process of encoding an input text image into a machine-readable format is called optical character recognition (OCR). Because each language has distinct characteristics, it is difficult to develop a universal method that achieves high accuracy for all languages; a method that produces good results for one language may not produce comparable results for another. OCR for printed characters is easier than for handwritten characters because of the uniformity of printed text. While conventional methods have struggled to improve on existing results, Convolutional Neural Networks (CNNs) have shown dramatic improvement in the classification and recognition of other languages. However, there is no CNN-based OCR model for Malayalam characters. Our proposed system uses a new CNN architecture for feature extraction and a softmax layer for character classification, eliminating the manual feature design used in conventional methods. The P-ARTS Kayyezhuthu dataset is used to train the CNN; an accuracy of 99.75% is obtained on the testing dataset, while a collection of 40 real-time input images yields an accuracy of 95%.
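
    As a rough illustration of the kind of pipeline the abstract describes (learned convolutional features followed by a softmax classifier), the sketch below builds a small CNN in Keras. The input size, layer counts, filter widths, and class count are assumptions for illustration; the abstract does not specify the actual architecture used with the P-ARTS Kayyezhuthu dataset.

        import tensorflow as tf
        from tensorflow.keras import layers, models

        NUM_CLASSES = 44   # assumed; the abstract does not state the number of character classes
        IMG_SIZE = 32      # assumed input resolution for the character images

        # Convolutional layers learn features directly from pixels, replacing
        # the hand-designed features of conventional OCR methods.
        model = models.Sequential([
            layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),          # grayscale character image
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(NUM_CLASSES, activation="softmax"),      # softmax layer for classification
        ])

        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])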

    Residual U-Net approach for thyroid nodule detection and classification from thyroid ultrasound images

    With so many thyroid nodules discovered incidentally, it is critical to recognize as many abnormal nodules as possible from fine-needle aspiration (FNA) biopsies or other medical procedures while excluding those that are almost certainly benign. Thyroid ultrasonography, however, is prone to interobserver variability and subjective interpretation. This study presents a deep learning model for segmenting and classifying thyroid nodules that follows these stages: data collection from a well-known archive, the Thyroid Digital Image Database (TDID), which comprises ultrasound images from 298 patients; preprocessing with an anisotropic diffusion filter (ADF) to remove noise and enhance the images; segmentation using a bilateral filter; feature extraction using the grey level co-occurrence matrix (GLCM); feature selection using Multi-objective Particle Swarm with Random Forest Optimization (MbPSRA); and finally classification using a Residual U-Net. Experimental evaluation shows that the proposed model outperforms other state-of-the-art models.
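
    To make two of the listed stages concrete, the sketch below applies a bilateral filter (edge-preserving smoothing) and extracts GLCM texture features with OpenCV and scikit-image. The filter parameters, offsets, and property list are illustrative assumptions rather than the settings used in the paper, and the anisotropic diffusion, MbPSRA feature selection, and Residual U-Net stages are omitted.

        import cv2
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(ultrasound_path):
            # Load the thyroid ultrasound image as grayscale.
            img = cv2.imread(ultrasound_path, cv2.IMREAD_GRAYSCALE)

            # Edge-preserving smoothing; the paper applies the bilateral filter
            # at its segmentation stage, here it serves only as a denoising step.
            smoothed = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

            # Grey level co-occurrence matrix over a few assumed offsets/angles.
            glcm = graycomatrix(smoothed, distances=[1, 2], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)

            # Standard GLCM texture properties, flattened into one feature vector.
            props = ["contrast", "homogeneity", "energy", "correlation"]
            return np.hstack([graycoprops(glcm, p).ravel() for p in props])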