5 research outputs found

    Robust Ulcer Classification: Contrast and Illumination Invariant Approach

    No full text
    Gastrointestinal (GI) disease cases are on the rise throughout the world. Ulcers, the most common type of GI disease, can cause internal bleeding if left untreated, resulting in anemia and bloody vomiting. Early detection and classification of different types of ulcers can reduce the death rate and severity of the disease. Manual detection and classification of ulcers are tedious and error-prone, which calls for automated systems based on computer vision techniques to detect and classify ulcers in image and video data. A major challenge in accurate detection and classification is dealing with the similarity among classes and the poor quality of input images: improper contrast and illumination reduce the achievable classification accuracy. In this paper, contrast and illumination invariance was achieved by applying log transformation and power-law transformation. Optimal parameter values for both techniques were determined and combined to obtain a fused image dataset. Augmentation was used to handle overfitting, and classification was performed using the lightweight and efficient deep learning model MobileNetV2. Experiments were conducted on the Kvasir dataset to assess the efficacy of the proposed approach. An accuracy of 96.71% was achieved, a considerable improvement over state-of-the-art techniques.
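The two intensity corrections named in the abstract have standard closed forms: the log transform s = c·log(1 + r) and the power-law (gamma) transform s = c·r^γ. A minimal NumPy sketch follows; the parameter values (c = 1.0, γ = 0.6) and the averaging-based fusion are illustrative assumptions, since the abstract does not give the paper's optimal parameters or exact fusion rule:

```python
import numpy as np

def log_transform(img, c=1.0):
    """Log transform s = c*log(1+r): boosts dark detail, compresses highlights."""
    r = img.astype(np.float64) / 255.0            # normalize to [0, 1]
    s = c * np.log1p(r) / np.log1p(1.0)           # scale so r=1 maps to c
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)

def power_law_transform(img, gamma=0.6, c=1.0):
    """Power-law (gamma) transform s = c*r**gamma on [0, 1] intensities."""
    r = img.astype(np.float64) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)

def fuse(img, gamma=0.6, w=0.5):
    """One simple way to 'fuse' the two corrected images: a weighted average."""
    a = log_transform(img).astype(np.float64)
    b = power_law_transform(img, gamma=gamma).astype(np.float64)
    return np.clip(w * a + (1 - w) * b, 0, 255).astype(np.uint8)
```

Gamma values below 1 brighten dark regions while values above 1 darken bright ones, which is why a fused pair of corrections can be robust to both under- and over-exposed endoscopy frames.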

    Deep-COVID: Detection and Analysis of COVID-19 Outcomes Using Deep Learning

    No full text
    The coronavirus disease (COVID-19) epidemic is growing quickly around the globe. The first acute atypical respiratory illness was reported in December 2019 in Wuhan, China, and the disease quickly spread from Wuhan to other locations. Deep learning (DL) algorithms are among the most effective tools for recognizing COVID-19 consistently and readily. Previously, many researchers applied state-of-the-art approaches to the classification of COVID-19. In this paper, we present a deep learning approach based on the EfficientNetB4 model and transfer learning for the classification of COVID-19. Transfer learning is a popular technique that reuses models pre-trained on the ImageNet database for a new problem in order to improve generalization. We present an in-depth training approach to extract the visual properties of COVID-19, providing a medical assessment before infection testing. The proposed methodology is assessed on a publicly accessible X-ray imaging dataset and achieves an accuracy of 97%. Our experimental findings demonstrate that the model is highly effective at identifying COVID-19 and could be supplied to health organizations as a precise, quick, and effective decision support system for COVID-19 identification.
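Transfer learning as described, freezing a pretrained backbone and training only a new classification head, can be sketched without a deep learning framework by letting a frozen random feature extractor stand in for the EfficientNetB4 base. Everything below, including the synthetic stand-in for the X-ray data, is an illustrative assumption; only the head's weights are ever updated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained EfficientNetB4 base: a frozen random
# feature extractor (an assumption for illustration, not the real network).
W_frozen = rng.normal(size=(64, 32)) / np.sqrt(64)

def backbone(x):
    """Frozen feature extractor: W_frozen is never updated during training."""
    return np.maximum(x @ W_frozen, 0.0)      # ReLU activations

# Synthetic stand-in for the labeled X-ray dataset.
X = rng.normal(size=(200, 64))
F = backbone(X)
true_w = rng.normal(size=32)
y = (F @ true_w > 0).astype(float)            # separable in feature space

# Transfer learning: only the new classification head is trained.
w, b, lr = np.zeros(32), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))    # sigmoid output
    w -= lr * F.T @ (p - y) / len(y)          # logistic-loss gradient step
    b -= lr * np.mean(p - y)

acc = np.mean(((F @ w + b) > 0) == (y == 1))
print(f"head-only training accuracy: {acc:.2f}")
```

In a framework such as Keras the same idea is `base.trainable = False` followed by fitting a small dense head on top of the frozen base.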

    Deep Feature Extraction for Detection of COVID-19 Using Deep Learning

    No full text
    SARS-CoV-2, the coronavirus responsible for the severe acute respiratory syndrome underlying COVID-19, is a novel respiratory pathogen that has affected the entire international community and created severe health and safety concerns all over the globe. Identifying an outbreak in its initial phase may aid successful recovery, and rapid, exact identification of COVID-19 limits the risk of spreading this fatal disease. Patients with COVID-19 show distinctive radiographic characteristics on chest X-rays (CXR) and CT scans, so CXR images can be used to diagnose the disease early. This research focused on deep feature extraction and the accurate detection and prediction of COVID-19 from X-ray images. The proposed concatenated CNN model is based on two deep learning models (Xception and ResNet101) for CXR images: both models are used to extract features, which are then combined using a concatenation technique. The particle swarm optimization method is then applied to the concatenated features to select an optimal subset from the overall feature vector; this selection also decreases the classification time. To evaluate the proposed approach, experiments were conducted on CXR images collected from three different sources. The results demonstrate the efficiency of the proposed scheme for detecting COVID-19, with average accuracies of 99.77%, 99.72%, and 99.73% for datasets 1, 2, and 3, respectively. The model also achieved average COVID-19 sensitivities of 96.6%, 97.18%, and 98.88% for datasets 1, 2, and 3, respectively. The maximum overall accuracy across all classes (normal, pneumonia, and COVID-19) was about 98.02%.
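Feature concatenation followed by particle swarm optimization over binary selection masks might look like the sketch below. The fitness function (a Fisher-style class-separation score with a size penalty), the PSO coefficients, and the synthetic stand-in features are all assumptions, since the abstract does not specify them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for features from two backbones (Xception, ResNet101):
# synthetic vectors where only dimension 0 of each carries class signal.
n, d1, d2 = 120, 10, 10
y = rng.integers(0, 2, size=n)
f_a = rng.normal(size=(n, d1)); f_a[:, 0] += 2.0 * y   # informative dim
f_b = rng.normal(size=(n, d2)); f_b[:, 0] += 2.0 * y   # informative dim
F = np.concatenate([f_a, f_b], axis=1)                  # concatenated vector

def fitness(mask):
    """Fisher-style separation of the selected features, minus a size penalty."""
    if mask.sum() == 0:
        return -1.0
    sel = F[:, mask.astype(bool)]
    mu0, mu1 = sel[y == 0].mean(0), sel[y == 1].mean(0)
    score = np.abs(mu0 - mu1).mean() / (sel.std(0).mean() + 1e-9)
    return score - 0.02 * mask.sum()

# Binary PSO: each particle is a probability vector thresholded into a mask.
P, D, iters = 12, F.shape[1], 40
pos = rng.random((P, D)); vel = rng.normal(scale=0.1, size=(P, D))
pbest = pos.copy()
pbest_fit = np.array([fitness(p > 0.5) for p in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()
for _ in range(iters):
    r1, r2 = rng.random((P, D)), rng.random((P, D))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    fit = np.array([fitness(p > 0.5) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[np.argmax(pbest_fit)].copy()

selected = gbest > 0.5    # optimal feature subset found by the swarm
```

The size penalty is what makes the search prefer a small subset, which is how feature selection can shorten the downstream classification step.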

    Deep Learning for Sarcasm Identification in News Headlines

    No full text
    Sarcasm is a mode of expression whereby individuals communicate their positive or negative sentiments through words contrary to their intent. This communication style is prevalent in news headlines and social media platforms, making it increasingly challenging for individuals to detect sarcasm accurately. To mitigate this challenge, developing an intelligent system that can detect sarcasm in headlines and news is imperative. This research paper proposes a deep learning architecture-based model for sarcasm identification in news headlines. The proposed model has three main objectives: (1) to comprehend the original meaning of the text or headlines, (2) to learn the nature of sarcasm, and (3) to detect sarcasm in the text or headlines. Previous studies on sarcasm detection have used datasets of tweets, relying on hashtags to differentiate ordinary from sarcastic tweets within limited datasets; such datasets are prone to noise in both language and tags. In contrast, this study uses multiple datasets to provide a more comprehensive understanding of sarcasm detection in online communication. By incorporating different types of sarcasm from the Sarcasm Corpus V2 from Baskin Engineering and sarcastic news headlines from The Onion and HuffPost, the study aims to develop a model that generalizes well across contexts. The proposed model uses an LSTM to capture temporal dependencies and a GlobalMaxPool1D layer for better feature extraction. The model achieved accuracy scores of 0.999 on the training data and 0.925 on the test data.
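The two layers named in the architecture can be illustrated in plain NumPy: a minimal LSTM cell producing one hidden state per timestep, followed by GlobalMaxPool1D, which keeps the maximum activation of each hidden unit across time. The weights and "word embeddings" below are random stand-ins for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_forward(x_seq, Wx, Wh, bias):
    """Minimal LSTM over a sequence; returns all hidden states, shape (T, H)."""
    T = x_seq.shape[0]
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    outs = []
    for t in range(T):
        z = x_seq[t] @ Wx + h @ Wh + bias        # (4H,) gate pre-activations
        i, f, g, o = np.split(z, 4)
        sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)               # cell-state update
        h = o * np.tanh(c)                       # hidden state at step t
        outs.append(h)
    return np.stack(outs)

def global_max_pool_1d(h_seq):
    """GlobalMaxPool1D: max activation of each unit over the time axis."""
    return h_seq.max(axis=0)

E, H, T = 8, 4, 6                                # embed dim, hidden dim, steps
Wx = rng.normal(scale=0.3, size=(E, 4 * H))
Wh = rng.normal(scale=0.3, size=(H, 4 * H))
bias = np.zeros(4 * H)
tokens = rng.normal(size=(T, E))                 # stand-in word embeddings
pooled = global_max_pool_1d(lstm_forward(tokens, Wx, Wh, bias))
```

Pooling over time turns a variable-length sequence of hidden states into a fixed-size vector, so headlines of different lengths can feed the same dense classification layer.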

    Hybrid Facial Emotion Recognition Using CNN-Based Features

    No full text
    In computer vision, the convolutional neural network (CNN) is a very popular model for emotion recognition and has been successfully applied to detect various objects in digital images with remarkable accuracy. In this paper, we extracted learned features from a pre-trained CNN and evaluated different machine learning (ML) algorithms for classification. Our research examines the impact of replacing the standard SoftMax classifier with other ML algorithms applied to the FC6, FC7, and FC8 layers of Deep Convolutional Neural Networks (DCNNs). Experiments were conducted on two well-known CNN architectures, AlexNet and VGG-16, using a dataset of masked facial expressions (the MLF-W-FER dataset). The results demonstrate that Support Vector Machine (SVM) and Ensemble classifiers outperform the SoftMax classifier on both AlexNet and VGG-16, achieving accuracy improvements of between 7% and 9% at each layer. This suggests that replacing the classifier at each layer of a DCNN with an SVM or ensemble classifier can be an efficient way to enhance image classification performance. Overall, our research demonstrates the potential of combining the strengths of CNNs and other ML algorithms to achieve better results in emotion recognition tasks. By extracting learned features from pre-trained CNNs and applying a variety of classifiers, we provide a framework for investigating alternative methods of improving image classification accuracy.
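Replacing the SoftMax classifier with an SVM amounts to taking activations from an FC layer and training the SVM on them. A self-contained sketch follows, using synthetic stand-in "FC7" features and a Pegasos-style linear SVM; the paper's actual features come from AlexNet/VGG-16 and its SVM/ensemble implementations are not specified in the abstract, so every detail here is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for FC7 activations of a pretrained CNN: synthetic two-class
# features where only dimension 0 carries the class signal.
n, d = 300, 20
y = np.where(rng.random(n) < 0.5, -1.0, 1.0)     # labels in {-1, +1}
X = rng.normal(size=(n, d))
X[:, 0] += 1.5 * y                                # shift the informative dim

# Pegasos-style stochastic subgradient training of a linear SVM.
lam = 0.01
w = np.zeros(d)
for t in range(1, 2001):
    i = rng.integers(n)                           # sample one example
    eta = 1.0 / (lam * t)                         # decaying step size
    margin = y[i] * (X[i] @ w)
    w *= (1.0 - eta * lam)                        # L2-regularization shrink
    if margin < 1.0:                              # inside the margin:
        w += eta * y[i] * X[i]                    # hinge-loss subgradient step

acc = np.mean(np.sign(X @ w) == y)
print(f"linear-SVM training accuracy: {acc:.2f}")
```

With real FC-layer features the only change is the data-loading step: run images through the frozen network, collect the FC6/FC7/FC8 activations, and fit the SVM on those vectors instead of the synthetic `X`.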