
    Recognizing Pneumonia Infection in Chest X-Ray Using Deep Learning

    One of the diseases that attacks the lungs is pneumonia: an inflammation of the lungs in which fluid accumulates, making it difficult to breathe. The disease is diagnosed using chest X-ray images. Against the darker background of the lungs, infected tissue appears as denser regions, visible as white spots called infiltrates. In the image-processing approach, pneumonia-infected X-rays can be detected using machine learning as well as deep learning. Convolutional neural network models recognize images well and can focus on patterns that are difficult for the human eye to see. Previous research using convolutional neural network models with 10 and 6 convolution layers did not achieve optimal accuracy. The aim of this research is to develop convolutional neural networks with simpler architectures, namely two and three convolution layers, for the same problem, and to examine combinations of hyperparameter values and regularization techniques in order to determine which architecture performs better. As a result, the convolutional neural network classification model recognizes chest X-rays infected with pneumonia very well. The best classification model achieved an average accuracy of 89.743% with a three-convolution-layer architecture, batch size 32, L2 regularization 0.0001, and dropout 0.2. Precision reached 94.091%, recall 86.456%, F1-score 89.601%, specificity 85.491%, and error rate 10.257%. Based on these results, convolutional neural network models have the potential to assist in diagnosing pneumonia and other diseases.
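The mechanism the abstract relies on — convolution filters responding to bright infiltrate-like regions against a darker background — can be illustrated with a minimal numpy sketch. This is not the paper's model; the toy image, kernel, and sizes are all illustrative assumptions.

```python
import numpy as np

def conv2d(img, kernel):
    # 'valid'-mode 2D cross-correlation: the core operation of a convolution layer
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "X-ray": dark background (0) with a bright infiltrate-like blob (1)
img = np.zeros((8, 8))
img[3:6, 3:6] = 1.0

# A 3x3 averaging kernel responds most strongly where the blob fills the window
kernel = np.ones((3, 3)) / 9.0
response = conv2d(img, kernel)
peak = np.unravel_index(np.argmax(response), response.shape)
print(peak)  # -> (3, 3): the window position fully covering the blob
```

A trained CNN learns many such kernels per layer; stacking two or three convolution layers, as in the paper's best model, lets later layers respond to combinations of these local patterns.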

    Compact & Capable: Harnessing Graph Neural Networks and Edge Convolution for Medical Image Classification

    Graph-based neural network models are gaining traction in the field of representation learning due to their ability to uncover latent topological relationships between entities that are otherwise challenging to identify. These models have been employed across a diverse range of domains, encompassing drug discovery, protein interactions, semantic segmentation, and fluid dynamics research. In this study, we investigate the potential of Graph Neural Networks (GNNs) for medical image classification. We introduce a novel model that combines GNNs and edge convolution, leveraging the interconnectedness of RGB channel feature values to strongly represent connections between crucial graph nodes. Our proposed model not only performs on par with state-of-the-art Deep Neural Networks (DNNs) but does so with 1000 times fewer parameters, resulting in reduced training time and data requirements. We compare our Graph Convolutional Neural Network (GCNN) to pre-trained DNNs for classifying MedMNIST dataset classes, revealing promising prospects for GNNs in medical image analysis. Our results also encourage further exploration of advanced graph-based models such as Graph Attention Networks (GAT) and Graph Auto-Encoders in the medical imaging domain. The proposed model yields more reliable, interpretable, and accurate outcomes for tasks like semantic segmentation and image classification compared to simpler GCNNs.
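The edge-convolution idea the abstract combines with GNNs can be sketched in numpy: each node aggregates messages built from its own features and the difference to each neighbor's features, then max-pools over neighbors. The graph, feature sizes, and weight matrix below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def edge_conv(X, neighbors, W):
    # EdgeConv-style update: for node i and each neighbor j, build the edge
    # feature [x_i, x_j - x_i], apply a linear map + ReLU, then max-aggregate.
    N, F = X.shape
    out = np.full((N, W.shape[1]), -np.inf)
    for i in range(N):
        for j in neighbors[i]:
            edge_feat = np.concatenate([X[i], X[j] - X[i]])  # shape (2F,)
            msg = np.maximum(edge_feat @ W, 0.0)             # linear + ReLU
            out[i] = np.maximum(out[i], msg)                 # max over neighbors
    return out

# Toy graph: 3 nodes with 2-d features (standing in for RGB-derived values)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
neighbors = {0: [1, 2], 1: [0], 2: [0, 1]}

# Hypothetical weights that simply pass through the x_i part of the edge feature
W = np.eye(4)[:, :2]
H = edge_conv(X, neighbors, W)
print(H.shape)  # -> (3, 2)
```

In a real model the linear map would be a small learned MLP and the graph would be built from image patches or channel features, but the aggregation pattern is the same.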

    Improvement of Convolutional Neural Network Accuracy on Salak Classification Based Quality on Digital Image

    Salak is a seasonal fruit with high export value. The success of salak export is influenced by the selection process, but problems remain: selection is still done manually and is prone to misclassification. Research to automate the selection of salak fruit has been done before, using a convolutional neural network (CNN) on images of the fruit. The accuracy reported in that previous research was 70.7% for a four-class classification model and 81.45% for a two-class classification model. This research was conducted to improve the classification accuracy for export-grade salak over the previous work. Accuracy is improved by changing the noise-removal process to produce better images, and by modifying the CNN architecture with deeper convolution layers and additional parameters such as stride, zero padding, and the Adam optimizer. The results show that accuracy increased by 22.72%, from 70.70% to 93.42%, for the four-class CNN model, and by 13.29%, from 81.45% to 94.74%, for the two-class model.
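The abstract attributes part of the accuracy gain to a better noise-removal step before classification. A median filter is one common choice for removing impulse noise from fruit images; the sketch below is illustrative and not necessarily the paper's exact preprocessing.

```python
import numpy as np

def median_filter(img, k=3):
    # k x k median filter: replaces each pixel with the median of its
    # neighborhood, which removes salt-and-pepper noise while keeping edges.
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# Toy grayscale patch with a single salt-noise pixel
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0  # impulse noise
clean = median_filter(img)
print(clean[2, 2])  # -> 10.0: the outlier is replaced by the neighborhood median
```

Cleaner inputs of this kind reduce spurious high-frequency activations in the early convolution layers, which is one plausible route to the reported accuracy gain.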

    Real-Time Deep Learning-Based Face Recognition System

    This research proposes real-time deep learning-based face recognition algorithms using MATLAB and Python. Face recognition is the process of identifying people from facial images; the technology is applied broadly in biometrics, information security, access control, etc. A facial recognition system is built in two steps: first, the facial features are extracted; second, pattern classification is performed. Deep learning, specifically the convolutional neural network (CNN), has recently driven much of the progress in face recognition. CNNs have shown excellent performance in many fields, such as image recognition on large training datasets (e.g., ImageNet). However, due to hardware limitations and insufficient training data, high performance is not always achieved. Therefore, in this work, transfer learning is used to improve the performance of the face recognition system even with a small number of images. Two pre-trained models are used: GoogLeNet CNN (in MATLAB) and FaceNet (in Python). Transfer learning is used to fine-tune the last layer of the CNN model for the new classification task. FaceNet presents a unified system for face verification (is this the same person?), recognition (who is this person?), and clustering (find common people among these faces), based on learning a Euclidean embedding per image using a deep convolutional network.
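The FaceNet verification step described above reduces to a distance test between embedding vectors. The sketch below illustrates that idea with numpy; the 4-d embeddings and the threshold value are illustrative assumptions (real FaceNet embeddings are 128-d and the threshold is tuned on validation data).

```python
import numpy as np

def is_same_person(emb_a, emb_b, threshold=1.1):
    # FaceNet-style verification: L2-normalize both embeddings, then compare
    # their Euclidean distance against a tuned threshold (1.1 is hypothetical).
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return np.linalg.norm(a - b) < threshold

# Toy 4-d embeddings standing in for a deep network's output
anchor   = np.array([0.9, 0.1, 0.0, 0.1])
positive = np.array([0.8, 0.2, 0.1, 0.1])  # same person: embeddings are close
negative = np.array([0.0, 0.1, 0.9, 0.2])  # different person: embeddings are far
print(is_same_person(anchor, positive))  # -> True
print(is_same_person(anchor, negative))  # -> False
```

Recognition and clustering follow from the same embedding space: nearest-neighbor search answers "who is this person?", and any distance-based clustering groups images of the same identity.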