
    Lung nodule classification utilizing support vector machines

    Lung cancer is one of the deadliest and most common diseases in the world. Radiologists fail to diagnose small pulmonary nodules in as many as 30% of positive cases. Many methods have been proposed in the literature, such as neural network algorithms. Recently, support vector machines (SVMs) have received increasing attention for pattern recognition; their advantage lies in better modeling of the recognition process. The objective of this paper is to apply SVMs to the classification of lung nodules. The SVM classifier is trained with features extracted from 30 nodule images and 20 non-nodule images, and is tested with features from 16 nodule/non-nodule images. The sensitivity of the SVM classifier is found to be 87.5%. We intend to automate the pre-processing detection step to further enhance the overall classification.
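
    As a rough illustration of the classification stage described above, the sketch below trains an SVM on pre-extracted nodule/non-nodule feature vectors and reports sensitivity. The feature matrix, labels, kernel choice, and split sizes are placeholders, not the paper's actual features or data.

```python
# Minimal sketch of SVM-based nodule classification (assumed setup,
# not the paper's actual features or data).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Placeholder feature vectors: 50 training images (30 nodule, 20 non-nodule)
# and 16 test images, each described by 10 hypothetical texture/shape features.
X_train = rng.normal(size=(50, 10))
y_train = np.array([1] * 30 + [0] * 20)          # 1 = nodule, 0 = non-nodule
X_test = rng.normal(size=(16, 10))
y_test = rng.integers(0, 2, size=16)

clf = SVC(kernel="rbf", C=1.0)                   # RBF kernel as a common default
clf.fit(X_train, y_train)

# Sensitivity = recall on the positive (nodule) class.
sensitivity = recall_score(y_test, clf.predict(X_test), pos_label=1)
print(f"Sensitivity: {sensitivity:.3f}")
```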

    Automatic Pulmonary Nodule Detection in CT Scans Using Convolutional Neural Networks Based on Maximum Intensity Projection

    Accurate pulmonary nodule detection is a crucial step in lung cancer screening. Computer-aided detection (CAD) systems are not routinely used by radiologists for pulmonary nodule detection in clinical practice despite their potential benefits. Maximum intensity projection (MIP) images improve the detection of pulmonary nodules in radiological evaluation with computed tomography (CT) scans. Inspired by the clinical methodology of radiologists, we aim to explore the feasibility of applying MIP images to improve the effectiveness of automatic lung nodule detection using convolutional neural networks (CNNs). We propose a CNN-based approach that takes MIP images of different slab thicknesses (5 mm, 10 mm, 15 mm) and 1 mm axial section slices as input. Such an approach augments the two-dimensional (2-D) CT slice images with more representative spatial information that helps discriminate nodules from vessels through their morphologies. Our proposed method achieves a sensitivity of 92.67% with 1 false positive per scan and a sensitivity of 94.19% with 2 false positives per scan for lung nodule detection on 888 scans in the LIDC-IDRI dataset. The use of thick MIP images helps the detection of small pulmonary nodules (3 mm-10 mm) and results in fewer false positives. Experimental results show that utilizing MIP images can increase the sensitivity and lower the number of false positives, which demonstrates the effectiveness and significance of the proposed MIP-based CNN framework for automatic pulmonary nodule detection in CT scans. The proposed method also shows the potential for CNNs to benefit nodule detection by incorporating this clinical procedure.
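
    To make the MIP idea concrete, here is a small sketch that builds MIP slabs of different thicknesses by taking the per-pixel maximum over neighbouring axial slices. It assumes an axial CT volume stored as a NumPy array with roughly 1 mm slice spacing; the array shape and thicknesses are illustrative, not the paper's pipeline.

```python
# Sketch: maximum intensity projection (MIP) slabs from an axial CT volume.
# Assumes `volume` has shape (num_slices, H, W) with ~1 mm slice spacing,
# so slabs of 5/10/15 slices approximate 5/10/15 mm MIP images.
import numpy as np

def mip_slabs(volume: np.ndarray, slab_slices: int) -> np.ndarray:
    """Return one MIP image per axial position, taken over a slab of
    `slab_slices` consecutive slices centred on that position."""
    n = volume.shape[0]
    half = slab_slices // 2
    mips = np.empty_like(volume)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        mips[i] = volume[lo:hi].max(axis=0)   # per-pixel maximum over the slab
    return mips

# Example with a synthetic volume standing in for a CT scan.
volume = np.random.rand(120, 256, 256).astype(np.float32)
inputs = {thickness: mip_slabs(volume, thickness) for thickness in (5, 10, 15)}
inputs[1] = volume                            # 1 mm axial sections used as-is
```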

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis followed by the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient. Moreover, considering all types of audience, the basic evaluation criteria are also discussed, including the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Because previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed, and artificial intelligence is gaining attention as a way to build better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. This study presents the basic framework of how such machine learning operates on medical imaging, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code to allow interested readers to experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles successfully applied deep learning models for different types of cancer. Considering the length of the manuscript, we restrict ourselves to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who want to apply deep learning and artificial neural networks to cancer diagnosis a from-scratch view of the state-of-the-art achievements.
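
    Since the review leans heavily on these evaluation criteria, the sketch below computes several of them (accuracy, sensitivity, specificity, precision, F1, ROC AUC, Dice coefficient, Jaccard index) on a toy set of predictions. The labels and scores are made up purely for illustration and are not from any of the cited studies.

```python
# Sketch: the evaluation criteria discussed above, computed on toy predictions.
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             jaccard_score, precision_score, recall_score,
                             roc_auc_score)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])      # ground-truth labels
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3, 0.55, 0.05])
y_pred = (y_score >= 0.5).astype(int)                   # thresholded decisions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
metrics = {
    "accuracy":    accuracy_score(y_true, y_pred),
    "sensitivity": recall_score(y_true, y_pred),         # a.k.a. recall
    "specificity": tn / (tn + fp),
    "precision":   precision_score(y_true, y_pred),
    "F1":          f1_score(y_true, y_pred),
    "AUC":         roc_auc_score(y_true, y_score),
    "Jaccard":     jaccard_score(y_true, y_pred),
    # Dice coefficient for binary labels: 2*TP / (2*TP + FP + FN).
    "Dice":        2 * tp / (2 * tp + fp + fn),
}
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```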

    A Comparative Study of HARR Feature Extraction and Machine Learning Algorithms for Covid-19 X-Ray Image Classification

    In this study, we investigated how effectively COVID-19 X-ray images can be classified using Harr feature extraction combined with machine learning algorithms. A dataset of 500 X-ray scans, split equally between 250 COVID-19-positive cases and 250 healthy controls, served as the basis for our study. Image features were extracted with the Harr feature extraction method, and seven machine-learning approaches were then used to classify the images: k-nearest neighbors, decision trees, regression, support vector machines, naive Bayes, random forests, and linear discriminant analysis. The effectiveness of the algorithms was evaluated with several metrics: accuracy, precision, recall, F1 score, and the area under the receiver operating characteristic (ROC) curve. In terms of accuracy, the support vector machine achieved the highest value at 77%, while naive Bayes had the lowest at 58%; overall, the random forest method yielded the best results when combined with Harr feature extraction. These findings may inform the development of future automated COVID-19 diagnostic systems based on X-ray images. The proposed model produced results comparable to those of state-of-the-art models trained with transfer learning techniques, and its main advantage is that it uses ten times fewer parameters than those models.
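
    For readers who want to reproduce the general pipeline, the sketch below extracts Haar-like features with scikit-image (the abstract spells the descriptor "Harr") from small grayscale patches and compares several of the listed classifiers by accuracy. The patch size, feature types, classifier settings, and labels are assumptions, not the study's actual configuration.

```python
# Sketch: Haar-like feature extraction + classifier comparison (assumed setup).
import numpy as np
from skimage.feature import haar_like_feature
from skimage.transform import integral_image
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def haar_features(img):
    """Haar-like features computed on the integral image of a grayscale patch."""
    ii = integral_image(img)
    return haar_like_feature(ii, 0, 0, img.shape[1], img.shape[0],
                             feature_type=["type-2-x", "type-2-y"])

# Placeholder 12x12 grayscale patches standing in for the real X-ray dataset.
images = rng.random((60, 12, 12))
labels = rng.integers(0, 2, size=60)            # 1 = COVID-positive, 0 = healthy
X = np.array([haar_features(img) for img in images])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
classifiers = {
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "k-NN": KNeighborsClassifier(),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```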

    Feature Extraction and Classification of Automatically Segmented Lung Lesion Using Improved Toboggan Algorithm

    The accurate detection of lung lesions from computed tomography (CT) scans is essential for clinical diagnosis and provides valuable information for the treatment of lung cancer. However, achieving fully automatic lesion detection remains demanding. Here, a novel segmentation algorithm is proposed: an improved toboggan algorithm with a three-step framework comprising automatic seed point selection, multi-constraint lesion extraction, and lesion refinement. Then, descriptors such as the local binary pattern (LBP), wavelet, contourlet, and grey level co-occurrence matrix (GLCM) are applied to each region of interest of the segmented lung lesion image to extract texture features such as contrast, homogeneity, energy, and entropy, along with statistical features such as mean, variance, standard deviation, and the convolution of modulated and normal frequencies. Finally, support vector machine (SVM) and k-nearest neighbour (KNN) classifiers are applied to classify the abnormal region based on the extracted features, and their performance is compared. An accuracy of 97.8% is obtained with the SVM classifier, outperforming the KNN classifier. This approach does not require any human interaction for lesion detection. Thus, the improved toboggan algorithm can achieve precise lung lesion segmentation in CT images, and the extracted features help to classify the lesion region of the lungs efficiently.
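
    As a pointer to how texture descriptors of this kind are commonly computed, the sketch below derives GLCM properties and LBP histograms for image regions with scikit-image (0.19+ naming) and compares SVM and KNN classifiers. The ROI size, GLCM parameters, and labels are illustrative placeholders rather than the paper's configuration.

```python
# Sketch: GLCM + LBP texture features from lesion ROIs, then SVM vs. KNN.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def texture_features(roi):
    """GLCM contrast/homogeneity/energy plus entropy and an LBP histogram."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    feats = [graycoprops(glcm, p).mean() for p in ("contrast", "homogeneity", "energy")]
    feats.append(-np.sum(glcm * np.log2(glcm + 1e-12)))       # GLCM entropy
    lbp = local_binary_pattern(roi, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([feats, hist, [roi.mean(), roi.var(), roi.std()]])

# Placeholder 32x32 8-bit ROIs standing in for segmented lesion regions.
rois = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)                          # 1 = abnormal
X = np.array([texture_features(r) for r in rois])

for name, clf in {"SVM": SVC(), "KNN": KNeighborsClassifier(n_neighbors=3)}.items():
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name} accuracy: {acc:.3f}")
```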

    Artificial Intelligence Techniques in Medical Imaging: A Systematic Review

    This scientific review presents a comprehensive overview of medical imaging modalities and their diverse applications in artificial intelligence (AI)-based disease classification and segmentation. The paper begins by explaining the fundamental concepts of AI, machine learning (ML), and deep learning (DL), and summarizes their different types to establish a solid foundation for the subsequent analysis. The primary focus of this study is to conduct a systematic review of research articles that examine disease classification and segmentation in different anatomical regions using AI methodologies. The analysis includes a thorough examination of the results reported in each article, extracting important insights and identifying emerging trends. Moreover, the paper critically discusses the challenges encountered during these studies, including issues related to data availability and quality, model generalization, and interpretability. The aim is to provide guidance for optimizing technique selection. The analysis highlights the prominence of hybrid approaches, which seamlessly integrate ML and DL techniques, in achieving effective and relevant results across various disease types. The promising potential of these hybrid models opens up new opportunities for future research in the field of medical diagnosis. Additionally, addressing the challenges posed by the limited availability of annotated medical images through the incorporation of medical image synthesis and transfer learning techniques is identified as a crucial focus for future research efforts.
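
    One common form of the hybrid ML/DL approach highlighted above is to use a pretrained CNN as a fixed feature extractor and feed its embeddings to a classical classifier. The sketch below outlines this idea with torchvision's ResNet-18 (torchvision 0.13+ weights API) and an SVM; the input batch, labels, and hyperparameters are placeholders rather than any reviewed study's setup.

```python
# Sketch: a hybrid pipeline - pretrained CNN features + classical SVM classifier.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Pretrained ResNet-18 with its final classification layer removed,
# so it outputs 512-dimensional feature vectors.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    """images: (N, 3, 224, 224) tensors normalized as the backbone expects."""
    return backbone(images)

# Placeholder batch standing in for preprocessed medical images and labels.
images = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,))

features = embed(images).numpy()
clf = SVC().fit(features, labels.numpy())       # classical ML stage of the hybrid
print(clf.predict(features[:4]))
```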