
    Using Multi-level Convolutional Neural Network for Classification of Lung Nodules on CT images

    © 2018 IEEE. Lung cancer is one of the four major cancers in the world. Accurate diagnosis of lung cancer at an early stage plays an important role in increasing the survival rate. Computed Tomography (CT) is an effective method to help doctors detect lung cancer. In this paper, we developed a multi-level convolutional neural network (ML-CNN) to investigate the problem of lung nodule malignancy classification. ML-CNN consists of three CNNs for extracting multi-scale features from lung nodule CT images. Furthermore, we flatten the output of the last pooling layer into a one-dimensional vector for every level and then concatenate them. This strategy helps to improve the performance of our model. The ML-CNN is applied to ternary classification of lung nodules (benign, indeterminate, and malignant). The experimental results show that our ML-CNN achieves 84.81% accuracy without any additional hand-crafted preprocessing algorithm. It is also indicated that our model achieves the best result in ternary classification.
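    The multi-level idea above (three CNN branches at different scales, flattened last-pooling outputs concatenated before a ternary classifier) can be sketched roughly as follows. The filter counts, branch scales, and layer layout are illustrative assumptions, not the authors' exact architecture:

```python
# Hedged sketch of a multi-level CNN: three branches see the same nodule
# patch at different scales; each branch's final pooled feature map is
# flattened, the vectors are concatenated, and a linear head outputs
# three classes (benign / indeterminate / malignant).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool2d(4),          # fixed-size final pooling map
        )

    def forward(self, x):
        # Flatten the last pooling output: 32 channels * 4 * 4 = 512 features.
        return torch.flatten(self.features(x), 1)

class MLCNN(nn.Module):
    def __init__(self, scales=(16, 32, 64)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(Branch() for _ in scales)
        self.classifier = nn.Linear(512 * len(scales), 3)

    def forward(self, x):
        # Resize the input patch to each branch's scale, run each branch,
        # then concatenate the per-level feature vectors.
        feats = [b(F.interpolate(x, size=(s, s)))
                 for b, s in zip(self.branches, self.scales)]
        return self.classifier(torch.cat(feats, dim=1))

model = MLCNN()
logits = model(torch.randn(2, 1, 64, 64))   # two 64x64 CT nodule patches
```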

    Lung nodules identification in CT scans using multiple instance learning.

    Computer Aided Diagnosis (CAD) systems for lung nodule diagnosis aim to classify nodules as benign or malignant based on images obtained from diverse imaging modalities such as Computed Tomography (CT). Automated CAD systems are important in medical domain applications as they assist radiologists in the time-consuming and labor-intensive diagnosis process. However, most available methods require a large collection of nodules that are segmented and annotated by radiologists. This process is labor-intensive and hard to scale to very large datasets. More recently, some CAD systems based on deep learning have emerged. These algorithms do not require the nodules to be segmented; radiologists need only provide the center of mass of each nodule. The training image patches are then extracted from fixed-size volumes centered at the provided nodule's center. However, since the size of nodules can vary significantly, one fixed-size volume may not represent all nodules effectively. This thesis proposes a Multiple Instance Learning (MIL) approach to address the above limitations. In MIL, each nodule is represented by a nested sequence of volumes centered at the identified center of the nodule. We extract one feature vector from each volume. The set of features for each nodule is combined and represented by a bag. Next, we investigate and adapt some existing algorithms and develop new ones for this application. We start by applying benchmark MIL algorithms to traditional Gray Level Co-occurrence Matrix (GLCM) engineered features. Then, we design and train simple Convolutional Neural Networks (CNNs) to learn and extract features that characterize lung nodules. These extracted features are then fed to a benchmark MIL algorithm to learn a classification model. Finally, we develop new algorithms (MIL-CNN) that combine feature learning and multiple instance classification in a single network. These algorithms generalize the CNN architecture to multiple instance data. We design and report the results of three experiments applied on both engineered (GLCM) and learned (CNN) features using two datasets (The Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) \cite{armato2011lung} and the National Lung Screening Trial (NLST) \cite{national2011reduced}). Two of these experiments perform five-fold cross-validation on the same dataset (NLST or LIDC). The third experiment trains the algorithms on one collection (the NLST dataset) and tests them on the other (the LIDC dataset). We designed our experiments to compare the different features, compare MIL versus Single Instance Learning (SIL), where a single feature vector represents a nodule, and compare our proposed end-to-end MIL approaches to existing benchmark MIL methods. We demonstrate that our proposed MIL-CNN frameworks are more accurate for the lung nodule diagnosis task. We also show that the MIL representation achieves better results than SIL applied on the ground truth region of each nodule.
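    The bag construction described above (a nested sequence of volumes around the nodule center, one feature vector per volume) can be illustrated with a small sketch. The radii, the intensity-statistics features (a crude stand-in for the learned CNN features), and the max-over-instances bag rule are all assumptions for illustration, not the thesis's exact choices:

```python
# Hedged sketch of an MIL bag for one nodule: each instance is a cube
# of a different radius centered at the nodule; a bag score is the
# maximum instance score (the standard "most suspicious instance" rule).
import numpy as np

def nested_bag(volume, center, radii=(4, 8, 12)):
    # One instance per radius: a cube of side 2r centered at the nodule.
    # Assumes the center is far enough from the volume boundary.
    z, y, x = center
    return [volume[z - r:z + r, y - r:y + r, x - r:x + r] for r in radii]

def instance_features(cube):
    # Stand-in for learned CNN features: simple intensity statistics.
    return np.array([cube.mean(), cube.std(), cube.max(), cube.min()])

def mil_predict(bag_features, w, b):
    # A bag is scored by its highest-scoring instance.
    scores = bag_features @ w + b
    return float(scores.max())

rng = np.random.default_rng(0)
ct = rng.normal(size=(64, 64, 64))            # toy CT volume
bag = nested_bag(ct, center=(32, 32, 32))
feats = np.stack([instance_features(c) for c in bag])   # shape (3, 4)
score = mil_predict(feats, w=np.ones(4), b=0.0)
```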

    Are Deep Learning Classification Results Obtained on CT Scans Fair and Interpretable?

    Following the great success of various deep learning methods in image and object classification, the biomedical image processing community is also overwhelmed with their applications to various automatic diagnosis cases. Unfortunately, most of the deep learning-based classification attempts in the literature focus solely on extreme accuracy scores, without considering interpretability or patient-wise separation of training and test data. For example, most lung nodule classification papers using deep learning randomly shuffle the data and split it into training, validation, and test sets, causing certain images from the CT scan of a person to be in the training set while other images of the exact same person are in the validation or test sets. This can result in misleading reported accuracy rates and the learning of irrelevant features, ultimately reducing the real-life usability of these models. When deep neural networks trained with this traditional, unfair data shuffling are challenged with new patient images, the trained models perform poorly. In contrast, deep neural networks trained with strict patient-level separation maintain their accuracy rates even when new patient images are tested. Heat-map visualizations of the activations of the deep neural networks trained with strict patient-level separation indicate a higher degree of focus on the relevant nodules. We argue that the research question posed in the title has a positive answer only if the deep neural networks are trained with images of patients that are strictly isolated from the validation and testing patient sets. Comment: This version has been submitted to CAAI Transactions on Intelligence Technology.
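    The patient-level separation argued for above amounts to splitting at the patient level first and only then assigning slices, so no patient contributes images to more than one set. A minimal sketch, with hypothetical slice and patient identifiers:

```python
# Hedged sketch of a strict patient-level train/test split: patients are
# partitioned first, then every CT slice follows its patient. This
# guarantees that no patient appears in both sets, unlike a random
# shuffle over individual slices.
import random

def patient_level_split(slice_ids, patient_of, test_frac=0.25, seed=0):
    # `patient_of` maps a slice id to its patient id.
    patients = sorted({patient_of[s] for s in slice_ids})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_frac))
    test_patients = set(patients[:n_test])
    train = [s for s in slice_ids if patient_of[s] not in test_patients]
    test = [s for s in slice_ids if patient_of[s] in test_patients]
    return train, test

# Hypothetical toy data: 4 patients, 3 CT slices each.
slices = [f"p{p}_s{k}" for p in range(4) for k in range(3)]
patient_of = {s: s.split("_")[0] for s in slices}
train, test = patient_level_split(slices, patient_of)
```

    With four patients and a 25% test fraction, one whole patient (all three of their slices) lands in the test set, and the split stays disjoint at the patient level by construction.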

    Lung cancer medical images classification using hybrid CNN-SVM

    Lung cancer is one of the leading causes of death worldwide. Early detection of this disease increases the chances of survival. Computer-Aided Detection (CAD) has been used to process CT images of the lung to determine whether an image shows traces of cancer. This paper presents an image classification method based on a hybrid of the Convolutional Neural Network (CNN) algorithm and the Support Vector Machine (SVM). This algorithm is capable of automatically classifying and analyzing each lung image to check whether cancer cells are present. A CNN is easier to train and has fewer parameters compared to a fully connected network with the same number of hidden units. Moreover, the SVM has been utilized to eliminate useless information that affects accuracy negatively. In recent years, Convolutional Neural Networks (CNNs) have achieved excellent performance in many computer vision tasks. In this study, the performance of this algorithm is evaluated, and the results indicate that our proposed CNN-SVM algorithm succeeds in classifying lung images with 97.91% accuracy. This shows the method's merit and its ability to classify lung cancer in CT images accurately.
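    The hybrid pipeline described above pairs a CNN feature extractor with an SVM decision stage. A minimal sketch, assuming a linear SVM trained with hinge loss on frozen CNN features; the untrained extractor weights and toy data are placeholders, not the paper's trained model:

```python
# Hedged sketch of a CNN-SVM hybrid: the CNN maps each image to a
# feature vector, and a linear SVM (hinge loss) makes the final
# benign-vs-malignant decision on those features.
import torch
import torch.nn as nn

extractor = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Flatten(),                     # 8 channels * 8 * 8 = 512 features
)

svm = nn.Linear(512, 1)               # linear SVM: one signed score
opt = torch.optim.SGD(svm.parameters(), lr=0.1, weight_decay=1e-3)

# Hypothetical toy batch: 16 separable 32x32 "images", labels in {-1, +1}.
torch.manual_seed(0)
y = torch.tensor([1.0] * 8 + [-1.0] * 8)
x = torch.randn(16, 1, 32, 32) + y.view(-1, 1, 1, 1)  # shift by class

with torch.no_grad():
    feats = extractor(x)              # features are fixed; only the SVM trains

for _ in range(200):
    opt.zero_grad()
    margins = y * svm(feats).squeeze(1)
    loss = torch.clamp(1 - margins, min=0).mean()   # hinge loss
    loss.backward()
    opt.step()

acc = (torch.sign(svm(feats).squeeze(1)) == y).float().mean().item()
```

    Freezing the extractor and training only the SVM head mirrors the division of labor the abstract describes: the CNN supplies discriminative features, and the margin-maximizing SVM handles the classification boundary.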