Deep learning for identifying Lung Diseases
Growing health problems such as lung diseases, especially among children and the elderly, call for better diagnostic methods, including computer-based solutions, and it is crucial to detect and treat these problems early. The purpose of this article is to design and implement a new computer vision-based algorithm for lung disease diagnosis that recognizes lung diseases more accurately than previous models, reducing lung-related health problems and costs. In addition, we have improved the detection accuracy for five lung diseases, which helps doctors use computers to address the problem at an early stage.
Lung nodules identification in CT scans using multiple instance learning.
Computer Aided Diagnosis (CAD) systems for lung nodule diagnosis aim to classify nodules as benign or malignant based on images obtained from diverse imaging modalities such as Computed Tomography (CT). Automated CAD systems are important in medical domain applications as they assist radiologists in the time-consuming and labor-intensive diagnosis process. However, most available methods require a large collection of nodules that are segmented and annotated by radiologists. This process is labor-intensive and hard to scale to very large datasets. More recently, some CAD systems based on deep learning have emerged. These algorithms do not require the nodules to be segmented; radiologists need only provide the center of mass of each nodule. Training image patches are then extracted from fixed-size volumes centered at the provided nodule's center. However, since the size of nodules can vary significantly, one fixed-size volume may not represent all nodules effectively. This thesis proposes a Multiple Instance Learning (MIL) approach to address the above limitations. In MIL, each nodule is represented by a nested sequence of volumes centered at the identified center of the nodule. We extract one feature vector from each volume. The set of features for each nodule is combined and represented as a bag. Next, we investigate and adapt some existing algorithms and develop new ones for this application. We start by applying benchmark MIL algorithms to traditional Gray Level Co-occurrence Matrix (GLCM) engineered features. Then, we design and train simple Convolutional Neural Networks (CNNs) to learn and extract features that characterize lung nodules. These extracted features are then fed to a benchmark MIL algorithm to learn a classification model. Finally, we develop new algorithms (MIL-CNN) that combine feature learning and multiple instance classification in a single network.
These algorithms generalize the CNN architecture to multiple instance data. We design and report the results of three experiments applied to both engineered (GLCM) and learned (CNN) features using two datasets (the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) \cite{armato2011lung} and the National Lung Screening Trial (NLST) \cite{national2011reduced}). Two of these experiments perform five-fold cross-validation on the same dataset (NLST or LIDC). The third experiment trains the algorithms on one collection (the NLST dataset) and tests them on the other (the LIDC dataset). We designed our experiments to compare the different features, compare MIL versus Single Instance Learning (SIL), where a single feature vector represents a nodule, and compare our proposed end-to-end MIL approaches to existing benchmark MIL methods. We demonstrate that our proposed MIL-CNN frameworks are more accurate for the lung nodule diagnosis task. We also show that the MIL representation achieves better results than SIL applied on the ground-truth region of each nodule.
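The nested-volume bag construction described in this abstract can be sketched as follows. This is a hypothetical simplification: a plain intensity histogram stands in for the thesis's GLCM/CNN features, max-pooling stands in for the benchmark MIL aggregators, and all names and parameters are illustrative.

```python
import numpy as np

def extract_bag(volume, center, sizes=(16, 24, 32), n_bins=16):
    """Represent one nodule as a bag of feature vectors, one per
    nested sub-volume centered at the annotated nodule center.
    The intensity histogram is a stand-in for the GLCM/CNN
    features used in the thesis (hypothetical simplification)."""
    cz, cy, cx = center
    bag = []
    for s in sizes:
        h = s // 2
        # Crop a cube of side s around the center, clipped to bounds.
        sub = volume[max(cz - h, 0):cz + h,
                     max(cy - h, 0):cy + h,
                     max(cx - h, 0):cx + h]
        hist, _ = np.histogram(sub, bins=n_bins, range=(0.0, 1.0), density=True)
        bag.append(hist)
    return np.stack(bag)            # shape: (len(sizes), n_bins)

def mil_max_pool(instance_scores):
    """Standard MIL aggregation: a bag is scored by its most
    suspicious instance (max over instance scores)."""
    return float(np.max(instance_scores))

rng = np.random.default_rng(0)
ct = rng.random((64, 64, 64))       # synthetic CT volume in [0, 1]
bag = extract_bag(ct, center=(32, 32, 32))
print(bag.shape)                    # (3, 16): one feature vector per nested volume
```

In an end-to-end MIL-CNN, the pooling step would sit inside the network so the feature extractor and the bag-level classifier are trained jointly.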
Pulmonary nodule segmentation in computed tomography with deep learning
Early detection of lung cancer is essential for treating the disease. Lung nodule segmentation systems can be used together with Computer-Aided Detection (CAD) systems, and
help doctors diagnose and manage lung cancer. In this work, we create a lung nodule
segmentation system based on deep learning. Deep learning is a sub-field of machine
learning responsible for state-of-the-art results in several segmentation datasets such as
the PASCAL VOC 2012. Our model is a modified 3D U-Net, trained on the LIDC-IDRI
dataset, using the intersection over union (IOU) loss function. We show our model works
for multiple types of lung nodules. Our model achieves state-of-the-art performance on
the LIDC test set, using nodules annotated by at least 3 radiologists and with a consensus
truth of 50%.
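The intersection-over-union loss mentioned in this abstract is commonly implemented as a "soft" differentiable variant; a minimal sketch, assuming per-voxel probabilities and a binary ground-truth mask (function names are illustrative, not the thesis's own):

```python
import numpy as np

def soft_iou_loss(pred, target, eps=1e-6):
    """Differentiable IoU (intersection over union) loss for binary
    segmentation, as commonly paired with U-Net-style models.
    `pred` holds per-voxel probabilities, `target` the ground truth."""
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return 1.0 - (inter + eps) / (union + eps)

mask = np.zeros((8, 8, 8))
mask[2:6, 2:6, 2:6] = 1.0
print(round(soft_iou_loss(mask, mask), 6))        # 0.0 -- perfect overlap
print(round(soft_iou_loss(1.0 - mask, mask), 6))  # 1.0 -- no overlap
```

The epsilon term keeps the ratio defined when both prediction and target are empty, which matters for scans containing no nodule.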
Intelligent diagnostic scheme for lung cancer screening with Raman spectra data by tensor network machine learning
Artificial intelligence (AI) has had a tremendous impact on the biomedical
sciences, from academic research to clinical applications such as
biomarker detection and diagnosis, optimization of treatment, and
identification of new therapeutic targets in drug discovery. However, the
contemporary AI technologies, particularly deep machine learning (ML), severely
suffer from non-interpretability, which might uncontrollably lead to incorrect
predictions. Interpretability is particularly crucial to ML for clinical
diagnosis, as its users must gain the necessary sense of security and trust from
firm grounds or convincing interpretations. In this work, we propose a
tensor-network (TN)-ML method to reliably identify lung cancer patients and
their stages by screening Raman spectra of volatile organic compounds
(VOCs) in exhaled breath, which are well suited as biomarkers and are
considered an ideal basis for non-invasive lung cancer screening. The
prediction of TN-ML is based on the mutual distances of the breath samples
mapped to the quantum Hilbert space. Thanks to the quantum probabilistic
interpretation, the certainty of the predictions can be quantitatively
characterized. The accuracy on samples with high certainty is almost
100%. Incorrectly classified samples exhibit markedly lower certainty,
and can thus be flagged as anomalies, which will be handled by
human experts to guarantee high reliability. Our work sheds light on shifting
the ``AI for biomedical sciences'' from the conventional non-interpretable ML
schemes to the interpretable human-ML interactive approaches, for the purpose
of high accuracy and reliability.
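The mapping of breath samples to the quantum Hilbert space can be illustrated with the standard product-state feature map used in tensor-network ML. This is a minimal sketch of the embedding and the mutual overlaps (fidelities) on which such distances are built, not the paper's trained TN classifier; all names are illustrative.

```python
import numpy as np

def product_state(x):
    """Map a feature vector x in [0, 1]^d to a product state in the
    2^d-dimensional Hilbert space via the common local map
    phi(x_i) = (cos(pi*x_i/2), sin(pi*x_i/2)). The state is kept
    implicitly as a d x 2 array of local states, never as a 2^d vector."""
    return np.stack([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)], axis=1)

def fidelity(a, b):
    """Overlap <phi(a)|phi(b)> of two product states: a product of
    d local inner products, so it costs O(d) rather than O(2^d)."""
    return float(np.prod(np.sum(product_state(a) * product_state(b), axis=1)))

x = np.array([0.1, 0.5, 0.9])
print(round(fidelity(x, x), 6))   # 1.0 -- a sample overlaps fully with itself
```

A distance such as 1 - fidelity then quantifies how far apart two breath spectra sit in the Hilbert space, and the concentration of these overlaps is what allows certainty to be read off probabilistically.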
Enhanced Convolutional Neural Network for Non-Small Cell Lung Cancer Classification
Lung cancer is a common type of cancer that causes death if not detected early enough. Doctors use computed tomography (CT) images to diagnose lung cancer. The accuracy of the diagnosis relies highly on the doctor's expertise. Recently, clinical decision support systems based on deep learning have provided valuable recommendations to doctors in their diagnoses. In this paper, we present several deep learning models to detect non-small cell lung cancer in CT images and differentiate its main subtypes, namely adenocarcinoma, large cell carcinoma, and squamous cell carcinoma. We adopted standard convolutional neural networks (CNN), visual geometry group-16 (VGG16), and VGG19. Besides, we introduce a variant of the CNN that is augmented with convolutional block attention modules (CBAM). CBAM aims to extract informative features by combining cross-channel and spatial information. We also propose variants of VGG16 and VGG19 that utilize a support vector machine (SVM) at the classification layer instead of SoftMax. We validated all models in this study through extensive experiments on a CT lung cancer dataset. Experimental results show that supplementing the CNN with CBAM leads to consistent improvements over the vanilla CNN. Results also show that the VGG variants that use the SVM classifier outperform the original VGGs by a significant margin.
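The CBAM augmentation described in this abstract combines channel attention (a shared MLP over average- and max-pooled channel descriptors) with spatial attention (a small convolution over stacked average/max channel maps). A minimal NumPy sketch, with randomly initialized weights purely for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam(x, w1, w2, w_sp):
    """Convolutional Block Attention Module applied to a feature map
    x of shape (C, H, W): channel attention first, then spatial."""
    c, h, w = x.shape
    # Channel attention: shared two-layer MLP on avg- and max-pooled vectors.
    avg = x.mean(axis=(1, 2))
    mx = x.max(axis=(1, 2))
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)      # ReLU hidden layer
    ch_att = sigmoid(mlp(avg) + mlp(mx))              # shape (C,)
    x = x * ch_att[:, None, None]
    # Spatial attention: k x k convolution over stacked avg/max channel maps.
    desc = np.stack([x.mean(axis=0), x.max(axis=0)])  # shape (2, H, W)
    k = w_sp.shape[-1]
    p = k // 2
    padded = np.pad(desc, ((0, 0), (p, p), (p, p)))
    sp = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            sp[i, j] = np.sum(w_sp * padded[:, i:i + k, j:j + k])
    return x * sigmoid(sp)[None]

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 5, 5))       # toy feature map, 8 channels
w1 = rng.standard_normal((4, 8)) * 0.1   # reduction ratio 2: 8 -> 4
w2 = rng.standard_normal((8, 4)) * 0.1
w_sp = rng.standard_normal((2, 3, 3)) * 0.1
out = cbam(x, w1, w2, w_sp)
print(out.shape)                         # (8, 5, 5): same shape, reweighted
```

In the paper's setting these weights are learned end to end inside the CNN; the sketch only shows how the two attention maps rescale the feature map.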
Classification of malignant and benign lung nodule and prediction of image label class using multi-deep model
Lung cancer has been listed as one of the world’s leading causes of death. Early diagnosis of lung nodules is of great significance for the prevention of lung cancer. Despite major improvements in modern diagnosis and treatment, the five-year survival rate is only 18%. Before diagnosis, the classification of lung nodules is an important step, in particular because automatic classification may offer doctors a valuable second opinion. Although deep learning has improved image classification over traditional approaches based on handcrafted features, classifying lung nodules remains challenging because of the large intra-class variation and inter-class similarity introduced by the various imaging modalities. In this paper, a multi-deep model (MD model) is proposed for lung nodule classification as well as for predicting the image label class. The model comprises three phases: multi-scale dilated convolutional blocks (MsDc), dual deep convolutional neural networks (DCNN A/B), and a multi-task learning component (MTLc). Initially, multi-scale features are derived through the MsDc process by using different dilation rates to enlarge the receptive field. This technique is applied to a pair of images. These images are fed to the dual DCNNs, and the two models learn mutually from each other in order to enhance accuracy. To further improve performance, the output of both DCNNs is split into two portions. The multi-task learning part evaluates whether the input image pair belongs to the same group and also helps classify the nodules as benign or malignant; furthermore, it can provide corrective guidance when an error occurs. Exploiting both the intra-class variation and the inter-class similarity of the dataset itself increases the efficiency over a single DCNN. The effectiveness of the proposed technique is tested empirically on the popular Lung Image Database Consortium (LIDC) dataset.
The results show that the strategy is highly effective, achieving a sensitivity of 90.67%, a specificity of 90.80%, and an accuracy of 90.73%.
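The multi-scale dilated convolutional block (MsDc) enlarges the receptive field by spacing kernel taps `rate` pixels apart and stacking responses at several rates. A minimal sketch, assuming a single shared 2D kernel (a hypothetical simplification of the paper's blocks; names are illustrative):

```python
import numpy as np

def dilated_conv2d(img, kernel, rate):
    """'Same'-padded 2D convolution with dilation `rate`: taps are
    spaced `rate` pixels apart, enlarging the receptive field
    without adding parameters."""
    k = kernel.shape[0]
    eff = rate * (k - 1) + 1          # effective kernel extent
    p = eff // 2
    padded = np.pad(img, p)
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # Strided slice picks every `rate`-th pixel under the kernel.
            patch = padded[i:i + eff:rate, j:j + eff:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

def multi_scale_block(img, kernel, rates=(1, 2, 3)):
    """MsDc-style block: apply the same kernel at several dilation
    rates and stack the responses as a multi-scale feature map."""
    return np.stack([dilated_conv2d(img, kernel, r) for r in rates])

img = np.arange(49, dtype=float).reshape(7, 7)
feat = multi_scale_block(img, np.ones((3, 3)) / 9.0)
print(feat.shape)   # (3, 7, 7): one response map per dilation rate
```

At rate 1 this reduces to an ordinary 3x3 convolution; larger rates see a wider neighborhood with the same nine weights, which is what lets the block capture nodule context at several scales.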
- …