
    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the typical classification methods used by doctors, to give readers a historical sense of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. Doctors use them regularly for cancer diagnosis, although they are not considered efficient enough to achieve high performance. Moreover, with all types of audience in mind, the basic evaluation criteria are also discussed. These criteria include the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, calling for better and smarter methods of cancer diagnosis. Artificial intelligence applied to cancer diagnosis is gaining attention as a way to define better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how this kind of machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code to allow interested readers to experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles the deep learning models successfully applied to different types of cancers. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who intend to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch view of the state-of-the-art achievements.
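    The review above notes that Python code accompanies each technique. As a rough illustration of what such a snippet might look like (this is not the paper's actual code; the class name, input size, and number of classes are placeholders), the following is a minimal PyTorch sketch of a small CNN classifier for single-channel medical images.

```python
# Minimal sketch (illustrative, not the review's code): a small CNN
# classifier for grayscale medical images such as dermoscopy or CT slices.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 224 -> 112 for a 224x224 input
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 112 -> 56
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage: a batch of 4 grayscale 224x224 images -> class logits.
if __name__ == "__main__":
    model = SmallCNN(num_classes=2)
    logits = model(torch.randn(4, 1, 224, 224))
    print(logits.shape)  # torch.Size([4, 2])
```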

    Enhanced Convolutional Neural Network for Non-Small Cell Lung Cancer Classification

    Lung cancer is a common type of cancer that causes death if not detected early enough. Doctors use computed tomography (CT) images to diagnose lung cancer. The accuracy of the diagnosis relies highly on the doctor's expertise. Recently, clinical decision support systems based on deep learning have provided valuable recommendations to doctors in their diagnoses. In this paper, we present several deep learning models to detect non-small cell lung cancer in CT images and differentiate its main subtypes, namely adenocarcinoma, large cell carcinoma, and squamous cell carcinoma. We adopted standard convolutional neural networks (CNN), visual geometry group-16 (VGG16), and VGG19. Besides, we introduce a variant of the CNN that is augmented with convolutional block attention modules (CBAM). CBAM aims to extract informative features by combining cross-channel and spatial information. We also propose variants of VGG16 and VGG19 that utilize a support vector machine (SVM) at the classification layer instead of SoftMax. We validated all models in this study through extensive experiments on a CT lung cancer dataset. Experimental results show that supplementing CNN with CBAM leads to consistent improvements over vanilla CNN. Results also show that the VGG variants that use the SVM classifier outperform the original VGGs by a significant margin.
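    As a hedged sketch of the attention mechanism this abstract refers to (following the published CBAM design in general form, not necessarily the authors' implementation; the reduction ratio and kernel size are assumed defaults), the block below applies channel attention followed by spatial attention to a feature map and can be inserted after any convolutional stage.

```python
# Sketch of a CBAM-style block: channel attention then spatial attention.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))             # average-pooled descriptor
        mx = self.mlp(x.amax(dim=(2, 3)))              # max-pooled descriptor
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)              # channel-wise average map
        mx, _ = x.max(dim=1, keepdim=True)             # channel-wise max map
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))
```

    The SVM-head variants mentioned above could similarly be approximated by extracting penultimate VGG features and fitting an off-the-shelf SVM (e.g., scikit-learn's SVC) on them instead of a SoftMax layer, although the exact training procedure is not detailed in the abstract.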

    Lung nodule diagnosis and cancer histology classification from computed tomography data by convolutional neural networks: A survey

    Lung cancer is among the deadliest cancers. Besides lung nodule classification and diagnosis, developing non-invasive systems to classify lung cancer histological types/subtypes may help clinicians make timely, targeted treatment decisions, with a positive impact on patients' comfort and survival rate. As convolutional neural networks have proven responsible for the significant improvement in the accuracy of lung cancer diagnosis, with this survey we intend to: show the contribution of convolutional neural networks not only in identifying malignant lung nodules but also in classifying lung cancer histological types/subtypes directly from computed tomography data; point out the strengths and weaknesses of slice-based and scan-based approaches employing convolutional neural networks; and highlight the challenges and prospective solutions to successfully apply convolutional neural networks to such classification tasks. To this aim, we conducted a comprehensive analysis of relevant Scopus-indexed studies involved in lung nodule diagnosis and cancer histology classification up to January 2022, dividing the investigation into convolutional neural network-based approaches fed with planar or volumetric computed tomography data. Although the application of convolutional neural networks to lung nodule diagnosis and cancer histology classification is a valid strategy, some challenges arose, mainly including the lack of publicly accessible annotated data, together with the lack of reproducibility and clinical interpretability. We believe that this survey will be helpful for future studies involved in lung nodule diagnosis and cancer histology classification prior to lung biopsy by means of convolutional neural networks.
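    To make the slice-based strategy surveyed above concrete, here is an illustrative sketch (assumptions: a 2D slice classifier that outputs one logit per slice, and mean pooling as the aggregation rule) of how per-slice predictions can be turned into a scan-level decision; scan-based approaches would instead feed the whole volume to a 3D network.

```python
# Sketch of slice-based aggregation: a 2D CNN scores each CT slice and the
# slice probabilities are pooled into a scan-level malignancy probability.
import torch
import torch.nn as nn

def scan_level_probability(slice_model: nn.Module, volume: torch.Tensor) -> torch.Tensor:
    """volume: (num_slices, 1, H, W) tensor holding the slices of one CT scan."""
    slice_model.eval()
    with torch.no_grad():
        logits = slice_model(volume)                  # (num_slices, 1) logits
        slice_probs = torch.sigmoid(logits).squeeze(1)
    # The aggregation rule is a design choice: mean, max, or noisy-OR pooling.
    return slice_probs.mean()
```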

    Development and application in clinical practice of Computer-aided Diagnosis systems for the early detection of lung cancer

    Lung cancer is the main cause of cancer-related deaths both in Europe and in the United States, because it is often diagnosed at late stages of the disease, when the survival rate is very low compared to the first, asymptomatic stage. Lung cancer screening using annual low-dose Computed Tomography (CT) reduces lung cancer 5-year mortality by about 20% in comparison to annual screening with chest radiography. However, the detection of pulmonary nodules in low-dose chest CT scans is a very difficult task for radiologists, because of the large number (300-500) of slices to be analyzed. In order to support radiologists, researchers have developed Computer-aided Detection (CAD) algorithms for the automated detection of pulmonary nodules in chest CT scans. Despite the proven benefits of those systems for radiologists' detection sensitivity, the usage of CADs in clinical practice has not spread yet. The main objective of this thesis is to investigate and tackle the issues underlying this inconsistency. In particular, in Chapter 2 we introduce M5L, a fully automated Web- and Cloud-based CAD for the automated detection of pulmonary nodules in chest CT scans. This system introduces a new paradigm into clinical practice, by making CAD systems available without requiring radiologists to install any additional software or hardware. The proposed solution provides an innovative, cost-effective approach for clinical structures. In Chapter 3 we present our international challenge aiming at a large-scale validation of state-of-the-art CAD systems. We also investigate and show how the combination of different CAD systems reaches much higher performance than any stand-alone system developed so far. Our results open the possibility of introducing into clinical practice very high-performing CAD systems, which miss only a tiny fraction of clinically relevant nodules. Finally, we tested the performance of M5L on clinical datasets. In Chapter 4 we present the results of its clinical validation, which prove the positive impact of CAD as a second reader in the diagnosis of pulmonary metastases in oncological patients with extra-thoracic cancers. The proposed approaches have the potential to exploit at best the features of different algorithms, developed independently, for any possible clinical application, setting up a collaborative environment for algorithm comparison, combination, clinical validation and, if all of the above were successful, clinical practice.
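    The CAD-combination idea described above lends itself to a brief illustration. The sketch below is not M5L code; the candidate fields, scores, and the 5 mm matching radius are assumptions. It merges nodule candidates produced by several detectors whenever their centres fall within a matching radius and averages their confidence scores, which is one plausible way to realize the combination of independent CAD systems.

```python
# Illustrative combination of candidate lists from several CAD systems.
from dataclasses import dataclass
from typing import List
import math

@dataclass
class Candidate:
    x: float
    y: float
    z: float       # candidate centre in scan coordinates (mm)
    score: float   # detector confidence in [0, 1]

def combine_cads(cad_outputs: List[List[Candidate]], radius_mm: float = 5.0) -> List[Candidate]:
    merged: List[Candidate] = []
    counts: List[int] = []
    for candidates in cad_outputs:
        for c in candidates:
            for i, m in enumerate(merged):
                if math.dist((c.x, c.y, c.z), (m.x, m.y, m.z)) <= radius_mm:
                    # Running average of the scores of matched candidates.
                    counts[i] += 1
                    merged[i] = Candidate(m.x, m.y, m.z,
                                          m.score + (c.score - m.score) / counts[i])
                    break
            else:
                merged.append(c)   # no nearby candidate: keep as a new finding
                counts.append(1)
    return merged
```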

    Deep Learning Based Medical Image Analysis with Limited Data

    Deep learning methods have shown great effectiveness in the area of computer vision. However, when solving problems in medical imaging, deep learning's power is confined by the limited data available. We present a series of novel methodologies for solving medical image analysis problems with a limited number of computed tomography (CT) scans available. Our method, based on deep learning and employing different strategies, including Generative Adversarial Networks, two-stage training, infusing expert knowledge, voting-based schemes, and converting to other spaces, solves the dataset limitation issue for current medical imaging problems, specifically cancer detection and diagnosis, and shows very good performance, outperforming state-of-the-art results in the literature. With self-learned features, deep learning based techniques have started to be applied to biomedical imaging problems and various structures have been designed. In spite of their simplicity and anticipated good performance, deep learning based techniques cannot perform to their best extent due to the limited size of datasets for medical imaging problems. On the other hand, traditional hand-engineered feature based methods have been studied over the past decades, and much of this research has identified useful features for the task of detecting and diagnosing pulmonary nodules on CT scans; however, these methods are usually performed through a series of complicated procedures with manual, empirical parameter adjustments. Our method significantly reduces the complications of the traditional procedures for pulmonary nodule detection, while retaining and even outperforming state-of-the-art accuracy. Besides, we make a contribution on how to convert low-dose CT images to full-dose CT so as to adapt current models to the newly emerged low-dose CT data.
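    Among the strategies listed above, the voting-based one is easy to sketch. The snippet below reflects assumed behaviour, not the thesis implementation: several independently trained models score the same CT patch and their sigmoid probabilities are averaged, i.e., soft voting.

```python
# Soft-voting ensemble over several nodule classifiers (illustrative).
from typing import Sequence
import torch
import torch.nn as nn

def soft_vote(models: Sequence[nn.Module], patch: torch.Tensor) -> torch.Tensor:
    """patch: (1, 1, D, H, W) CT patch; returns the averaged ensemble probability."""
    probs = []
    with torch.no_grad():
        for m in models:
            m.eval()
            probs.append(torch.sigmoid(m(patch)))   # each model outputs one logit
    return torch.stack(probs).mean(dim=0)
```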

    The Effectiveness of Transfer Learning Systems on Medical Images

    Deep neural networks have revolutionized the performance of many machine learning tasks such as medical image classification and segmentation. Current deep learning (DL) algorithms, specifically convolutional neural networks, are increasingly becoming the methodological choice for most medical image analysis. However, training these deep neural networks requires high computational resources and very large amounts of labeled data, which is often expensive and laborious to obtain. Meanwhile, recent studies have shown the transfer learning (TL) paradigm to be an attractive choice in providing promising solutions to the shortage of labeled medical images. Accordingly, TL enables us to leverage the knowledge learned from related data to solve a new problem. The objective of this dissertation is to examine the effectiveness of TL systems on medical images. First, a comprehensive systematic literature review was performed to provide an up-to-date status of TL systems on medical images. Specifically, we proposed a novel conceptual framework to organize the review. Second, a novel DL network was pretrained on natural images and utilized to evaluate the effectiveness of TL on a very large medical image dataset, specifically chest X-ray images. Lastly, domain adaptation using an autoencoder was evaluated on the medical image dataset, and the results confirmed the effectiveness of TL through fine-tuning strategies. We make several contributions to TL systems on medical image analysis. Firstly, we present a novel survey of TL on medical images and propose a new conceptual framework to organize the findings. Secondly, we propose a novel DL architecture to improve learned representations of medical images while mitigating the problem of vanishing gradients. Additionally, we identified the optimal cut-off layer (OCL) that provided the best model performance; we found that the higher layers in the proposed deep model give a better feature representation of our medical image task. Finally, we analyzed the effect of domain adaptation by fine-tuning an autoencoder on our medical images and provide theoretical contributions on the application of the transductive TL approach. The contributions herein reveal several research gaps to motivate future research and contribute to the body of literature in this active research area of TL systems on medical image analysis.
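    The fine-tuning strategy and the notion of a cut-off layer described above can be sketched as follows. The example assumes an ImageNet-pretrained ResNet-18 from torchvision as a stand-in for the dissertation's network, with the cut-off expressed as a named layer (both are assumptions): everything below the cut-off is frozen, and only the higher layers plus a new task head are retrained.

```python
# Fine-tuning a pretrained backbone from an assumed cut-off layer onwards.
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights  # torchvision >= 0.13

def build_transfer_model(num_classes: int = 2, cutoff: str = "layer3") -> nn.Module:
    model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
    frozen = True
    for name, module in model.named_children():
        if name == cutoff:
            frozen = False          # layers from the cut-off onwards stay trainable
        for p in module.parameters():
            p.requires_grad = not frozen
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new task-specific head
    return model
```

    During fine-tuning, only the parameters with requires_grad=True would be passed to the optimizer, so the frozen lower layers keep their natural-image features while the higher layers adapt to the medical task.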

    Capsule Network-based COVID-19 Diagnosis and Transformer-based Lung Cancer Invasiveness Prediction via Computerized Tomography (CT) Images

    Early diagnosis and prognosis of life-threatening diseases such as the novel coronavirus infection (COVID-19) and Lung Cancer (LC) involve tackling critical challenges including, but not limited to, their undisclosed characteristics, non-stationary nature, significant inter-disease similarities, and intra-disease variations. In particular, within the context of a highly contagious disease such as COVID-19, early and reliable diagnosis is of significant importance. On the other hand, when it comes to diagnosis and prognosis of LC, an accurate prediction of the disease invasiveness becomes of primary importance. Recent advancements of Artificial Intelligence (AI) and Deep Learning (DL)-based architectures have resulted in a surge of interest in the utilization of medical images to develop decision-support and stand-alone models to address the aforementioned challenges. In this context, the focus of the thesis is on the utilization of volumetric chest CT images to develop robust and fully automated diagnostic frameworks for COVID-19 diagnosis and LC invasiveness prediction. In particular, Capsule Network (CapsNet) and Transformer-based architectures are developed to expand the application of AI in this domain. More specifically, first, the CT-CAPS and COVID-FACT frameworks are proposed to analyze CT images, identify slices demonstrating infection, and perform patient-level classification of COVID-19. The proposed frameworks are developed based on the CapsNet architecture, which, unlike the widely used Convolutional Neural Networks (CNNs), is capable of capturing spatial relations among instances in an image and of being trained on small datasets. These characteristics are of utmost importance when analyzing a newly emerged disease with specific spatial patterns in its images. Furthermore, following the recent and ever-increasing interest in using Low-Dose and Ultra-Low-Dose CT scans (LDCT and ULDCT) for COVID-19 screening, the WSO-CAPS framework is proposed to enhance the performance of the proposed models when dealing with noisy and low-quality CT scans. In addition, given that CT scans acquired from multiple centers and cohorts mainly show different qualities and characteristics, which negatively affect the generalizability of DL-based models, a unique multi-center dataset of CT scans, referred to as the “SPGC-COVID Dataset”, is constructed, which incorporates CT scans of COVID-19, Community Acquired Pneumonia (CAP), and normal cases, obtained using standard and low-dose imaging protocols. An enhancement approach is then proposed to boost the performance of the developed classification frameworks when being tested on varied CT scans in the SPGC-COVID dataset. With respect to the second objective of this thesis (i.e., Lung Cancer invasiveness prediction), the CAE-Transformer framework is proposed, which utilizes image-driven features to predict the invasiveness of Lung Adenocarcinomas (LUACs) from non-thin 3D CT scans. The proposed framework introduces a new viewpoint in CT scan analysis, which relies on the sequential nature of volumetric CT scans. More specifically, the CAE-Transformer adopts the transformer architecture, which was initially designed for sequential data, to capture inter-slice dependencies in an efficient and non-complex fashion.
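    The CAE-Transformer idea of treating a CT volume as a sequence of slice features can be sketched as follows. The feature dimension, head count, layer count, and mean pooling over slices are assumptions for illustration, not necessarily the thesis design: per-slice features, e.g., from a convolutional autoencoder, are fed to a Transformer encoder whose pooled output predicts invasiveness.

```python
# Transformer encoder over per-slice features for scan-level prediction.
import torch
import torch.nn as nn

class SliceSequenceClassifier(nn.Module):
    def __init__(self, feat_dim: int = 256, num_classes: int = 2,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, slice_feats: torch.Tensor) -> torch.Tensor:
        """slice_feats: (batch, num_slices, feat_dim) per-slice features."""
        encoded = self.encoder(slice_feats)           # model inter-slice dependencies
        return self.head(encoded.mean(dim=1))         # pool over slices, then classify

# Usage: 2 scans, 30 slices each, 256-dim slice features -> invasiveness logits.
if __name__ == "__main__":
    model = SliceSequenceClassifier()
    logits = model(torch.randn(2, 30, 256))
    print(logits.shape)  # torch.Size([2, 2])
```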