316 research outputs found

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, including the steps of cancer diagnosis and the typical classification methods used by doctors, giving readers a historical overview of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered efficient enough to achieve the best performance. Moreover, considering all types of audience, the basic evaluation criteria are also discussed. These criteria include the receiver operating characteristic curve (ROC curve), area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, calling for better and smarter methods for cancer diagnosis. Artificial intelligence applied to cancer diagnosis is gaining attention as a way to build better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how this machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles the deep learning models successfully applied to different types of cancers. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch overview of the state-of-the-art achievements.
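
    The review's own Python code is not reproduced here, but a minimal sketch of the evaluation criteria listed above (AUC, F1 score, accuracy, sensitivity, specificity, precision, Dice coefficient, Jaccard index) can be computed with scikit-learn as follows; the labels and scores are illustrative placeholders, not data from the paper.

```python
# Toy binary benign/malignant example: compute the common evaluation metrics
# named in the review from ground-truth labels and model scores.
import numpy as np
from sklearn.metrics import (roc_auc_score, f1_score, accuracy_score,
                             precision_score, recall_score, jaccard_score,
                             confusion_matrix)

y_true  = np.array([0, 0, 1, 1, 1, 0, 1, 0])                      # ground truth
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2])      # model scores
y_pred  = (y_score >= 0.5).astype(int)                            # thresholded predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

print("AUC        :", roc_auc_score(y_true, y_score))
print("F1 score   :", f1_score(y_true, y_pred))
print("Accuracy   :", accuracy_score(y_true, y_pred))
print("Sensitivity:", recall_score(y_true, y_pred))                # recall = sensitivity
print("Specificity:", tn / (tn + fp))
print("Precision  :", precision_score(y_true, y_pred))
print("Jaccard    :", jaccard_score(y_true, y_pred))
print("Dice       :", 2 * tp / (2 * tp + fp + fn))                 # Dice coefficient
```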

    Pre-training autoencoder for lung nodule malignancy assessment using CT images

    Late diagnosis of lung cancer has a large impact on mortality, leading to a very low five-year survival rate of 5%. This issue emphasises the importance of developing systems that support diagnosis at earlier stages. Clinicians use Computed Tomography (CT) scans to assess the nodules and the likelihood of malignancy. Automatic solutions can help to make a faster and more accurate diagnosis, which is crucial for the early detection of lung cancer. Convolutional neural network (CNN) based approaches have been shown to provide reliable feature extraction for detecting the malignancy risk associated with pulmonary nodules. This type of approach requires a massive amount of data for model training, which is usually a limitation in the biomedical field due to medical data privacy and security issues. Transfer learning (TL) methods have been widely explored in medical imaging applications, offering a solution to the lack of publicly available training data. Clinical annotation requires experts with a deep understanding of the complex physiological phenomena represented in the data, which represents a huge investment. In this direction, this work explored a TL method based on the unsupervised learning achieved when training a Convolutional Autoencoder (CAE) on images from the same domain. For this, lung nodules from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) were extracted and used to train a CAE. Then, the encoder part was transferred, and the malignancy risk was assessed in a binary classification of benign and malignant lung nodules, achieving an Area Under the Curve (AUC) value of 0.936. To evaluate the reliability of this TL approach, the same architecture was trained from scratch and achieved an AUC value of 0.928. This comparison suggests that the feature learning achieved when reconstructing the input with an encoder-decoder architecture can be considered useful knowledge that might help overcome labelling constraints. This work is financed by National Funds through the Portuguese funding agency, FCT (Fundação para a Ciência e a Tecnologia), within project UIDB/50014/2020.
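
    A minimal Keras sketch of the transfer-learning idea described above: pre-train a convolutional autoencoder on unlabeled nodule patches, then reuse the encoder for benign/malignant classification. Patch size, layer widths, and data loading are illustrative assumptions, not the authors' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

PATCH = 64  # assumed nodule patch size in pixels

def build_encoder():
    inp = layers.Input((PATCH, PATCH, 1))
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    return models.Model(inp, x, name="encoder")

encoder = build_encoder()

# Decoder mirrors the encoder so the CAE learns to reconstruct its input.
dec_in = layers.Input(encoder.output_shape[1:])
x = layers.Conv2DTranspose(64, 3, strides=2, activation="relu", padding="same")(dec_in)
x = layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same")(x)
dec_out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
decoder = models.Model(dec_in, dec_out, name="decoder")

cae = models.Model(encoder.input, decoder(encoder.output))
cae.compile(optimizer="adam", loss="mse")
# cae.fit(unlabeled_patches, unlabeled_patches, epochs=..., batch_size=...)

# Transfer step: keep the pre-trained encoder and add a small classification head.
clf_head = models.Sequential([
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
classifier = models.Model(encoder.input, clf_head(encoder.output))
classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
# classifier.fit(labeled_patches, labels, ...)
```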

    Autoencoder-based Image Recommendation for Lung Cancer Characterization

    In this project, we aim to develop an AI system that recommends a set of related (past) cases to guide the clinician's decision-making. Objective: the ambition is to develop an AI-based learning model for lung cancer characterization in order to assist in clinical routine. Considering the complexity of the biological phenomena that occur during cancer development, the relationships between these phenomena and the visual manifestations captured by computed tomography (CT) have been explored in recent years. However, given the lack of robustness of current deep learning methods, these correlations are often found to be spurious and are lost when facing data collected from shifted distributions: different institutions, demographics, or even stages of cancer development.
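
    A minimal, hypothetical sketch of autoencoder-based case recommendation (not the project's implementation): encode a query CT nodule into a latent vector and retrieve the most similar past cases by nearest-neighbour search in that latent space. Here `encoder` is assumed to be an already-trained model mapping image patches to feature maps.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_case_index(encoder, past_images):
    """Encode past cases once and index them for latent-space retrieval."""
    latents = encoder.predict(past_images)
    latents = latents.reshape(len(past_images), -1)           # flatten to (n_cases, d)
    return NearestNeighbors(n_neighbors=5, metric="cosine").fit(latents)

def recommend_cases(encoder, index, query_image):
    """Return indices and distances of the most similar past cases for a query scan."""
    z = encoder.predict(query_image[None, ...]).reshape(1, -1)
    distances, neighbours = index.kneighbors(z)
    return neighbours[0], distances[0]
```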

    A generalized deep learning-based diagnostic system for early diagnosis of various types of pulmonary nodules

    © The Author(s) 2018. A novel framework for the classification of lung nodules using computed tomography scans is proposed in this article. To obtain an accurate diagnosis of the detected lung nodules, the proposed framework integrates the following two groups of features: (1) appearance features modeled using a higher-order Markov Gibbs random field model that can describe the spatial inhomogeneities inside the lung nodule, and (2) geometric features that describe the shape geometry of the lung nodules. The novelty of this article lies in accurately modeling the appearance of the detected lung nodules using a newly developed seventh-order Markov Gibbs random field model that can capture the spatial inhomogeneities of both small and large detected lung nodules, and in integrating these appearance features with the extracted geometric features. Finally, a deep autoencoder classifier is fed with the above two feature groups to distinguish between malignant and benign nodules. To evaluate the proposed framework, we used the publicly available data from the Lung Image Database Consortium. We used a total of 727 nodules collected from 467 patients. The proposed system shows promise as a valuable tool for the detection of lung cancer, as evidenced by a nodule classification accuracy of 91.20%.
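
    A minimal, hypothetical Keras sketch of the fusion step described above: two pre-computed feature groups (appearance features from the Markov Gibbs random field model and geometric shape features) are concatenated and fed to a small deep autoencoder whose encoding drives a benign/malignant classifier. Feature dimensions are placeholders, and the seventh-order MGRF feature extraction itself is not reproduced.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_APPEARANCE, N_GEOMETRY = 128, 16   # assumed feature-vector sizes

appearance_in = layers.Input((N_APPEARANCE,), name="appearance_features")
geometry_in   = layers.Input((N_GEOMETRY,),  name="geometric_features")
fused = layers.Concatenate()([appearance_in, geometry_in])

# Autoencoder bottleneck learns a compact joint representation of both groups.
encoded = layers.Dense(64, activation="relu")(fused)
encoded = layers.Dense(32, activation="relu")(encoded)
decoded = layers.Dense(N_APPEARANCE + N_GEOMETRY, activation="linear")(encoded)

autoencoder = models.Model([appearance_in, geometry_in], decoded)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit([app_feats, geo_feats], np.hstack([app_feats, geo_feats]), ...)

# Classification head on top of the learned encoding: benign vs. malignant.
malignancy = layers.Dense(1, activation="sigmoid")(encoded)
classifier = models.Model([appearance_in, geometry_in], malignancy)
classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```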

    An Integrated Framework for the Detection of Lung Nodules from Multimodal Images Using Segmentation Network and Generative Adversarial Network Techniques

    Medical imaging techniques are providing promising results in identifying abnormalities in tissues, and the presence of such abnormalities calls for further investigation of the affected cells. Lung cancer is widespread and is among the deadliest cancers if not detected and treated at an early stage. Medical imaging techniques help to identify suspicious tissues such as lung nodules effectively, but it is very difficult to detect a nodule at an early stage using a single imaging modality. The proposed system increases detection efficiency and helps to identify lung nodules at an early stage by combining different methods to reach a common outcome: multiple schemes are combined, and the extracted features are used to obtain a conclusion. The accuracy of the system and its results depend on the quality and quantity of authentic training data, but obtaining such data from an authentic source is a challenging task. Here, a generative adversarial network (GAN) is used as a data generator: it produces a large amount of reliable data from a small number of real, authentic samples. Images generated by the GAN have a resolution of 1024 x 1024. Fine-tuning with the real images increases the quality of the generated images and thereby improves efficiency. LUNA16 is the primary data source, and its images are used to generate 1,000,000 synthetic images. Training with this large dataset improves the capability of the proposed system. Various parameters are considered for evaluating the performance of the proposed system, and comparative analysis with existing systems highlights its strengths.
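
    A minimal DCGAN-style Keras sketch of the GAN-as-data-source idea above, shown at a low 64 x 64 resolution for brevity; the 1024 x 1024 images mentioned in the abstract would require a progressively grown or style-based generator. Layer sizes and shapes are illustrative assumptions, not the paper's architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

LATENT = 100  # assumed latent-vector size

generator = models.Sequential([
    layers.Input((LATENT,)),
    layers.Dense(8 * 8 * 128, activation="relu"),
    layers.Reshape((8, 8, 128)),
    layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu"),  # 16x16
    layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),   # 32x32
    layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),    # 64x64
])

discriminator = models.Sequential([
    layers.Input((64, 64, 1)),
    layers.Conv2D(64, 4, strides=2, padding="same"),
    layers.LeakyReLU(0.2),
    layers.Conv2D(128, 4, strides=2, padding="same"),
    layers.LeakyReLU(0.2),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),   # real vs. generated nodule patch
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model trains the generator to fool the (frozen) discriminator.
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")
# Synthetic patches for augmentation: generator.predict(tf.random.normal((n, LATENT)))
```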

    Deep Learning Paradigms for Existing and Imminent Lung Diseases Detection: A Review

    Clinicians' diagnosis of lung diseases such as asthma, chronic obstructive pulmonary disease, tuberculosis, and cancer relies on images taken through various means such as X-ray and MRI. The Deep Learning (DL) paradigm has seen rapid growth in the medical imaging field in recent years. With the advancement of DL, lung diseases in medical images can be efficiently identified and classified; for example, DL can detect lung cancer with an accuracy of 99.49% in supervised models and 95.3% in unsupervised models. Deep learning models can extract features without human supervision, and these features can be readily incorporated into the DL network architecture for better examination of medical images for one or two lung diseases. In this review article, effective techniques are reviewed under the elementary DL models, viz. supervised, semi-supervised, and unsupervised learning, to represent the growth of DL in lung disease detection with less human intervention. Recent techniques are added to understand the paradigm shift and future research prospects. All three families of techniques used Computed Tomography (CT) image datasets until 2019, but after the pandemic period, chest radiograph (X-ray) datasets are more commonly used. X-rays enable economical early detection of lung diseases, which saves lives by allowing early treatment. Each DL model focuses on identifying a few features of lung diseases; researchers can explore DL to automate the detection of more lung diseases through a standard system using datasets of X-ray images. Unsupervised DL has been extended from detection to prediction of lung diseases, a critical milestone for estimating the risk of lung disease before it occurs. Researchers can work on further prediction models that identify the severity stages of multiple lung diseases to reduce mortality rates and the associated cost. This review article aims to help researchers explore Deep Learning systems that can efficiently identify and predict lung diseases with enhanced accuracy.
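
    A minimal, hypothetical sketch of the supervised setting surveyed above: fine-tuning an ImageNet-pretrained CNN to classify chest X-rays into a few lung-disease categories. The backbone, class names, and data pipeline are placeholder assumptions; the review does not prescribe this particular architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4   # e.g. normal / pneumonia / tuberculosis / lung cancer (assumed)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False   # start by training only the new classification head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(xray_train_ds, validation_data=xray_val_ds, epochs=...)
```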

    Deep Functional Mapping For Predicting Cancer Outcome

    An effective understanding of the biological behavior and prognosis of cancer subtypes is becoming very important in patient management. Cancer is a heterogeneous disorder in which the clinical progression and diagnosis of each subtype can be observed and characterized. Computer-aided diagnosis for the early detection and diagnosis of many kinds of diseases has evolved over the last decade. In this research, we address challenges associated with multi-organ disease diagnosis and recommend several models for enhanced analysis. We concentrate on evaluating Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and Positron Emission Tomography (PET) scans of the brain, lung, and breast to detect, segment, and classify types of cancer from biomedical images. Moreover, histopathological and genomic classification of cancer prognosis is considered for multi-organ disease diagnosis and biomarker recommendation. We consider multi-modal, multi-class classification in this study and propose deep learning techniques based on Convolutional Neural Networks and Generative Adversarial Networks. In the proposed research, we plan to demonstrate ways to increase diagnostic performance by focusing on a combined analysis of histology, image processing, and genomics. It has been observed that the combination of medical imaging and gene expression can handle cancer detection with a higher diagnostic rate than considering each modality individually. This research also puts forward a blockchain-based system that facilitates interpretation and enhancement of automated biomedical systems. In this scheme, secure sharing of biomedical images and gene expression data is established. To maintain secure sharing of biomedical content in a distributed system or among hospitals, a blockchain-based algorithm is used that generates a secure sequence to identify a hash key. This adaptive feature enables the algorithm to use multiple data types and combine various biomedical images and text records. All patient-related data, including identity and pathological records, are encrypted using private-key cryptography based on a blockchain architecture to maintain data privacy and the secure sharing of biomedical content.
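
    A minimal, hypothetical sketch of the blockchain idea described above: each shared record (e.g. an image reference plus gene-expression entries) is stored in a block whose SHA-256 hash chains to the previous block, so tampering with any record invalidates the chain. This is a toy illustration of the hashing scheme only; it does not reproduce the dissertation's encryption of patient data.

```python
import hashlib
import json
import time

def make_block(record, previous_hash):
    """Create a block whose hash covers the record and the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "record": record,               # e.g. {"image_id": ..., "genes": [...]}
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain):
    """Check that every block still hashes correctly and links to its parent."""
    for prev, block in zip(chain, chain[1:]):
        body = {k: block[k] for k in ("timestamp", "record", "previous_hash")}
        payload = json.dumps(body, sort_keys=True).encode()
        if block["previous_hash"] != prev["hash"]:
            return False
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
    return True

genesis = make_block({"note": "genesis"}, previous_hash="0" * 64)
chain = [genesis,
         make_block({"image_id": "CT_0001", "genes": ["EGFR", "KRAS"]}, genesis["hash"])]
print(verify_chain(chain))   # True unless a block was altered after creation
```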

    Feature Extraction and Design in Deep Learning Models

    The selection and computation of meaningful features is critical for developing good deep learning methods. This dissertation demonstrates how focusing on this process can significantly improve the results of learning-based approaches. Specifically, it presents a series of studies in which feature extraction and design was a significant factor in obtaining effective results. The first two studies are a content-based image retrieval (CBIR) system and a seagrass quantification study in which deep learning models were used to extract meaningful high-level features that significantly increased the performance of the approaches. Next, a method for change detection is proposed in which the multispectral channels of satellite images are combined with different feature indices to improve the results. Then, two novel feature operators for mesh convolutional networks are presented that successfully extract invariant features from the faces and vertices of a mesh, respectively. These novel feature operators significantly outperform the previous state of the art for mesh classification and segmentation and provide two novel architectures for applying convolutional operations to the faces and vertices of geometric 3D meshes. Finally, a novel approach for the automatic generation of 3D meshes is presented. The generative model efficiently uses the vertex-based feature operators proposed in the previous study and successfully learns to produce shapes from a mesh dataset with arbitrary topology.
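
    A minimal, hypothetical sketch of deep-feature extraction for CBIR as discussed above: a pretrained CNN turns each image into a high-level feature vector, and retrieval ranks the database by cosine similarity to the query. The backbone choice and image loading are illustrative assumptions, not the dissertation's exact pipeline.

```python
import numpy as np
import tensorflow as tf

# ImageNet-pretrained backbone used purely as a fixed feature extractor (2048-d).
backbone = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    x = tf.keras.applications.resnet50.preprocess_input(images)
    feats = backbone.predict(x)
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)  # L2-normalise

def retrieve(query_feat, db_feats, top_k=5):
    """Return indices of the top_k database images most similar to the query."""
    sims = db_feats @ query_feat          # cosine similarity on unit vectors
    return np.argsort(-sims)[:top_k]
```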