
    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the classification methods traditionally used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient. Moreover, with a broad audience in mind, the basic evaluation criteria are also discussed: the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Because the previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed. Artificial intelligence is gaining attention as a way to build better diagnostic tools; in particular, deep neural networks can be used successfully for intelligent image analysis. This study outlines the basic framework of how such machine learning operates on medical imaging: pre-processing, image segmentation, and post-processing. The second part of this manuscript describes different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DAEs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code so that interested readers can experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles deep learning models that have been applied successfully to different types of cancer. Given the length of the manuscript, we restrict the discussion to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch overview of the state-of-the-art achievements.
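    The paper's own Python code is not reproduced in this abstract, but the overlap metrics it lists (Dice coefficient and Jaccard index) can be sketched for binary segmentation masks as follows. The function names and the NumPy-based formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

def jaccard_index(pred, target):
    """Jaccard index: |A ∩ B| / |A ∪ B| for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0

# Toy 2x3 masks: intersection = 2 voxels, union = 4 voxels.
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, target), 4))  # 0.6667
print(round(jaccard_index(pred, target), 4))     # 0.5
```

    Note that Dice and Jaccard are monotonically related (D = 2J / (1 + J)), which is why papers often report only one of the two.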

    Segmentation Of Intracranial Structures From Noncontrast Ct Images With Deep Learning

    Presented in this work is an investigation of applying artificially intelligent algorithms, namely deep learning, to generate segmentations for functional avoidance radiotherapy treatment planning. Specific applications of deep learning for functional avoidance include generating hippocampus segmentations from computed tomography (CT) images and generating synthetic pulmonary perfusion images from four-dimensional CT (4DCT). A single-institution dataset of 390 patients treated with Gamma Knife stereotactic radiosurgery was created. For these patients, the hippocampus was manually segmented on the high-resolution MR image and used for the development of the data processing methodology and for model testing. An attention-gated 3D residual network performed the best, with 80.2% of contours meeting the clinical trial acceptability criteria. Having determined the highest-performing model architecture, the model was tested on data from the RTOG-0933 phase II multi-institutional clinical trial for hippocampal-avoidance whole-brain radiotherapy. From the RTOG-0933 data, an institutional observer (IO) generated contours to compare the deep learning style with the style of the physicians participating in the phase II trial. The deep learning model's performance was evaluated through contour comparison and radiotherapy treatment planning. Results showed that the deep learning contours generated plans comparable to the IO style, but differed significantly from the phase II contours, indicating that further investigation is required before this technology can be applied clinically. Additionally, motivated by the observed deviation in contouring styles of the trial's participating treating physicians, the utility of deep learning as a first-pass quality assurance measure was investigated.
To simulate a central review, the IO contours were compared to the treating physician contours in an attempt to identify unacceptable deviations. The deep learning model was found to have an AUC of 0.80 for the left and 0.79 for the right hippocampus, indicating the potential of deep learning as a first-pass quality assurance tool. The methods developed for the hippocampal segmentation task were then translated to the generation of synthetic pulmonary perfusion imaging for use in functional lung avoidance radiotherapy. A clinical dataset of 58 pre- and post-radiotherapy SPECT perfusion studies (32 patients) with contemporaneous 4DCT studies was collected. From this dataset, 50 studies were used to train a 3D residual network, with five-fold validation used to select the highest-performing model instances (N=5). The highest-performing instances were tested on a 5-patient (8-study) hold-out test set. From these predictions, 50th-percentile contours of well-perfused lung were generated and compared to contours from the clinical SPECT perfusion images. On the test set the Spearman correlation coefficient was strong (0.70, IQR: 0.61-0.76) and the functional avoidance contours agreed well (Dice of 0.803, IQR: 0.750-0.810; average surface distance of 5.92 mm, IQR: 5.68-7.55 mm). This study indicates the potential of deep learning for the generation of synthetic pulmonary perfusion images, but an expanded dataset is required for additional model testing.
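    The AUC values reported above can be understood through the rank-based (Mann–Whitney) formulation of AUC: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. The sketch below is a generic illustration of that metric, not the thesis's evaluation code, and the scores shown are invented.

```python
import numpy as np

def auc_score(scores, labels):
    """AUC as the probability that a positive case scores higher than a
    negative case (Mann-Whitney formulation); ties count as 0.5."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos = scores[labels]
    neg = scores[~labels]
    # Compare every positive score against every negative score.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical deviation scores; label 1 = contour flagged unacceptable.
print(auc_score([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # 0.75
```

    An AUC of 0.80, as reported for the left hippocampus, means the model ranks a truly unacceptable contour above an acceptable one about 80% of the time.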

    Multi-site, Multi-domain Airway Tree Modeling (ATM'22): A Public Benchmark for Pulmonary Airway Segmentation

    Open international challenges are becoming the de facto standard for assessing computer vision and image analysis algorithms. In recent years, new methods have extended pulmonary airway segmentation closer to the limit of image resolution. Since the EXACT'09 pulmonary airway segmentation challenge, however, limited effort has been directed at quantitative comparison of newly emerged algorithms, despite the maturity of deep learning based approaches and the clinical drive to resolve finer details of distal airways for early intervention in pulmonary diseases. Thus far, public annotated datasets have been extremely limited, hindering the development of data-driven methods and detailed performance evaluation of new algorithms. To provide a benchmark for the medical imaging community, we organized the Multi-site, Multi-domain Airway Tree Modeling challenge (ATM'22), held as an official challenge event during the MICCAI 2022 conference. ATM'22 provides large-scale CT scans with detailed pulmonary airway annotation: 500 CT scans (300 for training, 50 for validation, and 150 for testing). The dataset was collected from different sites and includes a portion of noisy COVID-19 CTs with ground-glass opacity and consolidation. Twenty-three teams participated in the entire phase of the challenge, and the algorithms of the top ten teams are reviewed in this paper. Quantitative and qualitative results revealed that deep learning models embedding topological continuity enhancement achieved superior performance in general. The ATM'22 challenge retains an open-call design; the training data and the gold-standard evaluation are available upon successful registration via its homepage. Comment: 32 pages, 16 figures. Homepage: https://atm22.grand-challenge.org/.
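    Topological continuity enhancement is described only at a high level above. A much simpler but related post-processing step, common in airway segmentation pipelines, is keeping only the largest connected component of a predicted binary mask to suppress spurious islands. The sketch below assumes SciPy is available; it is a generic illustration, not any participating team's method.

```python
import numpy as np
from scipy import ndimage

def largest_connected_component(mask):
    """Keep only the largest 26-connected component of a 3D binary mask."""
    mask = np.asarray(mask, dtype=bool)
    structure = np.ones((3, 3, 3), dtype=bool)  # 26-connectivity
    labeled, n_components = ndimage.label(mask, structure=structure)
    if n_components == 0:
        return mask
    # Voxel count of each labeled component (labels start at 1).
    sizes = ndimage.sum(mask, labeled, index=range(1, n_components + 1))
    return labeled == (np.argmax(sizes) + 1)

# Toy volume: one 3-voxel airway stub plus one spurious isolated voxel.
vol = np.zeros((4, 4, 4), dtype=bool)
vol[0, 0, 0:3] = True   # larger component
vol[3, 3, 3] = True     # island to be removed
cleaned = largest_connected_component(vol)
print(int(cleaned.sum()))  # 3
```

    The topology-aware methods the challenge rewards go further than this, e.g. by encouraging branch-level continuity during training rather than pruning afterwards.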

    Implementable Deep Learning for Multi-sequence Proton MRI Lung Segmentation: A Multi-center, Multi-vendor, and Multi-disease Study

    Background: Recently, deep learning via convolutional neural networks (CNNs) has largely superseded conventional methods for proton (1H)-MRI lung segmentation. However, previous deep learning studies have utilized single-center data and limited acquisition parameters. Purpose: To develop a generalizable CNN for lung segmentation in 1H-MRI, robust to pathology, acquisition protocol, vendor, and center. Study Type: Retrospective. Population: A total of 809 1H-MRI scans from 258 participants with various pulmonary pathologies (median age (range): 57 (6–85); 42% females) and 31 healthy participants (median age (range): 34 (23–76); 34% females), split into training (593 scans (74%); 157 participants (55%)), testing (50 scans (6%); 50 participants (17%)), and external validation (164 scans (20%); 82 participants (28%)) sets. Field Strength/Sequence: 1.5-T and 3-T/3D spoiled-gradient-recalled and ultrashort echo-time 1H-MRI. Assessment: 2D and 3D CNNs, trained on single-center, multi-sequence data, and the conventional spatial fuzzy c-means (SFCM) method were compared to manually delineated expert segmentations. Each method was validated on external data originating from several centers. Dice similarity coefficient (DSC), average boundary Hausdorff distance (Average HD), and relative error (XOR) metrics were used to assess segmentation performance. Statistical Tests: Kruskal–Wallis tests assessed the significance of differences between acquisitions in the testing set. Friedman tests with post hoc multiple comparisons assessed differences between the 2D CNN, 3D CNN, and SFCM. Bland–Altman analyses assessed agreement with manually derived lung volumes.
A P value of <0.05 was considered statistically significant. Results: The 3D CNN significantly outperformed its 2D analog and SFCM, yielding a median (range) DSC of 0.961 (0.880–0.987), Average HD of 1.63 mm (0.65–5.45), and XOR of 0.079 (0.025–0.240) on the testing set, and a DSC of 0.973 (0.866–0.987), Average HD of 1.11 mm (0.47–8.13), and XOR of 0.054 (0.026–0.255) on external validation data. Data Conclusion: The 3D CNN generated accurate 1H-MRI lung segmentations on a heterogeneous dataset, demonstrating robustness to disease pathology, sequence, vendor, and center. Evidence Level: 4. Technical Efficacy: Stage 1.
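    The relative error (XOR) metric reported above can be sketched as the volume of the symmetric difference between predicted and reference masks, normalized by the reference volume. The exact normalization used in the study is not given in the abstract, so that convention is an assumption here.

```python
import numpy as np

def xor_relative_error(pred, ref):
    """Relative error: |pred XOR ref| / |ref| for binary masks.
    Normalizing by the reference volume is an assumed convention."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    # Symmetric difference counts both false positives and false negatives.
    return np.logical_xor(pred, ref).sum() / ref.sum()

# Toy masks: 4 reference voxels, 1 missed + 1 spurious = 2 disagreeing voxels.
ref = np.array([1, 1, 1, 1, 0, 0], dtype=bool)
pred = np.array([1, 1, 1, 0, 1, 0], dtype=bool)
print(xor_relative_error(pred, ref))  # 0.5
```

    Unlike DSC, this metric penalizes over- and under-segmentation symmetrically and grows without bound, so lower is better (the study's best median was 0.054).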