
    Pan-cancer classifications of tumor histological images using deep learning

    Histopathological images are essential for the diagnosis of cancer type and selection of optimal treatment. However, the current clinical process of manual inspection of images is time consuming and prone to intra- and inter-observer variability. Here we show that key aspects of cancer image analysis can be performed by deep convolutional neural networks (CNNs) across a wide spectrum of cancer types. In particular, we implement CNN architectures based on Google Inception v3 transfer learning to analyze 27,815 H&E slides from 23 cohorts in The Cancer Genome Atlas in studies of tumor/normal status, cancer subtype, and mutation status. For 19 solid cancer types we are able to classify tumor/normal status of whole slide images with extremely high AUCs (0.995±0.008). We are also able to classify cancer subtypes within 10 tissue types with AUC values well above random expectations (micro-average 0.87±0.1). We then perform a cross-classification analysis of tumor/normal status across tumor types. We find that classifiers trained on one type are often effective in distinguishing tumor from normal in other cancer types, with the relationships among classifiers matching known cancer tissue relationships. For the more challenging problem of mutational status, we are able to classify TP53 mutations in three cancer types with AUCs from 0.65 to 0.80 using a fully-trained CNN, and with similar cross-classification accuracy across tissues. These studies demonstrate the power of CNNs for not only classifying histopathological images in diverse cancer types, but also for revealing shared biology between tumors. We have made software available at https://github.com/javadnoorb/HistCNN.
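The AUC figures quoted above summarize how well classifier scores rank tumor slides above normal ones. As a generic aside (not the authors' code), AUC can be computed directly from labels and scores via the Mann-Whitney rank statistic:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: iterable of 0/1 ground truth (1 = tumor).
    scores: iterable of classifier probabilities, same order.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need both classes to compute AUC")
    # Count pairs where the positive outranks the negative; ties count 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Well-separated scores give an AUC near 1, matching the 0.995 regime above.
print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # → 1.0
```

A random scorer yields AUC 0.5 under this pairwise formulation, which is why values far above 0.5 are read as "well above random expectations."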

    A Deep Learning Study on Osteosarcoma Detection from Histological Images

    In the U.S., 5-10% of new pediatric cancer cases are primary bone tumors. The most common type of primary malignant bone tumor is osteosarcoma. The intention of the present work is to improve the detection and diagnosis of osteosarcoma using computer-aided detection (CAD) and diagnosis (CADx). Tools such as convolutional neural networks (CNNs) can significantly decrease the surgeon's workload and improve prognosis of patient conditions. CNNs need to be trained on a large amount of data in order to achieve trustworthy performance. In this study, transfer learning techniques with pre-trained CNNs are adapted to a public dataset of osteosarcoma histological images to distinguish necrotic images from non-necrotic and healthy tissues. First, the dataset was preprocessed and different classification schemes were applied. Then, transfer learning models including VGG19 and Inception V3 were trained on whole slide images (WSI) without patching, to improve the accuracy of the outputs. Finally, the models were applied to different classification problems, including binary and multi-class classifiers. Experimental results show that VGG19 achieves the highest accuracy, 96%, across both binary and multi-class classification. Our fine-tuned model demonstrates state-of-the-art performance in detecting malignancy of osteosarcoma from histologic images.
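The binary and multi-class tasks mentioned above differ only in how labels are grouped and scored. A minimal sketch with hypothetical tile labels (not the study's data) shows how collapsing the three tissue classes into a necrotic-vs-other task changes the measured accuracy:

```python
def to_binary(label):
    """Collapse the three tissue classes into a binary necrotic-vs-other task."""
    return "necrotic" if label == "necrotic" else "other"

def accuracy(y_true, y_pred):
    """Fraction of exact matches; works for binary or multi-class labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical labels: the multi-class task vs its binary collapse.
truth = ["necrotic", "healthy", "non-necrotic", "necrotic"]
pred = ["necrotic", "non-necrotic", "healthy", "necrotic"]
print(accuracy(truth, pred))                     # → 0.5
print(accuracy([to_binary(t) for t in truth],
               [to_binary(p) for p in pred]))    # → 1.0
```

Confusions between the two non-necrotic classes vanish in the binary view, which is one reason binary accuracies often exceed multi-class ones on the same predictions.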

    Exploring transfer learning in chest radiographic images within the interplay between COVID-19 and diabetes

    The intricate relationship between COVID-19 and diabetes has garnered increasing attention within the medical community. Emerging evidence suggests that individuals with diabetes may experience heightened vulnerability to COVID-19 and, in some cases, develop diabetes as a complication following the viral infection. Additionally, it has been observed that patients taking cough medicine containing steroids may face an elevated risk of developing diabetes, further underscoring the complex interplay between these health factors. Building on previous research, we implemented deep-learning models to diagnose the infection via chest x-ray images of coronavirus patients. Three thousand (3,000) chest x-rays were collected through freely available resources. A board-certified radiologist identified images demonstrating the presence of COVID-19 disease. Four standard convolutional neural networks, Inception-v3, ShuffleNet, Inception-ResNet-v2, and NASNet-Large, were trained by applying transfer learning on 2,440 chest x-rays from the dataset to examine COVID-19 disease in the pulmonary radiographic images. The results showed a sensitivity rate of 98% and a specificity rate of almost ninety percent (90%) when testing those models on the remaining 2,080 images. In addition to model sensitivity and specificity, we present the receiver operating characteristic (ROC) graph, the precision vs. recall curve, the confusion matrix of each classification model, and a detailed quantitative analysis for COVID-19 detection. An automatic approach is also implemented to reconstruct heat maps and overlay them on the lung areas that might be affected by COVID-19; these were judged plausible when interpreted by our accredited radiologist. Although the findings are encouraging, more research on a broader range of COVID-19 images must be carried out to achieve higher accuracy values. The data collection, concept implementations (in MATLAB 2021a), and assessments are accessible to the testing group.
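The sensitivity and specificity rates reported above follow directly from the confusion counts of each model. A generic sketch (illustrative labels, not the study's data):

```python
def sensitivity_specificity(y_true, y_pred, positive="covid"):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    computed from paired ground-truth and predicted labels."""
    tp = fn = tn = fp = 0
    for t, p in zip(y_true, y_pred):
        if t == positive:
            tp += p == positive
            fn += p != positive
        else:
            tn += p != positive
            fp += p == positive
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical test split: all COVID cases caught, one normal case flagged.
truth = ["covid"] * 5 + ["normal"] * 5
pred = ["covid"] * 5 + ["normal", "normal", "normal", "normal", "covid"]
print(sensitivity_specificity(truth, pred))  # → (1.0, 0.8)
```

High sensitivity with somewhat lower specificity, as in the 98%/90% figures above, means few infections are missed at the cost of some false alarms.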

    Deep learning in computed tomography pulmonary angiography imaging: a dual-pronged approach for pulmonary embolism detection

    The increasing reliance on Computed Tomography Pulmonary Angiography (CTPA) for Pulmonary Embolism (PE) diagnosis presents challenges and a pressing need for improved diagnostic solutions. The primary objective of this study is to leverage deep learning techniques to enhance the Computer Assisted Diagnosis (CAD) of PE. With this aim, we propose a classifier-guided detection approach that effectively leverages the classifier's probabilistic inference to direct the detection predictions, marking a novel contribution in the domain of automated PE diagnosis. Our classification system includes an Attention-Guided Convolutional Neural Network (AG-CNN) that uses local context by employing an attention mechanism. This approach emulates a human expert's attention by looking at both global appearances and local lesion regions before making a decision. The classifier demonstrates robust performance on the FUMPE dataset, achieving an AUROC of 0.927, sensitivity of 0.862, specificity of 0.879, and an F1-score of 0.805 with the Inception-v3 backbone architecture. Moreover, AG-CNN outperforms the baseline DenseNet-121 model, achieving an 8.1% AUROC gain. While previous research has mostly focused on finding PE in the main arteries, our use of cutting-edge object detection models and ensembling techniques greatly improves the accuracy of detecting small embolisms in the peripheral arteries. Finally, our proposed classifier-guided detection approach further refines the detection metrics, contributing a new state of the art to the community: mAP50, sensitivity, and F1-score of 0.846, 0.901, and 0.779, respectively, outperforming the former benchmark with a significant 3.7% improvement in mAP50. Our research aims to elevate PE patient care by integrating AI solutions into clinical workflows, highlighting the potential of human-AI collaboration in medical diagnostics. Comment: Published in Expert Systems With Applications.
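The classifier-guided detection idea, using the image-level PE probability to modulate box-level confidences, can be sketched in a few lines. The multiplicative gating shown here is an illustrative assumption, not necessarily the paper's exact fusion rule:

```python
def guide_detections(boxes, cls_prob, threshold=0.5):
    """Re-weight detector confidences by the image-level PE probability.

    boxes: list of (x, y, w, h, score) detector outputs.
    cls_prob: classifier's probability that the scan contains a PE.
    Returns only boxes whose guided score survives the threshold.
    """
    guided = [(x, y, w, h, score * cls_prob) for x, y, w, h, score in boxes]
    return [b for b in guided if b[4] >= threshold]

# Hypothetical detections: a confident box survives guiding, a marginal
# one is suppressed once the classifier's probability scales it down.
dets = [(10, 20, 5, 5, 0.9), (40, 50, 4, 4, 0.5)]
kept = guide_detections(dets, cls_prob=0.927)
print(len(kept))  # → 1
```

When the classifier assigns a low PE probability to the whole scan, all boxes are attenuated, which is how classifier guidance can reduce false-positive detections.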

    A Systematic Search over Deep Convolutional Neural Network Architectures for Screening Chest Radiographs

    Chest radiographs are primarily employed for the screening of pulmonary and cardio-/thoracic conditions. Being undertaken at primary healthcare centers, they require the presence of an on-premise reporting radiologist, which is a challenge in low- and middle-income countries. This has inspired the development of machine learning based automation of the screening process. While recent efforts demonstrate a performance benchmark using an ensemble of deep convolutional neural networks (CNN), our systematic search over multiple standard CNN architectures identified single candidate CNN models whose classification performance was found to be on par with ensembles. Over 63 experiments spanning 400 hours, executed on an 11.3 FP32 TFLOPS compute system, we found the Xception and ResNet-18 architectures to be consistent performers in identifying co-existing disease conditions with an average AUC of 0.87 across nine pathologies. We assess the reliability of the models through their saliency maps, generated using the randomized input sampling for explanation (RISE) method and qualitatively validated against manual annotations locally sourced from an experienced radiologist. We also draw a critical note on the limitations of the publicly available CheXpert dataset, primarily on account of disparity in class distribution between training and testing sets, and unavailability of sufficient samples for a few classes, which hampers quantitative reporting due to sample insufficiency. Comment: accepted in EMBC 2020, 4 pages + 2-page Appendix.
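The systematic-search result above amounts to ranking candidate architectures by their mean AUC across pathologies and checking whether the best single model matches the ensemble. A minimal sketch of that selection step, with hypothetical AUC values:

```python
def best_single_model(results):
    """Pick the architecture with the highest mean AUC across pathologies.

    results: dict mapping architecture name -> list of per-pathology AUCs.
    Returns (best architecture name, its mean AUC).
    """
    mean_auc = {name: sum(aucs) / len(aucs) for name, aucs in results.items()}
    best = max(mean_auc, key=mean_auc.get)
    return best, mean_auc[best]

# Hypothetical per-pathology AUCs for three candidate architectures.
runs = {
    "xception": [0.89, 0.86, 0.88],
    "resnet18": [0.87, 0.87, 0.87],
    "vgg16": [0.82, 0.80, 0.84],
}
name, score = best_single_model(runs)
print(name)  # → xception
```

Repeating this over all candidates and all pathologies is what makes such a search "systematic": the winner is chosen by aggregate performance, not a single task.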

    Detecting COVID-19 in chest X-ray images

    One reliable way of detecting coronavirus disease 2019 (COVID-19) is using a chest x-ray image, due to the disease's complications in the lung parenchyma. This paper proposes a solution for COVID-19 detection in chest x-ray images based on a convolutional neural network (CNN). This CNN-based solution is developed using a modified InceptionV3 as a backbone architecture. Self-attention layers are inserted into the backbone such that the number of trainable parameters is reduced and meaningful COVID-19 areas of chest x-ray images are focused on during training. The proposed CNN architecture is then trained to construct a model that classifies COVID-19 cases from non-COVID-19 cases. It achieves sensitivity, specificity, and accuracy values of 93%, 96%, and 96%, respectively. The model is further validated on so-called other-normal and other-abnormal cases, both of which are non-COVID-19. Other-normal cases contain chest x-ray images of elderly patients with minimal fibrosis and spondylosis of the spine, whereas other-abnormal cases contain chest x-ray images of tuberculosis, pneumonia, and pulmonary edema. The proposed solution correctly classified them as non-COVID-19 with 92% accuracy. This is a practical scenario, where non-COVID-19 cases can cover more than just a normal condition.
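The self-attention layers mentioned above weight spatial features by a softmax over scaled dot products, letting the network concentrate on informative regions. A minimal one-query sketch of that weighting (generic scaled dot-product attention, not the modified InceptionV3 layer itself):

```python
import math

def attention_weights(query, keys):
    """Scaled dot-product attention weights for one query over a set of keys.

    Computes softmax(query . key / sqrt(d)) -- the distribution a
    self-attention layer uses to decide where to focus.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]  # numerically stable softmax
    total = sum(exps)
    return [e / total for e in exps]

# A key aligned with the query receives most of the attention mass;
# an anti-aligned key receives the least.
w = attention_weights([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
print([round(x, 3) for x in w])
```

The weights always sum to 1, so inserting such a layer redistributes emphasis over spatial positions rather than adding capacity, consistent with the parameter reduction described above.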