20 research outputs found

    Role of machine learning in early diagnosis of kidney diseases.

    Machine learning (ML) and deep learning (DL) approaches have been used as indispensable tools in modern artificial intelligence-based computer-aided diagnostic (AI-based CAD) systems that can provide non-invasive, early, and accurate diagnosis of a given medical condition. These AI-based CAD systems have proven to be reproducible and to generalize to new, unseen cases across several diseases and medical conditions in different organs (e.g., kidneys, prostate, brain, liver, lung, breast, and bladder). This dissertation focuses on the role of such AI-based CAD systems in the early diagnosis of two kidney diseases, namely acute rejection (AR) post kidney transplantation and renal cancer (RC). A new renal computer-assisted diagnostic (Renal-CAD) system was developed to precisely diagnose AR post kidney transplantation at an early stage. The developed Renal-CAD system performs the following main steps: (1) auto-segmentation of the renal allograft from surrounding tissues in diffusion-weighted magnetic resonance imaging (DW-MRI) and blood oxygen level-dependent MRI (BOLD-MRI); (2) extraction of image markers, namely voxel-wise apparent diffusion coefficients (ADCs) calculated from DW-MRI scans at 11 different low and high b-values and represented as cumulative distribution functions (CDFs), and transverse relaxation rate (R2*) values extracted from the segmented kidneys using BOLD-MRI scans at different echo times; (3) integration of these multimodal image markers with the associated clinical biomarkers, serum creatinine (SCr) and creatinine clearance (CrCl); and (4) diagnosis of renal allograft status as non-rejection (NR) or AR by feeding the integrated biomarkers to a deep learning classification model built on stacked auto-encoders (SAEs). Using a leave-one-subject-out cross-validation approach along with SAEs on a total of 30 patients with transplanted kidneys (AR = 10 and NR = 20), the Renal-CAD system demonstrated 93.3% accuracy, 90.0% sensitivity, and 95.0% specificity in differentiating AR from NR. Robustness of the Renal-CAD system was also confirmed by an area under the curve (AUC) value of 0.92. Using a stratified 10-fold cross-validation approach, the Renal-CAD system demonstrated its reproducibility and robustness with a diagnostic accuracy of 86.7%, sensitivity of 80.0%, specificity of 90.0%, and AUC of 0.88. In addition, a new renal cancer CAD (RC-CAD) system for precise diagnosis of RC at an early stage was developed, which incorporates the following main steps: (1) estimation of morphological features by applying a new parametric spherical harmonics technique; (2) extraction of appearance-based features, namely first-order textural features and second-order textural features computed from the gray-level co-occurrence matrix (GLCM); (3) estimation of functional features by constructing wash-in/wash-out slopes to quantify enhancement variations across different contrast-enhanced computed tomography (CE-CT) phases; and (4) integration of all the aforementioned features into a two-stage multilayer perceptron artificial neural network (MLP-ANN) classifier that classifies the renal tumor as benign or malignant and identifies the malignancy subtype. On a total of 140 RC patients (70 with malignant tumors (ccRCC = 40 and nccRCC = 30) and 70 with benign angiomyolipoma tumors), the developed RC-CAD system was validated using a leave-one-subject-out cross-validation approach.
    The developed RC-CAD system achieved a sensitivity of 95.3% ± 2.0%, a specificity of 99.9% ± 0.4%, and a Dice similarity coefficient of 0.98 ± 0.01 in differentiating malignant from benign renal tumors, as well as an overall accuracy of 89.6% ± 5.0% in subtyping RCC. The diagnostic abilities of the developed RC-CAD system were further validated using a randomly stratified 10-fold cross-validation approach. The results obtained using the proposed MLP-ANN classification model outperformed other machine learning classifiers (e.g., support vector machines, random forests, and relational functional gradient boosting) as well as other approaches from the literature. In summary, machine and deep learning approaches have shown the potential to be utilized in building AI-based CAD systems, as evidenced by the promising diagnostic performance obtained by both the Renal-CAD and RC-CAD systems. For the Renal-CAD, the integration of functional markers extracted from multimodal MRI with clinical biomarkers using the SAE classification model improved the final diagnostic results, as evidenced by high accuracy, sensitivity, and specificity. The developed Renal-CAD demonstrated high feasibility and efficacy for early, accurate, and non-invasive identification of AR. For the RC-CAD, integrating morphological, textural, and functional features extracted from CE-CT images using an MLP-ANN classification model enhanced the final results in terms of accuracy, sensitivity, and specificity, making the proposed RC-CAD a reliable non-invasive diagnostic tool for RC. The early and accurate diagnosis of AR or RC will help physicians provide early intervention with an appropriate treatment plan to prolong the life span of the diseased kidney, increase the survival chances of the patient, and thus improve healthcare outcomes in the U.S. and worldwide.
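    To make step (2) of the Renal-CAD pipeline concrete, the sketch below shows one plausible way to compute voxel-wise ADC maps from multi-b-value DW-MRI and summarize them as a CDF marker. It is an illustration under stated assumptions, not the authors' code: the mono-exponential fit, the b-value list, the signal array, and the CDF grid are all placeholders.

```python
# Illustrative sketch (not the authors' implementation): voxel-wise ADC from
# multi-b-value DW-MRI, summarized as a fixed-length CDF feature vector.
import numpy as np

def voxelwise_adc(signals, b_values):
    """Fit S(b) = S0 * exp(-b * ADC) per voxel via log-linear least squares.

    signals: (n_b, n_voxels) DW-MRI intensities at each b-value
    b_values: (n_b,) acquisition b-values in s/mm^2
    """
    b = np.asarray(b_values, dtype=float)
    log_s = np.log(np.clip(signals, 1e-6, None))
    # Linear model: log S = log S0 - b * ADC, so the second coefficient is ADC.
    design = np.stack([np.ones_like(b), -b], axis=1)         # (n_b, 2)
    coeffs, *_ = np.linalg.lstsq(design, log_s, rcond=None)  # (2, n_voxels)
    return coeffs[1]                                         # ADC per voxel

def cdf_marker(adc, grid):
    """Represent the ADC distribution of a segmented kidney as an empirical
    CDF sampled on a fixed grid, yielding a fixed-length feature vector."""
    adc_sorted = np.sort(adc)
    return np.searchsorted(adc_sorted, grid, side="right") / adc.size

# Hypothetical usage: 11 b-values, one segmented allograft (placeholder data).
b_vals = [0, 50, 100, 200, 300, 400, 500, 600, 700, 800, 1000]
signals = np.abs(np.random.randn(len(b_vals), 5000)) + 1.0
adc_map = voxelwise_adc(signals, b_vals)
feature = cdf_marker(adc_map, grid=np.linspace(0, 3e-3, 50))
```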

    Handcrafted histological transformer (H2T): unsupervised representation of whole slide images

    Diagnostic, prognostic, and therapeutic decision-making for cancer in pathology clinics can now be carried out based on analysis of multi-gigapixel tissue images, also known as whole-slide images (WSIs). Recently, deep convolutional neural networks (CNNs) have been proposed to derive unsupervised WSI representations; these are attractive because they rely less on expert annotation, which is cumbersome. However, a major trade-off is that higher predictive power generally comes at the cost of interpretability, posing a challenge to clinical use, where transparency in decision-making is generally expected. To address this challenge, we present a handcrafted framework based on deep CNNs for constructing holistic WSI-level representations. Building on recent findings about the internal workings of the Transformer in the domain of natural language processing, we break down its processes and handcraft them into a more transparent framework that we term the Handcrafted Histological Transformer, or H2T. Based on our experiments involving various datasets consisting of a total of 10,042 WSIs, the results demonstrate that H2T-based holistic WSI-level representations offer competitive performance compared to recent state-of-the-art methods and can be readily utilized for various downstream analysis tasks. Finally, our results demonstrate that the H2T framework can be up to 14 times faster than the Transformer models.
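    One way to picture the idea of a handcrafted, transparent WSI-level representation is prototype-based aggregation: cluster patch-level deep features into a small dictionary of prototypes and describe each slide by how its patches relate to them. The sketch below is an assumed simplification of this family of methods, not the H2T algorithm itself; feature dimensions and the clustering choice are placeholders.

```python
# Minimal sketch (assumed, not the H2T implementation) of prototype-based
# aggregation of patch features into a holistic, fixed-size WSI vector.
import numpy as np
from sklearn.cluster import KMeans

def fit_prototypes(all_patch_features, n_prototypes=16, seed=0):
    """Learn a dictionary of prototype patterns from patch features
    pooled across many WSIs (fully unsupervised)."""
    km = KMeans(n_clusters=n_prototypes, random_state=seed, n_init=10)
    km.fit(all_patch_features)
    return km.cluster_centers_                     # (n_prototypes, d)

def wsi_representation(patch_features, prototypes):
    """Aggregate one slide's patches: mean patch feature assigned to each
    prototype, concatenated into a single interpretable vector."""
    dists = np.linalg.norm(
        patch_features[:, None, :] - prototypes[None, :, :], axis=-1)
    assign = dists.argmin(axis=1)
    parts = []
    for k in range(prototypes.shape[0]):
        members = patch_features[assign == k]
        parts.append(members.mean(axis=0) if len(members) else
                     np.zeros(patch_features.shape[1]))
    return np.concatenate(parts)                   # (n_prototypes * d,)
```

    Each slot of the resulting vector is tied to a named prototype, which is what makes this style of representation more inspectable than an end-to-end Transformer embedding.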

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in image acquisition and hardware development over the past three decades have resulted in modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other, the development of computerized tools to automatically or semi-automatically examine image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies: the first two focus on introducing novel methods for tumor segmentation, and the last four aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and to integrate this pipeline into segmentation networks to improve segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II aims to address a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach is proposed based on the concept of image inpainting to segment lung and head-neck tumors in images from single and multiple modalities. The proposed auto-inpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account local intra-nodule heterogeneities and global contextual information. Study IV compares the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end deep learning (DL) models, and deep features-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing learned deep features into radiomic features for boosting classification power. Study V focuses on the early assessment of lung tumor response to applied treatments by proposing a novel feature set that can be interpreted physiologically.
    This feature set was employed to quantify changes in tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of patients two years after the last treatment session. The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine learning techniques in the methods developed for the quantitative assessment of tumor characteristics, and contribute to the essential procedures of cancer diagnosis and prognosis.
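    The fusion idea highlighted for Study IV, combining handcrafted radiomic features with learned deep features before classification, can be sketched very compactly. The example below is a hedged illustration with placeholder feature matrices and an arbitrary classifier, not the study's actual pipeline.

```python
# Hedged sketch of deep/radiomic feature fusion for nodule classification:
# concatenate the two feature sets and train a single classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
radiomic = rng.normal(size=(200, 100))   # e.g., shape/intensity/texture features
deep = rng.normal(size=(200, 256))       # e.g., a CNN embedding per nodule
labels = rng.integers(0, 2, size=200)    # benign (0) vs malignant (1)

fused = np.hstack([radiomic, deep])      # simple early fusion by concatenation
clf = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
scores = cross_val_score(clf, fused, labels, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```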

    Multi-Magnification Search in Digital Pathology

    This research study investigates the effect of magnification on content-based image search in digital pathology archives and proposes multi-magnification image representations. Image search in large archives of digital pathology slides provides researchers and medical professionals with an opportunity to match records of current and past patients and learn from evidently diagnosed and treated cases. When working with microscopes, pathologists switch between different magnification levels while examining tissue specimens to find and evaluate various morphological features. Inspired by the conventional pathology workflow, this thesis investigates several magnification levels in digital pathology and their combinations to minimize the gap between AI-enabled image search methods and clinical settings. This thesis suggests two approaches for combining magnification levels and compares their performance. The first approach obtains a single-vector deep feature representation for a WSI, whereas the second approach works with a multi-vector deep feature representation. The proposed content-based search framework does not rely on any pixel-level annotation and potentially applies to millions of unlabelled (raw) WSIs. This thesis proposes using binary masks generated by U-Net as the primary step of patch preparation to locate tissue regions in a WSI. As part of this thesis, a multi-magnification dataset of histopathology patches was created by applying the proposed patch-preparation method to more than 8,000 WSIs of the TCGA repository. The performance of both multi-magnification search (MMS) methods is evaluated by investigating the top three most similar WSIs to a query WSI found by the search. The search is considered successful if two out of three matched cases have the same malignancy subtype as the query WSI. Experimental search results across tumors of several anatomical sites at different magnification levels, i.e., 20×, 10×, and 5× magnifications and their combinations, are reported in this thesis. The experiments verify that cell-level information at the highest magnification is essential for searching for diagnostic purposes; in contrast, low-magnification information may improve this assessment depending on the tumor type. Both proposed search methods generally performed more accurately at 20× magnification or at the combination of 20× with 10×, 5×, or both. The multi-magnification search approach achieved up to an 11% increase in F1-score when searching among some tumor types, including urinary tract and brain tumor subtypes, compared to single-magnification image search.
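    The evaluation rule described above (a search counts as successful when at least two of the three nearest retrieved WSIs share the query's malignancy subtype) is straightforward to express in code. The following is an illustrative sketch with placeholder feature vectors and Euclidean distance; the thesis's actual distance measure and feature extraction are not reproduced here.

```python
# Illustrative sketch of top-3 majority-vote retrieval evaluation.
import numpy as np

def top3_majority_label(query_vec, archive_vecs, archive_labels):
    """Return the majority label among the three nearest archive WSIs,
    or None when no label appears at least twice."""
    dists = np.linalg.norm(archive_vecs - query_vec, axis=1)
    top3 = np.asarray(archive_labels)[np.argsort(dists)[:3]]
    values, counts = np.unique(top3, return_counts=True)
    return values[counts.argmax()] if counts.max() >= 2 else None

def search_accuracy(queries, query_labels, archive_vecs, archive_labels):
    """Fraction of queries whose top-3 majority matches the true subtype."""
    hits = sum(
        top3_majority_label(q, archive_vecs, archive_labels) == y
        for q, y in zip(queries, query_labels))
    return hits / len(queries)
```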

    Out-of-Distribution Generalization of Gigapixel Image Representation

    This thesis addresses the significant challenge of improving the generalization capabilities of deep neural networks in the classification of whole-slide images (WSIs) in histopathology across different and unseen hospitals. This is a critical issue in AI applications for vision-based healthcare tasks, given that current standard methodologies struggle with out-of-distribution (OOD) data from varying hospital sources. In histopathology, distribution shifts can arise from image acquisition variances across scanner vendors, differences in laboratory routines and staining procedures, and diversity in patient demographics. This work investigates two critical forms of generalization within histopathology: magnification generalization and OOD generalization towards different hospitals. One chapter of this thesis is dedicated to the exploration of magnification generalization, acknowledging the variability in histopathological images due to distinct magnification levels and seeking to enhance the model's robustness by learning features that are invariant across these levels. The major part of this work, however, focuses on OOD generalization, specifically to unseen hospital data. The objective is to leverage knowledge encapsulated in pre-existing models to help new models adapt to diverse data scenarios and to ensure their efficient operation in different hospital environments. Additionally, the concept of Hospital-Agnostic (HA) learning regimes is introduced, focusing on characteristics that are invariant across hospitals and aiming to establish a learning model that sustains stable performance in varied hospital settings. The culmination of this research is a comprehensive method, termed ALFA (Exploiting All Levels of Feature Abstraction), that not only considers invariant features across hospitals but also extracts a broader set of features from input images, thus maximizing the model's generalization potential. The findings of this research are expected to have significant implications for the deployment of medical image classification systems using deep models in clinical settings. The proposed methods allow for more accurate and reliable diagnostic support across various hospital environments, thereby improving diagnostic accuracy and reliability and paving the way for enhanced generalization in histopathology diagnostics using deep learning techniques. Future research may build on these investigations to further improve generalization in histopathology.
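    The hospital-level OOD setting described above is typically measured with a leave-one-hospital-out protocol: train on all sites but one and test on the held-out site. The harness below is a small assumed illustration of that protocol with a stand-in linear classifier, not a method from the thesis.

```python
# Assumed illustration of leave-one-hospital-out OOD evaluation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def leave_one_hospital_out(features, labels, hospital_ids):
    """Yield (held-out hospital, OOD accuracy) for a stand-in classifier.

    features: (n, d) array; labels: (n,) array; hospital_ids: (n,) array.
    """
    for h in np.unique(hospital_ids):
        train, test = hospital_ids != h, hospital_ids == h
        clf = LogisticRegression(max_iter=1000)
        clf.fit(features[train], labels[train])
        yield h, clf.score(features[test], labels[test])
```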

    Domain Generalization in Computational Pathology: Survey and Guidelines

    Deep learning models have exhibited exceptional effectiveness in Computational Pathology (CPath) by tackling intricate tasks across an array of histology image analysis applications. Nevertheless, the presence of out-of-distribution data (stemming from a multitude of sources, such as disparate imaging devices and diverse tissue preparation methods) can cause domain shift (DS). DS decreases the generalization of trained models to unseen datasets with slightly different data distributions, prompting the need for innovative domain generalization (DG) solutions. Recognizing the potential of DG methods to significantly influence diagnostic and prognostic models in cancer studies and clinical practice, we present this survey along with guidelines on achieving DG in CPath. We rigorously define various DS types, systematically review and categorize existing DG approaches and resources in CPath, and provide insights into their advantages, limitations, and applicability. We also conduct thorough benchmarking experiments with 28 cutting-edge DG algorithms to address a complex DG problem. Our findings suggest that careful experiment design and CPath-specific stain augmentation techniques can be very effective; however, there is no one-size-fits-all solution for DG in CPath. Therefore, we establish clear guidelines for detecting and managing DS depending on different scenarios. While most of the concepts, guidelines, and recommendations are given for applications in CPath, we believe they are applicable to most medical image analysis tasks as well.
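    Stain augmentation, which the survey finds effective, is commonly implemented by jittering images in the haematoxylin-eosin-DAB (HED) stain space. The sketch below shows one widely used recipe of this kind as an illustration; it is not necessarily the variant benchmarked in the survey, and the jitter magnitude is an assumption.

```python
# Hedged example of HED-space stain augmentation: randomly scale and shift
# each stain channel, then map back to RGB.
import numpy as np
from skimage.color import rgb2hed, hed2rgb
from skimage.util import img_as_float

def hed_stain_jitter(rgb_image, sigma=0.05, rng=None):
    """Return a stain-augmented copy of an RGB histology image."""
    rng = rng if rng is not None else np.random.default_rng()
    hed = rgb2hed(img_as_float(rgb_image))
    alpha = rng.uniform(1 - sigma, 1 + sigma, size=3)   # per-channel scale
    beta = rng.uniform(-sigma, sigma, size=3)           # per-channel shift
    augmented = hed * alpha + beta
    return np.clip(hed2rgb(augmented), 0, 1)
```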

    Radiomics using computed tomography to predict CD73 expression and prognosis of colorectal cancer liver metastases

    Background: Finding a noninvasive radiomic surrogate of tumor immune features could help identify patients more likely to respond to novel immune checkpoint inhibitors. In particular, CD73 is an ectonucleotidase that catalyzes the breakdown of extracellular AMP into immunosuppressive adenosine, which can be blocked by therapeutic antibodies. High CD73 expression in colorectal cancer liver metastases (CRLM) resected with curative intent is associated with early recurrence and shorter patient survival. The aim of this study was hence to evaluate whether machine learning analysis of preoperative liver CT scans could estimate high vs. low CD73 expression in CRLM and whether such a radiomic score would have prognostic significance.
    Methods: We trained an Attentive Interpretable Tabular Learning (TabNet) model to predict, from preoperative CT images, stratified expression levels of CD73 (CD73High vs. CD73Low) assessed by immunofluorescence (IF) on tissue microarrays. Radiomic features were extracted from 160 segmented CRLM of 122 patients with matched IF data, preprocessed, and used to train the predictive model. We applied five-fold cross-validation and validated the performance on a hold-out test set.
    Results: TabNet provided areas under the receiver operating characteristic curve of 0.95 (95% CI 0.87 to 1.0) and 0.79 (0.65 to 0.92) on the training and hold-out test sets, respectively, and outperformed other machine learning models. The TabNet-derived score, termed rad-CD73, was positively correlated with CD73 histological expression in matched CRLM (Spearman's ρ = 0.6004; P < 0.0001). The median time to recurrence (TTR) and disease-specific survival (DSS) after CRLM resection in rad-CD73High vs. rad-CD73Low patients were 13.0 vs. 23.6 months (P = 0.0098) and 53.4 vs. 126.0 months (P = 0.0222), respectively. The prognostic value of rad-CD73 was independent of the standard clinical risk score, for both TTR (HR = 2.11, 95% CI 1.30 to 3.45, P < 0.005) and DSS (HR = 1.88, 95% CI 1.11 to 3.18, P = 0.020).
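    The validation scheme in the Methods (five-fold cross-validation plus a hold-out test set, scored by ROC AUC) can be sketched as follows. The study used a TabNet model on radiomic features; here a gradient-boosting classifier stands in to keep the example self-contained, and the feature matrix and labels are placeholders.

```python
# Sketch of the CV-plus-hold-out validation scheme with AUC scoring.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(160, 120))     # radiomic features per segmented lesion
y = rng.integers(0, 2, size=160)    # CD73-high (1) vs CD73-low (0)

X_dev, X_test, y_dev, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

aucs = []
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, valid_idx in cv.split(X_dev, y_dev):
    model = GradientBoostingClassifier(random_state=0)
    model.fit(X_dev[train_idx], y_dev[train_idx])
    probs = model.predict_proba(X_dev[valid_idx])[:, 1]
    aucs.append(roc_auc_score(y_dev[valid_idx], probs))

final = GradientBoostingClassifier(random_state=0).fit(X_dev, y_dev)
test_auc = roc_auc_score(y_test, final.predict_proba(X_test)[:, 1])
print(f"CV AUC {np.mean(aucs):.2f}, hold-out AUC {test_auc:.2f}")
```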

    Deep learning in medical imaging and radiation therapy

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd

    CAD system for early diagnosis of diabetic retinopathy based on 3D extracted imaging markers.

    This dissertation makes significant contributions to the field of ophthalmology, addressing the segmentation of retinal layers and the diagnosis of diabetic retinopathy (DR). The first contribution is a novel 3D segmentation approach that leverages the patient-specific anatomy of retinal layers. This approach demonstrates superior accuracy in segmenting all retinal layers from a 3D retinal image compared to current state-of-the-art methods. It also offers enhanced speed, enabling potential clinical applications. The proposed segmentation approach holds great potential for supporting surgical planning and guidance in retinal procedures such as retinal detachment repair or macular hole closure. Surgeons can benefit from the accurate delineation of retinal layers, enabling better understanding of the anatomical structure and more effective surgical interventions. Moreover, real-time guidance systems can be developed to assist surgeons during procedures, improving overall patient outcomes. The second contribution of this dissertation is a novel computer-aided diagnosis (CAD) system for precise identification of diabetic retinopathy. The CAD system utilizes 3D OCT imaging and employs an innovative approach that extracts two distinct features: first-order reflectivity and 3D thickness. These features are then fused and used to train and test a neural network classifier. The proposed CAD system exhibits promising results, surpassing other machine learning and deep learning algorithms commonly employed in DR detection. This demonstrates the effectiveness of the comprehensive analysis approach employed by the CAD system, which considers both low-level and high-level data from the 3D retinal layers. The CAD system goes beyond conventional methods by optimizing backpropagated neural networks to integrate multiple levels of information effectively. By achieving superior performance, the proposed CAD system showcases its potential in accurately diagnosing DR and aiding in the prevention of vision loss. In conclusion, this dissertation presents novel approaches for the segmentation of retinal layers and the diagnosis of diabetic retinopathy. The proposed methods exhibit significant improvements in accuracy, speed, and performance compared to existing techniques, opening new avenues for clinical applications and advancements in the field of ophthalmology. Through future work such as testing on larger datasets, exploring alternative algorithms, and incorporating user feedback, the proposed methods can be further refined into robust, accurate, and clinically valuable tools for diagnosing and monitoring retinal diseases.
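    The fuse-and-classify step described above (concatenating first-order reflectivity statistics with 3D layer-thickness features and feeding them to a backpropagation neural network) can be sketched as follows. This is a hedged illustration with placeholder feature values and an arbitrary network size, not the dissertation's classifier.

```python
# Illustrative sketch of OCT feature fusion feeding a backprop neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
reflectivity = rng.normal(size=(100, 12))  # e.g., first-order stats per layer
thickness = rng.normal(size=(100, 12))     # e.g., mean 3D thickness per layer
y = rng.integers(0, 2, size=100)           # normal (0) vs DR (1)

X = np.hstack([reflectivity, thickness])   # fuse the two feature groups
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))
clf.fit(X, y)
```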

    Learning Discriminative Representations for Gigapixel Images

    Digital images of tumor tissue are important diagnostic and prognostic tools for pathologists. Recent advancement in digital pathology has led to an abundance of digitized histopathology slides, called whole-slide images. Computational analysis of whole-slide images is challenging, as they are generally gigapixel files, often one or more gigabytes in size. However, computational methods provide a unique opportunity to improve the objectivity and accuracy of diagnostic interpretations in histopathology. Recently, deep learning has been successful in characterizing images for vision-based applications in multiple domains, but its applications are relatively less explored in the histopathology domain, mostly due to two challenges. First, it is difficult to scale deep learning methods to process large gigapixel histopathology images. Second, there is a lack of diversified and labeled datasets due to privacy constraints as well as workflow and technical challenges in the healthcare sector. The main goal of this dissertation is to explore and develop deep models that learn discriminative representations of whole-slide images while overcoming these challenges. A three-stage approach was considered in this research. In the first stage, a framework called Yottixel is proposed. It represents a whole-slide image as a set of multiple representative patches, called a mosaic. The mosaic enables convenient processing and compact representation of an entire high-resolution whole-slide image, and allows faster retrieval of similar whole-slide images within large archives of digital histopathology images. Such retrieval technology enables pathologists to tap into past diagnostic data on demand. Yottixel is validated on the largest public archive of whole-slide images (The Cancer Genome Atlas), achieving promising results. Yottixel is an unsupervised method, which limits its performance on specific tasks, especially when a labeled (or partially labeled) dataset is available. In the second stage, multi-instance learning (MIL) is used to enhance cancer subtype prediction through weakly supervised training. Three MIL methods are proposed, each improving upon the previous one: the first is based on memory-based models, the second uses attention-based models, and the third uses graph neural networks. All three methods are incorporated into Yottixel to classify entire whole-slide images with no pixel-level annotations. Access to large-scale and diversified datasets is a primary driver of the advancement and adoption of machine learning technologies; however, healthcare has many restrictive rules around data sharing, limiting research and model development. In the final stage, a federated learning scheme called ProxyFL is developed that enables collaborative training of Yottixel among multiple healthcare organizations without centralization of sensitive medical data. The combined research across all three stages has resulted in a holistic and practical framework for learning discriminative and compact representations of whole-slide images in digital pathology.
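    The mosaic idea behind Yottixel (representing a gigapixel WSI by a small set of representative patches) can be illustrated with a simple clustering step: group a slide's patches and keep the patch nearest each cluster center. The sketch below is an assumed simplification; the actual framework's patch features, cluster count, and selection criteria differ in detail.

```python
# Assumed sketch of mosaic construction: one representative patch per cluster.
import numpy as np
from sklearn.cluster import KMeans

def build_mosaic(patch_features, n_representatives=15, seed=0):
    """Return indices of representative patches for one WSI.

    patch_features: (n_patches, d) array of per-patch feature vectors.
    """
    km = KMeans(n_clusters=n_representatives, random_state=seed, n_init=10)
    assign = km.fit_predict(patch_features)
    mosaic = []
    for k in range(n_representatives):
        members = np.flatnonzero(assign == k)
        dists = np.linalg.norm(
            patch_features[members] - km.cluster_centers_[k], axis=1)
        mosaic.append(members[dists.argmin()])   # patch nearest the centroid
    return np.array(mosaic)
```

    Because the mosaic is a small, fixed-size stand-in for the whole slide, downstream retrieval and classification only ever touch a handful of patches per WSI, which is what makes archive-scale search tractable.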