317 research outputs found

    Studies on Category Prediction of Ovarian Cancers Based on Magnetic Resonance Images

    Ovarian cancer is a gynecological malignancy with a low early-diagnosis rate and high mortality. Ovarian epithelial cancer (OEC) is the most common subtype of ovarian cancer and is pathologically divided into two subtypes, Type I and Type II, which differ in biological characteristics and treatment response. It is therefore important to accurately separate these two groups of patients and to provide a reference for clinicians when designing treatment plans. In current magnetic resonance (MR) examinations, the diagnoses given by radiologists are largely based on individual judgment and are not sufficiently accurate. Because of this low accuracy and the risk of Type II OEC, most patients undergo fine-needle aspiration, which can be harmful. There is therefore a need for a method that classifies OEC subtypes from MR images.

    This thesis proposes an automatic diagnosis system for ovarian cancer based on the combination of deep learning and radiomics. The method uses four sequences commonly employed in ovarian cancer diagnosis, namely sagittal fat-suppressed T2WI (Sag-fs-T2WI), coronal T2WI (Cor-T2WI), axial T1WI (Axi-T1WI), and the apparent diffusion coefficient (ADC) map, to establish a multi-sequence diagnostic model. The system starts by segmenting the ovarian tumors and then extracts radiomic features from the lesion regions together with network features. Selected features are used to build models that predict the malignancy of ovarian tumors, the OEC subtype, and survival. Bi-atten-ResUnet is proposed as the segmentation model. The network is built on U-Net, adopting residual blocks and non-local attention modules while preserving the classic encoder/decoder architecture. The encoder is reconstructed from a pretrained ResNet to exploit transfer learning, and bi-non-local attention modules are added at each level of the decoder. These techniques enhance the network's segmentation performance: the model achieves Dice coefficients of 0.918, 0.905, 0.831, and 0.820 on the four MR sequences, respectively.

    Following segmentation, the thesis proposes a diagnostic model with three steps: quantitative feature extraction, feature selection, and construction of prediction models. First, radiomic features and network features are obtained. An iterative sparse representation (ISR) method is then adopted for feature selection to reduce redundancy and correlation, and the selected features are used to build a predictive model with a support vector machine (SVM) as the classifier. The model achieves an AUC of 0.967 in distinguishing benign from malignant ovarian tumors and an AUC of 0.823 in discriminating Type I from Type II OEC. For survival prediction, patients assigned to the high-risk group are more likely to have a poor prognosis, with a hazard ratio of 4.169.
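
    The classification stage described above can be illustrated with a minimal sketch: features pooled from the segmented lesions are reduced and fed to an SVM, evaluated by cross-validated AUC. An L1-penalized selector is used here only as a simple stand-in for the iterative sparse representation (ISR) step, and all data are random placeholders rather than the thesis cohort.

```python
# Minimal sketch of the feature-selection + SVM classification stage, assuming
# radiomic and network features have already been extracted per lesion.
# The L1-based selector is a stand-in for ISR, not the thesis method.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(120, 400)       # placeholder: 120 lesions x 400 radiomic/network features
y = np.random.randint(0, 2, 120)   # placeholder labels: Type I vs. Type II OEC

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1)),  # sparse selection stand-in
    SVC(kernel="rbf"),
)
print("cross-validated AUC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```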

    Current State-of-the-Art of AI Methods Applied to MRI

    Di Noia, C., Grist, J. T., Riemer, F., Lyasheva, M., Fabozzi, M., Castelli, M., Lodi, R., Tonon, C., Rundo, L., & Zaccagna, F. (2022). Predicting Survival in Patients with Brain Tumors: Current State-of-the-Art of AI Methods Applied to MRI. Diagnostics, 12(9), 1-16. [2125]. https://doi.org/10.3390/diagnostics12092125

    Given growing clinical needs, in recent years Artificial Intelligence (AI) techniques have increasingly been used to define the best approaches for survival assessment and prediction in patients with brain tumors. Advances in computational resources, and the collection of (mainly) public databases, have promoted this rapid development. This narrative review of the current state of the art aimed to survey current applications of AI in predicting survival in patients with brain tumors, with a focus on Magnetic Resonance Imaging (MRI). An extensive search was performed on PubMed and Google Scholar using a Boolean query based on MeSH terms and restricted to the period between 2012 and 2022. Fifty studies were selected, mainly based on Machine Learning (ML), Deep Learning (DL), radiomics-based methods, and methods that exploit traditional imaging techniques for survival assessment. In addition, we focused on two distinct tasks related to survival assessment: the first is the classification of subjects into survival classes (short- and long-term, or short-, mid- and long-term) to stratify patients into distinct groups; the second is the quantification, in days or months, of the individual survival interval. Our survey showed excellent state-of-the-art methods for the first task, with accuracy up to ∼98%. The second task appears to be the more challenging, but state-of-the-art techniques showed promising results, albeit with limitations, with a C-index up to ∼0.91. In conclusion, the available computational methods perform differently according to the specific task, and the choice of the best one to use is not univocal and depends on many aspects. Unequivocally, the use of features derived from quantitative imaging has been shown to be advantageous for AI applications, including survival prediction. This evidence from the literature motivates further research in the field of AI-powered methods for survival prediction in patients with brain tumors, in particular using the wealth of information provided by quantitative MRI techniques.
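
    The two survival-assessment tasks distinguished by the review (class-based stratification versus continuous survival estimation) can be illustrated with a toy scoring sketch. The thresholds, data, and use of the lifelines library for the concordance index are illustrative assumptions, not taken from any of the surveyed studies.

```python
# Illustrative sketch of the two evaluation settings discussed in the review:
# (1) classifying subjects into survival classes, scored with accuracy, and
# (2) predicting a continuous survival time, scored with the concordance index.
import numpy as np
from sklearn.metrics import accuracy_score
from lifelines.utils import concordance_index

true_days = np.array([120, 300, 450, 700, 900])   # observed survival (days), placeholder
pred_days = np.array([150, 280, 500, 650, 880])   # model-predicted survival (days), placeholder
events    = np.array([1, 1, 0, 1, 1])             # 1 = death observed, 0 = censored

# Task 1: short (<300 d), mid (300-600 d), long (>600 d) survival classes
bins = [300, 600]
print("accuracy:", accuracy_score(np.digitize(true_days, bins),
                                  np.digitize(pred_days, bins)))

# Task 2: concordance index on the continuous predictions
print("C-index:", concordance_index(true_days, pred_days, events))
```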

    Integrated Graph Theoretic, Radiomics, and Deep Learning Framework for Personalized Clinical Diagnosis, Prognosis, and Treatment Response Assessment of Body Tumors

    Purpose: A new paradigm is beginning to emerge in radiology with the advent of increased computational capabilities and algorithms. The future of radiological reading rooms is heading towards a unique collaboration between computer scientists and radiologists. The goal of computational radiology is to probe the underlying tissue using advanced algorithms and imaging parameters and to produce a personalized diagnosis that can be correlated to pathology. This thesis presents a complete computational radiology framework (I-GRAD) for personalized clinical diagnosis, prognosis and treatment planning using an integration of graph theory, radiomics, and deep learning.

    Methods: The I-GRAD framework has three major components: image segmentation, feature extraction, and clinical decision support. Image Segmentation: I developed the multiparametric deep learning (MPDL) tissue signature model for segmentation of normal and abnormal tissue from multiparametric (mp) radiological images. The MPDL segmentation network was constructed from stacked sparse autoencoders (SSAE) with five hidden layers; its parameters were optimized using k-fold cross-validation, and the network was tested on an independent dataset. Feature Extraction: I developed the radiomic feature mapping (RFM) and contribution scattergram (CSg) methods for characterizing spatial and inter-parametric relationships in multiparametric imaging datasets. The radiomic feature maps were created by filtering radiological images with first- and second-order statistical texture filters, followed by the development of standardized features for radiological correlation to biology and clinical decision support. The contribution scattergram was constructed to visualize and understand the inter-parametric relationships of breast MRI as a complex network; this multiparametric imaging network was modeled using manifold learning and evaluated using graph-theoretic analysis. Feature Integration: The clinical and radiological features extracted from multiparametric radiological images and clinical records were integrated using a hybrid multiview manifold learning technique termed the Informatics Radiomics Integration System (IRIS). IRIS uses hierarchical clustering in combination with manifold learning to visualize the high-dimensional patient space on a two-dimensional heatmap, which highlights the similarity and dissimilarity between patients and variables.

    Results: All the algorithms and techniques presented in this dissertation were developed and validated using breast cancer as a model for diagnosis and prognosis with multiparametric breast magnetic resonance imaging (MRI). The deep learning MPDL method demonstrated excellent Dice similarity of 0.87±0.05 and 0.84±0.07 for segmentation of lesions in patients with malignant and benign breast disease, respectively. Each of the methods MPDL, RFM, and CSg demonstrated excellent results for breast cancer diagnosis, with areas under the receiver operating characteristic (ROC) curve (AUC) of 0.85, 0.91, and 0.87, respectively. Furthermore, IRIS separated patients at low risk of breast cancer recurrence from patients at medium and high risk with an AUC of 0.93, compared against OncotypeDX, a 21-gene assay for breast cancer recurrence.

    Conclusion: By integrating advanced computer science methods into the radiological setting, the I-GRAD framework presented in this thesis can be used to model radiological imaging data in combination with clinical and histopathological data and to produce new tools for personalized diagnosis, prognosis, or treatment planning by physicians.
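
    As a rough illustration of the radiomic feature mapping (RFM) idea described in the abstract above, the sketch below filters an image with local first-order statistics to obtain voxel-wise feature maps. The window size and the choice of statistics are placeholder assumptions, not the dissertation's actual filter bank.

```python
# Toy sketch of radiomic feature mapping: sliding-window first-order statistics
# turned into per-voxel feature maps that can be stacked for downstream analysis.
import numpy as np
from scipy.ndimage import generic_filter

img = np.random.rand(64, 64)                      # placeholder MR slice

mean_map = generic_filter(img, np.mean, size=5)   # first-order: local mean
std_map  = generic_filter(img, np.std,  size=5)   # first-order: local dispersion

# Stack the maps into a per-voxel feature vector for correlation with biology
# or for clinical decision support.
feature_stack = np.stack([mean_map, std_map], axis=-1)
print(feature_stack.shape)   # (64, 64, 2)
```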

    Multimodal Data Fusion and Quantitative Analysis for Medical Applications

    Medical big data is not only enormous in size but also heterogeneous and complex in structure, which makes it difficult for conventional systems or algorithms to process. These heterogeneous medical data include imaging data (e.g., Positron Emission Tomography (PET), Computerized Tomography (CT), and Magnetic Resonance Imaging (MRI)) and non-imaging data (e.g., laboratory biomarkers, electronic medical records, and hand-written doctor notes). Multimodal data fusion is an emerging field that addresses this challenge, aiming to process and analyze such complex, diverse and heterogeneous multimodal data. Fusion algorithms bring great potential to medical data analysis by 1) taking advantage of complementary information from different sources (such as the functional-structural complementarity of PET/CT images) and 2) exploiting consensus information that reflects the intrinsic essence (such as the genetic essence underlying medical imaging and clinical symptoms). Multimodal data fusion therefore benefits a wide range of quantitative medical applications, including personalized patient care, improved treatment planning, and preventive public health. Although there has been extensive research on computational approaches for multimodal fusion, three major challenges remain in quantitative medical applications, summarized as feature-level fusion, information-level fusion and knowledge-level fusion:

    • Feature-level fusion. The first challenge is to mine multimodal biomarkers from high-dimensional, small-sample multimodal medical datasets, whose dimensionality hinders the effective discovery of informative multimodal biomarkers. Specifically, efficient dimension reduction algorithms are required to alleviate the "curse of dimensionality" and to meet the criteria for discovering interpretable, relevant, non-redundant and generalizable multimodal biomarkers.

    • Information-level fusion. The second challenge is to exploit and interpret inter-modal and intra-modal information for precise clinical decisions. Although radiomics and multi-branch deep learning have been used for implicit information fusion under label supervision, methods that explicitly explore inter-modal relationships in medical applications are lacking. Unsupervised multimodal learning can mine inter-modal relationships, reduce the need for labor-intensive annotation, and explore potential undiscovered biomarkers; however, mining discriminative information without label supervision remains an open challenge. Furthermore, interpreting complex non-linear cross-modal associations, especially in deep multimodal learning, is another critical challenge in information-level fusion, and it hinders the exploration of multimodal interactions in disease mechanisms.

    • Knowledge-level fusion. The third challenge is quantitative knowledge distillation from multi-focus regions in medical imaging. Although characterizing imaging features of single lesions with either feature engineering or deep learning has been investigated in recent years, both approaches neglect the importance of inter-region spatial relationships. A topological profiling tool for multi-focus regions is therefore in high demand, yet it is missing from current feature engineering and deep learning methods. Furthermore, incorporating domain knowledge with the knowledge distilled from multi-focus regions is another challenge in knowledge-level fusion.
    To address these three challenges in multimodal data fusion, this thesis provides a multi-level fusion framework for multimodal biomarker mining, multimodal deep learning, and knowledge distillation from multi-focus regions. Specifically, the major contributions of this thesis are:

    • To address the challenges in feature-level fusion, we propose an Integrative Multimodal Biomarker Mining framework to select interpretable, relevant, non-redundant and generalizable multimodal biomarkers from high-dimensional, small-sample imaging and non-imaging data for diagnostic and prognostic applications. The feature selection criteria, namely representativeness, robustness, discriminability, and non-redundancy, are addressed by consensus clustering, a Wilcoxon filter, sequential forward selection, and correlation analysis, respectively. The SHapley Additive exPlanations (SHAP) method and a nomogram are employed to further enhance feature interpretability in the machine learning models.

    • To address the challenges in information-level fusion, we propose an Interpretable Deep Correlational Fusion framework, based on canonical correlation analysis (CCA), for 1) cohesive multimodal fusion of medical imaging and non-imaging data and 2) interpretation of complex non-linear cross-modal associations. Specifically, two novel loss functions are proposed to optimize the discovery of informative multimodal representations in both supervised and unsupervised deep learning by jointly learning inter-modal consensus and intra-modal discriminative information. An interpretation module is proposed to decipher the complex non-linear cross-modal associations by leveraging interpretation methods from both deep learning and multimodal consensus learning. A minimal CCA-based sketch of this fusion idea is given after this list.

    • To address the challenges in knowledge-level fusion, we propose a Dynamic Topological Analysis (DTA) framework, based on persistent homology, for knowledge distillation from inter-connected multi-focus regions in medical imaging and for the incorporation of domain knowledge. Unlike conventional feature engineering and deep learning, the DTA framework explicitly quantifies inter-region topological relationships, including global-level geometric structure and community-level clusters. A K-simplex Community Graph is proposed to construct the dynamic community graph that represents community-level multi-scale graph structure, and the constructed dynamic graph is subsequently tracked with a novel Decomposed Persistence algorithm. Domain knowledge is incorporated into the Adaptive Community Profile, which summarizes the tracked multi-scale community topology together with additional customizable, clinically important factors.
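
    The sketch referenced above uses plain linear CCA from scikit-learn to project imaging and non-imaging features into a correlated shared space. The deep correlational losses proposed in the thesis are not reproduced; this only illustrates the underlying cross-modal correlation objective, and all matrices are synthetic placeholders.

```python
# Minimal sketch of CCA-based information-level fusion: project two modalities
# into a shared space and inspect the per-component cross-modal correlation.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X_img = rng.normal(size=(100, 50))   # placeholder: 100 patients x 50 imaging features
X_cli = rng.normal(size=(100, 10))   # placeholder: 100 patients x 10 clinical variables

cca = CCA(n_components=3)
Z_img, Z_cli = cca.fit_transform(X_img, X_cli)

# Correlation between the two modality projections, one value per component
for k in range(3):
    print(f"component {k}: r = {np.corrcoef(Z_img[:, k], Z_cli[:, k])[0, 1]:.3f}")
```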

    Deep learning in medical imaging and radiation therapy

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd

    Overall Survival Prediction of Glioma Patients With Multiregional Radiomics

    Radiomics-guided prediction of overall survival (OS) in brain gliomas is regarded as a significant problem in neuro-oncology. The ultimate goal is to develop a robust MRI-based approach (i.e., a radiomics model) that can accurately classify a novel subject as a short-term, medium-term, or long-term survivor. The BraTS 2020 challenge provides radiological imaging and clinical data (178 subjects) to develop and validate radiomics-based methods for OS classification in brain gliomas. In this study, we empirically evaluated the efficacy of four multiregional radiomics models for OS classification and quantified the robustness of their predictions to variations in the automatic segmentation of brain tumor volume. More specifically, we evaluated four radiomics models: the whole tumor (WT) radiomics model, the 3-subregions radiomics model, the 6-subregions radiomics model, and the 21-subregions radiomics model. The 3-subregions radiomics model is based on a physiological segmentation of the whole tumor volume into three non-overlapping subregions, while the 6-subregions and 21-subregions radiomics models are based on an anatomical segmentation of the brain tumor into 6 and 21 anatomical regions, respectively. Moreover, we employed six segmentation schemes (five CNNs and one STAPLE-fusion method) to quantify the robustness of the radiomics models. Our experiments revealed that the 3-subregions radiomics model had the best predictive performance (mean AUC = 0.73) but poor robustness (RSD = 1.99), whereas the 6-subregions and 21-subregions radiomics models were more robust (RSD ≤ 1.39) with lower predictive performance (mean AUC ≤ 0.71). The poor robustness of the 3-subregions radiomics model was associated with highly variable and inferior segmentation of the tumor core and active tumor subregions, as quantified by the Hausdorff distance metric (4.4–6.5 mm) across the six segmentation schemes. Failure analysis revealed that the WT, 6-subregions, and 21-subregions radiomics models failed for the same subjects, which is attributed to their common requirement of accurate segmentation of the WT volume. Moreover, short-term survivors were largely misclassified by the radiomics models and had large segmentation errors (average Hausdorff distance of 7.09 mm). Lastly, we concluded that while STAPLE fusion can reduce segmentation errors, it is not a solution to learning accurate and robust radiomics models.
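
    The segmentation-robustness analysis above quantifies disagreement between competing segmentations with the Hausdorff distance. A small sketch of that comparison on synthetic binary masks (not BraTS data) is given below; the mask shapes and sizes are placeholder assumptions.

```python
# Sketch: symmetric Hausdorff distance between two binary segmentation masks,
# as a simple measure of how much two segmentation schemes disagree.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(mask_a, mask_b):
    """Symmetric Hausdorff distance (in voxels) between two binary masks."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

a = np.zeros((64, 64), dtype=bool); a[20:40, 20:40] = True   # segmentation from scheme 1
b = np.zeros((64, 64), dtype=bool); b[22:44, 18:38] = True   # segmentation from scheme 2
print("Hausdorff distance (voxels):", hausdorff(a, b))
```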