
    Deep learning-based prediction of response to HER2-targeted neoadjuvant chemotherapy from pre-treatment dynamic breast MRI: A multi-institutional validation study

    Full text link
    Predicting response to neoadjuvant therapy is a vexing challenge in breast cancer. In this study, we evaluate the ability of deep learning to predict response to HER2-targeted neoadjuvant chemotherapy (NAC) from pre-treatment dynamic contrast-enhanced (DCE) MRI. In a retrospective study encompassing DCE-MRI data from a total of 157 HER2+ breast cancer patients from 5 institutions, we developed and validated a deep learning approach for predicting pathological complete response (pCR) to HER2-targeted NAC prior to treatment. One hundred patients who received HER2-targeted NAC at a single institution were used to train (n=85) and tune (n=15) a convolutional neural network (CNN) to predict pCR. A multi-input CNN leveraging both pre-contrast and late post-contrast DCE-MRI acquisitions achieved the best response prediction within the validation set (AUC=0.93). This model was then tested on two independent testing cohorts with pre-treatment DCE-MRI data. It achieved strong performance in a 28-patient testing set from a second institution (AUC=0.85, 95% CI 0.67-1.0, p=0.0008) and a 29-patient multicenter trial including data from 3 additional institutions (AUC=0.77, 95% CI 0.58-0.97, p=0.006). The deep learning-based response prediction model exceeded both a multivariable model incorporating predictive clinical variables (AUC < 0.65 in testing cohorts) and a model of semi-quantitative DCE-MRI pharmacokinetic measurements (AUC < 0.60 in testing cohorts). These results across multiple sites suggest that, with further validation, deep learning could provide an effective and reliable tool to guide targeted therapy in breast cancer, thus reducing overtreatment among HER2+ patients. (Comment: Braman and El Adoui contributed equally to this work. 33 pages, 3 figures in main text.)
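    The abstract does not spell out the network layout; as a rough, hypothetical sketch of what a two-input (pre-contrast plus late post-contrast) CNN for binary pCR prediction could look like in PyTorch, with all layer sizes and the input resolution assumed rather than taken from the study:

```python
# Hypothetical sketch of a two-input CNN for pre-contrast and late
# post-contrast DCE-MRI slices. Layer sizes, input resolution, and the
# fusion strategy are assumptions, not the architecture from the paper.
import torch
import torch.nn as nn


def _branch() -> nn.Sequential:
    """Small convolutional encoder applied to one DCE-MRI acquisition."""
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )


class TwoInputResponseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.pre_branch = _branch()    # encodes the pre-contrast image
        self.post_branch = _branch()   # encodes the late post-contrast image
        # Concatenated features feed a binary head that outputs pCR probability.
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, pre, post):
        feats = torch.cat([self.pre_branch(pre), self.post_branch(post)], dim=1)
        return torch.sigmoid(self.head(feats))


# Dummy batch of four single-channel 128x128 slices per acquisition.
model = TwoInputResponseCNN()
probs = model(torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128))
print(probs.shape)  # torch.Size([4, 1])
```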

    Longitudinal Brain Tumor Tracking, Tumor Grading, and Patient Survival Prediction Using MRI

    Get PDF
    This work aims to develop novel methods for brain tumor classification, longitudinal brain tumor tracking, and patient survival prediction. To this end, this dissertation proposes three tasks. First, we develop a framework for brain tumor segmentation prediction in longitudinal multimodal magnetic resonance imaging (mMRI) scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with tumor cell density features in order to obtain tumor segmentation predictions in follow-up scans from a baseline pre-operative timepoint. The second method utilizes JLF to combine segmentation labels obtained from (i) the stochastic texture feature-based and Random Forest (RF)-based tumor segmentation method; and (ii) another state-of-the-art tumor growth and segmentation method known as boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). With the advantages of feature fusion and label fusion, we achieve state-of-the-art brain tumor segmentation prediction. Second, we propose a deep neural network (DNN)-based method for brain tumor type and subtype grading using phenotypic and genotypic data, following the World Health Organization (WHO) criteria. In addition, the classification method integrates a cellularity feature derived from the morphology of a pathology image to improve classification performance. The proposed method achieves state-of-the-art performance for tumor grading following the new CNS tumor grading criteria. Finally, we investigate brain tumor volume segmentation, tumor subtype classification, and overall patient survival prediction, and propose a new context-aware deep learning method, known as the Context Aware Convolutional Neural Network (CANet). Using the proposed method, we participated in the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) for the brain tumor volume segmentation and overall survival prediction tasks. We also participated in the Radiology-Pathology Challenge 2019 (CPM-RadPath 2019) for brain tumor subtype classification, organized by the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society. The online evaluation results show that the proposed methods offer competitive performance relative to state-of-the-art methods in tumor volume segmentation, promising performance on overall survival prediction, and state-of-the-art performance on tumor subtype classification. Moreover, our result ranked second in the testing phase of CPM-RadPath 2019.
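    As one concrete illustration of the label-fusion step described above, the toy sketch below fuses two candidate tumor label maps by weighted voting. The actual JLF algorithm derives spatially varying weights from local patch similarity; the fixed weights, label count, and random label maps here are assumptions made purely for the example.

```python
# Toy weighted-voting fusion of two tumor segmentations (e.g. an RF-based
# map and a GLISTRboost map). Real joint label fusion weights each input
# by local intensity similarity, which is omitted here for brevity.
import numpy as np


def fuse_labels(seg_a, seg_b, w_a=0.5, w_b=0.5, n_labels=4):
    """Fuse two integer label maps of identical shape by weighted voting."""
    votes = np.zeros(seg_a.shape + (n_labels,), dtype=float)
    for lbl in range(n_labels):
        votes[..., lbl] = w_a * (seg_a == lbl) + w_b * (seg_b == lbl)
    return votes.argmax(axis=-1)


rng = np.random.default_rng(0)
seg_rf = rng.integers(0, 4, size=(4, 4))   # stand-in for the texture+RF labels
seg_gb = rng.integers(0, 4, size=(4, 4))   # stand-in for the GLISTRboost labels
print(fuse_labels(seg_rf, seg_gb, w_a=0.6, w_b=0.4))
```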

    Towards generalizable machine learning models for computer-aided diagnosis in medicine

    Get PDF
    Hidden stratification refers to a phenomenon in which a training dataset contains unlabeled (hidden) subsets of cases that may affect machine learning model performance. Machine learning models that ignore hidden stratification, despite promising overall performance measured as accuracy and sensitivity, often fail at predicting low-prevalence cases, yet those cases remain important. In the medical domain, patients with diseases are often less common than healthy patients, and a misdiagnosis of a patient with a disease can have significant clinical impacts. Therefore, to build a robust and trustworthy computer-aided diagnosis (CAD) system and a reliable treatment effect prediction model, we cannot pursue only machine learning models with high overall accuracy; we also need to discover any hidden stratification in the data and evaluate the proposed machine learning models with respect to both overall performance and performance on certain subsets (groups) of the data, such as the ‘worst group’. In this study, I investigated three approaches for data stratification: a novel algorithmic deep learning (DL) approach that learns similarities among cases, and two schema completion approaches that utilize domain expert knowledge. I further proposed an innovative way to integrate the discovered latent groups into the loss functions of DL models to allow for better model generalizability under the domain shift scenario caused by data heterogeneity. My results on lung nodule computed tomography (CT) images and breast cancer histopathology images demonstrate that learning homogeneous groups within heterogeneous data significantly improves the performance of the CAD system, particularly for low-prevalence or worst-performing cases. This study emphasizes the importance of discovering and learning the latent stratification within the data, as it is a critical step towards building ML models that are generalizable and reliable. Ultimately, this discovery can have a profound impact on clinical decision-making, particularly for low-prevalence cases.
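    The abstract does not state the exact loss formulation; one common way latent groups enter a training objective, shown here only as an assumed, group-DRO-style illustration in PyTorch, is to optimize the worst-performing group's loss in each batch:

```python
# Illustrative worst-group loss: each batch, compute the mean loss per
# latent group and back-propagate the largest one. Group labels here are
# hypothetical; the thesis may integrate its discovered groups differently.
import torch
import torch.nn.functional as F


def worst_group_loss(logits, targets, group_ids, n_groups):
    """Return the mean cross-entropy of the worst-performing group in the batch."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    group_means = [per_sample[group_ids == g].mean()
                   for g in range(n_groups) if (group_ids == g).any()]
    return torch.stack(group_means).max()


# Toy batch: six samples, three classes, two latent groups.
logits = torch.randn(6, 3, requires_grad=True)
targets = torch.tensor([0, 1, 2, 0, 1, 2])
groups = torch.tensor([0, 0, 0, 1, 1, 1])
loss = worst_group_loss(logits, targets, groups, n_groups=2)
loss.backward()  # gradients flow only through the worst group's samples
```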

    Evaluation of cancer outcome assessment using MRI: A review of deep-learning methods

    Full text link
    Accurate evaluation of tumor response to treatment is critical to allow personalized treatment regimens according to the predicted response and to support clinical trials investigating new therapeutic agents by providing them with an accurate response indicator. Recent advances in medical imaging, computer hardware, and machine-learning algorithms have led to the increased use of these tools in medicine as a whole and specifically in cancer imaging for detection and characterization of malignant lesions, prognosis, and assessment of treatment response. Among the currently available imaging techniques, magnetic resonance imaging (MRI) plays an important role in the assessment of treatment response in many cancers, given its superior soft-tissue contrast and its ability to allow multiplanar imaging and functional evaluation. In recent years, deep learning (DL) has become an active area of research, paving the way for computer-assisted clinical and radiological decision support. DL can uncover associations between imaging features that cannot be identified by the naked eye and pertinent clinical outcomes. The aim of this review is to highlight the use of DL in the evaluation of tumor response assessed on MRI. We first provide an overview of common DL architectures used in medical imaging research in general. Then, we review the studies to date that have applied DL to MRI for the task of treatment response assessment. Finally, we discuss the challenges and opportunities of using DL within the clinical workflow.

    Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions

    Full text link
    Breast cancer has had the highest incidence rate worldwide among all malignancies since 2020. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcome of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise for interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement in deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify future challenges to be addressed. In this paper, we provide an extensive survey of deep learning-based breast cancer imaging research, covering studies on mammography, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods, publicly available datasets, and applications in imaging-based screening, diagnosis, treatment response prediction, and prognosis are described in detail. Drawing on the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging. (Comment: Survey, 41 pages.)

    Recent advancement in Disease Diagnostic using machine learning: Systematic survey of decades, comparisons, and challenges

    Full text link
    Computer-aided diagnosis (CAD), a vibrant medical imaging research field, is expanding quickly. Because errors in medical diagnostic systems can lead to seriously misleading medical treatments, major efforts have been made in recent years to improve computer-aided diagnosis applications. The use of machine learning in computer-aided diagnosis is crucial: a simple equation may produce a false indication of structures such as organs, so learning from examples is a vital component of pattern recognition. Pattern recognition and machine learning in the biomedical area promise to increase the precision of disease detection and diagnosis. They also support objectivity in the decision-making process. Machine learning provides a practical method for creating elegant and autonomous algorithms to analyze high-dimensional and multimodal biomedical data. This review article examines machine-learning algorithms for detecting diseases, including hepatitis, diabetes, liver disease, dengue fever, and heart disease. It draws attention to the collection of machine learning techniques and algorithms employed in studying these conditions and in the ensuing decision-making process.

    convoHER2: A Deep Neural Network for Multi-Stage Classification of HER2 Breast Cancer

    Full text link
    Generally, human epidermal growth factor receptor 2 (HER2) breast cancer is more aggressive than other kinds of breast cancer. Currently, HER2 breast cancer is detected using expensive medical tests. Therefore, the aim of this study was to develop a computational model named convoHER2 for detecting HER2 breast cancer from image data using a convolutional neural network (CNN). Hematoxylin and eosin (H&E) and immunohistochemical (IHC) stained images from the Bayesian information criterion (BIC) benchmark dataset were used as raw data. This dataset consists of 4873 H&E and IHC images, of which 3896 and 977 images were used to train and test the convoHER2 model, respectively. As all the images are high resolution, we resized them so that they could be fed into the convoHER2 model. The cancerous sample images are classified into four classes based on the stage of the cancer (0+, 1+, 2+, 3+). The convoHER2 model is able to detect HER2 cancer and its grade with accuracies of 85% and 88% using H&E images and IHC images, respectively. The outcomes of this study indicate that the HER2 cancer detection rates of the convoHER2 model are high enough to support better diagnosis for patients and their recovery from HER2 breast cancer in the future.
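    The abstract gives no architectural detail beyond resizing the images and classifying them into four classes; a minimal, assumed PyTorch sketch of that pattern (the 224x224 target size and the tiny network are placeholders, not the published convoHER2 design) might look like:

```python
# Assumed sketch: downsample high-resolution stained images and classify
# them into the four HER2 classes (0+, 1+, 2+, 3+). The resize target and
# layer sizes are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

her2_cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 4),          # one logit per HER2 class
)

high_res = torch.randn(2, 3, 1024, 1024)   # stand-in for two RGB stained images
resized = F.interpolate(high_res, size=(224, 224), mode="bilinear",
                        align_corners=False)
print(her2_cnn(resized).shape)             # torch.Size([2, 4])
```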

    Breast dynamic contrast-enhanced-magnetic resonance imaging and radiomics: State of art

    Get PDF
    Breast cancer represents the most common malignancy in women and is one of the most frequent causes of cancer-related mortality. Ultrasound, mammography, and magnetic resonance imaging (MRI) play a pivotal role in the diagnosis of breast lesions, with different levels of accuracy. In particular, dynamic contrast-enhanced MRI has shown high diagnostic value in detecting multifocal, multicentric, or contralateral breast cancers. Radiomics is emerging as a promising tool for quantitative tumor evaluation, allowing the extraction of additional quantitative data from radiological imaging acquired with different modalities. Radiomics analysis may provide novel information through the quantification of lesion heterogeneity, which may be relevant in clinical practice for the characterization of breast lesions, prediction of tumor response to systemic therapies, and evaluation of prognosis in patients with breast cancer. Several published studies have explored the value of radiomics, with good-to-excellent diagnostic and prognostic performance for the evaluation of breast lesions. In particular, the integration of radiomics data with other clinical and histopathological parameters has been shown to improve the prediction of tumor aggressiveness with high accuracy and to provide precise models that will help guide clinical decisions and patient management. The purpose of this article is to describe the current application of radiomics in breast dynamic contrast-enhanced MRI.
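    As a small, self-contained illustration of the kind of quantitative descriptor radiomics extracts, the sketch below computes a few first-order intensity statistics inside a lesion mask with NumPy; real radiomics pipelines (shape, texture, and wavelet feature classes, as provided by dedicated packages) are far richer, and the toy volume and mask here are fabricated for the example.

```python
# First-order radiomics-style features inside a lesion mask (illustration
# only; shape and texture feature classes are omitted).
import numpy as np


def first_order_features(volume, mask, n_bins=32):
    """Return a few first-order statistics of voxel intensities in the lesion."""
    vox = volume[mask > 0].astype(float)
    counts, _ = np.histogram(vox, bins=n_bins)
    p = counts[counts > 0] / vox.size
    return {
        "mean": vox.mean(),
        "std": vox.std(),
        "skewness": ((vox - vox.mean()) ** 3).mean() / (vox.std() ** 3 + 1e-8),
        "entropy": float(-(p * np.log2(p)).sum()),  # intensity heterogeneity
    }


# Toy DCE-MRI volume with a cubic "lesion" mask.
rng = np.random.default_rng(1)
vol = rng.normal(100.0, 20.0, size=(32, 32, 32))
msk = np.zeros_like(vol)
msk[12:20, 12:20, 12:20] = 1
print(first_order_features(vol, msk))
```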

    Machine Learning Approaches for Healthcare Analysis

    Get PDF
    Machine learning (ML) is a branch of artificial intelligence that teaches computers how to discover difficult-to-distinguish patterns in huge or complex data sets and to learn from previous cases by utilizing a range of statistical, probabilistic, data processing, and optimization methods. Nowadays, ML plays a vital role in many fields, such as finance, self-driving cars, image processing, medicine, and speech recognition. In healthcare, ML has been used in applications such as the detection, prognosis, diagnosis, and treatment of diseases due to its capability to handle large data. Moreover, ML has exceptional abilities to predict disease by uncovering patterns from medical datasets. Machine learning and deep learning are better suited than traditional methods for analyzing medical datasets because of the nature of these datasets: they are mostly large, complex, heterogeneous data coming from different sources, requiring more efficient computational techniques to handle them. This dissertation presents several machine-learning techniques to tackle medical issues such as data imbalance, classification and upgrading of tumor stages, and multi-omics integration. In the second chapter, we introduce a novel method to handle the class-imbalance dilemma, a common issue in bioinformatics datasets. In class-imbalanced data, the number of samples in each class is unequal. Since most data sets contain usual versus unusual cases, e.g., cancer versus normal or miRNAs versus other noncoding RNA, the minority class with the fewest samples is the interesting class that contains the unusual cases. Learning models based on standard classifiers, such as the support vector machine (SVM), random forest, and k-NN, are usually biased towards the majority class, which means that the classifier is likely to predict samples from the interesting class inaccurately. Thus, handling class-imbalanced datasets has gained researchers' interest recently. A combination of proper feature selection, a cost-sensitive classifier, and ensembling based on the random forest method (BCECSC-RF) is proposed to handle the class-imbalanced data (a simplified sketch of this ensembling idea follows the abstract). Random class-balanced ensembles are built individually. Each ensemble is then used as a training pool to classify the remaining out-bagged samples. Samples in each ensemble are classified using a cost-sensitive classifier incorporating random forest, and a sample is assigned the class most often voted for across all of its appearances in the formed ensembles. A set of performance measurements, including a geometric measurement, suggests that the model can improve the classification of the minority class samples. In the third chapter, we introduce a novel study to predict upgrading of the Gleason score on confirmatory magnetic resonance imaging-guided targeted biopsy (MRI-TB) of the prostate in candidates for active surveillance, based on clinical features. MRI of the prostate is not accessible to many patients due to difficulty contacting patients and insurance denials, and African-American patients are disproportionately affected by barriers to prostate MRI during active surveillance [6,7]. Modeling clinical variables with advanced methods, such as machine learning, could allow us to manage patients in resource-limited environments with limited technological access. Upgrading to significant prostate cancer on MRI-TB was defined as upgrading to Gleason 3+4 (definition 1, DF1) and 4+3 (DF2).
For the upgrading prediction, the AdaBoost model was highly predictive of upgrading by DF1 (AUC 0.952), while for prediction of upgrading by DF2, the Random Forest model had a lower but still excellent prediction performance (AUC 0.947). In the fourth chapter, we introduce a multi-omics data integration method to analyze multi-omics data for biomedical applications, including disease prediction, disease subtyping, biomarker prediction, and others. Multi-omics data integration yields a richer understanding and deeper insights than separate omics data. Our method combines a gene similarity network (GSN) based on Uniform Manifold Approximation and Projection (UMAP) with convolutional neural networks (CNNs). The method utilizes UMAP to embed gene expression, DNA methylation, and copy number alteration (CNA) into a lower dimension, creating two-dimensional RGB images. Gene expression is used as a reference to construct the GSN, and the other omics data are then integrated with the gene expression for better prediction. We used CNNs to predict the Gleason score levels of prostate cancer patients and the tumor stage in breast cancer patients. The results show that UMAP as an embedding technique can better integrate multi-omics maps into the prediction model than SO
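    The sketch below, referenced in the abstract above, illustrates only the class-balanced ensembling core of the second chapter's idea using scikit-learn; the feature selection step, the exact cost-sensitive scheme, and the out-of-bag voting protocol of BCECSC-RF are simplified assumptions.

```python
# Simplified class-balanced ensemble: train several cost-sensitive random
# forests on balanced subsamples and majority-vote their predictions.
# This is an assumed illustration, not the full BCECSC-RF procedure.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def balanced_ensemble_predict(X, y, X_test, n_ensembles=5, seed=0):
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    votes = np.zeros((len(X_test), n_ensembles), dtype=int)
    for i in range(n_ensembles):
        # Balanced pool: every minority case plus an equal-sized majority draw.
        drawn = rng.choice(majority, size=len(minority), replace=False)
        idx = np.concatenate([minority, drawn])
        clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                                     random_state=i).fit(X[idx], y[idx])
        votes[:, i] = clf.predict(X_test)
    # Majority vote across the class-balanced ensembles.
    return (votes.mean(axis=1) >= 0.5).astype(int)


# Toy imbalanced dataset: 200 majority vs 20 minority samples, 5 features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (200, 5)), rng.normal(1.5, 1.0, (20, 5))])
y = np.array([0] * 200 + [1] * 20)
print(balanced_ensemble_predict(X, y, X[:10]))
```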