
    Wavelet-Based Harmonization of Local and Global Model Shifts in Federated Learning for Histopathological Images

    Federated Learning (FL) is a promising machine learning approach for developing a data-driven global model from collaborative local models across multiple institutions. However, the heterogeneity of medical imaging data is one of the challenges within FL. This heterogeneity arises from variation in imaging scanner protocols across institutions, which can cause weight shift among local models and degrade the predictive accuracy of the global model. Prevailing approaches apply different FL averaging techniques to enhance the performance of the global model while ignoring the distinct imaging features of each local domain. In this work, we address both local and global model weight shift by introducing multiscale amplitude harmonization of the images in the local models using Haar and harmonic wavelets. First, we tackle local model weight shift by transforming the image feature space into a multiscale frequency space using multiscale-based harmonization, yielding a harmonized image feature space across local models. Second, based on this harmonized feature space, a weighted regularization term is applied to the local models, effectively mitigating weight shifts within them. This weighted regularization also helps manage global model shift when the optimized local models are aggregated. We evaluate the proposed method on the publicly available MoNuSAC2018 and TNBC histopathological datasets for nuclei segmentation and the Camelyon17 dataset for tumor tissue classification. The average testing accuracies are 96.55% and 92.47% for tumor tissue classification, while Dice coefficients are 84.33% and 84.46% for nuclei segmentation, with Haar and harmonic multiscale-based harmonization, respectively. Comparison results for nuclei segmentation and tumor tissue classification on histopathological data show that the proposed methods outperform state-of-the-art FL methods.
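    A minimal sketch of the multiscale amplitude-harmonization idea using Haar wavelets via PyWavelets. The per-level reference amplitudes, the level count, and the scaling rule are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
import pywt  # PyWavelets

def haar_harmonize(image, ref_amplitudes, level=3):
    """Scale per-level detail-coefficient amplitudes toward shared references.

    ref_amplitudes: assumed per-level target mean |coefficient| values,
    e.g. aggregated across institutions before local training.
    """
    coeffs = pywt.wavedec2(image, "haar", level=level)
    harmonized = [coeffs[0]]  # keep the coarse approximation unchanged
    for lev, details in enumerate(coeffs[1:]):
        scaled = []
        for band in details:  # horizontal, vertical, diagonal sub-bands
            amp = np.mean(np.abs(band)) + 1e-8
            scaled.append(band * (ref_amplitudes[lev] / amp))
        harmonized.append(tuple(scaled))
    return pywt.waverec2(harmonized, "haar")

# Hypothetical usage: harmonize a local patch before local model training.
patch = np.random.rand(256, 256).astype(np.float32)
refs = [1.0, 0.5, 0.25]  # assumed per-level reference amplitudes
patch_h = haar_harmonize(patch, refs)
```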

    Radiomic Texture Feature Descriptor to Distinguish Recurrent Brain Tumor From Radiation Necrosis Using Multimodal MRI

    Despite aggressive multimodal treatment with chemo-radiation therapy and surgical resection, Glioblastoma Multiforme (GBM) may recur; such recurrence is known as recurrent brain tumor (rBT). There are several instances where benign and malignant pathologies appear very similar on radiographic imaging. One such example is radiation necrosis (RN), a relatively benign effect of radiation treatment, which is visually almost indistinguishable from rBT on structural magnetic resonance imaging (MRI). There is hence a need for reliable, non-invasive quantitative measurements on routinely acquired brain MRI scans, pre-contrast T1-weighted (T1), post-contrast T1-weighted (T1Gd), T2-weighted (T2), and T2 Fluid Attenuated Inversion Recovery (FLAIR), that can accurately distinguish rBT from RN. In this work, sophisticated radiomic texture features are used to distinguish rBT from RN on multimodal MRI for disease characterization. First, a stochastic multiresolution radiomic descriptor that captures voxel-level textural and structural heterogeneity, as well as intensity and histogram features, is extracted. These features are then used in a machine learning setting to discriminate rBT from RN across the four MRI sequences, with 155 imaging slices for 30 GBM cases (12 RN, 18 rBT). To reduce bias in accuracy estimation, the model is evaluated using leave-one-out cross-validation (LOOCV) and stratified 5-fold cross-validation with a Random Forest classifier. The model offers a mean accuracy of 0.967 ± 0.180 for LOOCV and 0.933 ± 0.082 for stratified 5-fold cross-validation using multiresolution texture features for discrimination of rBT from RN. Our findings suggest that sophisticated texture features may offer better discrimination between rBT and RN on MRI compared to other works in the literature.
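    A minimal sketch of the evaluation protocol described above: LOOCV and stratified 5-fold cross-validation with a Random Forest. The feature matrix X stands in for the radiomic texture descriptors; the feature count is an assumption, while the case counts (12 RN, 18 rBT) follow the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, StratifiedKFold, cross_val_score

X = np.random.rand(30, 120)        # 30 cases x 120 assumed texture features
y = np.array([0] * 12 + [1] * 18)  # 0 = RN (12 cases), 1 = rBT (18 cases)

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# Each LOOCV fold scores 0 or 1; the mean is the LOOCV accuracy.
loo_scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
skf_scores = cross_val_score(
    clf, X, y, cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))

print(f"LOOCV accuracy:  {loo_scores.mean():.3f} +/- {loo_scores.std():.3f}")
print(f"5-fold accuracy: {skf_scores.mean():.3f} +/- {skf_scores.std():.3f}")
```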

    Opioid Use Disorder Prediction Using Machine Learning of fMRI Data

    According to the Centers for Disease Control and Prevention (CDC), more than 932,000 people in the US have died from a drug overdose since 1999. About 75% of drug overdose deaths in 2020 involved opioids, which suggests that the US is in an opioid overdose epidemic. Identifying individuals likely to develop opioid use disorder (OUD) can help public health officials plan effective prevention, intervention, drug overdose, and recovery policies. Further, a better understanding of overdose prediction and the neurobiology of OUD may lead to new therapeutics. In recent years, very limited work has been done using statistical analysis of functional magnetic resonance imaging (fMRI) to study the neurobiology of opioid addiction in humans. In this work, for the first time in the literature, we propose a machine learning (ML) framework to predict OUD utilizing clinical fMRI-BOLD (blood oxygen level dependent) signals from OUD users and healthy controls (HC). We first obtain features and validate them against those extracted from selected brain subcortical areas identified in our previous statistical analysis of the fMRI-BOLD signal discriminating OUD subjects from HC. The selected features from three representative brain networks, the default mode network (DMN), salience network (SN), and executive control network (ECN), for both OUD participants and HC subjects are then processed to predict OUD versus HC status. Our leave-one-out cross-validated results with sixty-nine OUD and HC cases show 88.40% prediction accuracy. These results suggest that the proposed techniques may be utilized to gain a greater understanding of the neurobiology of OUD, leading to novel therapeutic development.
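    A minimal sketch of the leave-one-out prediction loop described above. The network-level features (summary statistics of BOLD signals from the DMN, SN, and ECN) are stubbed with random values; the real study derives them from its prior statistical analysis. The classifier choice here (a linear SVM) is an assumption, not the paper's stated model.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

n_subjects = 69                           # 69 OUD + HC cases, as reported
X = np.random.rand(n_subjects, 30)        # assumed 10 features x 3 networks
y = np.random.randint(0, 2, n_subjects)   # 1 = OUD, 0 = healthy control

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    # Standardize features, then fit on all subjects except the held-out one.
    model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    model.fit(X[train_idx], y[train_idx])
    correct += int(model.predict(X[test_idx])[0] == y[test_idx][0])

print(f"LOOCV accuracy: {correct / n_subjects:.2%}")
```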

    Uncertainty Estimation in Classification of MGMT Using Radiogenomics for Glioblastoma Patients

    Glioblastoma Multiforme (GBM) is one of the most malignant brain tumors among all high-grade brain cancers. Temozolomide (TMZ) is the first-line chemotherapeutic regimen for glioblastoma patients. The methylation status of the O6-methylguanine-DNA-methyltransferase (MGMT) gene is a prognostic biomarker for tumor sensitivity to TMZ chemotherapy. However, the standardized procedure for assessing MGMT methylation status is an invasive surgical biopsy, whose accuracy is susceptible to the resection sample and the heterogeneity of the tumor. Recently, radiogenomics, which associates radiological image phenotypes with genetic or molecular mutations, has shown promise in the non-invasive assessment of radiotherapeutic treatment. This study proposes a machine learning framework for MGMT classification with uncertainty analysis utilizing imaging features extracted from multimodal magnetic resonance imaging (mMRI). The imaging features include conventional texture, volumetric, and sophisticated fractal and multi-resolution fractal texture features. The proposed method is evaluated on the publicly available BraTS-TCIA-GBM pre-operative scans and TCGA datasets with 114 patients. Experiments with 10-fold cross-validation suggest that the fractal and multi-resolution fractal texture features offer improved prediction of MGMT status. The uncertainty analysis, using an ensemble of Stochastic Gradient Langevin Boosting models along with multi-resolution fractal features, offers an accuracy of 71.74% and an area under the curve of 0.76. Finally, the analysis shows that the proposed method with uncertainty analysis offers improved predictive performance compared with well-known methods in the literature.
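    A minimal sketch of ensemble-based uncertainty estimation. The paper uses Stochastic Gradient Langevin Boosting (SGLB); scikit-learn has no SGLB implementation, so this sketch substitutes a seed-varied ensemble of standard gradient-boosting models to show the mechanics: the ensemble mean gives the MGMT prediction, the ensemble spread gives a per-patient uncertainty score. Data shapes and the ensemble size are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

X_train = np.random.rand(100, 50)        # assumed radiomic feature matrix
y_train = np.random.randint(0, 2, 100)   # 1 = methylated, 0 = unmethylated
X_test = np.random.rand(14, 50)

probs = []
for seed in range(10):  # 10-member ensemble (assumed size)
    model = GradientBoostingClassifier(subsample=0.8, random_state=seed)
    model.fit(X_train, y_train)
    probs.append(model.predict_proba(X_test)[:, 1])

probs = np.stack(probs)
mean_p = probs.mean(axis=0)       # predicted P(methylated) per patient
uncertainty = probs.std(axis=0)   # high std -> low-confidence prediction
print(np.round(mean_p, 2), np.round(uncertainty, 2))
```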

    Domain Adaptive Federated Learning for Multi-Institution Molecular Mutation Prediction and Bias Identification

    Deep learning models have shown potential in medical image analysis tasks. However, training a generalized deep learning model requires large amounts of patient data, usually gathered from multiple institutions, which may raise privacy concerns. Federated learning (FL) provides an alternative to sharing data across institutions. Nonetheless, FL is susceptible to several challenges, including inversion attacks on model weights, heterogeneous data distributions, and bias. This study addresses the heterogeneity and bias issues for multi-institution patient data by proposing domain adaptive FL modeling that uses several radiomics (volume, fractal, texture) features for O6-methylguanine-DNA methyltransferase (MGMT) classification across multiple institutions. The proposed domain adaptive FL MGMT classification inherently offers differential privacy (DP) for the patient data. Two domain adaptation techniques, a mixture of experts (ME) with a gating network and adversarial alignment, are used for comparison. The proposed method is evaluated using publicly available multi-institution datasets (UPENN-GBM, UCSF-PDGM, RSNA-ASNR-MICCAI BraTS-2021) with a total of 1007 patients. Our experiments with 5-fold cross-validation suggest that domain adaptive FL offers improved performance, with a mean accuracy of 69.93% ± 4.8% and an area under the curve of 0.655 ± 0.055 across multiple institutions. In addition, analysis of the probability density of the gating network identifies the institution that may bias the global model prediction due to increased heterogeneity for a given input. Our comparison analysis shows that the proposed method with bias identification offers the best predictive performance compared to commonly employed FL and baseline methods in the literature.
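    A minimal PyTorch sketch of the mixture-of-experts idea with a gating network: one expert per institution plus a softmax gate whose output distribution can be inspected to flag the institution that dominates, and may bias, a given prediction. The layer sizes, expert count, and binary output head are assumptions.

```python
import torch
import torch.nn as nn

class GatedMoE(nn.Module):
    def __init__(self, in_dim, n_experts=3):
        super().__init__()
        # One small expert head per institution (assumed architecture).
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, 1))
             for _ in range(n_experts)]
        )
        # Gating network produces a probability over institutions per input.
        self.gate = nn.Sequential(nn.Linear(in_dim, n_experts), nn.Softmax(dim=-1))

    def forward(self, x):
        gate_p = self.gate(x)                                   # (B, n_experts)
        outs = torch.cat([e(x) for e in self.experts], dim=-1)  # (B, n_experts)
        logit = (gate_p * outs).sum(dim=-1)                     # gated mixture
        return torch.sigmoid(logit), gate_p

# Hypothetical usage: a sharply peaked gate distribution points to the
# institution whose expert dominates the MGMT prediction for this input.
model = GatedMoE(in_dim=50, n_experts=3)
x = torch.randn(4, 50)
p_mgmt, gate_p = model(x)
print(gate_p)  # inspect per-institution gating probabilities
```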

    Initial Implementation of a Machine Learning System for SRF Cavity Fault Classification at CEBAF

    The Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Laboratory is a high-power continuous wave (CW) electron accelerator. It uses a mixture of SRF cryomodules: older, lower-energy C20/C50 modules and newer, higher-energy C100 modules. The cryomodules are arrayed in two anti-parallel linear accelerators. Accurately classifying cavity fault types is essential to maintaining and improving accelerator performance. Each C100 cryomodule contains eight 7-cell cavities. When a cavity fault occurs within a cryomodule, all eight cavities generate 17 waveforms, each containing 8192 points. This data is exported from the control system and saved for review. Analysis of these waveforms is time-intensive and requires a subject matter expert (SME). SMEs examine the data from each event and label it according to one of several known cavity fault types. Multiple machine learning models have been developed on this labeled dataset with sufficient performance to warrant the creation of a limited machine learning software system for use by accelerator operations staff. This paper discusses the transition from model development to implementation of a prototype system.
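    A minimal sketch of the classification setup implied above: each fault event yields 8 cavities x 17 waveforms x 8192 samples. The summary-statistic reduction, event counts, fault-type count, and random-forest classifier are illustrative assumptions, not CEBAF's deployed models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def event_features(event):
    """Reduce one (8, 17, 8192) fault event to per-waveform summary stats."""
    stats = [event.mean(axis=-1), event.std(axis=-1),
             event.min(axis=-1), event.max(axis=-1)]
    return np.concatenate([s.ravel() for s in stats])  # 8*17*4 = 544 features

# Stub data: 24 labeled fault events across (assumed) 5 fault types.
rng = np.random.default_rng(0)
events = rng.standard_normal((24, 8, 17, 8192), dtype=np.float32)
labels = rng.integers(0, 5, 24)

X = np.stack([event_features(e) for e in events])
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, labels)
print(clf.predict(X[:3]))  # predicted fault type for the first three events
```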

    Adaptive Critic Network for Person Tracking Using 3D Skeleton Data

    Analysis of human gait using 3-dimensional co-occurrence skeleton joints extracted from Lidar sensor data has been shown to be a viable method for predicting person identity. The co-occurrence-based networks rely on the spatial changes between frames of each joint in the skeleton data sequence. Normally, this data is obtained using a Lidar skeleton extraction method to estimate these co-occurrence features from raw Lidar frames, which can be prone to incorrect joint estimation when part of the body is occluded. These datasets can also be time-consuming and expensive to collect and typically offer a small number of samples for training and testing network models. The small number of samples and occlusion pose challenges when training deep neural networks to perform real-time tracking of a person in the scene. We present preliminary results with a deep reinforcement learning actor-critic network for person tracking from 3D skeleton data using a small dataset. The proposed approach achieves an average tracking rate of 68.92% ± 15.90% given limited examples to train the network.
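    A minimal PyTorch sketch of an actor-critic network over skeleton-joint features. The input size (flattened 3D joint coordinates across a short frame window) and the discrete action space of tracking adjustments are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.actor = nn.Linear(128, n_actions)  # policy logits
        self.critic = nn.Linear(128, 1)         # state-value estimate

    def forward(self, obs):
        h = self.backbone(obs)
        return torch.distributions.Categorical(logits=self.actor(h)), self.critic(h)

# Hypothetical usage: 25 joints x 3 coords x 4 frames, 5 tracking actions.
net = ActorCritic(obs_dim=25 * 3 * 4, n_actions=5)
obs = torch.randn(1, 300)
dist, value = net(obs)
action = dist.sample()
# A policy-gradient update would combine -dist.log_prob(action) * advantage
# with a value-regression loss on `value`.
```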

    Urban Flood Extent Segmentation and Evaluation from Real-World Surveillance Camera Images Using Deep Convolutional Neural Network

    This study explores the use of Deep Convolutional Neural Networks (DCNNs) for semantic segmentation of flood images. Imagery datasets of urban flooding were used to train two DCNN-based models, and camera images were used to test the application of the models with real-world data. Validation results show that both models extracted flood extent with a mean F1-score over 0.9. Factors that affected performance included still water surfaces with specular reflection, wet road surfaces, and low illumination. In testing, reduced visibility during a storm and raindrops on surveillance cameras were the major problems that affected segmentation of flood extent. High-definition web cameras can be an alternative tool when the models are trained on the data they collect. In conclusion, DCNN-based models can extract flood extent from camera images of urban flooding. The challenges with applying these models to real-world data identified through this research present opportunities for future research.
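    A minimal sketch of the mean F1-score evaluation used above for binary flood masks; for binary masks, the pixel-wise F1-score is equivalent to the Dice coefficient. The segmentation model is left abstract, and predictions and ground truth are stubbed with random masks.

```python
import numpy as np

def f1_score_mask(pred, truth, eps=1e-8):
    """Pixel-wise F1 (equivalently Dice) for binary masks."""
    tp = np.logical_and(pred, truth).sum()
    precision = tp / (pred.sum() + eps)
    recall = tp / (truth.sum() + eps)
    return 2 * precision * recall / (precision + recall + eps)

preds = np.random.rand(5, 128, 128) > 0.5   # stand-in DCNN outputs
truths = np.random.rand(5, 128, 128) > 0.5  # stand-in ground-truth masks
mean_f1 = np.mean([f1_score_mask(p, t) for p, t in zip(preds, truths)])
print(f"mean F1: {mean_f1:.3f}")
```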