Slideflow: Deep Learning for Digital Histopathology with Real-Time Whole-Slide Visualization
Deep learning methods have emerged as powerful tools for analyzing
histopathological images, but current methods are often specialized for
specific domains and software environments, and few open-source options exist
for deploying models in an interactive interface. Experimenting with different
deep learning approaches typically requires switching software libraries and
reprocessing data, reducing the feasibility and practicality of experimenting
with new architectures. We developed a flexible deep learning library for
histopathology called Slideflow, a package which supports a broad array of deep
learning methods for digital pathology and includes a fast whole-slide
interface for deploying trained models. Slideflow includes unique tools for
whole-slide image data processing, efficient stain normalization and
augmentation, weakly-supervised whole-slide classification, uncertainty
quantification, feature generation, feature space analysis, and explainability.
Whole-slide image processing is highly optimized, enabling whole-slide tile
extraction at 40X magnification in 2.5 seconds per slide. The
framework-agnostic data processing pipeline enables rapid experimentation with
new methods built with either TensorFlow or PyTorch, and the graphical user
interface supports real-time visualization of slides, predictions, heatmaps,
and feature space characteristics on a variety of hardware devices, including
ARM-based devices such as the Raspberry Pi.
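The core of whole-slide tile extraction is tiling a very large image into fixed-size patches on a regular grid. A minimal sketch of that grid computation is below; this is illustrative only and is not Slideflow's own API (the function name and parameters are hypothetical):

```python
def tile_origins(slide_w, slide_h, tile_px, stride_px=None):
    """Yield (x, y) origins of tiles covering a slide region.

    Tiles that would extend past the slide edge are dropped, a common
    convention in whole-slide tile extraction. A stride smaller than
    tile_px produces overlapping tiles.
    """
    stride_px = stride_px or tile_px
    for y in range(0, slide_h - tile_px + 1, stride_px):
        for x in range(0, slide_w - tile_px + 1, stride_px):
            yield (x, y)

# A 1000x800-pixel region with non-overlapping 299-px tiles: 3x2 grid.
grid = list(tile_origins(1000, 800, 299))
```

In a real pipeline, each origin would be passed to a slide reader (e.g. OpenSlide's `read_region`) and the resulting tiles filtered for background before model inference.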
A randomized phase 2 study of temsirolimus and cetuximab versus temsirolimus alone in recurrent/metastatic, cetuximab‐resistant head and neck cancer: The MAESTRO study
A randomized phase 2 network trial of tivantinib plus cetuximab versus cetuximab in patients with recurrent/metastatic head and neck squamous cell carcinoma
Uncertainty-Informed Deep Learning Models Enable High-Confidence Predictions for Digital Histopathology
A model's ability to express its own predictive uncertainty is an essential
attribute for maintaining clinical user confidence as computational biomarkers
are deployed into real-world medical settings. In the domain of cancer digital
histopathology, we describe a novel, clinically-oriented approach to
uncertainty quantification (UQ) for whole-slide images, estimating uncertainty
using dropout and calculating thresholds on training data to establish cutoffs
for low- and high-confidence predictions. We train models to identify lung
adenocarcinoma vs. squamous cell carcinoma and show that high-confidence
predictions outperform predictions without UQ, in both cross-validation and
testing on two large external datasets spanning multiple institutions. Our
testing strategy closely approximates real-world application, with predictions
generated on unsupervised, unannotated slides using predetermined thresholds.
Furthermore, we show that UQ thresholding remains reliable in the setting of
domain shift, with accurate high-confidence predictions of adenocarcinoma vs.
squamous cell carcinoma for out-of-distribution, non-lung cancer cohorts.
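The dropout-based UQ idea above can be sketched in a few lines: keep dropout active at inference, aggregate many stochastic forward passes per sample, and flag a prediction as high-confidence only when the spread across passes falls below a cutoff chosen in advance on training data. This is a minimal, self-contained simulation (the threshold value and arrays are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_aggregate(forward_passes):
    """Aggregate stochastic forward passes (dropout left on at test time).

    forward_passes: shape (n_passes, n_samples), predicted probabilities
    for the positive class. Returns per-sample (mean, std); the std is
    the uncertainty estimate.
    """
    passes = np.asarray(forward_passes)
    return passes.mean(axis=0), passes.std(axis=0)

def high_confidence(std, threshold):
    """High-confidence = predictive uncertainty below a preset cutoff."""
    return std < threshold

# Simulated passes: sample 0 is stable across passes, sample 1 is noisy.
passes = np.stack([
    np.array([0.9, 0.5]) + rng.normal(0.0, [0.01, 0.2], size=2)
    for _ in range(30)
])
mean, std = mc_dropout_aggregate(passes)
mask = high_confidence(std, threshold=0.1)
```

Only predictions with `mask` set would be reported; the rest are abstained from, which is what drives the accuracy gain on the retained, high-confidence subset.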
Machine Learning-Guided Adjuvant Treatment of Head and Neck Cancer
Importance: Postoperative chemoradiation is the standard of care for cancers with positive margins or extracapsular extension, but the benefit of chemotherapy is unclear for patients with other intermediate-risk features.
Objective: To evaluate whether machine learning models could identify patients with intermediate-risk head and neck squamous cell carcinoma who would benefit from chemoradiation.
Design, Setting, and Participants: This cohort study included patients diagnosed with squamous cell carcinoma of the oral cavity, oropharynx, hypopharynx, or larynx from January 1, 2004, through December 31, 2016. Patients had resected disease and underwent adjuvant radiotherapy. Analysis was performed from October 1, 2019, through September 1, 2020. Patients were selected from the National Cancer Database, a hospital-based registry that captures data from more than 70% of newly diagnosed cancers in the United States. Three machine learning survival models were trained using 80% of the cohort, with the remaining 20% used to assess model performance.
Exposures: Receipt of adjuvant chemoradiation or radiation alone.
Main Outcomes and Measures: Patients who received treatment recommended by machine learning models were compared with those who did not. Overall survival for treatment according to model recommendations was the primary outcome. Secondary outcomes included frequency of recommendation for chemotherapy and chemotherapy benefit in patients recommended for chemoradiation vs radiation alone.
Results: A total of 33,527 patients (24,189 [72%] men; 28,036 [84%] aged ≤70 years) met the inclusion criteria. Median follow-up in the validation data set was 43.2 (interquartile range, 19.8-65.5) months. DeepSurv, neural multitask logistic regression, and survival forest models recommended chemoradiation for 17,589 (52%), 15,917 (47%), and 14,912 patients (44%), respectively. Treatment according to model recommendations was associated with a survival benefit (hazard ratio, 0.79; 95% CI, 0.72-0.85).
Conclusions and Relevance: These findings suggest that machine learning models may identify patients with intermediate risk who could benefit from chemoradiation. These models predicted that approximately half of such patients have no added benefit from chemotherapy.
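"Treatment according to model recommendations" can be made concrete with a small sketch: a survival model scores each patient's predicted risk under both arms, the recommended arm is the lower-risk one, and patients are then split by whether the treatment they actually received matched the recommendation. This is an illustrative simplification, not the study's implementation; the field names and risk values are hypothetical:

```python
def recommend(risk_crt, risk_rt):
    """Recommend the arm with the lower model-predicted risk."""
    return "CRT" if risk_crt < risk_rt else "RT"

def split_by_concordance(patients):
    """Split patients by whether received treatment matched the model."""
    followed, not_followed = [], []
    for p in patients:
        rec = recommend(p["risk_crt"], p["risk_rt"])
        (followed if p["received"] == rec else not_followed).append(p)
    return followed, not_followed

# Hypothetical model outputs for three patients.
patients = [
    {"risk_crt": 0.30, "risk_rt": 0.45, "received": "CRT"},  # concordant
    {"risk_crt": 0.50, "risk_rt": 0.40, "received": "CRT"},  # discordant
    {"risk_crt": 0.25, "risk_rt": 0.20, "received": "RT"},   # concordant
]
followed, not_followed = split_by_concordance(patients)
```

The reported hazard ratio compares overall survival between the two resulting groups, with the concordant group as the treated-per-model arm.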
Data augmentation and multimodal learning for predicting drug response in patient-derived xenografts from gene expressions and histology images
Patient-derived xenografts (PDXs) are an appealing platform for preclinical drug studies. A primary challenge in modeling drug response prediction (DRP) with PDXs and neural networks (NNs) is the limited number of drug response samples. We investigate a multimodal neural network (MM-Net) and data augmentation for DRP in PDXs. The MM-Net learns to predict response using drug descriptors, gene expressions (GE), and histology whole-slide images (WSIs). We explore whether combining WSIs with GE improves predictions as compared with models that use GE alone. We propose two data augmentation methods which allow us to train multimodal and unimodal NNs, without changing architectures, on a single larger dataset: 1) combine single-drug and drug-pair treatments by homogenizing drug representations, and 2) augment drug-pairs, which doubles the sample size of all drug-pair samples. Unimodal NNs which use GE are compared to assess the contribution of data augmentation. The NN that uses the original and the augmented drug-pair treatments as well as single-drug treatments outperforms NNs that ignore either the augmented drug-pairs or the single-drug treatments. In assessing the multimodal learning based on the MCC metric, MM-Net outperforms all the baselines. Our results show that data augmentation and integration of histology images with GE can improve prediction performance of drug response in PDXs.
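The two augmentation ideas can be sketched in a few lines. The abstract does not spell out the exact homogenization scheme, so the version below makes one plausible assumption: a single-drug treatment is represented as a pair by repeating the drug, and drug-pairs are augmented by adding the swapped drug order. Field names and values are hypothetical:

```python
def homogenize(treatments):
    """Give single-drug and drug-pair samples one shared representation.

    Assumption (not specified in the abstract): a single drug d becomes
    the pair (d, d), so every sample is a drug pair.
    """
    out = []
    for t in treatments:
        drugs = t["drugs"]
        pair = (drugs[0], drugs[0]) if len(drugs) == 1 else tuple(drugs)
        out.append({"drugs": pair, "response": t["response"]})
    return out

def augment_pairs(treatments):
    """Double drug-pair samples by adding the swapped drug order."""
    out = list(treatments)
    for t in treatments:
        a, b = t["drugs"]
        if a != b:
            out.append({"drugs": (b, a), "response": t["response"]})
    return out

data = [
    {"drugs": ["d1"], "response": 0.7},        # single-drug treatment
    {"drugs": ["d2", "d3"], "response": 0.4},  # drug-pair treatment
]
augmented = augment_pairs(homogenize(data))
```

Order-swapping exploits the symmetry of combination treatments (response to d2+d3 equals response to d3+d2), letting one architecture train on both treatment types with more samples.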