    Is Oligometastatic Cancer Curable? A Survey of Oncologist Perspectives, Decision Making, and Communication

    PURPOSE Oligometastatic disease (OMD) refers to a limited state of metastatic cancer that may benefit from local treatments. Given the relative novelty of this paradigm, oncologist perspectives on OMD are not well established. We thus explored oncologist views on the curability of, and treatment recommendations for, patients with OMD. METHODS AND MATERIALS We developed a survey focused on oncologist views of 3 subtypes of OMD: synchronous, oligorecurrent, and oligoprogressive. Eligible participants included medical and radiation oncologists at 2 large cancer centers, who were invited to participate between May and June 2022. Participants were presented with 3 hypothetical patient scenarios and asked about treatment recommendations, rationale, and demographic information. RESULTS Of 44 respondents, over half (61.4%) agreed that synchronous OMD is curable. Smaller proportions (46.2% and 13.5%) agreed for oligorecurrence and oligoprogression, respectively. When asked whether they use the word "cure" or "curative" in discussing prognosis, 31.8% and 33.3% agreed for synchronous and oligorecurrent OMD, respectively, while 78.4% disagreed for oligoprogression. Views on curability did not significantly affect treatment recommendations. More medical oncologists than radiation oncologists recommended systemic treatment only for the synchronous OMD (50.0% vs 5.3%; P < .01) and oligoprogression (43.8% vs 10.5%; P = .02) cases, but not for the oligorecurrent case. There were no significant differences in confidence in treatment recommendations by specialty. CONCLUSIONS In this exploratory study, we found notable divergence in oncologists' views about the curability of OMD, as well as variability in treatment recommendations, suggesting a need for more robust research on the outcomes of patients with OMD.

    Machine Learning Applications in Head and Neck Radiation Oncology: Lessons From Open-Source Radiomics Challenges

    Radiomics leverages existing image datasets to provide non-visible data extraction via image post-processing, with the aim of identifying prognostic and predictive imaging features at a sub-region-of-interest level. However, the application of radiomics is hampered by several challenges, such as a lack of standardization in image acquisition and analysis methods, which impedes generalizability. As yet, radiomics remains intriguing but not clinically validated. We aimed to test the feasibility of a non-custom-constructed platform for disseminating existing large, standardized databases across institutions to promote radiomics studies. Hence, the University of Texas MD Anderson Cancer Center organized two public radiomics challenges in the head and neck radiation oncology domain. This was done in conjunction with a MICCAI 2016 satellite symposium using Kaggle-in-Class, a machine-learning and predictive analytics platform. We drew on clinical data matched to radiomics data derived from diagnostic contrast-enhanced computed tomography (CECT) images in a dataset of 315 patients with oropharyngeal cancer. Contestants were tasked with developing models for (i) classifying patients according to their human papillomavirus status, or (ii) predicting local tumor recurrence following radiotherapy. Data were split into training and test sets. Seventeen teams from various professional domains participated in one or both of the challenges. This review paper was based on the contestants' feedback, which was provided by only 8 contestants (47%). Six contestants (75%) incorporated extracted radiomics features into their predictive model building, either alone (n = 5; 62.5%), as was the case with the winner of the “HPV” challenge, or in conjunction with matched clinical attributes (n = 2; 25%). Notably, only 23% of contestants, including the winner of the “local recurrence” challenge, built their models relying solely on clinical data. In addition to the value of integrating machine learning into clinical decision-making, our experience sheds light on challenges in sharing and directing existing datasets toward clinical applications of radiomics, including the hyper-dimensionality of the clinical/imaging data attributes. Our experience may help guide researchers to create a framework for sharing and reuse of already published data that we believe will ultimately accelerate the pace of clinical applications of radiomics, in both challenge and clinical settings.
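
    The challenge workflow described above (pre-extracted radiomics features, optionally combined with matched clinical attributes, fed to a classifier evaluated on a held-out test set) can be illustrated with a minimal sketch. This is not the contestants' actual code; the CSV file name, column names (e.g. "radiomics_" prefixes, "hpv_status"), and the choice of a random-forest classifier are assumptions for illustration only.

```python
# Minimal sketch of a radiomics-style classification pipeline
# (hypothetical file and column names; not the challenge's actual models).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# One row per patient: pre-extracted radiomics features, a few clinical
# attributes, and a binary label such as HPV status (assumed CSV layout).
df = pd.read_csv("oropharyngeal_features.csv")

feature_cols = [c for c in df.columns
                if c.startswith("radiomics_") or c in ("age", "t_stage", "n_stage")]
X = df[feature_cols]
y = df["hpv_status"]

# Fixed, stratified train/test split, mirroring the challenge's held-out test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# A random forest is one reasonable choice for high-dimensional,
# correlated radiomics features; contestants used a variety of models.
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.3f}")
```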

    On the Characterization and Justification of Moral Intuitions

    Clinical validation of deep learning algorithms for radiotherapy targeting of non-small-cell lung cancer: an observational study

    BACKGROUND: Artificial intelligence (AI) and deep learning have shown great potential in streamlining clinical tasks. However, most studies remain confined to in silico validation in small internal cohorts, without external validation or data on real-world clinical utility. We developed a strategy for the clinical validation of deep learning models for segmenting primary non-small-cell lung cancer (NSCLC) tumours and involved lymph nodes in CT images, which is a time-intensive step in radiation treatment planning, with large variability among experts. METHODS: In this observational study, CT images and segmentations were collected from eight internal and external sources from the USA, the Netherlands, Canada, and China, with patients from the Maastro and Harvard-RT1 datasets used for model discovery (segmented by a single expert). Validation consisted of interobserver and intraobserver benchmarking, primary validation, functional validation, and end-user testing on the following datasets: multi-delineation, Harvard-RT1, Harvard-RT2, RTOG-0617, NSCLC-radiogenomics, Lung-PET-CT-Dx, RIDER, and thorax phantom. Primary validation consisted of stepwise testing on increasingly external datasets using measures of overlap including volumetric dice (VD) and surface dice (SD). Functional validation explored dosimetric effect, model failure modes, test-retest stability, and accuracy. End-user testing with eight experts assessed automated segmentations in a simulated clinical setting. FINDINGS: We included 2208 patients imaged between 2001 and 2015, with 787 patients used for model discovery and 1421 for model validation, including 28 patients for end-user testing. Models showed an improvement over the interobserver benchmark (multi-delineation dataset; VD 0·91 [IQR 0·83-0·92], p=0·0062; SD 0·86 [0·71-0·91], p=0·0005), and were within the intraobserver benchmark. For primary validation, AI performance on internal Harvard-RT1 data (segmented by the same expert who segmented the discovery data) was VD 0·83 (IQR 0·76-0·88) and SD 0·79 (0·68-0·88), within the interobserver benchmark. Performance on internal Harvard-RT2 data segmented by other experts was VD 0·70 (0·56-0·80) and SD 0·50 (0·34-0·71). Performance on RTOG-0617 clinical trial data was VD 0·71 (0·60-0·81) and SD 0·47 (0·35-0·59), with similar results on diagnostic radiology datasets NSCLC-radiogenomics and Lung-PET-CT-Dx. Despite these geometric overlap results, models yielded target volumes with equivalent radiation dose coverage to those of experts. We also found non-significant differences between de novo expert and AI-assisted segmentations. AI assistance led to a 65% reduction in segmentation time (5·4 min; p<0·0001) and a 32% reduction in interobserver variability (SD; p=0·013). INTERPRETATION: We present a clinical validation strategy for AI models. We found that in silico geometric segmentation metrics might not correlate with the clinical utility of the models. Experts' segmentation style and preference might affect model performance. FUNDING: US National Institutes of Health and EU European Research Council.
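
    The geometric overlap measures quoted above, volumetric dice (VD) and surface dice (SD), are standard segmentation metrics. The sketch below shows how VD compares an AI contour to a reference contour, assuming both are binary NumPy voxel masks of equal shape; it is an illustration, not the study's actual evaluation code. Surface dice additionally scores boundary agreement within a distance tolerance and is usually computed with a dedicated library.

```python
# Minimal sketch of the volumetric dice (VD) overlap measure; masks are
# assumed to be binary voxel arrays of equal shape (illustration only).
import numpy as np

def volumetric_dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Return 2*|A ∩ B| / (|A| + |B|) for two binary segmentation masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy example: two partially overlapping "tumour" masks on a small grid.
a = np.zeros((4, 4, 4), dtype=bool)
b = np.zeros((4, 4, 4), dtype=bool)
a[1:3, 1:3, 1:3] = True
b[1:3, 1:3, 2:4] = True
print(volumetric_dice(a, b))  # 0.5
```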