14 research outputs found
Is Oligometastatic Cancer Curable? A Survey of Oncologist Perspectives, Decision Making, and Communication
PURPOSE
Oligometastatic disease (OMD) refers to a limited state of metastatic cancer, which potentially derives benefit from local treatments. Given the relative novelty of this paradigm, oncologist perspectives on OMD are not well established. We thus explored oncologist views on curability of and treatment recommendations for patients with OMD.
METHODS AND MATERIALS
We developed a survey focused on oncologist views of 3 subtypes of OMD: synchronous, oligorecurrent, and oligoprogressive. Eligible participants included medical and radiation oncologists at 2 large cancer centers invited to participate between May and June 2022. Participants were presented with 3 hypothetical patient scenarios and asked about treatment recommendations, rationale, and demographic information.
RESULTS
Of 44 respondents, over half (61.4%) agreed that synchronous OMD is curable. Smaller proportions (46.2% and 13.5%) agreed for oligorecurrence and oligoprogression, respectively. When asked whether they use the word "cure" or "curative" in discussing prognosis, 31.8% and 33.3% agreed for synchronous and oligorecurrent OMD, respectively, while 78.4% disagreed for oligoprogression. Views on curability did not significantly affect treatment recommendations. More medical oncologists than radiation oncologists recommended systemic treatment only for the synchronous OMD case (50.0% vs 5.3%; P < .01) and the oligoprogression case (43.8% vs 10.5%; P = .02), but not for the oligorecurrent case. There were no significant differences in confidence in treatment recommendations by specialty.
CONCLUSIONS
In this exploratory study, we found notable divergence in oncologists' views about the curability of OMD, as well as variability in treatment recommendations, suggesting a need for more robust research on outcomes of patients with OMD.
Machine Learning Applications in Head and Neck Radiation Oncology: Lessons From Open-Source Radiomics Challenges
Radiomics leverages existing image datasets to extract non-visible data via image post-processing, with the aim of identifying prognostic and predictive imaging features at a sub-region-of-interest level. However, the application of radiomics is hampered by several challenges, such as a lack of standardization in image acquisition and analysis methods, which impedes generalizability. As yet, radiomics remains intriguing but not clinically validated. We aimed to test the feasibility of a non-custom-constructed platform for disseminating existing large, standardized databases across institutions to promote radiomics studies. Hence, the University of Texas MD Anderson Cancer Center organized two public radiomics challenges in the head and neck radiation oncology domain. This was done in conjunction with the MICCAI 2016 satellite symposium using Kaggle-in-Class, a machine-learning and predictive-analytics platform. We drew on clinical data matched to radiomics data derived from diagnostic contrast-enhanced computed tomography (CECT) images in a dataset of 315 patients with oropharyngeal cancer. Contestants were tasked with developing models for (i) classifying patients according to their human papillomavirus status or (ii) predicting local tumor recurrence following radiotherapy. Data were split into training and test sets. Seventeen teams from various professional domains participated in one or both of the challenges. This review paper was based on feedback provided by only 8 contestants (47%). Six contestants (75%) incorporated extracted radiomics features into their predictive model building, either alone (n = 5; 62.5%), as was the case with the winner of the “HPV” challenge, or in conjunction with matched clinical attributes (n = 2; 25%). Notably, only 23% of contestants, including the winner of the “local recurrence” challenge, built their models relying solely on clinical data.
In addition to the value of integrating machine learning into clinical decision-making, our experience sheds light on challenges in sharing and directing existing datasets toward clinical applications of radiomics, including the hyper-dimensionality of clinical and imaging data attributes. Our experience may help guide researchers to create a framework for sharing and reuse of already published data that we believe will ultimately accelerate the pace of clinical applications of radiomics, in both challenge and clinical settings.
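To illustrate the kind of first-order radiomics features a contestant could derive from voxel intensities inside a region of interest, here is a minimal sketch in pure Python. This is a toy illustration under assumed inputs (a flat list of ROI intensities), not the pipeline any challenge team actually used:

```python
def first_order_features(intensities):
    """Toy first-order radiomics features from voxel intensities inside a ROI.

    `intensities` is a non-empty list of scalar voxel values (an assumed,
    simplified input; real pipelines work on 3D masked image arrays).
    """
    n = len(intensities)
    mean = sum(intensities) / n
    # Population variance of the ROI intensities
    variance = sum((x - mean) ** 2 for x in intensities) / n
    return {
        "mean": mean,
        "variance": variance,
        "energy": sum(x * x for x in intensities),  # sum of squared intensities
        "range": max(intensities) - min(intensities),
    }
```

Features like these (alongside texture and shape descriptors) would then be fed, with or without clinical attributes, into a classifier for tasks such as HPV-status prediction.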
Moral distress and burnout in caring for older adults during medical school training.
Background
Moral distress is a reason for burnout in healthcare professionals, but the clinical settings in which medical students most often experience moral distress, and whether moral distress is associated with burnout and career choices in medical students, are unknown. We assessed moral distress in medical students while caring for older patients and examined associations with burnout and interest in geriatrics.
Methods
A cross-sectional survey study of second-, third-, and fourth-year medical students at an American medical school. The survey described 12 potentially morally distressing clinical scenarios involving older adult patients. Students reported whether they encountered each scenario and whether they experienced moral distress, graded on a 1-10 scale. We conducted a principal axis factor analysis to assess the dimensionality of the survey scenarios. A composite moral distress score was calculated as the sum of moral distress scores across all 12 scenarios. Burnout was assessed using the Maslach Abbreviated Burnout Inventory, and interest in geriatrics was rated on a 7-point Likert scale.
Results
Two hundred and nine students responded (47%), of whom 90% (188/209) reported moral distress in response to ≥1 scenario, with a median (IQR) score of 6 (4-7). Factor analysis suggested a unidimensional factor structure of the 12 survey questions that reliably measured individual distress (Cronbach alpha = 0.78). Those in the highest tertile of composite moral distress scores were more likely to be burnt out (51%) than those in the middle tertile (34%) or lowest tertile (31%) (p = 0.02). There was a trend toward greater interest in geriatrics among those in the higher tertiles of composite moral distress scores (16% lowest tertile, 20% middle tertile, 25% highest tertile, p-for-trend = 0.21).
Respondents suggested that moral distress might be mitigated with didactic sessions on inpatient geriatric care and with peer and faculty debriefing sessions on the inpatient clerkships in medicine, neurology, and surgery, where students most often reported experiencing moral distress.
Conclusions
Moral distress is highly prevalent among medical students caring for older patients and is associated with burnout. Incorporating geriatrics education and debriefing sessions into inpatient clerkships could alleviate medical student moral distress and burnout.
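The Cronbach alpha of 0.78 reported above summarizes internal consistency across the 12 scenario items via the standard formula α = k/(k−1) · (1 − Σσ²ᵢ / σ²_total). A minimal sketch of that computation, on hypothetical item-by-respondent scores rather than the study's actual data:

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.

    `item_scores` is a list of k per-item score lists, each over the same
    respondents (hypothetical data for illustration).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # total score per respondent
    item_var_sum = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))
```

Two perfectly correlated items yield alpha = 1.0, while items whose total variance is fully explained by individual item variances yield alpha = 0.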
Trends, Quality, and Readability of Online Health Resources on Proton Radiation Therapy
Many patients weighing cancer treatment options may consider relatively novel options, including proton radiation therapy (PRT), and turn to the Internet for online health resources (OHR). However, the quality and readability of OHR for radiation oncology therapies have been shown to need improvement. Because the OHR that patients access can influence their treatment decisions, our study sought to understand the patterns of use, quality, and readability of OHR on PRT.
To validate the need to assess OHR on PRT, we assessed search patterns in the United States for the search phrase “proton therapy” using Google Trends. The Google search engine was then queried for websites with PRT information using 10 search phrases. The resulting websites were analyzed for readability by the Flesch-Kincaid Grade Level and a Composite Grade Level (CGL) metric comprising 5 readability metrics. Quality was analyzed using the DISCERN instrument.
Search volume index for “proton therapy” increased by an average of 2.0% each year for the last 15 years (January 1, 2005 to June 1, 2019, P < .001). States that had a greater number of proton centers tended to have a greater relative search volume in Google (P < .001). Of the 45 unique websites identified, the mean Flesch-Kincaid Grade Level was 12.0 (range, 7.3-18.6) and the mean CGL was 12.4 (range, 7-18). In addition, 80% of PRT pages required greater than 11th grade CGL. The mean DISCERN score of all websites was 39.8 out of 75, which corresponds to “fair” quality OHR.
Despite increasing interest in PRT OHR, PRT websites generally require reading levels much higher than currently recommended, making PRT OHR less accessible to the average patient. Provision of high-quality PRT OHR at the appropriate reading level may increase comprehension of PRT, improve patient autonomy, and facilitate informed decision-making among radiation oncology patients.
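The Flesch-Kincaid Grade Level used above is a fixed formula over average sentence length and syllables per word: 0.39 · (words/sentences) + 11.8 · (syllables/words) − 15.59. A minimal sketch follows; the vowel-group syllable counter is a crude heuristic of my own (production readability tools use dictionary-based syllabification):

```python
import re

def count_syllables(word):
    """Rough syllable estimate: number of consecutive-vowel groups (heuristic)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade_level(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

A mean grade level of 12.0, as reported above, means the average PRT page reads at roughly a high-school senior level, well above common recommendations for patient materials.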
Readability of informed consent forms for cancer patients undergoing radiation therapy: A nationwide survey.
Clinical Trial Perceptions Among Patients with Gastrointestinal Cancer in an Academic Cancer Center
Poster presented at GW Medical Student Research Day 2022
Quantitative multiplex immune fluorescence to reveal the impact of chemoradiation therapy on modulation of the immune micro-environment of pancreatic ductal adenocarcinoma.
Clinical validation of deep learning algorithms for radiotherapy targeting of non-small-cell lung cancer: an observational study
BACKGROUND: Artificial intelligence (AI) and deep learning have shown great potential in streamlining clinical tasks. However, most studies remain confined to in silico validation in small internal cohorts, without external validation or data on real-world clinical utility. We developed a strategy for the clinical validation of deep learning models for segmenting primary non-small-cell lung cancer (NSCLC) tumours and involved lymph nodes in CT images, which is a time-intensive step in radiation treatment planning, with large variability among experts. METHODS: In this observational study, CT images and segmentations were collected from eight internal and external sources from the USA, the Netherlands, Canada, and China, with patients from the Maastro and Harvard-RT1 datasets used for model discovery (segmented by a single expert). Validation consisted of interobserver and intraobserver benchmarking, primary validation, functional validation, and end-user testing on the following datasets: multi-delineation, Harvard-RT1, Harvard-RT2, RTOG-0617, NSCLC-radiogenomics, Lung-PET-CT-Dx, RIDER, and thorax phantom. Primary validation consisted of stepwise testing on increasingly external datasets using measures of overlap including volumetric dice (VD) and surface dice (SD). Functional validation explored dosimetric effect, model failure modes, test-retest stability, and accuracy. End-user testing with eight experts assessed automated segmentations in a simulated clinical setting. FINDINGS: We included 2208 patients imaged between 2001 and 2015, with 787 patients used for model discovery and 1421 for model validation, including 28 patients for end-user testing. Models showed an improvement over the interobserver benchmark (multi-delineation dataset; VD 0·91 [IQR 0·83-0·92], p=0·0062; SD 0·86 [0·71-0·91], p=0·0005), and were within the intraobserver benchmark. 
For primary validation, AI performance on internal Harvard-RT1 data (segmented by the same expert who segmented the discovery data) was VD 0·83 (IQR 0·76-0·88) and SD 0·79 (0·68-0·88), within the interobserver benchmark. Performance on internal Harvard-RT2 data segmented by other experts was VD 0·70 (0·56-0·80) and SD 0·50 (0·34-0·71). Performance on RTOG-0617 clinical trial data was VD 0·71 (0·60-0·81) and SD 0·47 (0·35-0·59), with similar results on the diagnostic radiology datasets NSCLC-radiogenomics and Lung-PET-CT-Dx. Despite these geometric overlap results, models yielded target volumes with radiation dose coverage equivalent to that of experts. We also found non-significant differences between de novo expert and AI-assisted segmentations. AI assistance led to a 65% reduction in segmentation time (5·4 min; p<0·0001) and a 32% reduction in interobserver variability (SD; p=0·013). INTERPRETATION: We present a clinical validation strategy for AI models. We found that in silico geometric segmentation metrics might not correlate with clinical utility of the models. Experts' segmentation style and preference might affect model performance. FUNDING: US National Institutes of Health and EU European Research Council.
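The volumetric dice (VD) figures reported above follow the standard Dice overlap, 2·|A∩B| / (|A| + |B|), between a predicted and a reference segmentation. A minimal sketch over sets of voxel indices (a simplification; real implementations operate on binary mask arrays, and surface dice additionally restricts the comparison to boundary voxels within a tolerance):

```python
def dice_overlap(mask_a, mask_b):
    """Volumetric Dice coefficient between two segmentations.

    `mask_a` and `mask_b` are sets of (x, y, z) voxel indices marked as tumor
    (an assumed, simplified representation of a binary segmentation mask).
    Returns a value in [0, 1]; 1.0 means perfect overlap.
    """
    if not mask_a and not mask_b:
        return 1.0  # both empty: treat as perfect agreement by convention
    intersection = len(mask_a & mask_b)
    return 2.0 * intersection / (len(mask_a) + len(mask_b))
```

The study's point that geometric metrics like this may not track clinical utility is visible in the sketch: two contours with modest voxel overlap can still enclose target volumes receiving equivalent radiation dose.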