Image-guided fluorescence tomography in head & neck surgical models
Clinical indications for fluorescence-guided surgery continue to expand, spurred by the rapid development of new agents with improved biological targeting.1 There is a corresponding need for imaging systems that quantify fluorescence not only at the tissue surface but also at depth. We recently described an image-guided fluorescence tomography system that leverages geometric data from intraoperative cone-beam CT and surgical navigation,2 and builds on finite-element method software (NIRFAST) for diffuse optical tomography (DOT).3 DOT systems have most commonly been used to image subsurface inclusions buried within tissue (e.g., breast and neurological tumors). Here, we focus on inclusion models relevant to tumors infiltrating from the mucosal surface (an "iceberg" model), as is most often the case in head and neck cancer, where over 85% of tumors are squamous cell carcinoma.4 This work presents results from simulations, tissue-simulating anatomical phantoms, and animal studies involving infiltrative tumor models. The objective is to characterize system performance across a range of inclusion diameters, depths, and optical properties. For example, Fig. 1 shows a fluorescence reconstruction of a simulated tonsil tumor in an oral cavity phantom. Future clinical studies are needed to assess in vivo performance and intraoperative workflow.
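The reconstruction step at the heart of DOT is an ill-posed inverse problem that is typically stabilized by regularization. The following minimal sketch shows a Tikhonov-regularized linear inversion for fluorescence yield, assuming a precomputed sensitivity (Jacobian) matrix; it is an illustrative stand-in for the general technique, not the FEM-based iterative solver used in NIRFAST or the system described above.

```python
import numpy as np

def reconstruct_fluorescence(J, y, lam=1e-2):
    """Tikhonov-regularized inversion: x = argmin ||J x - y||^2 + lam ||x||^2.

    J   : (n_measurements, n_nodes) sensitivity (Jacobian) matrix
    y   : (n_measurements,) boundary fluorescence measurements
    lam : regularization weight trading data fit against smoothness
    """
    n = J.shape[1]
    # Normal equations with identity regularization (small problems only;
    # realistic FEM meshes require iterative solvers and spatial priors).
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ y)

# Toy example: 200 measurements, 500 unknown nodes, one buried inclusion.
rng = np.random.default_rng(0)
J = rng.normal(size=(200, 500))
x_true = np.zeros(500)
x_true[240:260] = 1.0                                # hypothetical inclusion
y = J @ x_true + 0.01 * rng.normal(size=200)
x_hat = reconstruct_fluorescence(J, y)
print("recovered signal peaks near inclusion:", x_hat[240:260].mean() > x_hat.mean())
```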
SwarmDeepSurv: Swarm Intelligence Advances Deep Survival Network for Prognostic Radiomics Signatures in Four Solid Cancers
Survival models are used to study relationships between biomarkers and treatment effects. Deep learning-powered survival models can outperform the classical Cox proportional hazards (CoxPH) model, but substantial performance drops have been observed on high-dimensional features because of irrelevant or redundant information. To fill this gap, we proposed SwarmDeepSurv, which integrates swarm intelligence algorithms with a deep survival model. Furthermore, four objective functions were designed to optimize prognostic prediction while regularizing the number of selected features. When tested on multicenter sets (n = 1,058) of four different cancer types, SwarmDeepSurv was less prone to overfitting and achieved optimal patient risk stratification compared with popular survival modeling algorithms. Strikingly, SwarmDeepSurv selected different features compared with classical feature selection algorithms, including the least absolute shrinkage and selection operator (LASSO), with nearly no feature overlap across these models. Taken together, SwarmDeepSurv offers an alternative approach to model relationships between radiomics features and survival endpoints, which can further extend to other input data types, including genomics.
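To make the approach concrete, here is a hedged sketch of binary particle swarm optimization over feature masks, with a fitness that trades predictive performance against the number of selected features. The scorer, data, and constants are illustrative stand-ins, not the paper's implementation; in SwarmDeepSurv the score would come from the deep survival network's validation performance (e.g., C-index).

```python
import numpy as np

def swarm_feature_selection(X, score_fn, n_particles=20, n_iter=50,
                            alpha=0.9, seed=0):
    """Binary particle swarm over feature masks (0/1 vectors).

    score_fn(mask) returns a validation metric in [0, 1] for the selected
    features. Fitness trades that score off against the fraction of features
    kept, mirroring the idea of optimizing prediction while regularizing
    the number of selected features.
    """
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pos = (rng.random((n_particles, n_feat)) < 0.5).astype(float)
    vel = rng.normal(scale=0.1, size=(n_particles, n_feat))

    def fitness(mask):
        if mask.sum() == 0:
            return -np.inf
        return alpha * score_fn(mask.astype(bool)) - (1 - alpha) * mask.mean()

    pbest = pos.copy()
    pbest_fit = np.array([fitness(m) for m in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_feat))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        vel = np.clip(vel, -6, 6)                      # keep sigmoid stable
        pos = (rng.random((n_particles, n_feat)) < 1 / (1 + np.exp(-vel))).astype(float)
        fit = np.array([fitness(m) for m in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool)

# Toy usage with a stand-in scorer; only the first 3 features are informative.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))
risk = X[:, :3].sum(axis=1)
score = lambda sel: abs(np.corrcoef(X[:, sel].sum(axis=1), risk)[0, 1])
mask = swarm_feature_selection(X, score)
print("selected features:", np.flatnonzero(mask))
```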
Predictors of Radiotherapy-Induced Bone Injury (RIBI) after stereotactic lung radiotherapy
Background: The purpose of this study was to identify clinical and dosimetric factors associated with radiotherapy-induced bone injury (RIBI) following stereotactic lung radiotherapy.
Methods: Inoperable patients with early-stage non-small cell lung cancer, treated with stereotactic body radiotherapy (SBRT) to 54 or 60 Gy in 3 fractions and with a minimum of 6 months of follow-up, were reviewed. Archived treatment plans were retrieved, ribs were delineated individually, and treatment plans were re-computed using heterogeneity correction. Clinical and dosimetric factors were evaluated for their association with rib fracture using logistic regression analysis; a dose-event curve and a nomogram were created.
Results: 46 consecutive patients treated between Oct 2004 and Dec 2008, with a median follow-up of 25 months (range 6–51 months), were eligible. 41 fractured ribs were detected in 17 patients; median time to fracture was 21 months (range 7–40 months). The mean maximum point dose in non-fractured ribs (n = 1054) was 10.5 ± 10.2 Gy; it was higher in fractured ribs (n = 41), at 48.5 ± 24.3 Gy (p < 0.0001). On univariate analysis, age, dose to 0.5 cc of the ribs (D0.5), and the volume of rib receiving at least 25 Gy (V25) were significantly associated with RIBI. As D0.5 and V25 were cross-correlated (Spearman correlation coefficient: 0.57, p < 0.001), we selected D0.5 as a representative dose parameter. On multivariate analysis, age (odds ratio 1.121, 95% CI 1.04–1.21, p = 0.003), female gender (odds ratio 4.43, 95% CI 1.68–11.68, p = 0.003), and rib D0.5 (odds ratio 1.0009, 95% CI 1.0007–1.001, p < 0.0001) were significantly associated with rib fracture. Using D0.5, a dose-event curve was constructed estimating the risk of fracture from dose at the median follow-up of 25 months after treatment. In our cohort, a 50% risk of rib fracture was associated with a D0.5 of 60 Gy.
Conclusions: Dosimetric and clinical factors contribute to the risk of RIBI, and both should be included when modeling risk of toxicity. A nomogram is presented using D0.5, age, and female gender to estimate the risk of RIBI following SBRT. This model requires validation.
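To illustrate the kind of modeling described (with synthetic data, not the study's cohort), the sketch below fits a logistic model of fracture versus rib D0.5 and inverts it to read off the dose at a chosen risk level, analogous to the reported dose-event curve; the simulated 50% risk at 60 Gy mirrors the cohort finding.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data: fracture probability rising with rib D0.5 (Gy).
rng = np.random.default_rng(0)
d05 = rng.uniform(0, 80, size=500)                 # dose to 0.5 cc of rib
p_true = 1 / (1 + np.exp(-(d05 - 60) / 8))         # 50% risk at 60 Gy, as reported
fracture = rng.random(500) < p_true

model = LogisticRegression().fit(d05.reshape(-1, 1), fracture)
b0, b1 = model.intercept_[0], model.coef_[0, 0]

# Dose-event curve: invert the logit to find the dose at a given risk level.
risk = 0.5
d_at_risk = (np.log(risk / (1 - risk)) - b0) / b1
print(f"estimated D0.5 for {risk:.0%} fracture risk: {d_at_risk:.1f} Gy")
```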
Habitat Imaging Biomarkers for Diagnosis and Prognosis in Cancer Patients Infected with COVID-19
OBJECTIVES: Cancer patients have worse outcomes from COVID-19 infection, a greater need for ventilator support, and higher mortality rates than the general population. However, previous artificial intelligence (AI) studies focused on patients without cancer to develop diagnosis and severity prediction models. Little is known about how such AI models perform in cancer patients. In this study, we aimed to develop a computational framework for COVID-19 diagnosis and severity prediction specifically in a cancer population, and to compare it head-to-head with a general population.
METHODS: We enrolled multi-center international cohorts comprising 531 CT scans from 502 general patients and 420 CT scans from 414 cancer patients. A habitat imaging pipeline was developed to quantify complex infection patterns by partitioning the whole lung into phenotypically distinct subregions. Subsequently, various machine learning models nested with feature selection were built for COVID-19 detection and severity prediction.
RESULTS: These models showed almost perfect performance in diagnosing COVID-19 infection and predicting its severity during cross-validation. Our analysis revealed that models built on the cancer population performed significantly better than models built on the general population and then locked for testing on the cancer population. This may be explained by the significant differences in habitat features between the two cohorts.
CONCLUSIONS: Taken together, this proof-of-concept habitat imaging analysis highlights the unique radiologic features of cancer patients and demonstrates the effectiveness of CT-based machine learning models in informing COVID-19 management in the cancer population.
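A minimal sketch of the habitat idea, under the assumption that voxel-level feature maps (e.g., intensity and a texture-like measure) are available for a segmented lung: cluster voxels into phenotypically distinct subregions and summarize each habitat's volume fraction as candidate features. This illustrates the general technique only, not the authors' pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def habitat_features(voxel_feats, n_habitats=4, seed=0):
    """Partition lung voxels into habitats; return per-habitat volume fractions.

    voxel_feats : (n_voxels, n_features) array, e.g., [HU intensity, local std]
    """
    km = KMeans(n_clusters=n_habitats, n_init=10, random_state=seed)
    labels = km.fit_predict(voxel_feats)
    return np.bincount(labels, minlength=n_habitats) / len(labels)

# Toy example: 10,000 "voxels" with an intensity and a texture-like feature.
rng = np.random.default_rng(0)
normal = rng.normal([-800, 20], [60, 5], size=(9000, 2))   # aerated lung
lesion = rng.normal([-100, 80], [80, 20], size=(1000, 2))  # consolidation-like
fractions = habitat_features(np.vstack([normal, lesion]))
print("habitat volume fractions:", fractions.round(3))
```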
Duhemian Themes in Expected Utility Theory
This monographic chapter explains how expected utility (EU) theory arose in von Neumann and Morgenstern, how it was called into question by Allais and others, and how it gave way to non-EU theories, at least in the specialized quarters of decision theory. I organize the narrative around the idea that the successive theoretical moves amounted to resolving Duhem-Quine underdetermination problems, so they can be assessed in terms of the philosophical recommendations made to overcome these problems. I actually follow Duhem's recommendation, which was essentially to rely on the passing of time to make many experiments and arguments available, and eventually to strike a balance between competing theories on the basis of this improved knowledge. Although Duhem's solution seems disappointingly vague, relying as it does on "bon sens" to bring an end to the temporal process, I do not think there is any better one in the philosophical literature, and I apply it here for what it is worth.
From this perspective, EU theorists were justified in resisting the first attempts at refuting their theory, including Allais's in the 1950s, but they would have lacked "bon sens" in not acknowledging their defeat in the 1980s, after the long process of pros and cons had sufficiently matured.
This primary Duhemian theme is combined with a secondary theme: normativity. I suggest that EU theory was normative from its very beginning and has remained so all along, and I express dissatisfaction with the orthodox view that it could be treated as a straightforward descriptive theory for purposes of prediction and scientific testing. This view is usually accompanied by a faulty historical reconstruction, according to which EU theorists initially formulated the VNM axioms descriptively and retreated to a normative construal once they felt threatened by empirical refutation. My historical study shows that things did not evolve in this way: the theory was both proposed and rebutted on the basis of normative arguments already in the 1950s. The ensuing major problem was to make choice experiments compatible with this inherently normative feature of the theory. Compatibility was achieved in some experiments, but implicitly and somewhat confusingly, for instance by excluding overtly incoherent subjects or by creating strong incentives for subjects to reflect on the questions and provide answers they would be able to defend.
I also claim that Allais had an intuition of how to combine testability and normativity, unlike most later experimenters, and that it would have been more fruitful to work from his intuition than to run choice experiments in the naively empirical style that flourished after him.
In sum, the underdetermination process accompanying EU theory was resolved in a Duhemian way, but not without major inefficiencies. Embedding explicit rationality considerations into experimental designs from the beginning would have limited the scope of empirical research, avoided wasting resources on minor findings, and sped up the Duhemian process of groping toward a choice among competing theories.
Synthetic PET From CT Improves Diagnosis and Prognosis for Lung Cancer: Proof of Concept
[18F]Fluorodeoxyglucose positron emission tomography (FDG-PET) and computed tomography (CT) are indispensable components of modern medicine. Although PET can provide additional diagnostic value, it is costly and not universally accessible, particularly in low-income countries. To bridge this gap, we developed a conditional generative adversarial network pipeline that produces FDG-PET from diagnostic CT scans, trained on multi-center, multi-modal lung cancer datasets (n = 1,478). Synthetic PET images were validated across imaging, biological, and clinical aspects. Radiologists confirmed comparable imaging quality and tumor contrast between synthetic and actual PET scans. Radiogenomics analysis further showed that the dysregulated cancer hallmark pathways inferred from synthetic PET are consistent with those from actual PET. We also demonstrate the clinical value of synthetic PET in improving lung cancer diagnosis, staging, risk prediction, and prognosis. Taken together, this proof-of-concept study demonstrates the feasibility of applying deep learning to obtain high-fidelity PET images translated from CT.
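For readers unfamiliar with the technique, below is a compact pix2pix-style conditional GAN training step in PyTorch (adversarial plus L1 objective on paired CT/PET slices). The toy 2D networks and hyperparameters are assumptions for illustration; the study's actual architecture, losses, and preprocessing may differ.

```python
import torch
import torch.nn as nn

# Minimal stand-in networks; real models would be U-Net / PatchGAN on 3D volumes.
class Generator(nn.Module):          # CT -> synthetic PET
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, ct):
        return self.net(ct)

class Discriminator(nn.Module):      # judges (CT, PET) pairs at patch level
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, ct, pet):
        return self.net(torch.cat([ct, pet], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(ct, pet, lambda_l1=100.0):
    # Discriminator: push real pairs toward 1, synthetic pairs toward 0.
    fake = G(ct)
    d_real, d_fake = D(ct, pet), D(ct, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D while staying close to the true PET (L1 term).
    d_fake = D(ct, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, pet)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# One toy step on random 64x64 "slices".
ct, pet = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
print(train_step(ct, pet))
```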
Enhancing NSCLC Recurrence Prediction With PET/CT Habitat Imaging, ctDNA, and Integrative Radiogenomics-Blood Insights
While the prognostic importance of clinicopathological measures and circulating tumor DNA (ctDNA) is well recognized, the independent contribution of quantitative image markers to prognosis in non-small cell lung cancer (NSCLC) remains underexplored. In our multi-institutional study of 394 NSCLC patients, we utilize pre-treatment computed tomography (CT) and 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) to establish a habitat imaging framework for assessing regional heterogeneity within individual tumors. This framework identifies three PET/CT subtypes, which maintain prognostic value after adjusting for clinicopathologic risk factors, including tumor volume. Additionally, these subtypes complement ctDNA in predicting disease recurrence. Radiogenomics analysis unveils the molecular underpinnings of these imaging subtypes, highlighting downregulation of the interferon alpha and gamma pathways in the high-risk subtype. In summary, our study demonstrates that these habitat imaging subtypes effectively stratify NSCLC patients by risk of disease recurrence after initial curative surgery or radiotherapy, providing valuable insights for personalized treatment approaches.
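As a hedged illustration of how an imaging subtype and ctDNA status might jointly enter a recurrence model, one could fit a Cox proportional hazards model as below; the column names, effect sizes, and data are hypothetical, not the study's.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort: PET/CT habitat subtype (vs. a low-risk reference),
# ctDNA detectability, and recurrence follow-up.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "subtype_high_risk": rng.integers(0, 2, n),
    "ctdna_positive": rng.integers(0, 2, n),
    "tumor_volume_cc": rng.lognormal(3, 0.5, n),
})
hazard = 0.05 * np.exp(0.8 * df.subtype_high_risk + 0.6 * df.ctdna_positive)
df["time_months"] = rng.exponential(1 / hazard)
df["recurred"] = (df.time_months < 36).astype(int)
df["time_months"] = df.time_months.clip(upper=36)   # administrative censoring

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="recurred")
cph.print_summary()   # hazard ratios for subtype and ctDNA, adjusted for volume
```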
Predicting Benefit From Immune Checkpoint Inhibitors in Patients With Non-Small-Cell Lung Cancer by CT-Based Ensemble Deep Learning: A Retrospective Study
BACKGROUND: Only around 20-30% of patients with non-small-cell lung cancer (NSCLC) derive durable benefit from immune-checkpoint inhibitors. Tissue-based biomarkers (eg, PD-L1) are limited by suboptimal performance, tissue availability, and tumour heterogeneity, whereas radiographic images might holistically capture the underlying cancer biology. We aimed to investigate the application of deep learning on chest CT scans to derive an imaging signature of response to immune checkpoint inhibitors and evaluate its added value in the clinical context.
METHODS: In this retrospective modelling study, 976 patients with metastatic, EGFR/ALK-negative NSCLC treated with immune checkpoint inhibitors at MD Anderson and Stanford were enrolled from Jan 1, 2014, to Feb 29, 2020. We built and tested an ensemble deep learning model on pretreatment CTs (Deep-CT) to predict overall survival and progression-free survival after treatment with immune checkpoint inhibitors. We also evaluated the added predictive value of the Deep-CT model in the context of existing clinicopathological and radiological metrics.
FINDINGS: Our Deep-CT model demonstrated robust stratification of patient survival in the MD Anderson testing set, which was validated in the external Stanford set. The performance of the Deep-CT model remained significant in subgroup analyses stratified by PD-L1, histology, age, sex, and race. In univariate analysis, Deep-CT outperformed conventional risk factors, including histology, smoking status, and PD-L1 expression, and remained an independent predictor after multivariate adjustment. Integrating the Deep-CT model with conventional risk factors significantly improved prediction performance, with the overall survival C-index increasing from 0·70 (clinical model) to 0·75 (composite model) during testing. The deep learning risk scores correlated with some radiomics features, but radiomics alone could not reach the performance level of deep learning, indicating that the deep learning model captured additional imaging patterns beyond known radiomics features.
INTERPRETATION: This proof-of-concept study shows that automated profiling of radiographic scans through deep learning can provide orthogonal information independent of existing clinicopathological biomarkers, bringing the goal of precision immunotherapy for patients with NSCLC closer.
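As an illustration of the added-value comparison reported above (entirely synthetic data and hypothetical score names), one can compare the C-index of a clinical-only risk score against a composite that adds a deep learning score:

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 400
clinical_risk = rng.normal(size=n)                        # e.g., stage/PD-L1-based
deep_ct_risk = 0.4 * clinical_risk + rng.normal(size=n)   # partly orthogonal signal

# Synthetic survival times driven by both risk sources, with censoring.
time = rng.exponential(1 / np.exp(0.5 * clinical_risk + 0.5 * deep_ct_risk))
event = rng.random(n) < 0.7

# concordance_index expects higher scores ~ longer survival, so negate risks.
c_clin = concordance_index(time, -clinical_risk, event)
c_comp = concordance_index(time, -(clinical_risk + deep_ct_risk), event)
print(f"clinical C-index: {c_clin:.3f}  composite C-index: {c_comp:.3f}")
```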