19 research outputs found

    Spin-echo and diffusion-weighted MRI in differentiation between progressive massive fibrosis and lung cancer

    Purpose: We aimed to investigate the value of magnetic resonance imaging (MRI)-based parameters in differentiating between progressive massive fibrosis (PMF) and lung cancer. Methods: This retrospective study included 60 male patients (mean age, 67.0±9.0 years) with a history of more than 10 years of work in underground coal mines who underwent 1.5 T MRI of the thorax due to a lung nodule/mass suspicious for lung cancer on computed tomography. Thirty patients had PMF, and the remaining ones had histopathologically diagnosed lung cancer. The sequences were as follows: coronal single-shot turbo spin echo (SSH-TSE); axial T1- and T2-weighted spin-echo (SE); balanced turbo field echo; T1-weighted high-resolution isotropic volume excitation; and free-breathing and respiratory-triggered diffusion-weighted imaging (DWI). Patient demographics, lesion sizes, and MRI-derived parameters were compared between the patients with PMF and those with lung cancer. Results: Apparent diffusion coefficient (ADC) values of free-breathing and respiratory-triggered DWI, as well as signal intensities on T1-weighted SE, T2-weighted SE, and SSH-TSE imaging, differed significantly between the groups (p < 0.001 for all comparisons). Median ADC values of free-breathing DWI in patients with PMF and cancer were 1.25 (0.93–2.60) and 0.76 (0.53–1.00) × 10⁻³ mm²/s, respectively. Most PMF lesions were predominantly iso- or hypointense on T1-weighted SE, T2-weighted SE, and SSH-TSE, while most malignant lesions showed predominantly high signal intensity on these sequences. Conclusion: An MRI protocol including SE imaging, especially T1-weighted SE imaging, and ADC values of DWI can help distinguish PMF from lung cancer.
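    The ADC contrast driving this differentiation comes from a mono-exponential fit of DWI signal decay, S(b) = S(0)·exp(−b·ADC). A minimal sketch of the two-point calculation, using illustrative b-values and signal intensities that are not given in the abstract:

```python
import math

def adc_from_two_b_values(s_low, s_high, b_low=0.0, b_high=800.0):
    """Apparent diffusion coefficient (mm^2/s) from a two-point
    mono-exponential fit: S(b) = S(0) * exp(-b * ADC).
    The b-values here are illustrative assumptions, not from the study."""
    return math.log(s_low / s_high) / (b_high - b_low)

# Illustrative signals chosen so the result lands near the reported
# PMF median of ~1.25e-3 mm^2/s; lung cancer lesions show less
# signal decay per unit b, hence lower ADC.
adc_pmf_like = adc_from_two_b_values(1000.0, 368.0)
```

Restricted diffusion in densely cellular tumors attenuates the high-b signal less, which is why the cancer group's median ADC (0.76 × 10⁻³ mm²/s) falls below the PMF group's.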

    Federated Learning on Heterogeneous Data using Chest CT

    Large data have accelerated advances in AI. While it is well known that population differences arising from genetics, sex, race, diet, and various environmental factors contribute significantly to disease, AI studies in medicine have largely focused on locoregional patient cohorts with less diverse data sources. This limitation stems from barriers to large-scale data sharing in medicine and ethical concerns over data privacy. Federated learning (FL) is one potential pathway for AI development that enables learning across hospitals without sharing data. In this study, we show the results of various FL strategies on one of the largest and most diverse COVID-19 chest CT datasets: 21 participating hospitals across five continents, comprising >10,000 patients with >1 million images. We present three techniques: Federated Averaging (FedAvg), Incremental Institutional Learning (IIL), and Cyclical Incremental Institutional Learning (CIIL). We also propose an FL strategy that leverages synthetically generated data to overcome class imbalances and data size disparities across centers. We show that FL can achieve performance comparable to Centralized Data Sharing (CDS) while maintaining high performance across sites with small, underrepresented data. We investigate the strengths and weaknesses of all technical approaches on this heterogeneous dataset, including robustness to non-independent and identically distributed (non-IID) diversity of data. We also describe the sources of data heterogeneity, such as age, sex, and site location, in the context of FL, and show how, even among correctly labeled populations, disparities can arise due to these biases.
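    Of the three techniques, FedAvg has the simplest aggregation rule: each round, sites train locally and the server averages their model parameters weighted by local dataset size, so no patient data leave any hospital. A minimal sketch of that aggregation step (the two-site setup and single-parameter "model" below are hypothetical):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One FedAvg aggregation round: average each parameter tensor
    across clients, weighted by local dataset size.
    `client_weights` is a list (one entry per site) of lists of
    np.ndarray parameter tensors, all with matching shapes."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()  # n_k / n, the per-site weights
    return [
        sum(c * layer for c, layer in zip(coeffs, layers))
        for layers in zip(*client_weights)
    ]

# Two hypothetical hospitals with a one-parameter model:
# weighted mean = 0.25 * 1.0 + 0.75 * 3.0 = 2.5
site_a = [np.array([1.0])]
site_b = [np.array([3.0])]
global_w = fed_avg([site_a, site_b], client_sizes=[100, 300])
```

The size weighting is what lets larger cohorts contribute proportionally; the synthetic-data strategy described above targets exactly the case where those size disparities would otherwise drown out small sites.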

    Deep COVID DeteCT: an international experience on COVID-19 lung detection and prognosis using chest CT

    Coronavirus disease 2019 (COVID-19) presents open questions in how we clinically diagnose and assess disease course. Recently, chest computed tomography (CT) has shown utility for COVID-19 diagnosis. In this study, we developed Deep COVID DeteCT (DCD), a deep learning convolutional neural network (CNN) that uses the entire chest CT volume to automatically distinguish COVID-19 (COVID+) from non-COVID-19 (COVID−) pneumonia and normal controls. We discuss training strategies and differences in performance across 13 international institutions and 8 countries. The inclusion of non-China sites in training significantly improved classification performance, with areas under the curve (AUCs) and accuracies above 0.8 on most test sites. Furthermore, using available follow-up scans, we investigate methods to track patient disease course and predict prognosis.
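    The AUC figures reported above have a simple rank interpretation: the probability that a randomly chosen COVID+ scan receives a higher model score than a randomly chosen COVID− one (ties counting half). A minimal sketch of that pairwise computation, with toy scores that are not from the study:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the fraction of positive-negative
    pairs ranked correctly by the model score (ties count 0.5)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Toy COVID+ vs COVID- model scores: 8 of 9 pairs ranked correctly
toy_auc = auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])  # 8/9 ≈ 0.889
```

An AUC above 0.8 therefore means that for more than 80% of COVID+/COVID− scan pairs, the network scores the COVID+ scan higher.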

    Virtual reality tumor navigated robotic radical prostatectomy by using three-dimensional reconstructed multiparametric prostate MRI and 68Ga-PSMA PET/CT images: a useful tool to guide the robotic surgery?

    Objectives: To evaluate the use and benefits of tumor navigation during robot-assisted radical prostatectomy (RARP). Patients and Methods: The borders of the visible tumor(s) and the surrounding structures were marked on multiparametric prostate magnetic resonance imaging (mpMRI) and 68Ga-labeled prostate-specific membrane antigen positron emission tomography/computed tomography (68Ga-PSMA PET/CT) images. Three-dimensional (3D) reconstructions of the images were created and transferred to virtual reality (VR) headsets and the Da Vinci surgical robot via TilePro. The images were used as a guide during RARP procedures in five cases. Indocyanine green (ICG)-guided pelvic lymph node dissection (n = 2) and the Martini-Klinik NeuroSAFE technique (n = 2) were also applied. Results: Mean patient age was 60.6 ± 3.7 years (range, 56-66). All VR models were finalized with the agreement of a radiologist, urologist, nuclear medicine physician, and engineer. The surgeon examined the images before surgery. All VR models were found very useful, particularly in pT3 disease. Pathological stages included pT2N0 (n = 1), pT3aN0 (n = 1), pT3aN1 (n = 2), and pT3bN1 (n = 1). Positive surgical margins (SMs) occurred in two patients with extensive disease (pT3aN1 and pT3bN1), in whom tumor occupied 30% and 50% of the prostate volume, respectively. Mean estimated blood loss was 150 ± 86.6 cc (range, 100-300). Mean follow-up was 3.4 ± 1.7 months (range, 2-6). No complications occurred in any patient during the perioperative (0-30 days) or postoperative (30-90 days) period. Conclusions: 3D-reconstructed VR models based on mpMRI and 68Ga-PSMA PET/CT images can be accurately prepared and effectively applied during RARP and might be a useful tool for tumor navigation. The images show prostate tumors and anatomy and might guide the console surgeon. This is a promising new technology that needs further study and validation.