
    Decentralized collaborative multi-institutional PET attenuation and scatter correction using federated deep learning

    Purpose: Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, which remain challenging in PET-only and PET/MRI systems. These can be effectively tackled via deep learning (DL) methods. However, trustworthy and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, forming a large collective, centralized dataset poses significant challenges. In this work, we aimed to develop a DL-based model in a multicenter setting, without direct sharing of data, using federated learning (FL) for AC/SC of PET images.

    Methods: Non-attenuation/scatter-corrected and CT-based attenuation/scatter-corrected (CT-ASC) 18F-FDG PET images of 300 patients were enrolled in this study. The dataset comprised 6 different centers, each contributing 50 patients, with scanner, image acquisition, and reconstruction protocols varying across the centers. CT-based ASC PET images served as the standard reference. All images were reviewed to include only high-quality and artifact-free PET images. Both corrected and uncorrected PET images were converted to standardized uptake values (SUVs). We used a modified nested U-Net utilizing residual U-blocks in a U-shaped architecture. We evaluated two FL models, namely sequential (FL-SQ) and parallel (FL-PL), and compared their performance with the baseline centralized (CZ) learning model, wherein the data were pooled on one server, as well as with center-based (CB) models, where a model was built and evaluated separately for each center. Data from each center were divided into training (30 patients), validation (10 patients), and test (10 patients) sets. Final evaluations and reports were performed on 60 patients (10 patients from each center).

    Results: In terms of percent SUV absolute relative error (ARE%), both the FL-SQ (CI: 12.21–14.81%) and FL-PL (CI: 11.82–13.84%) models demonstrated excellent agreement with the centralized framework (CI: 10.32–12.00%), while FL-based algorithms improved model performance by over 11% compared to the CB training strategy (CI: 22.34–26.10%). Furthermore, the Mann–Whitney test between the different strategies revealed no significant differences between CZ and FL-based algorithms (p-value > 0.05) in center-categorized mode. At the same time, a significant difference was observed between the different training approaches on the overall dataset (p-value < 0.05). In addition, voxel-wise comparison with respect to the reference CT-ASC exhibited similar performance for images predicted by CZ (R² = 0.94), FL-SQ (R² = 0.93), and FL-PL (R² = 0.92), while the CB model achieved a far lower coefficient of determination (R² = 0.74). Despite the strong correlations between CZ and FL-based methods with respect to the reference CT-ASC, a slight underestimation of predicted voxel values was observed.

    Conclusion: Deep learning-based models provide promising results toward quantitative PET image reconstruction. Specifically, we developed two FL models and compared their performance with center-based and centralized models. The proposed FL-based models achieved higher performance than the center-based models, comparable with the centralized models. Our work provides strong empirical evidence that the FL framework can fully benefit from the generalizability and robustness of DL models used for AC/SC in PET, while obviating the need for direct sharing of datasets between clinical imaging centers.
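
    The abstract does not specify how the parallel (FL-PL) aggregation is implemented; the sketch below shows a minimal FedAvg-style parallel federated training loop in PyTorch. The TinyDenoiser stand-in model, the synthetic center_loaders, and the sample-size-weighted averaging rule are assumptions for illustration only, not the paper's modified nested U-Net pipeline.

```python
# Minimal FedAvg-style sketch of parallel federated training (FL-PL).
# Hypothetical stand-in model and synthetic data; the paper used a modified
# nested U-Net with residual U-blocks, which is not reproduced here.
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class TinyDenoiser(nn.Module):          # stand-in for the nested U-Net
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def local_update(global_state, loader, epochs=1, lr=1e-3):
    """Train a copy of the global model on one center's local data."""
    model = TinyDenoiser()
    model.load_state_dict(global_state)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()               # SUV-domain absolute error
    for _ in range(epochs):
        for nac, ct_asc in loader:      # uncorrected input -> CT-ASC target
            opt.zero_grad()
            loss = loss_fn(model(nac), ct_asc)
            loss.backward()
            opt.step()
    return model.state_dict(), len(loader.dataset)

def fed_avg(states_and_sizes):
    """Sample-size-weighted average of the client state dicts."""
    total = sum(n for _, n in states_and_sizes)
    avg = copy.deepcopy(states_and_sizes[0][0])
    for key in avg:
        avg[key] = sum(s[key].detach().float() * (n / total)
                       for s, n in states_and_sizes)
    return avg

# Synthetic per-center datasets (6 centers, tiny volumes), illustration only.
center_loaders = [
    DataLoader(TensorDataset(torch.rand(4, 1, 16, 16, 16),
                             torch.rand(4, 1, 16, 16, 16)), batch_size=2)
    for _ in range(6)
]

global_model = TinyDenoiser()
for rnd in range(3):                    # communication rounds
    results = [local_update(global_model.state_dict(), dl)
               for dl in center_loaders]
    global_model.load_state_dict(fed_avg(results))
```

    In a sequential (FL-SQ) variant, the global weights would presumably be passed from one center to the next within a round rather than averaged in parallel; the abstract does not give those details.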

    Multiview classification and dimensionality reduction of scalp and intracranial EEG data through tensor factorisation

    Electroencephalography (EEG) signals arise as a mixture of various neural processes that occur in different spatial, frequency, and temporal locations. In classification paradigms, algorithms are developed that can distinguish between these processes. In this work, we apply tensor factorisation to a set of EEG data from a group of epileptic patients and factorise the data into three modes: space, time, and frequency, with each mode containing a number of components or signatures. We train separate classifiers on various feature sets corresponding to complementary combinations of those modes and components and test the classification accuracy of each set. The relative influence of the respective spatial, temporal, or frequency signatures on the classification accuracy can then be analysed and useful interpretations can be made. Additionally, we show that tensor factorisation enables dimensionality reduction, by evaluating the classification performance with regard to the number of mode components and by rejecting components with insignificant contribution to the classification accuracy.
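
    As an illustration of this kind of pipeline (not the authors' exact method), the sketch below applies a CP/PARAFAC factorisation to a trials × channels × frequencies × times EEG tensor with tensorly and trains a classifier on the trial-mode components. The extra trial mode, the rank, the synthetic data, and the logistic-regression classifier are assumptions made so the example is self-contained.

```python
# Illustrative pipeline (assumed details): CP/PARAFAC factorisation of an
# EEG tensor and classification on the resulting trial-mode components.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_freqs, n_times = 60, 19, 12, 128
X = rng.standard_normal((n_trials, n_channels, n_freqs, n_times))
y = rng.integers(0, 2, size=n_trials)          # hypothetical binary labels

rank = 5                                        # number of components/signatures
weights, factors = parafac(tl.tensor(X), rank=rank, n_iter_max=200)
trial_sig, spatial_sig, freq_sig, time_sig = factors   # one factor matrix per mode

# The trial-mode factor matrix serves as a low-dimensional feature set:
# n_trials x rank instead of n_channels x n_freqs x n_times per trial.
features = tl.to_numpy(trial_sig)
acc = cross_val_score(LogisticRegression(max_iter=1000), features, y, cv=5).mean()
print(f"rank={rank}  CV accuracy={acc:.2f}")

# Dimensionality reduction: reject components whose removal does not hurt accuracy.
for drop in range(rank):
    kept = np.delete(features, drop, axis=1)
    acc_k = cross_val_score(LogisticRegression(max_iter=1000), kept, y, cv=5).mean()
    print(f"without component {drop}: CV accuracy={acc_k:.2f}")
```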

    Using PCR-SSCP technique to investigate polymorphism of leptin gene in Kermani sheep

    Identifying genes that affect energy balance, milk production, and feed intake is of growing interest to animal-breeding researchers. In Iran, despite the existence of rich animal resources, few efforts have been made to identify the genes controlling these traits. Identifying candidate genes affecting economically important traits could therefore substantially assist sheep breeding in the country. In this study, to investigate polymorphism of the leptin gene, blood samples were collected from 120 male and female Kermani sheep at the Shahr-e Babak breeding station. After DNA extraction with a standard extraction kit, the polymerase chain reaction was used to amplify a 275 bp fragment of the third exon of this gene. Single-strand conformation polymorphism (SSCP) of the PCR products was then determined, and the banding patterns of the leptin gene were obtained using acrylamide gel electrophoresis and silver nitrate staining. Ten banding patterns (A/A, C/C, A/B, A/C, A/B/C, A/B/E, A/B/F, A/C/F, A/B/D/E, and A/B/C/F) were observed for the leptin gene in the studied samples, indicating a high level of polymorphism in the leptin gene of Kermani sheep.

    Energy Infrastructure Development Breakout Panel

    This panel addressed the current challenges facing the country's energy infrastructure and discussed possible solutions.

    Decentralized Distributed Multi-institutional PET Image Segmentation Using a Federated Deep Learning Framework

    PURPOSE: The generalizability and trustworthiness of deep learning (DL)-based algorithms depend on the size and heterogeneity of training datasets. However, because of patient privacy concerns and ethical and legal issues, sharing medical images between different centers is restricted. Our objective was to build a federated DL-based framework for PET image segmentation utilizing a multicentric dataset and to compare its performance with the centralized DL approach.

    METHODS: PET images from 405 head and neck cancer patients from 9 different centers formed the basis of this study. All tumors were segmented manually. PET images converted to SUV maps were resampled to isotropic voxels (3 × 3 × 3 mm³) and then normalized. PET image subvolumes (12 × 12 × 12 cm³) consisting of whole tumors and background were analyzed. Data from each center were divided into train/validation (80% of patients) and test (20% of patients) sets. The modified R2U-Net was used as the core DL model. A parallel federated DL model was developed and compared with the centralized approach, where the datasets are pooled on one server. Segmentation metrics, including the Dice similarity and Jaccard coefficients, as well as percent relative errors (RE%) of SUVpeak, SUVmean, SUVmedian, SUVmax, metabolic tumor volume, and total lesion glycolysis, were computed against the manual delineations.

    RESULTS: The performance of the centralized and federated DL methods was nearly identical for the segmentation metrics: Dice (0.84 ± 0.06 vs 0.84 ± 0.05) and Jaccard (0.73 ± 0.08 vs 0.73 ± 0.07). For quantitative PET parameters, we obtained comparable RE% for SUVmean (6.43% ± 4.72% vs 6.61% ± 5.42%), metabolic tumor volume (12.2% ± 16.2% vs 12.1% ± 15.89%), and total lesion glycolysis (6.93% ± 9.6% vs 7.07% ± 9.85%), and negligible RE% for SUVmax and SUVpeak. No significant differences in performance (P > 0.05) were observed between the 2 frameworks (centralized vs federated).

    CONCLUSION: The developed federated DL model achieved quantitative performance comparable to the centralized DL model. Federated DL models could provide robust and generalizable segmentation while addressing patient privacy and legal and ethical issues in clinical data sharing.
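
    For reference, the segmentation and quantitative metrics named in the abstract (Dice, Jaccard, RE% of SUV statistics, metabolic tumor volume, total lesion glycolysis) can be computed from a manual and a predicted binary mask over an SUV map roughly as sketched below. The array names, the synthetic data, and the 3 × 3 × 3 mm voxel volume are assumptions; SUVpeak, which requires averaging over a roughly 1 cm³ sphere around the hottest voxel, is omitted.

```python
# Sketch of the evaluation metrics named in the abstract, computed from a
# manual (reference) and a predicted binary tumor mask over an SUV map.
# Array names and the 3 x 3 x 3 mm voxel volume are assumptions.
import numpy as np

def dice_jaccard(ref, pred):
    inter = np.logical_and(ref, pred).sum()
    dice = 2 * inter / (ref.sum() + pred.sum())
    jaccard = inter / np.logical_or(ref, pred).sum()
    return dice, jaccard

def re_percent(ref_val, pred_val):
    """Percent relative error of a quantitative parameter."""
    return 100.0 * (pred_val - ref_val) / ref_val

def suv_stats(suv, mask, voxel_ml=0.027):        # 3 x 3 x 3 mm = 0.027 ml
    vals = suv[mask]
    mtv = mask.sum() * voxel_ml                  # metabolic tumor volume (ml)
    return {
        "SUVmean": vals.mean(),
        "SUVmedian": np.median(vals),
        "SUVmax": vals.max(),
        "MTV": mtv,
        "TLG": vals.mean() * mtv,                # total lesion glycolysis
    }

# Synthetic example: a cubic reference lesion and a slightly shifted prediction.
rng = np.random.default_rng(1)
suv = rng.gamma(2.0, 1.5, size=(40, 40, 40))
ref = np.zeros_like(suv, dtype=bool);  ref[15:25, 15:25, 15:25] = True
pred = np.zeros_like(suv, dtype=bool); pred[16:26, 15:25, 15:25] = True

dice, jac = dice_jaccard(ref, pred)
ref_stats, pred_stats = suv_stats(suv, ref), suv_stats(suv, pred)
errs = {k: re_percent(ref_stats[k], pred_stats[k]) for k in ref_stats}
print(f"Dice={dice:.2f}  Jaccard={jac:.2f}")
print({k: round(e, 2) for k, e in errs.items()})
```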