6 research outputs found

    Digital Finite-Field for Data Coding and Error Correction in GF(2^m)

    Abstract: Data coding and encoding standards require high-performance error detection and correction algorithms. This paper presents the design of error detection and correction based on the Galois field GF(2^m), binary polynomials, and the extraction of their roots. We start from [...]
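    To make the field-arithmetic idea concrete, the following is a minimal Python sketch of multiplication in GF(2^m), reduced modulo an irreducible polynomial. The field size (m = 8) and the AES reduction polynomial 0x11B are illustrative assumptions; the paper's actual field parameters and coding scheme are not reproduced here.

    def gf_mul(a: int, b: int, m: int = 8, irreducible: int = 0x11B) -> int:
        """Multiply two elements of GF(2^m).

        Elements are integers whose bits are polynomial coefficients over GF(2).
        `irreducible` is the reduction polynomial; 0x11B (x^8 + x^4 + x^3 + x + 1)
        is the AES choice for GF(2^8), used here purely as an example.
        """
        result = 0
        for _ in range(m):
            if b & 1:
                result ^= a              # addition in GF(2) is XOR
            b >>= 1
            a <<= 1
            if a & (1 << m):             # reduce when the degree reaches m
                a ^= irreducible
        return result

    # 0x57 * 0x83 = 0xC1 in AES's GF(2^8), a standard worked example
    print(hex(gf_mul(0x57, 0x83)))       # -> 0xc1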

    Deep learning-based fully automated Z-axis coverage range definition from scout scans to eliminate overscanning in chest CT imaging

    Background: Despite the prevalence of chest CT in the clinic, concerns remain about unoptimized protocols delivering high radiation doses to patients. This study aimed to assess the additional radiation dose associated with overscanning in chest CT and to develop an automated deep learning-assisted scan range selection technique to reduce the radiation dose to patients. Results: A significant overscanning range (31 ± 24 mm) was observed in the clinical setting for over 95% of the cases. The average Dice coefficient for lung segmentation was 0.96 and 0.97 for anterior–posterior (AP) and lateral projections, respectively. Considering the exact lung coverage as the ground truth, and AP and lateral projections as input, the DL-based approach resulted in errors of 0.08 ± 1.46 and −1.5 ± 4.1 mm in the superior and inferior directions, respectively. In contrast, the error on external scout views was −0.7 ± 4.08 and 0.01 ± 14.97 mm in the superior and inferior directions, respectively. The effective dose (ED) reduction achieved by automated scan range selection was 21% in the test group. The evaluation of a large multi-centric chest CT dataset revealed an unnecessary ED of more than 2 mSv per scan and a 67% increase in the thyroid absorbed dose. Conclusion: The proposed DL-based solution outperformed previous automatic methods with acceptable accuracy, even in complicated and challenging cases. The generalizability of the model was demonstrated by fine-tuning it on AP scout views and achieving acceptable results. The method can reduce the unoptimized dose to patients by excluding unnecessary organs from the field of view.
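    As a toy illustration of the Dice coefficient used above to evaluate lung segmentation on the scout projections, the Python snippet below computes Dice overlap between a predicted and a reference binary mask. The array names and toy masks are assumptions for demonstration, not the authors' pipeline.

    import numpy as np

    def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
        """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        denom = pred.sum() + ref.sum()
        if denom == 0:
            return 1.0                    # both masks empty: perfect agreement
        return 2.0 * np.logical_and(pred, ref).sum() / denom

    # Toy 2D masks standing in for an AP scout-view lung segmentation
    pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
    ref = np.zeros((4, 4), dtype=bool); ref[1:3, 1:4] = True
    print(round(dice_coefficient(pred, ref), 3))   # -> 0.8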

    Decentralized collaborative multi-institutional PET attenuation and scatter correction using federated deep learning

    Purpose: Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, which remain challenging in PET-only and PET/MRI systems. These can be effectively tackled via deep learning (DL) methods. However, trustworthy and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, forming a large collective, centralized dataset poses significant challenges. In this work, we aimed to develop a DL-based model in a multicenter setting without direct sharing of data, using federated learning (FL) for AC/SC of PET images. Methods: Non-attenuation/scatter-corrected and CT-based attenuation/scatter-corrected (CT-ASC) 18F-FDG PET images of 300 patients were enrolled in this study. The dataset consisted of 6 different centers, each with 50 patients, with scanner, image acquisition, and reconstruction protocols varying across the centers. CT-based ASC PET images served as the standard reference. All images were reviewed to include only high-quality and artifact-free PET images. Both corrected and uncorrected PET images were converted to standardized uptake values (SUVs). We used a modified nested U-Net utilizing residual U-blocks in a U-shaped architecture. We evaluated two FL models, namely sequential (FL-SQ) and parallel (FL-PL), and compared their performance with the baseline centralized (CZ) learning model, in which the data were pooled on one server, as well as with center-based (CB) models, in which a model was built and evaluated separately for each center. Data from each center were divided into training (30 patients), validation (10 patients), and test (10 patients) sets. Final evaluations and reports were performed on 60 patients (10 patients from each center). Results: In terms of percent SUV absolute relative error (ARE%), both the FL-SQ (CI: 12.21-14.81%) and FL-PL (CI: 11.82-13.84%) models demonstrated excellent agreement with the centralized framework (CI: 10.32-12.00%), while FL-based algorithms improved model performance by over 11% compared to the CB training strategy (CI: 22.34-26.10%). Furthermore, the Mann-Whitney test between different strategies revealed no significant differences between CZ and FL-based algorithms (p-value > 0.05) in center-categorized mode. At the same time, a significant difference was observed between the different training approaches on the overall dataset (p-value < 0.05). In addition, voxel-wise comparison with respect to the reference CT-ASC exhibited similar performance for images predicted by CZ (R2 = 0.94), FL-SQ (R2 = 0.93), and FL-PL (R2 = 0.92), while the CB model achieved a far lower coefficient of determination (R2 = 0.74). Despite the strong correlations between CZ and FL-based methods compared to reference CT-ASC, a slight underestimation of predicted voxel values was observed. Conclusion: Deep learning-based models provide promising results toward quantitative PET image reconstruction. Specifically, we developed two FL models and compared their performance with center-based and centralized models. The proposed FL-based models achieved higher performance compared to center-based models, comparable with centralized models. Our work provided strong empirical evidence that the FL framework [...]
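    The parallel federated strategy (FL-PL) aggregates per-center model updates without pooling images. The following is a minimal, hedged sketch of FedAvg-style weighted averaging of parameter dictionaries; the weighting by number of training patients and the variable names are assumptions, not the authors' implementation.

    import numpy as np

    def federated_average(center_weights, center_sizes):
        """Weighted average of per-center parameter dictionaries.

        center_weights: list of {parameter_name: np.ndarray}, one dict per center.
        center_sizes:   number of training patients per center, used as weights.
        """
        total = float(sum(center_sizes))
        return {
            name: sum(w[name] * (n / total)
                      for w, n in zip(center_weights, center_sizes))
            for name in center_weights[0]
        }

    # Toy example: three centers, one shared weight tensor each
    centers = [{"conv1": np.full((2, 2), v)} for v in (1.0, 2.0, 3.0)]
    print(federated_average(centers, [30, 30, 30])["conv1"])   # all entries 2.0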

    Decentralized Distributed Multi-institutional PET Image Segmentation Using a Federated Deep Learning Framework

    Purpose: The generalizability and trustworthiness of deep learning (DL)-based algorithms depend on the size and heterogeneity of training datasets. However, because of patient privacy concerns and ethical and legal issues, sharing medical images between different centers is restricted. Our objective is to build a federated DL-based framework for PET image segmentation utilizing a multicentric dataset and to compare its performance with the centralized DL approach. Methods: PET images from 405 head and neck cancer patients from 9 different centers formed the basis of this study. All tumors were segmented manually. PET images converted to SUV maps were resampled to isotropic voxels (3 × 3 × 3 mm³) and then normalized. PET image subvolumes (12 × 12 × 12 cm³) consisting of whole tumors and background were analyzed. Data from each center were divided into train/validation (80% of patients) and test sets (20% of patients). The modified R2U-Net was used as the core DL model. A parallel federated DL model was developed and compared with the centralized approach, in which the datasets are pooled on one server. Segmentation metrics, including the Dice similarity and Jaccard coefficients, and percent relative errors (RE%) of SUVpeak, SUVmean, SUVmedian, SUVmax, metabolic tumor volume, and total lesion glycolysis were computed and compared with manual delineations. Results: The performance of the centralized versus federated DL methods was nearly identical for segmentation metrics: Dice (0.84 ± 0.06 vs 0.84 ± 0.05) and Jaccard (0.73 ± 0.08 vs 0.73 ± 0.07). For quantitative PET parameters, we obtained comparable RE% for SUVmean (6.43% ± 4.72% vs 6.61% ± 5.42%), metabolic tumor volume (12.2% ± 16.2% vs 12.1% ± 15.89%), and total lesion glycolysis (6.93% ± 9.6% vs 7.07% ± 9.85%) and negligible RE% for SUVmax and SUVpeak. No significant differences in performance (P > 0.05) between the 2 frameworks (centralized vs federated) were observed. Conclusion: The developed federated DL model achieved quantitative performance comparable to that of the centralized DL model. Federated DL models could provide robust and generalizable segmentation while addressing patient privacy and legal and ethical issues in clinical data sharing.
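    The snippet below is an illustrative computation of the segmentation and quantitative metrics reported above (Dice, Jaccard, and RE% of SUVmean and metabolic tumor volume) from a predicted and a manual mask over an SUV map. The toy arrays are placeholders for demonstration only; the voxel volume cancels in the MTV relative error and is therefore omitted.

    import numpy as np

    def segmentation_metrics(pred, ref, suv_map):
        """Dice, Jaccard, and RE% of SUVmean and MTV for two binary tumor masks."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        inter = np.logical_and(pred, ref).sum()
        union = np.logical_or(pred, ref).sum()
        return {
            "dice": 2.0 * inter / (pred.sum() + ref.sum()),
            "jaccard": inter / union,
            "SUVmean_RE%": 100.0 * (suv_map[pred].mean() - suv_map[ref].mean())
                           / suv_map[ref].mean(),
            "MTV_RE%": 100.0 * (pred.sum() - ref.sum()) / ref.sum(),
        }

    # Toy 3D SUV map with overlapping predicted and manual masks
    rng = np.random.default_rng(0)
    suv = rng.uniform(1.0, 10.0, size=(8, 8, 8))
    ref = np.zeros((8, 8, 8), dtype=bool); ref[2:6, 2:6, 2:6] = True
    pred = np.zeros((8, 8, 8), dtype=bool); pred[2:6, 2:6, 2:7] = True
    print(segmentation_metrics(pred, ref, suv))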

    Differentiation of COVID‐19 pneumonia from other lung diseases using CT radiomic features and machine learning: A large multicentric cohort study

    To derive and validate an effective machine learning and radiomics‐based model to differentiate COVID‐19 pneumonia from other lung diseases using a large multi‐centric dataset. In this retrospective study, we collected 19 private and five public datasets of chest CT images, totaling 26 307 images (15 148 COVID‐19; 9657 other lung diseases including non‐COVID‐19 pneumonia, lung cancer, and pulmonary embolism; 1502 normal cases). We tested 96 machine learning‐based models by combining each of four feature selectors (FSs) and eight dimensionality reduction techniques with eight classifiers. We trained and evaluated our models using three different strategies: #1, the whole dataset (15 148 COVID‐19 and 11 159 other); #2, a new dataset after excluding healthy individuals and COVID‐19 patients who did not have RT‐PCR results (12 419 COVID‐19 and 8278 other); and #3, only non‐COVID‐19 pneumonia patients and a random sample of COVID‐19 patients (3000 COVID‐19 and 2582 others) to provide balanced classes. The best models were chosen by the one‐standard‐deviation rule in 10‐fold cross‐validation and evaluated on the held‐out test sets for reporting. In strategy #1, the Relief FS combined with the random forest (RF) classifier resulted in the highest performance (accuracy = 0.96, AUC = 0.99, sensitivity = 0.98, specificity = 0.94, PPV = 0.96, and NPV = 0.96). In strategy #2, the Recursive Feature Elimination (RFE) FS and RF classifier combination resulted in the highest performance (accuracy = 0.97, AUC = 0.99, sensitivity = 0.98, specificity = 0.95, PPV = 0.96, NPV = 0.98). Finally, in strategy #3, the ANOVA FS and RF classifier combination resulted in the highest performance (accuracy = 0.94, AUC = 0.98, sensitivity = 0.96, specificity = 0.93, PPV = 0.93, NPV = 0.96). Lung radiomic features combined with machine learning algorithms can enable the effective diagnosis of COVID‐19 pneumonia in CT images without the use of additional tests.
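    One of the best-performing combinations reported above (ANOVA F-test feature selection with a random forest classifier) can be expressed as a scikit-learn pipeline evaluated with 10-fold cross-validation. The sketch below uses synthetic features and labels as placeholders; the number of selected features and the hyperparameters are assumptions, not the study's settings.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 107))    # placeholder radiomic features per image
    y = rng.integers(0, 2, size=200)   # 1 = COVID-19 pneumonia, 0 = other disease

    model = Pipeline([
        ("select", SelectKBest(f_classif, k=20)),   # ANOVA F-test feature selection
        ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ])

    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"AUC = {auc.mean():.2f} ± {auc.std():.2f}")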

    COVID-19 prognostic modeling using CT radiomic features and machine learning algorithms: Analysis of a multi-institutional dataset of 14,339 patients

    Background: We aimed to analyze the prognostic power of CT-based radiomics models using data of 14,339 COVID-19 patients. Methods: Whole-lung segmentations were performed automatically using a deep learning-based model to extract 107 intensity and texture radiomics features. We used four feature selection algorithms and seven classifiers. We evaluated the models using ten different splitting and cross-validation strategies, including non-harmonized and ComBat-harmonized datasets. The sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were reported. Results: In the test dataset (4,301 patients) consisting of CT and/or RT-PCR positive cases, an AUC, sensitivity, and specificity of 0.83 ± 0.01 (CI95%: 0.81-0.85), 0.81, and 0.72, respectively, were obtained by the ANOVA feature selector + Random Forest (RF) classifier. Similar results were achieved in the RT-PCR-only positive test set (3,644 patients). In the ComBat-harmonized dataset, the Relief feature selector + RF classifier resulted in the highest performance, reaching an AUC of 0.83 ± 0.01 (CI95%: 0.81-0.85), with a sensitivity and specificity of 0.77 and 0.74, respectively. ComBat harmonization did not yield a statistically significant improvement compared to the non-harmonized dataset. In leave-one-center-out validation, the combination of the ANOVA feature selector and RF classifier resulted in the highest performance. Conclusion: Lung CT radiomics features can be used for robust prognostic modeling of COVID-19. The predictive power of the proposed CT radiomics model is more reliable when using a large multicentric heterogeneous dataset, and may be used prospectively in clinical settings to manage COVID-19 patients.
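    The leave-one-center-out strategy mentioned above can be sketched with scikit-learn's LeaveOneGroupOut splitter, using each patient's acquiring center as the group label. The feature matrix, labels, and center assignments below are synthetic placeholders, not the study's data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import LeaveOneGroupOut

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 107))          # placeholder radiomics features
    y = rng.integers(0, 2, size=300)         # placeholder outcome label
    centers = rng.integers(0, 5, size=300)   # acquiring center per patient

    aucs = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=centers):
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        aucs.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))
    print(f"Leave-one-center-out AUC = {np.mean(aucs):.2f}")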