2 research outputs found

    Radiomics and Deep Features: Robust Classification of Brain Hemorrhages and Reproducibility Analysis Using a 3D Autoencoder Neural Network

    This study evaluates the reproducibility of machine learning models that integrate radiomics and deep features (features extracted from a 3D autoencoder neural network) to classify various brain hemorrhages effectively. Using a dataset of 720 patients, we extracted 215 radiomics features (RFs) and 15,680 deep features (DFs) from CT brain images. With rigorous screening based on Intraclass Correlation Coefficient thresholds (>0.75), we identified 135 RFs and 1054 DFs for analysis. Feature selection techniques such as Boruta, Recursive Feature Elimination (RFE), XGBoost, and ExtraTreesClassifier were utilized alongside 11 classifiers, including AdaBoost, CatBoost, Decision Trees, LightGBM, Logistic Regression, Naive Bayes, Neural Networks, Random Forest, Support Vector Machines (SVM), and k-Nearest Neighbors (k-NN). Evaluation metrics included Area Under the Curve (AUC), Accuracy (ACC), Sensitivity (SEN), and F1-score. The model evaluation involved hyperparameter optimization, a 70:30 train–test split, and bootstrapping, further validated with the Wilcoxon signed-rank test and q-values. Notably, DFs showed higher accuracy. In the case of RFs, the Boruta + SVM combination emerged as the optimal model for AUC, ACC, and SEN, while XGBoost + Random Forest excelled in F1-score. Specifically, RFs achieved AUC, ACC, SEN, and F1-scores of 0.89, 0.85, 0.82, and 0.80, respectively. Among DFs, the ExtraTreesClassifier + Naive Bayes combination demonstrated remarkable performance, attaining an AUC of 0.96, ACC of 0.93, SEN of 0.92, and an F1-score of 0.92. Distinguished models in the RF category included SVM with Boruta, Logistic Regression with XGBoost, SVM with ExtraTreesClassifier, CatBoost with XGBoost, and Random Forest with XGBoost, each yielding 42 significant q-values. Among the DFs, ExtraTreesClassifier + Naive Bayes, ExtraTreesClassifier + Random Forest, and Boruta + k-NN exhibited robustness, with 43, 43, and 41 significant q-values, respectively.
This investigation underscores the potential of synergizing DFs with machine learning models to serve as valuable screening tools, thereby enhancing the interpretation of head CT scans for patients with brain hemorrhages.
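The selector-plus-classifier pairings described above can be illustrated with a minimal sketch of the best-performing DF combination (ExtraTreesClassifier-based feature selection followed by Naive Bayes), using a 70:30 train-test split. Synthetic data stands in for the 1054 deep features, and all names and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in: 720 patients, 1054 candidate deep features
X, y = make_classification(n_samples=720, n_features=1054,
                           n_informative=20, random_state=0)

# 70:30 stratified train-test split, as in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

# Feature selection driven by ExtraTreesClassifier importances
selector = SelectFromModel(ExtraTreesClassifier(n_estimators=200,
                                                random_state=0))
X_tr_sel = selector.fit_transform(X_tr, y_tr)
X_te_sel = selector.transform(X_te)

# Naive Bayes classifier on the reduced feature set
clf = GaussianNB().fit(X_tr_sel, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te_sel)[:, 1])
acc = accuracy_score(y_te, clf.predict(X_te_sel))
```

Swapping in Boruta or RFE for the selector and any of the 11 listed classifiers follows the same fit-select-classify pattern.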

    Differential privacy preserved federated learning for prognostic modeling in COVID-19 patients using large multi-institutional chest CT dataset

    Background Notwithstanding the encouraging results of previous studies reporting on the efficiency of deep learning (DL) in COVID-19 prognostication, clinical adoption of the developed methodology remains limited. To overcome this limitation, we set out to predict the prognosis of a large multi-institutional cohort of patients with COVID-19 using a DL-based model. Purpose This study aimed to evaluate the performance of deep privacy-preserving federated learning (DPFL) in predicting COVID-19 outcomes using chest CT images. Methods After applying inclusion and exclusion criteria, 3055 patients from 19 centers, including 1599 alive and 1456 deceased, were enrolled in this study. Data from all centers were split (randomly with stratification respective to each center and class) into a training/validation set (70%/10%) and a hold-out test set (20%). For the DL model, feature extraction was performed on 2D slices, and averaging was performed at the final layer to construct a 3D model for each scan. The DenseNet model was used for feature extraction. The model was developed using centralized and FL approaches. For FL, we employed DPFL approaches. Membership inference attack was also evaluated in the FL strategy. For model evaluation, different metrics were reported in the hold-out test sets. In addition, models trained in the two scenarios, centralized and FL, were compared using the DeLong test for statistical differences. Results The centralized model achieved an accuracy of 0.76, while the DPFL model had an accuracy of 0.75. Both the centralized and DPFL models achieved a specificity of 0.77. The centralized model achieved a sensitivity of 0.74, while the DPFL model had a sensitivity of 0.73. The centralized and DPFL models achieved mean AUCs of 0.82 (95% CI: 0.79–0.85) and 0.81 (95% CI: 0.77–0.84), respectively.
The DeLong test showed no statistically significant difference between the two models (p-value = 0.98). The AUC values for the membership inference attacks fluctuated between 0.49 and 0.51, with an average of 0.50 ± 0.003 and a 95% CI for the mean AUC of 0.500 to 0.501. Conclusion The performance of the proposed model was comparable to centralized models while operating on large and heterogeneous multi-institutional datasets. In addition, the model was resistant to inference attacks, ensuring the privacy of shared data during the training process.
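The core DPFL idea, aggregating client model updates while bounding and noising each client's contribution, can be sketched as one round of clipped, Gaussian-noised federated averaging. This is a minimal illustration of the mechanism, not the authors' implementation; the clipping norm, noise multiplier, and the use of plain weight vectors are all assumptions made for brevity.

```python
import numpy as np

def dp_federated_average(client_weights, clip_norm=1.0,
                         noise_multiplier=0.5, rng=None):
    """One FedAvg round with per-client clipping and Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Clip each client's update to bound its L2 sensitivity
    clipped = []
    for w in client_weights:
        norm = np.linalg.norm(w)
        clipped.append(w * min(1.0, clip_norm / max(norm, 1e-12)))
    # Average the clipped updates across clients
    avg = np.mean(clipped, axis=0)
    # Add calibrated Gaussian noise to the aggregate
    sigma = noise_multiplier * clip_norm / len(client_weights)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# 19 simulated client updates, mirroring the 19-center cohort
clients = [np.random.default_rng(i).normal(size=8) for i in range(19)]
new_global = dp_federated_average(clients)
```

Because the aggregate only ever sees clipped, noised updates, an attacker observing the global model gains little about any single center's data, which is consistent with the near-chance (AUC around 0.50) membership inference results reported above.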