5 research outputs found
Monkeypox detection using deep neural networks
BACKGROUND: In May 2022, the World Health Organization (WHO) European Region announced an atypical Monkeypox epidemic in response to reports of numerous cases in member countries where the illness is not endemic. This raised concerns about the worldwide spread of the disease. The experience with Coronavirus Disease 2019 (COVID-19) has heightened awareness of pandemics among researchers and health authorities.
METHODS: Deep Neural Networks (DNNs) have shown promising performance in detecting COVID-19 and predicting its outcomes, so researchers have begun applying similar methods to detect Monkeypox. In this study, we use a dataset of skin images covering four classes: Monkeypox, Chickenpox, Measles, and Normal cases. We develop seven DNN models to identify Monkeypox from these images, implementing both a two-class and a four-class scenario.
RESULTS: Our proposed DenseNet201-based architecture performs best, with Accuracy = 97.63%, F1-Score = 90.51%, and Area Under Curve (AUC) = 94.27% in the two-class scenario, and Accuracy = 95.18%, F1-Score = 89.61%, and AUC = 92.06% in the four-class scenario. Compared with previous studies using similar scenarios, our proposed model demonstrates superior performance, particularly on the F1-Score metric. For transparency and explainability, Local Interpretable Model-Agnostic Explanations (LIME) and Gradient-weighted Class Activation Mapping (Grad-CAM) were applied to interpret the results. These techniques provide insight into the decision-making process and thereby increase clinicians' trust.
CONCLUSION: The DenseNet201 model outperforms the other models on the confusion-matrix metrics in both scenarios. A significant contribution of this study is the use of LIME and Grad-CAM to identify the affected skin regions and assess their significance in diagnosing disease from skin images. Incorporating these techniques improves our understanding of the infected regions and their relevance in distinguishing Monkeypox from similar diseases. Our proposed model can serve as a valuable auxiliary tool for diagnosing Monkeypox and distinguishing it from related conditions.
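The abstract names DenseNet201 as the best-performing backbone. Below is a minimal transfer-learning sketch of such an architecture in TensorFlow/Keras; the input size, frozen backbone, classification head, and training settings are illustrative assumptions, since the abstract does not specify the exact design.

```python
# Minimal sketch of a DenseNet201-based transfer-learning classifier,
# assuming TensorFlow/Keras and 224x224 RGB skin images; the head layers
# and hyperparameters are illustrative, not the paper's exact design.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

NUM_CLASSES = 4  # Monkeypox, Chickenpox, Measles, Normal (four-class scenario)

# Pretrained ImageNet backbone without its classification head.
base = DenseNet201(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone for initial training

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical datasets
```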
Early Prediction of Alzheimer’s Disease Using Interpretable Machine Learning Algorithms
Introduction: Alzheimer’s disease is one of the most common neurodegenerative diseases in adults. Its progressive nature causes widespread damage to the brain, and early diagnosis makes it possible to manage the disease and slow its progression effectively.
Method: In this study, a dataset related to the early prediction of Alzheimer’s disease was used. The Spark framework was used for data management, and three machine learning algorithms, namely Naïve Bayes, Decision Tree, and Artificial Neural Networks, were implemented with tuned hyperparameters and compared. To prevent overfitting and measure the efficiency of the models, 5-fold cross-validation was used. Furthermore, an interpretability method was applied to explain the machine learning black-box models.
Results: The decision tree and artificial neural network models achieved 98.61% accuracy and 98.60% F1-Score in the Spark framework, whether run on one computer or on three. The interpretability technique identified the features most important to the artificial neural network’s decisions. In addition, the training time of the proposed models was measured under both setups, and training on three computers was 35.95% faster than on a single computer.
Conclusion: As the number of Alzheimer’s disease patients grows worldwide, the need for a machine-learning-based decision support system that can predict the disease early from large amounts of data becomes ever more pressing. The machine learning models proposed in this study can therefore serve as an interpretable auxiliary tool that helps clinicians in the decision-making process for the early prediction of Alzheimer’s disease.
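As a rough illustration of the setup described above, the following PySpark sketch trains a decision tree with 5-fold cross-validation in the Spark framework; the file path, column names, and parameter grid are hypothetical stand-ins, not the study's actual configuration.

```python
# Minimal sketch of 5-fold cross-validated decision-tree training in Spark
# (PySpark MLlib), assuming a numeric feature table with a "label" column;
# the path, columns, and grid below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

spark = SparkSession.builder.appName("alzheimers-early-prediction").getOrCreate()
df = spark.read.csv("alzheimers.csv", header=True, inferSchema=True)  # hypothetical path

# Assemble all non-label columns into a single feature vector.
features = [c for c in df.columns if c != "label"]
data = VectorAssembler(inputCols=features, outputCol="features").transform(df)

dt = DecisionTreeClassifier(labelCol="label", featuresCol="features")
grid = ParamGridBuilder().addGrid(dt.maxDepth, [3, 5, 10]).build()
evaluator = MulticlassClassificationEvaluator(labelCol="label", metricName="f1")

# 5-fold cross-validation, as in the study, guards against overfitting.
cv = CrossValidator(estimator=dt, estimatorParamMaps=grid,
                    evaluator=evaluator, numFolds=5)
model = cv.fit(data)
print("best mean F1 across folds:", max(model.avgMetrics))
```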
COVID-19 diagnosis from routine blood tests using artificial intelligence techniques
Coronavirus disease (COVID-19) is a unique worldwide pandemic. With new mutations of the virus carrying higher transmission rates, it is imperative to diagnose positive cases as quickly and accurately as possible, so a fast, accurate, and automatic system for COVID-19 diagnosis can be very useful for clinicians. In this study, seven machine learning and four deep learning models were developed to diagnose positive COVID-19 cases from three routine laboratory blood-test datasets. Three correlation coefficients, i.e., Pearson, Spearman, and Kendall, were used to examine the associations among samples. Four-fold cross-validation was used to train, validate, and test the proposed models. On all three datasets, the proposed deep neural network (DNN) model achieved the highest accuracy, precision, recall (sensitivity), specificity, F1-Score, AUC, and Matthews correlation coefficient (MCC). On average, the DNN obtained 92.11% accuracy, 84.56% specificity, and 92.20% AUC on the first dataset; 93.16% accuracy, 93.02% specificity, and 93.20% AUC on the second; and 92.5% accuracy, 85% specificity, and 92.20% AUC on the third. A statistical t-test was used to validate the results. Finally, artificial intelligence interpretation methods were used to present the most important and impactful features in the developed model. The proposed DNN model can serve as a supplementary tool for diagnosing COVID-19, quickly providing clinicians with highly accurate diagnoses of positive cases.
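The three correlation analyses mentioned above map directly onto pandas' DataFrame.corr. The sketch below shows one plausible way to run them; the file name and label column are hypothetical.

```python
# Minimal sketch of Pearson, Spearman, and Kendall correlation analyses,
# assuming the blood-test data load as a pandas DataFrame with a binary
# "covid_positive" column; the file and column names are illustrative.
import pandas as pd

df = pd.read_csv("routine_blood_tests.csv")  # hypothetical path

# Correlation of each blood-test feature with the diagnosis label,
# under each of the three coefficients used in the study.
for method in ("pearson", "spearman", "kendall"):
    corr = df.corr(method=method, numeric_only=True)["covid_positive"]
    print(f"{method} correlations with diagnosis:")
    print(corr.drop("covid_positive").sort_values(key=abs, ascending=False).head())
```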
Survival prediction of glioblastoma patients using modern deep learning and machine learning techniques
In this study, we used data from the Surveillance, Epidemiology, and End Results (SEER) database to predict glioblastoma patients’ survival outcomes. To assess dataset skewness and feature importance, we applied Pearson’s second coefficient of skewness and the Ordinary Least Squares method, respectively. Using two sampling strategies, holdout and five-fold cross-validation, we developed five machine learning (ML) models alongside a feed-forward deep neural network (DNN) for multiclass classification and regression prediction of glioblastoma patient survival. After balancing, the classification and regression datasets contained 46,340 and 28,573 samples, respectively. Shapley Additive Explanations (SHAP) were then used to explain the decision-making process of the best model. In both classification and regression tasks, and across both sampling strategies, the DNN consistently outperformed the ML models. Notably, accuracy was 90.25% and 90.22% for holdout and five-fold cross-validation, respectively, while the corresponding R² values were 0.6565 and 0.6622. SHAP analysis revealed age at diagnosis as the most influential feature in the DNN’s survival predictions. These findings suggest that the DNN holds promise as a practical auxiliary tool for clinicians, aiding optimal decision-making about treatment and care trajectories for glioblastoma patients.
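Pearson’s second coefficient of skewness, used above to assess dataset skewness, is defined as 3(mean − median)/standard deviation. A small self-contained Python illustration, with purely synthetic data:

```python
# Pearson's second coefficient of skewness: 3 * (mean - median) / std.
# The sample below is synthetic and only illustrates the sign convention.
import numpy as np

def pearson_second_skewness(x) -> float:
    """Return 3 * (mean - median) / sample standard deviation."""
    x = np.asarray(x, dtype=float)
    return 3.0 * (x.mean() - np.median(x)) / x.std(ddof=1)

# A right-skewed sample (long right tail) yields a positive coefficient.
sample = np.random.default_rng(0).exponential(scale=2.0, size=1000)
print(pearson_second_skewness(sample))  # > 0 for right-skewed data
```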
Application of machine learning techniques for predicting survival in ovarian cancer
Background: Ovarian cancer is the fifth leading cause of cancer mortality among women in the United States; it is also known as the forgotten cancer or silent disease. The survival of ovarian cancer patients depends on several factors, including the treatment process and the prognosis.
Methods: The ovarian cancer patients’ dataset was compiled from the Surveillance, Epidemiology, and End Results (SEER) database. With the help of a clinician, the dataset was curated and the most relevant features were selected. Pearson’s second coefficient of skewness was used to evaluate the skewness of the dataset, and the Pearson correlation coefficient was used to investigate associations between features. A statistical test was used to evaluate the significance of the features. Six Machine Learning (ML) models, including K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Adaptive Boosting (AdaBoost), and Extreme Gradient Boosting (XGBoost), were implemented for survival prediction in both classification and regression approaches. An interpretable method, Shapley Additive Explanations (SHAP), was applied to clarify the decision-making process and determine the importance of each feature in prediction. Additionally, DTs of the RF model were displayed to show how the model predicts the survival intervals.
Results: Our results show that RF (Accuracy = 88.72%, AUC = 82.38%) and XGBoost (Root Mean Squared Error (RMSE) = 20.61%, R² = 0.4667) have the best performance for the classification and regression approaches, respectively. Furthermore, using the SHAP method along with the extracted DTs of the RF model, the most important features in the dataset were identified: histologic type ICD-O-3, chemotherapy recode, year of diagnosis, age at diagnosis, tumor stage, and grade are the most important determinants of survival.
Conclusion: To the best of our knowledge, this is the first study to develop various ML models for predicting ovarian cancer patients’ survival on the SEER database in both classification and regression approaches. These ML algorithms also achieve more accurate results and outperform statistical methods. Furthermore, it is the first study to use the SHAP method to increase the confidence and transparency of the proposed models’ predictions for clinicians. Moreover, the developed models, as an automated auxiliary tool, can help clinicians better understand the estimated survival as well as the important features that affect it.
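As a hedged illustration of the regression arm of this pipeline, the sketch below fits an XGBoost regressor and ranks features with SHAP’s TreeExplainer; the synthetic data and all names are stand-ins for the curated SEER features, not the study’s actual data or settings.

```python
# Minimal sketch: XGBoost survival-months regression explained with SHAP.
# The data here are synthetic stand-ins for the curated SEER features.
import numpy as np
import shap
import xgboost as xgb
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 6))                               # stand-in features
y = 12 + 5 * X[:, 0] - 3 * X[:, 2] + rng.normal(size=800)   # stand-in survival months

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("R^2:", r2_score(y_te, pred))

# For regression, TreeExplainer returns one (n_samples, n_features) array;
# the mean absolute SHAP value per feature ranks its influence.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```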