Anomaly detection system based on deep learning for cyber physical systems on sensory and network datasets
Cyber-physical systems (CPSs), computing systems integrated with physical devices, are widely used in areas such as manufacturing, traffic control, and energy. The integration of CPSs with networks has expanded the range of cyber threats. Intrusion detection systems (IDSs) use signature-based and machine-learning-based techniques to protect networks against threats in CPSs. Water purification plants are among the most important CPSs. In this context, some research uses a dataset obtained from Secure Water Treatment (SWaT), an operational water treatment testbed. These works usually focus solely on the sensory dataset and omit the analysis of the network dataset, or they focus on the network information and omit the sensory data. In this paper, we work on both datasets. We have created IDSs using five traditional machine learning techniques (decision tree, support vector machine (SVM), random forest, naïve Bayes, and artificial neural network) along with two deep methods (deep neural network and convolutional neural network). We experimented with these IDSs on three different datasets obtained from SWaT: network data, sensory data, and Modbus data. The proposed methods achieve high accuracy on all datasets, especially on the sensory (99.9%) and Modbus (95%) data, and demonstrate the superiority of random forest and deep learning methods over the others.
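To make the model-comparison step concrete, here is a minimal sketch of how the five traditional classifiers could be trained and scored with scikit-learn. The file name and the "label" column are hypothetical placeholders for a SWaT-style sensory export, not the paper's actual data layout.

```python
# Hedged sketch: compare classic classifiers on a SWaT-style sensory dataset.
# "swat_sensory.csv" and the "label" column are hypothetical names.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("swat_sensory.csv")                      # hypothetical export
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "naive Bayes": GaussianNB(),
    "ANN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: accuracy = {accuracy_score(y_test, model.predict(X_test)):.4f}")
```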
Monkeypox detection using deep neural networks
BACKGROUND: In May 2022, the World Health Organization (WHO) European Region announced an atypical Monkeypox epidemic in response to reports of numerous cases in some member countries unrelated to those where the illness is endemic. This issue has raised concerns about the worldwide spread of this disease. The experience with Coronavirus Disease 2019 (COVID-19) has increased awareness about pandemics among researchers and health authorities. METHODS: Deep Neural Networks (DNNs) have shown promising performance in detecting COVID-19 and predicting its outcomes. As a result, researchers have begun applying similar methods to detect Monkeypox disease. In this study, we utilize a dataset comprising skin images of three diseases (Monkeypox, Chickenpox, and Measles) as well as Normal cases. We develop seven DNN models to identify Monkeypox from these images. Two scenarios, with two classes and four classes, are implemented. RESULTS: The results show that our proposed DenseNet201-based architecture has the best performance, with Accuracy = 97.63%, F1-Score = 90.51%, and Area Under Curve (AUC) = 94.27% in the two-class scenario, and Accuracy = 95.18%, F1-Score = 89.61%, and AUC = 92.06% in the four-class scenario. Comparing our study with previous studies under similar scenarios shows that our proposed model demonstrates superior performance, particularly in terms of the F1-Score metric. For the sake of transparency and explainability, Local Interpretable Model-Agnostic Explanations (LIME) and Gradient-weighted Class Activation Mapping (Grad-CAM) were developed to interpret the results. These techniques aim to provide insights into the decision-making process, thereby increasing the trust of clinicians. CONCLUSION: The DenseNet201 model outperforms the other models in terms of the confusion-matrix metrics, regardless of the scenario. One significant accomplishment of this study is the utilization of LIME and Grad-CAM to identify the affected areas and assess their significance in diagnosing diseases from skin images. By incorporating these techniques, we enhance our understanding of the infected regions and their relevance in distinguishing Monkeypox from other similar diseases. Our proposed model can serve as a valuable auxiliary tool for diagnosing Monkeypox and distinguishing it from other related conditions.
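A minimal transfer-learning sketch of a DenseNet201-based classifier is shown below, using torchvision. The frozen backbone, head size, and class count are illustrative assumptions, not the paper's exact training configuration.

```python
# Hedged sketch: DenseNet201 transfer learning for the two-class scenario.
import torch.nn as nn
from torchvision import models

model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                    # freeze the pretrained backbone
# Replace the head: 2 outputs for Monkeypox vs. other; use 4 for the
# four-class scenario (Monkeypox/Chickenpox/Measles/Normal).
model.classifier = nn.Linear(model.classifier.in_features, 2)
# Train with a standard cross-entropy loop over a DataLoader of skin images.
```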
Learning Bilingual Word Embedding Mappings with Similar Words in Related Languages Using GAN
Cross-lingual word embeddings place words from different languages in the same vector space. They enable reasoning about semantics and comparing word meanings across languages and in multilingual contexts, which is necessary for bilingual lexicon induction, machine translation, and cross-lingual information retrieval. This paper proposes an efficient approach to learning a bilingual transform mapping between monolingual word embeddings in language pairs. We chose ten languages from three different language families and downloaded their latest Wikipedia dumps. Then, after some pre-processing steps and using word2vec, we produced word embeddings for them. We selected seven language pairs from the chosen languages. Since the selected languages are related, they share thousands of identically spelled words with similar meanings. With these identically spelled words and the word embedding models of each language, we created training, validation, and test sets for the language pairs. We then used a generative adversarial network (GAN) to learn the transform mapping between the word embeddings of the source and target languages. The average accuracy of our proposed method across all language pairs is 71.34%. The highest accuracy, 78.32%, is achieved for the Turkish-Azerbaijani language pair, which is noticeably higher than prior methods.
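The adversarial-mapping idea can be sketched as follows: a linear generator maps source embeddings toward the target space, while a discriminator tries to tell mapped-source vectors from real target vectors. Layer sizes and learning rates here are illustrative guesses in the spirit of MUSE-style methods, not the paper's exact setup.

```python
# Hedged sketch of GAN-based embedding mapping; src/tgt batches are (n, d)
# tensors of monolingual word2vec vectors.
import torch
import torch.nn as nn

d = 300
mapping = nn.Linear(d, d, bias=False)          # generator: linear transform W
discriminator = nn.Sequential(                 # mapped-source vs. target classifier
    nn.Linear(d, 1024), nn.LeakyReLU(0.2), nn.Linear(1024, 1), nn.Sigmoid())
bce = nn.BCELoss()
opt_g = torch.optim.SGD(mapping.parameters(), lr=0.1)
opt_d = torch.optim.SGD(discriminator.parameters(), lr=0.1)

def train_step(src_batch, tgt_batch):
    # 1) discriminator: real targets -> 1, mapped sources -> 0
    opt_d.zero_grad()
    fake = mapping(src_batch).detach()
    loss_d = bce(discriminator(tgt_batch), torch.ones(len(tgt_batch), 1)) + \
             bce(discriminator(fake), torch.zeros(len(fake), 1))
    loss_d.backward(); opt_d.step()
    # 2) generator: fool the discriminator into scoring mapped sources as targets
    opt_g.zero_grad()
    loss_g = bce(discriminator(mapping(src_batch)), torch.ones(len(src_batch), 1))
    loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

After adversarial training, the learned W can translate a source word's vector into the target space, where its nearest neighbors form the induced bilingual lexicon.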
Multivariate Feature Extraction for Prediction of Future Gene Expression Profile
Introduction: The features of a cell can be extracted from its gene expression profile. If the gene expression profiles of future descendant cells are predicted, the features of the future cells are also predicted. The objective of this study was to design an artificial neural network to predict gene expression profiles of descendant cells that will be generated by division/differentiation of hematopoietic stem cells.
Method: The developed neural network takes the parent hematopoietic stem cell's gene expression profile as input and generates the gene expression profiles of its future descendant cells. A temporal attention mechanism was proposed to encode the main time series, and a spatial attention mechanism was provided to encode the secondary time series.
Results: To make an acceptable prediction, the gene expression profiles of at least four initial division/differentiation steps must be known. The designed neural network surpasses the existing neural networks in terms of prediction accuracy and the number of correctly predicted division/differentiation steps. The proposed scheme can predict hundreds of division/differentiation steps. The proposed scheme's error in predicting 1, 4, 16, 64, and 128 division/differentiation steps was 3.04, 3.76, 5.5, 7.83, and 11.06 percent, respectively.
Conclusion: Based on the gene expression profile of a parent hematopoietic stem cell, the gene expression profiles of its descendants can be predicted for hundreds of division/differentiation steps, and, if necessary, solutions can be sought to counter future genetic disorders.
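An illustrative sketch of the dual-attention idea follows: temporal attention weights the time steps of the main series, and spatial attention weights the secondary series, before a decoder predicts the next profile. All layer sizes and tensor shapes are assumptions, not the paper's architecture.

```python
# Hedged sketch of temporal + spatial attention over expression-profile series.
import torch
import torch.nn as nn

class DualAttentionEncoder(nn.Module):
    def __init__(self, n_genes, hidden=128):
        super().__init__()
        self.temporal_score = nn.Linear(n_genes, 1)   # scores each main-series step
        self.spatial_score = nn.Linear(n_genes, 1)    # scores each secondary series
        self.decoder = nn.Sequential(
            nn.Linear(2 * n_genes, hidden), nn.ReLU(), nn.Linear(hidden, n_genes))

    def forward(self, main_seq, secondary):
        # main_seq: (batch, T, n_genes) profiles over division/differentiation steps
        # secondary: (batch, S, n_genes) profiles of related lineages
        t_w = torch.softmax(self.temporal_score(main_seq), dim=1)   # (batch, T, 1)
        s_w = torch.softmax(self.spatial_score(secondary), dim=1)   # (batch, S, 1)
        main_ctx = (t_w * main_seq).sum(dim=1)                      # temporal context
        sec_ctx = (s_w * secondary).sum(dim=1)                      # spatial context
        return self.decoder(torch.cat([main_ctx, sec_ctx], dim=-1)) # next profile
```

Predicting many steps ahead would then amount to feeding each predicted profile back as the newest element of the main series.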
A novel artificial neural network improves multivariate feature extraction in predicting correlated multivariate time series
The existing multivariate time series prediction schemes are inefficient at extracting intermediate features. This paper proposes an artificial neural network called Feature Path Efficient Multivariate Time Series Prediction (FPEMTSP) to predict the next element of the main time series in the presence of several secondary time series. We propose generating all possible combinations of the secondary time series and extracting multivariate features by taking the Cartesian product of the main and secondary time series features. Our calculations show that FPEMTSP's complexity and network size are acceptable. We have included a few internal parameters in FPEMTSP that can be configured to improve the prediction accuracy and adjust the network size. We trained and evaluated FPEMTSP using two public datasets. Our evaluation revealed the optimal values for the internal parameters and showed that FPEMTSP surpasses the existing schemes in terms of prediction accuracy and the number of correctly predicted steps.
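The Cartesian-product feature step can be sketched as an outer product that pairs every main-series feature with every secondary-series feature. Shapes and names below are illustrative, not FPEMTSP's exact internals.

```python
# Hedged sketch: Cartesian (outer) product of main and secondary features.
import torch

def cartesian_features(main_feats, secondary_feats):
    # main_feats: (batch, m) features extracted from the main series
    # secondary_feats: (batch, s) features of one secondary-series combination
    pairs = torch.einsum("bm,bs->bms", main_feats, secondary_feats)  # all (main, secondary) pairs
    return pairs.flatten(start_dim=1)           # (batch, m*s) multivariate features
```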
Application of machine learning techniques for predicting survival in ovarian cancer
Background: Ovarian cancer is the fifth leading cause of mortality among women in the United States. Ovarian cancer is also known as the forgotten cancer or the silent disease. The survival of ovarian cancer patients depends on several factors, including the treatment process and the prognosis. Methods: The ovarian cancer patients' dataset is compiled from the Surveillance, Epidemiology, and End Results (SEER) database. With the help of a clinician, the dataset is curated and the most relevant features are selected. Pearson's second coefficient of skewness test is used to evaluate the skewness of the dataset. The Pearson correlation coefficient is also used to investigate the associations between features. Statistical tests are utilized to evaluate the significance of the features. Six Machine Learning (ML) models, including K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Adaptive Boosting (AdaBoost), and Extreme Gradient Boosting (XGBoost), are implemented for survival prediction in both classification and regression approaches. An interpretable method, Shapley Additive Explanations (SHAP), is applied to clarify the decision-making process and determine the importance of each feature in prediction. Additionally, the DTs of the RF model are displayed to show how the model predicts the survival intervals. Results: Our results show that RF (Accuracy = 88.72%, AUC = 82.38%) and XGBoost (Root Mean Square Error (RMSE) = 20.61%, R² = 0.4667) have the best performance for the classification and regression approaches, respectively. Furthermore, using the SHAP method along with the extracted DTs of the RF model, the most important features in the dataset are identified. Histologic type ICD-O-3, chemotherapy recode, year of diagnosis, age at diagnosis, tumor stage, and grade are the most important determinant factors in survival prediction. Conclusion: To the best of our knowledge, our study is the first to develop various ML models for predicting ovarian cancer patients' survival on the SEER database in both classification and regression approaches. These ML algorithms also achieve more accurate results and outperform statistical methods. Furthermore, our study is the first to use the SHAP method to increase the confidence and transparency of the proposed models' predictions for clinicians. Moreover, our developed models, as an automated auxiliary tool, can help clinicians to better understand the estimated survival as well as the important features that affect survival.
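The classification-plus-SHAP step could look like the sketch below. The file name and "target" column are hypothetical stand-ins for the curated SEER features and the survival-interval label.

```python
# Hedged sketch: random forest survival-interval classification with SHAP.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("seer_ovarian_curated.csv")    # hypothetical curated SEER export
X, y = df.drop(columns=["target"]), df["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("accuracy:", rf.score(X_test, y_test))

explainer = shap.TreeExplainer(rf)              # exact SHAP values for tree ensembles
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)          # ranks features by impact on predictions
```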
Resource recommender system performance improvement by exploring similar tags and detecting tags communities
Many researchers have used tag information to improve the performance of recommendation techniques in recommender systems. Examining users' tags helps to capture their interests and leads to more accurate recommendations. Since user-defined tags are chosen freely and without any restrictions, problems arise in determining their exact meaning and the similarity between tags. On the other hand, using thesauruses and ontologies to find the meaning of tags is not very efficient, because tags are defined freely by users and many datasets involve different languages. Therefore, this article uses mathematical and statistical methods to determine lexical similarity, and a tag co-occurrence solution to assign semantic similarity. In addition, because users' interests change over time, this article considers the time of tag assignments when computing co-occurrence for tag similarity. A graph is then created based on these similarities. To model the users' interests, communities of tags are determined using community detection methods, and recommendations are made based on the tag communities and the similarity between resources, as sketched after this abstract. The performance of the proposed method has been evaluated using two criteria, precision and recall, on the "Delicious" dataset. The evaluation results show that the precision and recall of the proposed method have significantly improved compared to the other methods.
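Here is a minimal sketch of the time-aware tag graph and community step, assuming `assignments` is a list of (user, resource, tag, timestamp) tuples from a Delicious-style dataset; the exponential time-decay weight is an illustrative choice, not the article's exact formula.

```python
# Hedged sketch: time-weighted tag co-occurrence graph + community detection.
import math
from collections import defaultdict
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def build_tag_graph(assignments, now, half_life=180.0):
    by_resource = defaultdict(list)
    for user, resource, tag, ts in assignments:
        by_resource[resource].append((tag, ts))
    G = nx.Graph()
    for tags in by_resource.values():
        for (t1, ts1), (t2, ts2) in combinations(tags, 2):
            if t1 == t2:
                continue
            age_days = (now - max(ts1, ts2)) / 86400.0
            w = math.exp(-age_days / half_life)   # recent co-occurrences weigh more
            prev = G.get_edge_data(t1, t2, {"weight": 0.0})["weight"]
            G.add_edge(t1, t2, weight=prev + w)
    return G

# Tag communities then stand in for user interests, e.g.:
# communities = greedy_modularity_communities(build_tag_graph(assignments, now),
#                                             weight="weight")
```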
Early Prediction of Alzheimer’s Disease Using Interpretable Machine Learning Algorithms
Introduction: Alzheimer’s disease is one of the most common neurodegenerative diseases in adults. The progressive nature of Alzheimer’s disease causes widespread damage to the brain, and early diagnosis can help manage the disease and slow its progression effectively.
Method: In this study, a dataset related to the early prediction of Alzheimer’s disease was used. The Spark framework was used for data management, and three machine learning algorithms, Naïve Bayes, Decision Tree, and Artificial Neural Networks, were implemented with the best hyperparameters and compared. To prevent overfitting and measure the efficiency of the models, 5-fold cross-validation was utilized. Furthermore, an interpretability method was used to explain the machine learning black-box models.
Results: The decision tree and artificial neural network models obtained 98.61% accuracy and a 98.60% F1-Score in Spark clusters of one or three computers. The important features in the decision-making process of the artificial neural network were identified using the interpretability technique. In addition, the computational time required for training the proposed models was measured under different setups, and using multiple computers was 35.95% faster than using a single computer.
Conclusion: As the number of Alzheimer’s disease patients around the world increases, the need for a decision support system based on machine learning algorithms that can predict the disease early from huge amounts of data is felt more than ever. Therefore, the machine learning models proposed in this study for early prediction of Alzheimer’s disease can help clinicians as an interpretable auxiliary tool in the decision-making process.
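The Spark pipeline with 5-fold cross-validation could be sketched as follows; the file name, column names, and hyperparameter grid are hypothetical, and only the decision-tree model is shown.

```python
# Hedged sketch: Spark ML pipeline with 5-fold cross-validation.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

spark = SparkSession.builder.appName("alzheimers-early-prediction").getOrCreate()
df = spark.read.csv("alzheimers.csv", header=True, inferSchema=True)  # hypothetical dataset

feature_cols = [c for c in df.columns if c != "label"]
dt = DecisionTreeClassifier(labelCol="label", featuresCol="features")
pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=feature_cols, outputCol="features"),
    dt,
])
grid = ParamGridBuilder().addGrid(dt.maxDepth, [3, 5, 10]).build()
cv = CrossValidator(estimator=pipeline,
                    estimatorParamMaps=grid,
                    evaluator=MulticlassClassificationEvaluator(
                        labelCol="label", metricName="f1"),
                    numFolds=5)                  # 5-fold CV as in the study
model = cv.fit(df)                               # best model by cross-validated F1
```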
COVID-19 diagnosis from routine blood tests using artificial intelligence techniques
Coronavirus disease (COVID-19) is a unique worldwide pandemic. With new mutations of the virus that have higher transmission rates, it is imperative to diagnose positive cases as quickly and accurately as possible. Therefore, a fast, accurate, and automatic system for COVID-19 diagnosis can be very useful for clinicians. In this study, seven machine learning and four deep learning models were presented to diagnose positive cases of COVID-19 from three routine laboratory blood test datasets. Three correlation coefficient methods, i.e., Pearson, Spearman, and Kendall, were used to examine the relevance among samples. Four-fold cross-validation was used to train, validate, and test the proposed models. On all three datasets, the proposed deep neural network (DNN) model achieved the highest values of accuracy, precision, recall (sensitivity), specificity, F1-Score, AUC, and MCC. On average, an accuracy of 92.11%, a specificity of 84.56%, and an AUC of 92.20% were obtained on the first dataset. On the second dataset, on average, an accuracy of 93.16%, a specificity of 93.02%, and an AUC of 93.20% were obtained. Finally, on the third dataset, an accuracy of 92.5%, a specificity of 85%, and an AUC of 92.20% were obtained on average. In this study, a statistical t-test was used to validate the results. Finally, using artificial intelligence interpretation methods, the important and impactful features in the developed model were presented. The proposed DNN model can be used as a supplementary tool for diagnosing COVID-19, quickly providing clinicians with highly accurate diagnoses of positive cases in a timely manner.
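The correlation screening step maps directly onto pandas, as in the sketch below. The file name and the binary "covid" label column are hypothetical stand-ins for the routine blood-test data.

```python
# Hedged sketch: rank blood-test features by Pearson/Spearman/Kendall
# correlation with a binary 0/1 "covid" label (names are hypothetical).
import pandas as pd

df = pd.read_csv("blood_tests.csv")             # hypothetical routine blood-test dataset
for method in ("pearson", "spearman", "kendall"):
    corr = df.corr(method=method, numeric_only=True)["covid"].drop("covid")
    print(f"top features by |{method}| correlation with the label:")
    print(corr.abs().sort_values(ascending=False).head(5))
```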