47 research outputs found

    Neutrosophic rule-based prediction system for toxicity effects assessment of biotransformed hepatic drugs

    Measuring toxicity is an important step in drug development. However, the current experimental methods used to estimate drug toxicity are expensive and require high computational effort, making them unsuitable for large-scale evaluation of drug toxicity. As a consequence, there is high demand for computational models that can predict drug toxicity risks. In this paper, we used a dataset of 553 drugs that are biotransformed in the liver
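    To make the rule-based idea concrete, here is a minimal sketch of neutrosophic rule evaluation in Python. It assumes the standard single-valued neutrosophic triple (truth, indeterminacy, falsity) and a common score function; the descriptor names, rules, and threshold are hypothetical illustrations, not taken from the paper.

```python
def neutrosophic_score(t, i, f):
    """Common score function for a single-valued neutrosophic number (T, I, F)."""
    return (2 + t - i - f) / 3

# Hypothetical rules: each maps a drug-descriptor predicate to a (T, I, F)
# assessment of toxicity. Descriptor names are illustrative, not the paper's.
rules = [
    (lambda d: d["logP"] > 3.0,              (0.7, 0.2, 0.1)),
    (lambda d: d["mol_weight"] > 500,        (0.6, 0.3, 0.2)),
    (lambda d: d["hepatic_clearance"] < 0.3, (0.8, 0.1, 0.1)),
]

def predict_toxicity(drug, threshold=0.6):
    fired = [tif for cond, tif in rules if cond(drug)]
    if not fired:
        return "non-toxic"
    # Optimistic aggregation: take the strongest fired rule's score.
    score = max(neutrosophic_score(*tif) for tif in fired)
    return "toxic" if score >= threshold else "non-toxic"

print(predict_toxicity({"logP": 3.5, "mol_weight": 420, "hepatic_clearance": 0.2}))
```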

    A COMPARATIVE STUDY ON PERFORMANCE OF BASIC AND ENSEMBLE CLASSIFIERS WITH VARIOUS DATASETS

    Classification plays a critical role in machine learning (ML) systems for processing images, text, and high-dimensional data. Predicting class labels from training data is the primary goal of classification. An optimal model for a particular classification problem is chosen on the basis of the model's performance and execution time. This paper compares and analyses the performance of basic as well as ensemble classifiers using 10-fold cross-validation, and also discusses their essential concepts, advantages, and disadvantages. In this study, five basic classifiers, namely Naïve Bayes (NB), Multi-Layer Perceptron (MLP), Support Vector Machine (SVM), Decision Tree (DT), and Random Forest (RF), and the ensemble of all five classifiers, along with a few more combinations, are compared on five University of California Irvine (UCI) ML Repository datasets and a Diabetes Health Indicators dataset from the Kaggle repository. To analyse and compare the performance of the classifiers, evaluation metrics such as accuracy, recall, precision, Area Under the Curve (AUC), and F-score are used. Experimental results showed that SVM performs best on two of the six datasets (Diabetes Health Indicators and Waveform), RF performs best on the Arrhythmia, Sonar, and Tic-tac-toe datasets, and the best ensemble combination is found to be DT+SVM+RF on the Ionosphere dataset, with respective accuracies of 72.58%, 90.38%, 81.63%, 73.59%, 94.78%, and 94.01%; the proposed ensemble combinations outperformed the conventional models on a few datasets
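    As a rough illustration of the comparison protocol this abstract describes, the sketch below builds the five basic classifiers and two voting ensembles (including the DT+SVM+RF combination) in scikit-learn and scores each with 10-fold cross-validation. A bundled dataset stands in for the UCI and Kaggle data, and all hyperparameters are illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer  # stand-in for the UCI/Kaggle data
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

X, y = load_breast_cancer(return_X_y=True)

base = {
    "NB": GaussianNB(),
    "MLP": MLPClassifier(max_iter=1000),
    "SVM": SVC(probability=True),   # probability=True enables soft voting
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
}
# Ensemble of all five, plus the DT+SVM+RF combination reported best on Ionosphere.
ensembles = {
    "ALL": VotingClassifier(list(base.items()), voting="soft"),
    "DT+SVM+RF": VotingClassifier(
        [("DT", base["DT"]), ("SVM", base["SVM"]), ("RF", base["RF"])], voting="soft"
    ),
}

for name, model in {**base, **ensembles}.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")  # 10-fold CV
    print(f"{name:10s} accuracy = {scores.mean():.4f} +/- {scores.std():.4f}")
```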

    Intuitionistic Fuzzy Broad Learning System: Enhancing Robustness Against Noise and Outliers

    In the realm of data classification, the broad learning system (BLS) has proven to be a potent tool that utilizes a layer-by-layer feed-forward neural network. It consists of feature-learning and enhancement segments, working together to extract intricate features from input data. The traditional BLS treats all samples as equally significant, which makes it less robust and less effective for real-world datasets with noise and outliers. To address this issue, we propose the fuzzy BLS (F-BLS) model, which assigns a fuzzy membership value to each training point to reduce the influence of noise and outliers. In assigning the membership value, the F-BLS model considers only the distance from samples to the class center in the original feature space, without incorporating the extent of non-belongingness to a class. We further propose a novel BLS based on intuitionistic fuzzy theory (IF-BLS). The proposed IF-BLS utilizes intuitionistic fuzzy numbers based on fuzzy membership and non-membership values to assign scores to training points in the high-dimensional feature space by using a kernel function. We evaluate the performance of the proposed F-BLS and IF-BLS models on 44 UCI benchmark datasets across diverse domains. Furthermore, Gaussian noise is added to some UCI datasets to assess the robustness of the proposed F-BLS and IF-BLS models. Experimental results demonstrate the superior generalization performance of the proposed F-BLS and IF-BLS models compared to baseline models, both with and without Gaussian noise. Additionally, we implement the proposed F-BLS and IF-BLS models on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and promising results showcase the models' effectiveness in real-world applications. The proposed methods offer a promising solution to enhance the BLS framework's ability to handle noise and outliers
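    The sample-weighting idea can be sketched as follows, under stated assumptions: membership decays with distance to the class centre (in the spirit of F-BLS), and non-membership grows with the share of other-class neighbours. The formulas below are illustrative; the paper's exact kernel-space scheme differs.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestNeighbors

def if_scores(X, y, k=5, delta=1e-4):
    """Sketch of intuitionistic-fuzzy sample weighting (illustrative formulas)."""
    n = len(y)
    mu = np.empty(n)
    nu = np.empty(n)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centre = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - centre, axis=1)
        mu[idx] = 1.0 - d / (d.max() + delta)      # membership in (0, 1]
    _, knn = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    for i in range(n):
        alien = np.mean(y[knn[i, 1:]] != y[i])     # share of other-class neighbours
        nu[i] = (1.0 - mu[i]) * alien              # non-membership
    # A common IF score: high only when membership is high and the
    # neighbourhood agrees; noisy points and outliers get damped.
    return np.where(nu == 0, mu,
                    np.where(mu <= nu, 0.0, (1.0 - nu) / (2.0 - mu - nu)))

X, y = make_blobs(n_samples=200, centers=2, random_state=0)
print(if_scores(X, y)[:5].round(3))
```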

    A Risk-Based IoT Decision-Making Framework Based on Literature Review with Human Activity Recognition Case Studies

    The Internet of Things (IoT) is a key and growing technology for many critical real-life applications, where it can be used to improve decision making. The existence of several sources of uncertainty in the IoT infrastructure, however, can lead decision makers into taking inappropriate actions. The present work focuses on proposing a risk-based IoT decision-making framework in order to effectively manage uncertainties, in addition to integrating domain knowledge into the decision-making process. A structured literature review of the risks and sources of uncertainty in IoT decision-making systems is the basis for the development of the framework and the Human Activity Recognition (HAR) case studies. More specifically, as one of the main targeted challenges, the potential sources of uncertainty in an IoT framework, at different levels of abstraction, are first reviewed and then summarized. The modules included in the framework are detailed, with the main focus given to a novel risk-based analytics module, where an ensemble-based data analytics approach, called Calibrated Random Forest (CRF), is proposed to extract useful information while quantifying and managing the uncertainty associated with predictions by using confidence scores. Its output is subsequently integrated with domain-knowledge-based action rules to perform decision making in a cost-sensitive and rational manner. The proposed CRF method is first evaluated and demonstrated on a HAR scenario in a smart home environment in case study I, and is further evaluated and illustrated with a remote health monitoring scenario for a diabetes use case in case study II. The experimental results indicate that, using the framework, raw sensor data can be converted into meaningful actions despite several sources of uncertainty. The comparison of the proposed framework to existing approaches highlights the key metrics that make decision making more rational and transparent
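    One plausible reading of the risk-based analytics module is sketched below: a random forest with post-hoc probability calibration, whose calibrated confidence scores gate the action rules. The calibration method, threshold, and synthetic data are assumptions for illustration, not the paper's CRF specification.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV

# Toy stand-in for HAR sensor features (the case-study data is not public here).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_train, X_stream, y_train, _ = train_test_split(X, y, test_size=0.25, random_state=0)

# A "calibrated random forest": an RF whose scores are post-hoc calibrated so
# that predict_proba behaves like a usable confidence score.
clf = CalibratedClassifierCV(RandomForestClassifier(n_estimators=200),
                             method="isotonic", cv=5)
clf.fit(X_train, y_train)

proba = clf.predict_proba(X_stream)
conf = proba.max(axis=1)                      # calibrated confidence per prediction
pred = clf.classes_[proba.argmax(axis=1)]

THRESHOLD = 0.8  # illustrative risk threshold; tuned per action cost in practice
for p, c in zip(pred[:5], conf[:5]):
    action = "trigger action rule" if c >= THRESHOLD else "defer / gather more data"
    print(f"predicted activity {p} (confidence {c:.2f}) -> {action}")
```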

    Applying FAHP to Improve the Performance Evaluation Reliability and Validity of Software Defect Classifiers

    Today's software complexity makes developing defect-free software almost impossible. On average, billions of dollars are lost every year because of software defects in the United States alone, while the global loss is much higher. Consequently, developing classifiers to classify software modules as defective or non-defective before software release has attracted great interest in academia and the software industry alike. Although many classifiers have been proposed, none has been proven superior to the others. The major reason is that while one study shows that classifier A is better than classifier B, other research comes to the diametrically opposite conclusion. These conflicts are usually triggered when researchers report results using their preferred performance measures, such as recall and precision. Although this approach is valid, it does not examine all possible facets of classifiers' performance characteristics. Thus, the performance evaluation might improve or deteriorate if researchers choose other performance measures. As a result, software developers usually struggle to select the most suitable classifier for their projects. The goal of this dissertation is to apply the Fuzzy Analytic Hierarchy Process (FAHP), a popular multi-criteria decision-making technique, to overcome these inconsistencies in research outcomes. This evaluation framework incorporates a wider spectrum of performance measures to evaluate classifiers' performance, rather than relying on selected, preferred measures. The results show that this approach increases software developers' confidence in research outcomes, helps them avoid false conclusions, and indicates reasonable boundaries for them. We utilized 22 popular performance measures and 11 software defect classifiers. The analysis was carried out using the KNIME data-mining platform and 12 software defect datasets provided by the NASA Metrics Data Program (MDP) repository
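    A minimal sketch of the FAHP weighting step might look as follows, assuming triangular fuzzy pairwise judgements, a simplified Buckley-style geometric-mean weighting, and centroid defuzzification. The judgement matrix and classifier scores are hypothetical; the dissertation's 22-measure matrices are not reproduced here.

```python
import numpy as np

# Triangular fuzzy pairwise judgements (l, m, u) over three example measures:
# recall vs precision vs AUC. Illustrative values only.
tfn = np.array([
    [[1, 1, 1],       [2, 3, 4],   [1, 2, 3]],
    [[1/4, 1/3, 1/2], [1, 1, 1],   [1/2, 1, 2]],
    [[1/3, 1/2, 1],   [1/2, 1, 2], [1, 1, 1]],
])

# Row-wise fuzzy geometric mean, defuzzified by the centroid (l + m + u) / 3,
# then normalised to crisp criterion weights.
gmean = tfn.prod(axis=1) ** (1.0 / tfn.shape[1])
weights = gmean.sum(axis=1) / 3.0
weights /= weights.sum()

# Rank classifiers by the FAHP-weighted sum of their normalised measures
# (hypothetical scores for two classifiers).
scores = {"RF": [0.91, 0.88, 0.95], "NB": [0.85, 0.80, 0.87]}
for name, m in sorted(scores.items(), key=lambda kv: -np.dot(weights, kv[1])):
    print(f"{name}: {np.dot(weights, m):.3f}")
```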

    A Hybrid Approach for Tomato Diseases Detection at Early Stage

    In traditional farming practice, skilled people are hired to manually examine the land and detect the presence of diseases through visual inspection, but this method is ineffective. High accuracy of disease detection is one of the most important factors in crop production and in reducing crop losses. Meanwhile, the evolution of deep convolutional neural networks for image classification has rapidly improved the accuracy of object detection, classification, and recognition systems. Previous tomato detection methods based on the faster region-based convolutional neural network (Faster R-CNN) are less efficient in terms of accuracy. Researchers have used many methods to detect tomato leaf diseases, but their accuracy is not optimal. This study presents a Faster R-CNN-based deep learning model for the detection of three tomato leaf diseases (late blight, mosaic virus, and leaf septoria). The methodology presented in this paper consists of four main steps. The first step is pre-processing. In the second step, segmentation was done using fuzzy c-means. In the third step, feature extraction was performed with ResNet-50. In the fourth step, classification was performed with Faster R-CNN to detect tomato leaf diseases. Two evaluation parameters, precision and accuracy, are used to compare the proposed model with other existing approaches. The proposed model achieves the highest accuracy, 98.6%, in detecting tomato leaf diseases. In addition, the work can be extended to train the model for other types of tomato diseases, such as leaf mold and spider mites, as well as to detect diseases of other crops, such as potatoes and peanuts
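    The detection stage could be sketched roughly as below using torchvision's Faster R-CNN with a ResNet-50 FPN backbone, re-sizing the box head for the three diseases plus background. The pre-processing and fuzzy c-means segmentation steps are omitted, and the input is a random stand-in image rather than a real leaf photo.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Box head re-sized for the three tomato leaf diseases plus background.
num_classes = 4  # background + late blight + mosaic virus + leaf septoria
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# In practice the new head would now be fine-tuned on annotated leaf images;
# here we only run a forward pass on a random stand-in.
model.eval()
with torch.no_grad():
    out = model([torch.rand(3, 512, 512)])[0]   # dict of boxes, labels, scores
    print(out["boxes"].shape, out["labels"].shape, out["scores"].shape)
```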

    Intrusion Detection: Embedded Software Machine Learning and Hardware Rules Based Co-Designs

    Security of innovative technologies in future-generation networks, such as Cyber-Physical Systems (CPS) and Wi-Fi, has become a critical universal issue for individuals, the economy, enterprises, organizations, and governments. The rate of cyber-attacks has increased dramatically, and the tactics used by attackers continue to evolve and have become increasingly ingenious. Intrusion detection is one of the solutions against these attacks. One approach to designing an intrusion detection system (IDS) is software-based machine learning. Such an approach can predict and detect threats before they result in major security incidents. Moreover, despite the considerable research into machine-learning-based designs, there is still a relatively small body of literature concerned with imbalanced class distributions from the intrusion detection perspective. In addition, it is necessary to have an effective performance metric that can compare multiple multi-class as well as binary-class systems with respect to class distribution. Furthermore, detection techniques must be able to distinguish real attacks from random defects, ingrained design defects, misconfigurations of system devices, system faults, human errors, and software implementation errors. Moreover, a lightweight IDS that is small, real-time, flexible, and reconfigurable enough to be used as a permanent element of the system's security infrastructure is essential.

    The main goal of the current study is to design an effective and accurate intrusion detection framework with a minimum of features that are more discriminative and representative. Three publicly available datasets representing variant networking environments are adopted, which also reflect realistic imbalanced class distributions as well as updated attack patterns. The presented intrusion detection framework is composed of three main modules: feature selection and dimensionality reduction, handling of imbalanced class distributions, and classification. The feature selection mechanism utilizes searching algorithms and correlation-based subset evaluation techniques, whereas the dimensionality reduction part utilizes principal component analysis and an auto-encoder as an instance of deep learning. Various classifiers, including eight single-learning classifiers, four ensemble classifiers, one stacked classifier, and five imbalanced-class handling approaches, are evaluated to identify the most efficient and accurate one(s) for the proposed intrusion detection framework.

    A hardware-based approach to detecting malicious behaviors of sensors and actuators embedded in medical devices, where patient safety is critical and of the utmost importance, is additionally proposed. The idea is based on a methodology that transforms a device's behavior rules into a state machine to build a Behavior Specification Rules Monitoring (BSRM) tool for four medical devices. Simulation and synthesis results demonstrate that the BSRM tool can effectively identify the expected normal behavior of the device and detect any deviation from it. The performance of the BSRM approach has also been compared with a machine-learning-based approach to the same problem. The FPGA module of the BSRM can be embedded in medical devices as an IDS and can be further integrated with the machine-learning-based approach. The reconfigurable nature of the FPGA chip adds an extra advantage to the designed model, in which the behavior rules can be easily updated and tailored according to the requirements of the device, patient, treatment algorithm, and/or pervasive healthcare application
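    The software side of the framework might be sketched as a pipeline of dimensionality reduction, imbalance handling, and classification, for example with PCA, SMOTE, and a random forest via imbalanced-learn. The synthetic dataset, chosen components, and parameters below are illustrative stand-ins for the modules the abstract describes, not the study's actual configuration.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline  # applies the sampler only during fit

# Imbalanced toy data standing in for the intrusion datasets (attacks are rare).
X, y = make_classification(n_samples=5000, n_features=30, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ids = Pipeline([
    ("reduce", PCA(n_components=10)),        # dimensionality reduction stage
    ("balance", SMOTE(random_state=0)),      # one of several imbalance handlers
    ("classify", RandomForestClassifier()),  # one of the evaluated classifiers
])
ids.fit(X_tr, y_tr)
print(classification_report(y_te, ids.predict(X_te)))
```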

    A Methodology for Detecting Credit Card Fraud

    Fraud detection is relevant to many industries, such as banking, retail, financial services, and healthcare. Fraud detection is a set of activities undertaken to prevent the acquisition of money or property under false pretenses. With an unlimited and growing number of ways fraudsters commit fraud, detecting online fraud is difficult to achieve. This research work aims to examine feasible ways to identify credit card fraud activities that negatively impact financial institutions. In the United States, consumers lost a median of $429 to credit card fraud in 2017; according to CPO Magazine, "almost 79% of consumers who experienced credit card fraud did not suffer any financial impact whatsoever" [35]. One of the questions is: who is paying for these losses, if not the consumers? The answer is the financial institutions. According to a Federal Trade Commission report, credit card theft increased by 44.6% from 2019 to 2020, and the amount of money lost to credit card fraud in 2020 was about $149 million in total. Financial institutions should implement technology safeguards and cybersecurity measures without delay to decrease the impact of credit card fraud. To compare our chosen machine learning algorithms with existing machine learning techniques, we carried out a comparative analysis and determined which algorithm can best predict fraudulent transactions by recognizing patterns that differ from the rest. We trained our algorithms on two sampling methods (undersampling and oversampling) of the credit card fraud dataset, and the best algorithm was selected to predict frauds. The AUC score and other metrics were used to compare and contrast the results of our algorithms. The following results are concluded from our study: 1. The algorithms proposed in our study, such as Random Forest, Decision Trees, XGBoost, K-Means, Logistic Regression, and Neural Network, performed better than the machine learning algorithms researchers have used in previous studies to predict credit card fraud. 2. Our tree-based algorithms, Random Forest, Decision Trees, and XGBoost, came out as the best models, predicting credit card fraud with AUC scores of 1.00, 0.99, and 0.99, respectively. 3. The best algorithm for this study shows considerable improvement with the oversampled dataset, with an overall AUC score of 1.00
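    A hedged sketch of the sampling comparison: train one of the studied classifiers (a random forest here) on an oversampled and an undersampled version of an imbalanced training split, then compare test AUC. The synthetic data and single classifier are stand-ins for the paper's dataset and full algorithm suite.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler

# Highly imbalanced toy data standing in for the credit card fraud set.
X, y = make_classification(n_samples=20000, weights=[0.995, 0.005], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, sampler in [("oversampling", RandomOverSampler(random_state=0)),
                      ("undersampling", RandomUnderSampler(random_state=0))]:
    X_bal, y_bal = sampler.fit_resample(X_tr, y_tr)   # rebalance training data only
    rf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
    auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
    print(f"Random Forest with {name}: AUC = {auc:.3f}")
```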

    Cyber Security of Critical Infrastructures

    Critical infrastructures are vital assets for public safety, economic welfare, and the national security of countries. The vulnerabilities of critical infrastructures have increased with the widespread use of information technologies. As critical national infrastructures become more vulnerable to cyber-attacks, their protection becomes a significant issue for organizations as well as nations. The risks to continued operations, from failing to upgrade aging infrastructure or not meeting mandated regulatory regimes, are considered highly significant, given the demonstrable impact of such circumstances. Due to the rapid increase in sophisticated cyber threats targeting critical infrastructures with significant destructive effects, the cybersecurity of critical infrastructures has become an agenda item for academics, practitioners, and policymakers. A holistic view that covers technical, policy, human, and behavioural aspects is essential to handle the cybersecurity of critical infrastructures effectively. Moreover, the ability to attribute crimes to criminals is a vital element of avoiding impunity in cyberspace. In this book, both research and practical aspects of cybersecurity considerations in critical infrastructures are presented. Aligned with the interdisciplinary nature of cybersecurity, authors from academia, government, and industry have contributed 13 chapters. The issues discussed and analysed include cybersecurity training, maturity assessment frameworks, malware analysis techniques, ransomware attacks, security solutions for industrial control systems, and privacy preservation methods

    Computational Intelligence in Healthcare

    This book is a printed edition of the Special Issue Computational Intelligence in Healthcare that was published in Electronics