
    A Survey on Recent Trends of PIO and Its Variants Applied for Motion Planning of Dynamic Agents

    The Pigeon-Inspired Optimization (PIO) algorithm has gained popularity since its development because of its fast convergence and high efficiency compared with other bio-inspired algorithms. PIO models the navigation capability of homing pigeons, and continuous refinement of the algorithm is making it increasingly suitable for complex optimization problems in various fields. The main aim of this survey is to introduce the basics of PIO together with its technical advancements for the motion planning of dynamic agents. The survey also summarizes the findings and limitations of work proposed since the algorithm's development, to help researchers worldwide select a suitable algorithm, especially for motion planning. This survey could be extended to application-based studies to understand the importance of the algorithm in future research.
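The two phases of the basic PIO algorithm can be sketched in a few lines of Python. This is a minimal illustration of the standard map-and-compass and landmark operators only; the sphere objective, parameter values, and halving schedule are assumptions for the demo, not taken from any surveyed variant:

```python
import numpy as np

def pio(objective, dim=5, n_pigeons=30, t1=60, t2=30, R=0.2, seed=0):
    """Minimal Pigeon-Inspired Optimization sketch (illustrative parameters).

    Phase 1 (map-and-compass): velocities decay by exp(-R*t) while being
    pulled toward the global best position.
    Phase 2 (landmark): the flock is halved each iteration and the remaining
    pigeons move toward the fitness-weighted centre.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n_pigeons, dim))
    V = np.zeros_like(X)
    fitness = np.apply_along_axis(objective, 1, X)
    best = X[fitness.argmin()].copy()

    # Phase 1: map-and-compass operator
    for t in range(1, t1 + 1):
        V = V * np.exp(-R * t) + rng.random((n_pigeons, dim)) * (best - X)
        X = X + V
        fitness = np.apply_along_axis(objective, 1, X)
        if fitness.min() < objective(best):
            best = X[fitness.argmin()].copy()

    # Phase 2: landmark operator
    for _ in range(t2):
        order = np.argsort(np.apply_along_axis(objective, 1, X))
        X = X[order[: max(2, len(X) // 2)]]          # keep the better half
        w = 1.0 / (np.apply_along_axis(objective, 1, X) + 1e-12)
        centre = (X * w[:, None]).sum(0) / w.sum()   # fitness-weighted centre
        X = X + rng.random((len(X), dim)) * (centre - X)
        f = np.apply_along_axis(objective, 1, X)
        if f.min() < objective(best):
            best = X[f.argmin()].copy()
    return best

best = pio(lambda x: float((x ** 2).sum()))
```

In a motion-planning setting the objective would instead encode path length and obstacle penalties over candidate waypoints.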

    An Aggregated Mutual Information Based Feature Selection with Machine Learning Methods for Enhancing IoT Botnet Attack Detection

    Due to the wide availability and usage of connected devices in Internet of Things (IoT) networks, the number of attacks on these networks is continually increasing. A particularly serious and dangerous type of attack in the IoT environment is the botnet attack, in which attackers take control of IoT systems to build enormous networks of “bot” devices that generate malicious activity. To detect this type of attack, several Intrusion Detection Systems (IDSs) based on machine learning and deep learning methods have been proposed for IoT networks. As the main characteristics of IoT systems include limited battery power and processor capacity, maximizing the efficiency of intrusion detection for IoT networks remains a research challenge: methods must be efficient and effective, with low computational time and high detection rates. This paper proposes an aggregated mutual-information-based feature selection approach with machine learning methods to enhance the detection of IoT botnet attacks. In this study, the N-BaIoT benchmark dataset, containing real traffic data gathered from nine commercial IoT devices, was used to detect botnet attack types in both binary and multi-class classification settings. The feature selection method combines the Mutual Information (MI) technique, Principal Component Analysis (PCA), and the ANOVA f-test at a fine-grained detection level to select the features most relevant to improving the performance of IoT botnet classifiers. In the classification step, several ensemble and individual classifiers were used, including Random Forest (RF), XGBoost (XGB), Gaussian Naïve Bayes (GNB), k-Nearest Neighbor (k-NN), Logistic Regression (LR), and Support Vector Machine (SVM). The experimental results showed the efficiency and effectiveness of the proposed approach, which outperformed other techniques on various evaluation metrics.
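As an illustration of how such an aggregated feature-selection pipeline can be wired together, the sketch below ranks features by Mutual Information and the ANOVA f-test, averages the ranks, and then applies PCA before classification. The synthetic data, the simple rank-averaging rule, and all parameter values are assumptions for the demo; the paper's exact aggregation scheme may differ:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif, mutual_info_classif
from sklearn.model_selection import train_test_split

# Synthetic stand-in for N-BaIoT traffic features.
X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           random_state=0)

# Score every feature with MI and the ANOVA f-test, then average the ranks
# (rank 0 = best) as a simple aggregation rule.
mi = mutual_info_classif(X, y, random_state=0)
f_scores, _ = f_classif(X, y)
ranks = (np.argsort(np.argsort(-mi)) + np.argsort(np.argsort(-f_scores))) / 2
selected = np.argsort(ranks)[:10]          # keep the 10 best-ranked features

# Compress the selected features with PCA before classification.
X_sel = PCA(n_components=5, random_state=0).fit_transform(X[:, selected])
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```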

    The effect of Kernel functions on cryptocurrency prediction using support vector machines

    Forecasting in the financial sector has proven to be a highly important area of study in the science of Computational Intelligence (CI). Furthermore, the availability of social media platforms contributes to the advancement of SVM research and the selection of SVM parameters. This study examines the four kernel functions available for Support Vector Machines: the Linear, Radial Basis Function (RBF), Polynomial, and Sigmoid kernels, for the purpose of cryptocurrency and foreign exchange market prediction. The available technical numerical data, sentiment data, and a technical indicator were used in this experimental research, which was conducted in a controlled environment. Both the cost and epsilon-SVM regression techniques are utilised across the five datasets in this study. The results have been compared and assessed on the basis of three performance measures: MAE, MSE, and RMSE. The forecasting models developed in this research are used to predict all of the outcomes. The SVM-RBF kernel forecasting model, which outperformed the other SVM-kernel models in terms of the error rate generated, is presented as the conclusion of this study.
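A comparison of the four kernels along the lines described can be sketched with scikit-learn's SVR. The synthetic series and hyperparameters below are illustrative stand-ins for the study's market and sentiment data:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Synthetic nonlinear series standing in for price data.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 300)
y_all = np.sin(t) + 0.1 * rng.standard_normal(300)
X_tr, X_te, y_tr, y_te = train_test_split(t.reshape(-1, 1), y_all,
                                          random_state=0)

# Fit one epsilon-SVR per kernel and record MAE, MSE, and RMSE.
results = {}
for kernel in ("linear", "rbf", "poly", "sigmoid"):
    pred = SVR(kernel=kernel, C=10.0, epsilon=0.01).fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    results[kernel] = {"MAE": mean_absolute_error(y_te, pred),
                       "MSE": mse, "RMSE": float(np.sqrt(mse))}
```

On data with a strongly nonlinear structure such as this, the RBF kernel typically yields the lowest error, consistent with the study's conclusion.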

    Quasi-Identifiers Recognition Algorithm for Privacy Preservation of Cloud Data Based on Risk Re-Identification

    Cloud computing plays an essential role as a source for outsourcing data for mining operations or other data processing, especially for data owners who do not have sufficient resources or experience to execute data mining techniques. However, the privacy of outsourced data is a serious concern. Most data owners use anonymization-based techniques to prevent identity and attribute disclosure and avoid privacy leakage before outsourcing data for mining over the cloud. In addition, data collection and dissemination in a resource-limited network such as a sensor cloud require efficient methods to reduce privacy leakage. The main cause of identity disclosure is Quasi-Identifier (QID) linking, yet most research on anonymization methods ignores the identification of proper QIDs. This reduces the validity of the anonymization methods used and may thus lead to failure of the anonymization process. This paper introduces a new quasi-identifier recognition algorithm that reduces the identity disclosure resulting from QID linking. The proposed algorithm comprises two main stages: (1) attribute classification (QID recognition) and (2) QID dimension identification. The algorithm computes the re-identification risk rate for all attributes and the dimension of the QIDs, from which it determines the proper QIDs and their suitable dimensions. The proposed algorithm was tested on a real dataset. The results demonstrate that it significantly reduces privacy leakage and maintains data utility compared with recent related algorithms.
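The core idea, scoring attribute combinations by their re-identification risk, can be sketched as follows. The toy table, the uniqueness-based risk measure, and the threshold are illustrative assumptions, not the paper's exact risk formulation:

```python
from collections import Counter
from itertools import combinations

def reident_risk(records, attrs):
    """Fraction of records whose value combination on `attrs` is unique,
    i.e. the share of individuals re-identifiable by linking those attributes."""
    combos = Counter(tuple(r[a] for a in attrs) for r in records)
    unique = sum(1 for r in records
                 if combos[tuple(r[a] for a in attrs)] == 1)
    return unique / len(records)

def find_qids(records, candidates, threshold=0.5):
    """Return every attribute subset whose re-identification risk exceeds
    the threshold (a simple stand-in for QID recognition)."""
    return [subset
            for k in range(1, len(candidates) + 1)
            for subset in combinations(candidates, k)
            if reident_risk(records, subset) > threshold]

# Hypothetical toy table: 'zip' plus another attribute can act as a QID.
table = [
    {"zip": "10001", "age": 34, "sex": "F"},
    {"zip": "10001", "age": 34, "sex": "M"},
    {"zip": "10002", "age": 51, "sex": "F"},
    {"zip": "10003", "age": 29, "sex": "M"},
]
risk = reident_risk(table, ("zip", "age"))   # half the records are unique
```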

    Lightweight Anomaly Detection Scheme Using Incremental Principal Component Analysis and Support Vector Machine

    Wireless Sensor Networks (WSNs) have been the focus of significant research and development attention due to their applications in collecting data from various fields such as smart cities, power grids, transportation systems, medical sectors, the military, and rural areas. Accurate and reliable measurements for insightful data analysis and decision-making are the ultimate goals of sensor networks in critical domains. However, the raw data collected by WSNs are usually unreliable and inaccurate due to the imperfect nature of WSNs. Identifying misbehaviour or anomalies in the network is important for its reliable and secure functioning. However, due to resource constraints, a lightweight detection scheme is a major design challenge in sensor networks. This paper aims to design and develop a lightweight anomaly detection scheme that reduces computational complexity and communication overhead and improves memory utilization while maintaining high accuracy. To achieve this aim, one-class learning and dimension reduction concepts were used in the design. The One-Class Support Vector Machine (OCSVM) with hyper-ellipsoid variance was used for anomaly detection due to its advantage in classifying unlabelled and multivariate data. Various OCSVM formulations were investigated, and the Centred-Ellipsoid kernel was adopted in this study as the most effective among the studied formulations. To decrease the computational complexity and improve memory utilization, the dimensions of the data were reduced using the Candid Covariance-free Incremental Principal Component Analysis (CCIPCA) algorithm. Extensive experiments were conducted to evaluate the proposed lightweight anomaly detection scheme. Results in terms of detection accuracy, memory utilization, computational complexity, and communication overhead show that the proposed scheme is effective and efficient compared with the existing schemes evaluated. The proposed anomaly detection scheme achieved an accuracy higher than 98%, with O(nd) memory utilization and no communication overhead.
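A minimal version of the pipeline, dimension reduction followed by one-class classification, can be sketched with scikit-learn. Note the substitutions: sklearn's IncrementalPCA stands in for CCIPCA, its standard OneClassSVM for the Centred-Ellipsoid formulation, and the Gaussian data is synthetic:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, (500, 20))          # normal sensor readings
anomalies = rng.normal(6, 1, (25, 20))        # shifted, anomalous readings

# Reduce dimensionality incrementally, in mini-batches, as a node
# streaming sensor data would.
ipca = IncrementalPCA(n_components=5, batch_size=100).fit(normal)
Z_train = ipca.transform(normal)

# Train the one-class model on normal data only; nu bounds the
# fraction of training points treated as outliers.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(Z_train)

pred_anom = ocsvm.predict(ipca.transform(anomalies))   # -1 marks anomalies
detect_rate = float((pred_anom == -1).mean())
```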

    Route Path Selection Optimization Scheme Based Link Quality Estimation and Critical Switch Awareness for Software Defined Networks

    Software-Defined Networking (SDN) is a new paradigm that decouples the control plane from the data plane, offering a more flexible way to manage the network efficiently. However, the growing traffic volume caused by the proliferation of Internet of Things (IoT) devices also increases the rate of flow arrivals, which in turn causes flow rules to change more often and path setup requests to increase. These events require route path computation to take place immediately to cope with the network changes, and searching for an optimal route can be costly in terms of the time required to calculate a new path and update the corresponding switches. Moreover, current path selection schemes consider only a single routing metric, either link or switch operation; incorporating both link quality and the switch's role in path selection decisions has not been considered. This paper proposes Route Path Selection Optimization (RPSO) with multiple constraints. RPSO introduces joint link- and switch-based parameters: Link Latency (LL), Link Delivery Ratio (LDR), and Critical Switch Frequency Score (CWFscore). These metrics encourage the selection of paths with better link quality and a minimal number of critical switches. The experimental results show that the proposed scheme reduced path stretch by 37% and path setup latency by 73%, thereby improving throughput by 55.73% and the packet delivery ratio by 12.5% compared with the baseline work.
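The idea of a joint link-and-switch path cost can be sketched with a weighted Dijkstra search. The weights, the toy topology, and the way the three metrics are combined are illustrative assumptions, not the RPSO formulation itself:

```python
import heapq

def best_path(graph, src, dst, weights=(0.4, 0.4, 0.2)):
    """Dijkstra over a joint cost combining link latency, (1 - delivery
    ratio), and a per-hop switch criticality score."""
    w_lat, w_ldr, w_crit = weights
    heap = [(0.0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, latency, ldr, crit in graph.get(node, []):
            if nxt not in seen:
                step = w_lat * latency + w_ldr * (1 - ldr) + w_crit * crit
                heapq.heappush(heap, (cost + step, nxt, path + [nxt]))
    return float("inf"), []

# Toy topology: (neighbour, latency_ms, delivery_ratio, criticality).
topo = {
    "s1": [("s2", 2.0, 0.99, 0.8), ("s3", 2.0, 0.97, 0.1)],
    "s2": [("s4", 2.0, 0.99, 0.8)],
    "s3": [("s4", 2.0, 0.98, 0.1)],
}
cost, path = best_path(topo, "s1", "s4")
```

With equal latencies, the search avoids the route through the highly critical switch s2 even though both routes have similar link quality.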

    A Fuzzy-Based Context-Aware Misbehavior Detecting Scheme for Detecting Rogue Nodes in Vehicular Ad Hoc Network

    A vehicular ad hoc network (VANET) is an emerging technology that improves road safety, traffic efficiency, and passenger comfort. VANET applications rely on co-operativeness among vehicles, which periodically share their context information, such as position, speed, and acceleration, at a high rate due to high vehicle mobility. However, rogue nodes, which exploit this co-operativeness and share false messages, can disrupt the fundamental operations of any potential application and cause the loss of lives and property. Unfortunately, most current solutions cannot effectively detect rogue nodes because the context changes continuously and the dynamic uncertainty of the data is not considered during identification. Although a few context-aware solutions have been proposed for VANETs, most of them are data-centric: a vehicle is considered malicious if it shares false or inaccurate messages. Such a rule is fuzzy and not consistently accurate due to the dynamic uncertainty of the vehicular context, which leads to a poor detection rate. To this end, this study proposes a fuzzy-based context-aware detection model to improve the overall detection performance. A fuzzy inference system is constructed to evaluate vehicles based on the information they generate, and its output is used to build a dynamic context reference. Vehicles are classified as either honest or rogue nodes based on the deviation of their fuzzy evaluation scores from the context reference. Extensive experiments were carried out to evaluate the proposed model. Results show that it outperforms the state-of-the-art models, achieving a 7.88% improvement in overall performance and a 16.46% improvement in detection rate compared to the state-of-the-art model. The proposed model can be used to evict rogue nodes and thus improve the safety and traffic efficiency of crewed or uncrewed vehicles designed for different environments: land, naval, or air.
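The deviation-from-context-reference idea can be sketched with a tiny fuzzy rule base. The triangular membership functions, the speed-plausibility rule, and the 0.3 deviation threshold are all illustrative assumptions, not the paper's inference system:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(0.0, min((x - a) / (b - a) if x <= b else (c - x) / (c - b), 1.0))

def plausibility(reported_speed, expected_speed):
    """Fuzzy score in [0, 1]: how plausible a reported speed is given the
    current context (expected speed). Two-rule illustrative rule base."""
    dev = abs(reported_speed - expected_speed)
    small = tri(dev, -1, 0, 10)     # small deviation -> plausible
    large = tri(dev, 5, 20, 1e9)    # large deviation -> implausible
    # Weighted defuzzification over the two rules (outputs 1.0 and 0.0).
    return (small * 1.0 + large * 0.0) / max(small + large, 1e-9)

# Context reference: mean score of recent traffic; nodes whose score
# falls too far below it are flagged as rogue.
scores = np.array([plausibility(v, 30.0) for v in (29, 31, 33, 28, 80)])
reference = scores.mean()
rogue = scores < reference - 0.3    # only the 80 km/h report is flagged
```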

    Application of Machine Learning to Predict COVID-19 Spread via an Optimized BPSO Model

    During the coronavirus disease (COVID-19) pandemic, statistics showed that the number of affected cases differed from one country to another and also from one city to another. Therefore, in this paper, we provide an enhanced model for predicting COVID-19 cases in different regions of Saudi Arabia (high-altitude and sea-level areas). The model was developed in several stages and was successfully trained and tested using two datasets collected from Taif city (a high-altitude area) and Jeddah city (a sea-level area) in Saudi Arabia. Binary particle swarm optimization (BPSO) is used in this study for feature selection with three different machine learning models: the random forest, gradient boosting, and naive Bayes models. A number of evaluation metrics, including accuracy, training score, testing score, F-measure, recall, precision, and the receiver operating characteristic (ROC) curve, were calculated to verify the performance of the three models on these datasets. The experimental results demonstrated that the gradient boosting model gives better results than the random forest and naive Bayes models, with an accuracy of 94.6% on the Taif city dataset. For the Jeddah city dataset, the random forest model outperforms the gradient boosting and naive Bayes models, with an accuracy of 95.5%. In terms of accuracy, the Jeddah city dataset achieved better results than the Taif city dataset using the enhanced model.
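A BPSO feature selector of the kind described can be sketched as follows: positions are bit masks over features, and a sigmoid transfer function converts velocities into bit probabilities. The toy scoring function (a stand-in for a trained classifier's validation accuracy) and all coefficients are assumptions for the demo:

```python
import numpy as np

def bpso_select(score_fn, n_feats, n_particles=20, iters=40, seed=0):
    """Binary PSO sketch: each particle is a bit mask over features; a
    sigmoid of the velocity gives each bit's probability of being 1."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, (n_particles, n_feats))
    V = np.zeros((n_particles, n_feats))
    pbest, pbest_val = X.copy(), np.array([score_fn(x) for x in X])
    g = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(V.shape), rng.random(V.shape)
        # Inertia plus cognitive and social pulls (illustrative coefficients).
        V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (g - X)
        # Sigmoid transfer: sample each bit from its velocity.
        X = (rng.random(V.shape) < 1 / (1 + np.exp(-V))).astype(int)
        vals = np.array([score_fn(x) for x in X])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        g = pbest[pbest_val.argmax()].copy()
    return g

# Toy objective: reward selecting the first 3 "informative" features and
# penalise mask size (stand-in for a classifier's validation accuracy).
score = lambda mask: mask[:3].sum() - 0.1 * mask.sum()
best_mask = bpso_select(score, n_feats=10)
```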

    A New Intrusion Detection System for the Internet of Things via Deep Convolutional Neural Network and Feature Engineering

    The Internet of Things (IoT) is a widely used technology in automated network systems across the world, and its impact on different industries has grown in recent years. Many IoT nodes collect, store, and process personal data, making them an ideal target for attackers. Several researchers have worked on this problem and have presented many intrusion detection systems (IDSs), but existing systems have difficulty improving performance and identifying subcategories of cyberattacks. This paper proposes a deep-convolutional-neural-network (DCNN)-based IDS, consisting of two convolutional layers and three fully connected dense layers. The proposed model aims to improve performance while reducing computational cost. Experiments were conducted using the IoTID20 dataset. The performance of the proposed model was analysed with several metrics, such as accuracy, precision, recall, and F1-score. A number of optimization techniques were applied to the proposed model, among which Adam, AdaMax, and Nadam performed best. In addition, the proposed model was compared with various advanced deep learning (DL) and traditional machine learning (ML) techniques. All experimental analysis indicates that the proposed approach achieves high accuracy and is more robust than existing DL-based algorithms.
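The described architecture, two convolutional layers followed by three fully connected dense layers, can be sketched in PyTorch. The feature count, channel sizes, and number of classes below are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class DCNNIDS(nn.Module):
    """Sketch: two 1-D convolutional layers over the flow-feature vector,
    then three fully connected layers producing class logits."""
    def __init__(self, n_features=80, n_classes=5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (n_features // 2), 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):            # x: (batch, n_features)
        return self.fc(self.conv(x.unsqueeze(1)))

model = DCNNIDS()
logits = model(torch.randn(8, 80))   # logits: (batch, n_classes)
```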