    Toward Efficient Intrusion Detection System Using Hybrid Deep Learning Approach

    The increased adoption of cloud computing resources opens major loopholes for cybersecurity attacks. An intrusion detection system (IDS) is one of the vital defenses against threats and attacks on cloud computing. Current IDSs face two challenges, namely low accuracy and a high false alarm rate, which force network experts to spend additional effort responding to abnormal traffic alerts. To improve IDS efficiency in detecting abnormal network traffic, this work develops an IDS using a recurrent neural network based on gated recurrent units (GRUs) and long short-term memory (LSTM) improved through a computing unit, forming Cu-LSTMGRU. The proposed system efficiently classifies network flow instances as benign or malevolent. The system is evaluated on the up-to-date CICIDS2018 dataset, which is reduced through the Pearson correlation feature selection algorithm to lower computational complexity. The proposed model is evaluated using several metrics, and the results show that it outperforms benchmarks by up to 12.045%. The Cu-LSTMGRU model therefore provides a high level of symmetry between cloud computing security and the detection of intrusions and malicious attacks.
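
    A minimal sketch of the two stages described above, assuming a Keras GRU+LSTM stack over Pearson-filtered features; the layer sizes, the 0.9 redundancy threshold, and the single-step input shape are illustrative assumptions, not the paper's exact Cu-LSTMGRU design.

```python
# Sketch: Pearson-correlation feature pruning + GRU/LSTM binary classifier.
# All hyperparameters below are assumptions for illustration.
import numpy as np
import pandas as pd
from tensorflow.keras import Sequential
from tensorflow.keras.layers import GRU, LSTM, Dense, Input

def pearson_select(df: pd.DataFrame, threshold: float = 0.9) -> list:
    """Drop one feature from every pair whose |Pearson r| exceeds threshold."""
    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    drop = [c for c in upper.columns if (upper[c] > threshold).any()]
    return [c for c in df.columns if c not in drop]

def build_model(n_features: int):
    model = Sequential([
        Input(shape=(1, n_features)),    # each flow record as a 1-step sequence
        GRU(64, return_sequences=True),
        LSTM(32),
        Dense(1, activation="sigmoid"),  # benign vs. malevolent
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

    In use, the kept feature columns would be reshaped to (samples, 1, n_features) before training.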

    Effective Intrusion Detection System to Secure Data in Cloud Using Machine Learning

    When adopting cloud computing, cybersecurity must be applied to detect and protect against malicious intruders and so improve the organization's resilience to cyberattacks. Achieving network intrusion detection with zero false alarms is a challenge, owing to the asymmetry between informative features and the irrelevant and redundant features of the dataset. In this work, a novel machine-learning-based hybrid intrusion detection system is proposed. It combines support vector machine (SVM) and genetic algorithm (GA) methodologies with an innovative fitness function developed to evaluate system accuracy. The system was examined using the CICIDS2017 dataset, which contains normal traffic and the most up-to-date common attacks. The GA and SVM were executed in parallel to achieve two objectives simultaneously: obtaining the best subset of features and maximizing accuracy. The SVM was employed with different values of the kernel function, gamma, and degree hyperparameters. The results were benchmarked against KDD CUP 99 and NSL-KDD and showed that the proposed model outperformed these benchmarks by up to 5.74%. The system should be effective in cloud computing, as it is expected to provide a high level of symmetry between information security and the detection of attacks and malicious intrusion.
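
    A compact illustration of GA-driven feature selection with an SVM in the fitness loop, assuming truncation selection, one-point crossover, bit-flip mutation, and plain cross-validated accuracy as the fitness; the paper's innovative fitness function and parallel execution scheme are not reproduced here.

```python
# Sketch: genetic algorithm over binary feature masks, scored by SVM accuracy.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    if not mask.any():
        return 0.0
    clf = SVC(kernel="rbf", gamma="scale")  # kernel/gamma values assumed
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def ga_select(X, y, pop=20, gens=10, p_mut=0.05):
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.5          # random initial masks
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]  # keep top half
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
            child ^= rng.random(n) < p_mut                    # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    scores = np.array([fitness(ind, X, y) for ind in population])
    return population[scores.argmax()]               # best feature mask found
```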

    A Study on the Impact of Integrating Reinforcement Learning for Channel Prediction and Power Allocation Scheme in MISO-NOMA System

    In this study, the influence of adopting Reinforcement Learning (RL) to predict the channel parameters for user devices in a Power Domain Multi-Input Single-Output Non-Orthogonal Multiple Access (MISO-NOMA) system is investigated. In the RL-based channel prediction approach, a Q-learning algorithm is developed and incorporated into the NOMA system so that the resulting Q-model can predict the channel coefficients for every user device. The purpose of the Q-learning procedure is to maximize the received downlink sum-rate and decrease the estimation loss. To this end, the Q-algorithm is initialized using different channel statistics and then updated through interaction with the environment in order to approximate the channel coefficients for each device; the predicted parameters are utilized at the receiver side to recover the desired data. Furthermore, based on maximizing the sum-rate of the examined user devices, the power factors for each user can be deduced analytically to allocate the optimal power factor to every user device in the system. This work also examines how the Q-learning-based channel prediction and the power allocation policy can be combined for multiuser recognition in the examined MISO-NOMA system. Simulation results, based on several performance metrics, demonstrate that the developed Q-learning algorithm is competitive for channel estimation when compared with benchmark schemes such as deep-learning-based long short-term memory (LSTM), the RL-based actor-critic algorithm, the RL-based state-action-reward-state-action (SARSA) algorithm, and a standard channel estimation scheme based on the minimum mean square error procedure.
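
    A toy sketch of tabular Q-learning for channel-gain prediction, under the assumption that states and actions are quantized gain levels and that the reward penalizes prediction error; the abstract does not specify the actual state, action, or reward design, so everything below is illustrative.

```python
# Sketch: Q-learning where the action is a predicted next channel-gain level.
# Bin count, learning rate, discount, and reward shape are all assumptions.
import numpy as np

N_BINS, ALPHA, GAMMA_RL, EPS = 32, 0.1, 0.9, 0.1
rng = np.random.default_rng(1)
Q = np.zeros((N_BINS, N_BINS))  # Q[state, action]

def quantize(g, g_max):
    """Map a channel-gain magnitude onto one of N_BINS discrete levels."""
    return min(int(g / g_max * N_BINS), N_BINS - 1)

def train(gains, g_max):
    """gains: sequence of true channel-gain magnitudes for one user device."""
    for t in range(len(gains) - 1):
        s, s_next = quantize(gains[t], g_max), quantize(gains[t + 1], g_max)
        # epsilon-greedy choice of the predicted next level
        a = rng.integers(N_BINS) if rng.random() < EPS else Q[s].argmax()
        reward = -abs(a - s_next)  # penalize estimation error
        Q[s, a] += ALPHA * (reward + GAMMA_RL * Q[s_next].max() - Q[s, a])
```

    After training, the greedy action Q[s].argmax() serves as the channel-level prediction for the next slot; real channel coefficients are complex-valued, so a practical design would need a richer state encoding than this scalar quantization.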

    Protecting Web Applications from Cross-Site Scripting Attacks

    The existence of the cross-site scripting (XSS) vulnerability can be traced back to 1995, during the early days of Internet penetration. JavaScript, a programming language developed by Netscape, came into being around the same time. The original intention of the language was to make web applications more interactive. However, cyber criminals also learned how to trick users into loading malicious scripts into websites, allowing them to access confidential data or compromise services. The scale of such attacks prompted some organizations to monitor XSS attacks and to research new ways to defeat attacks similar to the 2005 XSS worm on the MySpace.com social networking site. The primary focus of this paper is to prevent the execution of XSS attacks by providing proper validation and methods that clean user input of any script tags. XSS attacks can be minimized by proper handling of user input in a web application, that is, by validating the input provided by the user and stripping it of any harmful code or tags.
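
    A minimal sketch of the input-handling idea: strip script tags and escape HTML metacharacters before the value is stored or rendered. The regex and function name are illustrative; production code should prefer a vetted sanitizer library and context-aware output encoding over hand-rolled patterns.

```python
# Sketch: remove <script> tags, then HTML-encode the remaining input.
import html
import re

SCRIPT_TAG = re.compile(r"<\s*/?\s*script[^>]*>", re.IGNORECASE)

def sanitize(user_input: str) -> str:
    without_scripts = SCRIPT_TAG.sub("", user_input)  # drop script open/close tags
    return html.escape(without_scripts)               # encode <, >, &, and quotes

print(sanitize('<script>alert("xss")</script>Hello'))
# -> alert(&quot;xss&quot;)Hello
```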

    Advanced Feature-Selection-Based Hybrid Ensemble Learning Algorithms for Network Intrusion Detection Systems

    As cyber-attacks become remarkably sophisticated, effective Intrusion Detection Systems (IDSs) are needed to monitor computer resources and to provide alerts regarding unusual or suspicious behavior. Despite using several machine learning (ML) and data mining methods to achieve high effectiveness, these systems have not proven ideal. Current intrusion detection algorithms suffer from high dimensionality, redundancy, meaningless data, high error rates, high false alarm rates, and high false-negative rates. This paper proposes a novel Ensemble Learning (EL) algorithm-based network IDS model. Efficient feature selection is attained via a hybrid of Correlation Feature Selection coupled with Forest Panelized Attributes (CFS–FPA). The improved intrusion detection exploits AdaBoost and bagging ensemble learning algorithms to modify four classifiers: Support Vector Machine, Random Forest, Naïve Bayes, and K-Nearest Neighbor. These four enhanced classifiers are applied first with AdaBoost and then with bagging, and their outputs are aggregated through average voting. To provide better benchmarking, both binary and multi-class classification forms are used to evaluate the model. Applying the model to the CICIDS2017 dataset achieved promising results of 99.7% accuracy, a 0.053 false-negative rate, and a 0.004 false alarm rate. The system should be effective for information-technology-based organizations, as it is expected to provide a high level of symmetry between information security and the detection of attacks and malicious intrusion.
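
    A scikit-learn sketch of the bagging-and-voting stage only, assuming soft (probability-averaging) voting over bagged versions of the four classifiers; the CFS–FPA feature-selection step and the AdaBoost variants are omitted, and all hyperparameters are illustrative.

```python
# Sketch: bag each of the four base classifiers, then average their
# predicted probabilities via soft voting.
from sklearn.ensemble import (BaggingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

bagged = [
    ("svm", BaggingClassifier(SVC(probability=True), n_estimators=10)),
    ("rf",  BaggingClassifier(RandomForestClassifier(n_estimators=50),
                              n_estimators=10)),
    ("nb",  BaggingClassifier(GaussianNB(), n_estimators=10)),
    ("knn", BaggingClassifier(KNeighborsClassifier(), n_estimators=10)),
]
ensemble = VotingClassifier(estimators=bagged, voting="soft")
# ensemble.fit(X_train, y_train); ensemble.predict(X_test)
```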

    Modeling river water dissolved organic matter using ensemble computing and genetic programming techniques

    Dissolved organic matter (DOM) plays a diverse role in aquatic ecosystems and is a key participant in global carbon budgets; thus, precise simulation and modeling of DOM concentrations in rivers and streams is critical in hydro-environmental projects. Among various modeling strategies, data-driven Machine Learning (ML) approaches, and particularly Ensemble Machine Learning (EML) models, have proven fairly capable of simulating environmental processes in aquatic media from a limited number of physicochemical and biological water parameters. In this regard, several regular MLs (Support Vector Regression, SVR, and Extreme Learning Machine, ELM), two EMLs (Random Forests, RF, and Boosted Trees, BTs), and Gene Expression Programming (GEP) are evaluated for predicting fluorescent Dissolved Organic Matter (fDOM) in the Caloosahatchee River in Florida. The fDOM modeling strategy builds regular and ensemble ML models on seven quantitative and qualitative independent parameters (flow rate, temperature, specific conductance, dissolved oxygen, pH, turbidity, and nitrate, recorded from 2017 to 2019), all identified as influential on the target variable (fDOM) using the best subset regression technique. Based on the k-fold cross-validation method (k = 4), the regular MLs (SVR, ELM, and GEP) performed better than the traditional multiple linear regression model (on average, a 6.8 % improvement in RMSE). However, the EML models (RF and BT) outperformed the regular MLs (on average, a 7.2 % improvement in RMSE) in fDOM prediction.
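
    A sketch of the regular-versus-ensemble comparison under 4-fold cross-validation; the predictor names follow the abstract, while the model settings and the use of GradientBoostingRegressor as a boosted-trees stand-in are illustrative assumptions.

```python
# Sketch: compare SVR against two ensemble regressors on 4-fold CV RMSE.
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

FEATURES = ["flow_rate", "temperature", "specific_conductance",
            "dissolved_oxygen", "pH", "turbidity", "nitrate"]

models = {
    "SVR": SVR(kernel="rbf"),
    "RF":  RandomForestRegressor(n_estimators=200),
    "BT":  GradientBoostingRegressor(n_estimators=200),  # boosted-trees stand-in
}

def compare(X, y):
    """X: samples x 7 predictors, y: fDOM. Returns mean RMSE per model."""
    return {name: -cross_val_score(m, X, y, cv=4,
                                   scoring="neg_root_mean_squared_error").mean()
            for name, m in models.items()}
```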