9 research outputs found

    An innovative network intrusion detection system (NIDS): Hierarchical deep learning model based on the UNSW-NB15 dataset

    With the increasing prevalence of network intrusions, the development of effective network intrusion detection systems (NIDS) has become crucial. In this study, we propose a novel NIDS approach that combines the power of long short-term memory (LSTM) and attention mechanisms to analyze the spatial and temporal features of network traffic data. We utilize the benchmark UNSW-NB15 dataset, which exhibits a diverse distribution of patterns, including a significant disparity in size between the training and testing sets. Unlike traditional machine learning techniques such as support vector machines (SVM) and k-nearest neighbors (KNN), which often struggle with limited feature sets and lower accuracy, our proposed model overcomes these limitations. Notably, existing models applied to this dataset typically require manual feature selection and extraction, which can be time-consuming and less precise. In contrast, our model achieves superior results in binary classification by leveraging the advantages of LSTM and attention mechanisms. Through extensive experiments and evaluations against state-of-the-art ML/DL models, we demonstrate the effectiveness and superiority of our proposed approach. Our findings highlight the potential of combining LSTM and attention mechanisms for enhanced network intrusion detection.
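    As a rough illustration of this kind of architecture, the sketch below wires an LSTM over per-flow feature sequences into a simple additive attention pooling and a binary classification head. The feature count, hidden size, and exact attention form are assumptions for demonstration, not the paper's reported design.

```python
# Minimal sketch of an LSTM-plus-attention binary intrusion classifier.
# Sizes (42 features, hidden width 64) are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMAttentionNIDS(nn.Module):
    def __init__(self, n_features=42, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)      # scores each time step
        self.head = nn.Linear(hidden, 1)      # binary logit (attack vs. normal)

    def forward(self, x):                     # x: (batch, time, features)
        out, _ = self.lstm(x)                 # temporal features per step
        weights = torch.softmax(self.attn(out), dim=1)  # attention over time
        context = (weights * out).sum(dim=1)  # attention-weighted summary
        return self.head(context).squeeze(-1)

model = LSTMAttentionNIDS()
probs = torch.sigmoid(model(torch.randn(8, 10, 42)))  # 8 flows, 10 steps each
```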

    Switching Self-Attention Text Classification Model with Innovative Reverse Positional Encoding for Right-to-Left Languages: A Focus on Arabic Dialects

    Transformer models have emerged as frontrunners in the field of natural language processing, primarily due to their adept use of self-attention mechanisms to grasp the semantic linkages between words in sequences. Despite their strengths, these models often face challenges in single-task learning scenarios, particularly in delivering strong performance and crafting robust latent feature representations. This challenge is more pronounced for smaller datasets and is particularly acute for under-resourced languages such as Arabic. In light of these challenges, this study introduces a novel methodology for the classification of Arabic texts. The method harnesses the newly developed Reverse Positional Encoding (RPE) technique and adopts an inductive-transfer learning (ITL) framework combined with a switching self-attention shared encoder, thereby increasing the model’s adaptability and improving its sentence representation accuracy. The integration of the Mixture of Experts (MoE) and RPE techniques enables the model to process longer sequences more effectively, a benefit notably relevant to Arabic text classification, and supports both the intricate five-point and the simpler ternary classification tasks. The empirical evidence points to outstanding performance, with accuracy rates of 87.20% on the HARD dataset, 72.17% on the BRAD dataset, and 86.89% on the LABR dataset.
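    The abstract does not spell out how Reverse Positional Encoding works. One plausible reading, sketched below, is standard sinusoidal encodings indexed from the end of the sequence, so that for right-to-left text positions count outward from the final token; the paper's actual formulation may differ.

```python
# Hedged sketch of a "reverse" sinusoidal positional encoding: the last token
# receives position 0 and the first token position L-1. This is an assumed
# interpretation of RPE, not the published definition.
import math
import torch

def reverse_positional_encoding(seq_len, d_model):
    # positions counted from the right end of the sequence
    pos = torch.arange(seq_len - 1, -1, -1, dtype=torch.float).unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2).float()
                    * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos * div)   # even dimensions
    pe[:, 1::2] = torch.cos(pos * div)   # odd dimensions
    return pe                            # (seq_len, d_model)

pe = reverse_positional_encoding(seq_len=12, d_model=64)
```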

    Advanced Deep Learning Model for Predicting the Academic Performances of Students in Educational Institutions

    Educational institutions are increasingly focused on supporting students who may be facing academic challenges, aiming to enhance their educational outcomes through targeted interventions. Within this framework, leveraging advanced deep learning techniques to develop recommendation systems becomes essential. These systems are designed to identify students at risk of underperforming by analyzing patterns in their historical academic data, thereby facilitating personalized support strategies. This research introduces an innovative deep learning model tailored to pinpointing students in need of academic assistance. The model uses a Gated Recurrent Unit (GRU) architecture and incorporates a dense layer, a max-pooling layer, and the ADAM optimization method to optimize performance. Its effectiveness was tested on a comprehensive dataset containing 15,165 records of student assessments collected across several academic institutions. A comparative analysis with existing educational recommendation models, such as the Recurrent Neural Network (RNN), AdaBoost, and Artificial Immune Recognition System v2, highlights the superior accuracy of the proposed GRU model, which achieved an overall accuracy of 99.70%. This result underscores the model’s potential to help educational institutions proactively support students, thereby mitigating the risks of underachievement and dropout.
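    A minimal sketch of the described stack (GRU, max-pooling over time, a dense output layer, and the Adam optimizer) might look as follows; all dimensions and the encoding of assessment records as fixed-length sequences are assumptions made for illustration.

```python
# Illustrative GRU -> max-pooling -> dense classifier with the Adam optimizer.
import torch
import torch.nn as nn

class StudentGRU(nn.Module):
    def __init__(self, n_features=10, hidden=32, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.dense = nn.Linear(hidden, n_classes)

    def forward(self, x):                   # x: (batch, assessments, features)
        out, _ = self.gru(x)                # hidden state per assessment
        pooled = out.max(dim=1).values      # max-pool over the time dimension
        return self.dense(pooled)           # at-risk / not-at-risk logits

model = StudentGRU()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # ADAM optimizer
logits = model(torch.randn(16, 5, 10))      # 16 students, 5 assessments each
```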

    A Multitask-Based Neural Machine Translation Model with Part-of-Speech Tags Integration for Arabic Dialects

    Statistical machine translation for Arabic integrates external linguistic resources such as part-of-speech (POS) tags. The current research presents a Bidirectional Long Short-Term Memory (Bi-LSTM)–Conditional Random Fields (CRF) segment-level Arabic dialect POS tagger model, which is integrated into a multitask neural machine translation (NMT) model. The proposed solution for NMT is based on the recently introduced recurrent neural network encoder-decoder NMT model. The study proposes a unified multitask NMT model that shares an encoder between two tasks: Arabic dialect (AD) to Modern Standard Arabic (MSA) translation and segment-level POS tagging. A shared layer and an invariant layer are common to both tasks. By training the translation and POS tagging tasks alternately, the model leverages the characteristic information of each and improves translation quality from Arabic dialects to MSA. Experiments are conducted on Levantine Arabic (LA) to MSA and Maghrebi Arabic (MA) to MSA translation tasks, with segment-level part-of-speech tags for Arabic dialects exploited as an additional linguistic resource. Experiments suggest that both translation quality and POS tagger performance improve with the multitask learning approach.
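    The sketch below illustrates the shared-encoder multitask idea in miniature: one encoder feeds both a translation head and a POS-tagging head, and batches from the two tasks are trained alternately. The linear heads stand in for the paper's full NMT decoder and Bi-LSTM-CRF tagging layer, and all sizes are assumptions.

```python
# Toy shared-encoder multitask setup with alternating task batches.
import torch
import torch.nn as nn

V_SRC, V_TGT, N_TAGS, H = 1000, 1000, 20, 64

embed = nn.Embedding(V_SRC, H)
encoder = nn.LSTM(H, H, batch_first=True, bidirectional=True)  # shared
nmt_head = nn.Linear(2 * H, V_TGT)   # stand-in for the full decoder
pos_head = nn.Linear(2 * H, N_TAGS)  # stand-in for the CRF tagging layer
loss_fn = nn.CrossEntropyLoss()
params = (list(embed.parameters()) + list(encoder.parameters())
          + list(nmt_head.parameters()) + list(pos_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

def train_step(src, target, task):
    states, _ = encoder(embed(src))          # shared representation
    head = nmt_head if task == "nmt" else pos_head
    logits = head(states)                    # per-token predictions
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), target.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Alternate the tasks so both shape the shared encoder.
for step in range(4):
    src = torch.randint(0, V_SRC, (8, 12))
    if step % 2 == 0:
        train_step(src, torch.randint(0, V_TGT, (8, 12)), "nmt")
    else:
        train_step(src, torch.randint(0, N_TAGS, (8, 12)), "pos")
```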

    A Neural Machine Translation Model for Arabic Dialects That Utilizes Multitask Learning (MTL)

    In this research article, we study the problem of employing a neural machine translation model to translate Arabic dialects to Modern Standard Arabic. The proposed solution builds on the recently proposed recurrent neural network-based encoder-decoder NMT model, which casts machine translation as a sequence learning problem. We propose a multitask learning (MTL) model that shares one decoder among all language pairs, while each source language has its own encoder. The proposed model can be applied to limited volumes of data as well as to extensive amounts of data. Experiments show that the MTL model yields higher translation quality than individually trained models.
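    A compact sketch of that layout follows: a separate encoder per source dialect with one decoder shared across all language pairs. The tiny dimensions, the dialect names, and the single greedy decoding step are assumptions made for illustration.

```python
# Toy "one shared decoder, one encoder per source language" arrangement.
import torch
import torch.nn as nn

H, V_TGT = 64, 1000
encoders = nn.ModuleDict({
    "levantine": nn.LSTM(H, H, batch_first=True),
    "maghrebi":  nn.LSTM(H, H, batch_first=True),
})
shared_decoder = nn.LSTM(H, H, batch_first=True)  # shared across all pairs
out_proj = nn.Linear(H, V_TGT)                    # logits over MSA vocabulary

def first_decoder_step(src_embedded, dialect):
    _, (h, c) = encoders[dialect](src_embedded)   # dialect-specific encoder
    dec_in = torch.zeros(src_embedded.size(0), 1, H)  # start-token embedding
    dec_out, _ = shared_decoder(dec_in, (h, c))   # shared decoder step
    return out_proj(dec_out)

logits = first_decoder_step(torch.randn(4, 9, H), "levantine")
```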

    A Transformer-Based Neural Machine Translation Model for Arabic Dialects That Utilizes Subword Units

    Languages that allow free word order, such as Arabic dialects, pose significant difficulty for neural machine translation (NMT) because they contain many rare words that NMT systems fail to translate. Unknown word (UNK) tokens stand in for out-of-vocabulary words because NMT systems operate with a fixed-size vocabulary. Rare words are instead encoded entirely as sequences of subword pieces using the Word-Piece Model. This research paper introduces the first Transformer-based neural machine translation model for Arabic vernaculars that employs subword units. The proposed solution is based on the recently presented Transformer model. The use of subword units and a vocabulary shared between the Arabic dialect (the source language) and Modern Standard Arabic (the target language) improves the encoder’s multi-head attention sublayers by capturing the overall dependencies between the words of an Arabic vernacular input sentence. Experiments are carried out on Levantine Arabic (LEV) to Modern Standard Arabic (MSA), Maghrebi Arabic (MAG) to MSA, Gulf to MSA, Nile to MSA, and Iraqi Arabic (IRQ) to MSA translation tasks. Extensive experiments confirm that the suggested model adequately addresses the unknown word issue and boosts the quality of translation from Arabic vernaculars to MSA.
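    To make the subword idea concrete, the toy segmenter below performs greedy longest-match splitting in the spirit of the Word-Piece Model, so a rare word becomes a sequence of known pieces instead of an UNK token. The vocabulary is invented, and real WordPiece additionally marks word-internal pieces (e.g. with a "##" prefix).

```python
# Toy greedy longest-match subword segmenter (WordPiece-style, simplified).
def wordpiece(word, vocab):
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start and word[start:end] not in vocab:
            end -= 1                 # shrink until a known piece matches
        if end == start:
            return ["[UNK]"]         # no known piece covers this position
        pieces.append(word[start:end])
        start = end
    return pieces

vocab = {"trans", "form", "er", "ma", "ghre", "bi"}
print(wordpiece("transformer", vocab))  # ['trans', 'form', 'er']
print(wordpiece("maghrebi", vocab))     # ['ma', 'ghre', 'bi']
```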

    A Reverse Positional Encoding Multi-Head Attention-Based Neural Machine Translation Model for Arabic Dialects

    Languages with free word order, such as Arabic dialects, are a challenge for neural machine translation (NMT) models because of attached suffixes, affixes, and out-of-vocabulary words. This paper presents a new reverse positional encoding mechanism for a multi-head attention (MHA) neural machine translation model to translate right-to-left texts such as Arabic dialects (ADs) to Modern Standard Arabic (MSA). The proposed model builds on the recently introduced MHA mechanism. The new reverse positional encoding (RPE) mechanism, together with subword units as input to the self-attention layer, improves the encoder’s self-attention sublayer by capturing all dependencies between words in right-to-left texts such as AD input sentences. Experiments were conducted on Maghrebi Arabic to MSA, Levantine Arabic to MSA, Nile Basin Arabic to MSA, Gulf Arabic to MSA, and Iraqi Arabic to MSA translation tasks. Experimental analysis shows that the proposed RPE MHA NMT model handles the free word order of Arabic dialect sentences effectively and enhances translation quality for right-to-left texts such as Arabic dialects.
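    Combining the two ingredients the abstract names, a hedged sketch of the encoder sublayer might add reversed sinusoidal positions to embedded subword units before standard multi-head self-attention. The composition below is an assumed reading, not the paper's code.

```python
# Reversed positional encodings injected before a multi-head self-attention
# sublayer over embedded subword units. Sizes are illustrative assumptions.
import math
import torch
import torch.nn as nn

def reverse_pe(seq_len, d):
    pos = torch.arange(seq_len - 1, -1, -1, dtype=torch.float).unsqueeze(1)
    div = torch.exp(torch.arange(0, d, 2).float() * (-math.log(10000.0) / d))
    pe = torch.zeros(seq_len, d)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

d_model, seq_len = 64, 10
mha = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
subwords = torch.randn(2, seq_len, d_model)   # embedded subword units
x = subwords + reverse_pe(seq_len, d_model)   # positions counted from the right
attn_out, _ = mha(x, x, x)                    # encoder self-attention sublayer
```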

    Neural Network Prediction Model to Explore Complex Nonlinear Behavior in Dynamic Biological Network

    Biological network systems produce highly complex data that reflect the activities of organisms and exhibit nonlinear behavior. Mathematical modelling methods such as ordinary differential equation (ODE) models have therefore become important tools for prediction and for exposing implicit knowledge in these data. Unfortunately, such approaches suffer from the scarcity and vagueness of the biological knowledge needed to predict protein concentration measurements. The main objective of this research is therefore a computational model, a feed-forward neural network trained with the backpropagation algorithm, that copes with imprecise and missing biological knowledge and provides more insight into biological systems. The model predicts protein concentrations and captures the nonlinear dynamics of the biological system precisely. The results match those of a recent ODE model while being obtained in a simpler form than ODEs.
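    As a minimal stand-in for the described predictor, the sketch below trains a small feed-forward network with backpropagation to fit a nonlinear curve. The synthetic target function and network sizes are assumptions; the study's biological data are not reproduced here.

```python
# Feed-forward network fitted by backpropagation to a synthetic nonlinear
# signal standing in for a protein-concentration time course.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

t = torch.linspace(0, 1, 100).unsqueeze(1)   # time points
y = torch.sin(4 * t) * torch.exp(-t)         # assumed nonlinear dynamics

for epoch in range(500):
    opt.zero_grad()
    loss = loss_fn(net(t), y)                # prediction error
    loss.backward()                          # gradients via backpropagation
    opt.step()

print(f"final MSE: {loss.item():.4f}")
```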