9 research outputs found

    Sequential Evolutionary Operations of Trigonometric Simplex Designs for High-Dimensional Unconstrained Optimization Applications

    Get PDF
    This dissertation proposes a novel mathematical model for the Amoeba, or Nelder-Mead simplex (NM), optimization algorithm. The proposed Hassan NM (HNM) algorithm allows the components of the reflected vertex to adapt to different operations by breaking the complex structure of the simplex down into multiple triangular simplexes that work sequentially to optimize the individual components of mathematical functions. When the next simplex is formed with different operations, it produces reflections similar to those of the NM algorithm, but rotated through an angle determined by the collection of nonisometric features. As a consequence, the generated sequence of triangular simplexes is guaranteed to differ not only in shape but also in direction, so it can search the complex landscape of mathematical problems and outperform the traditional hyperplane simplex. To test reliability, efficiency, and robustness, the proposed algorithm is examined on three categories of large-scale optimization problems: systems of nonlinear equations, nonlinear least squares, and unconstrained minimization. The experimental results confirm that the new algorithm outperforms the traditional NM algorithm, represented by the well-known Matlab function fminsearch. In addition, the new trigonometric simplex design provides a platform for further development of reliable and robust sparse autoencoder (SAE) software for intrusion detection system (IDS) applications. The proposed error function for the SAE is designed to trade off the latent-state representation, for more mature features, against network regularization, by applying the sparsity constraint in the output layer of the proposed SAE network.
In addition, the hyperparameters of the SAE are tuned with the HNM algorithm and are shown to extract features better than existing algorithms. In fact, the proposed SAE can be used not only for network intrusion detection systems but also for other applications in deep learning, feature extraction, and pattern analysis. Experimental results show that the different layers of the enhanced SAE can efficiently adapt to various levels of the learning hierarchy. Finally, additional tests demonstrate that the proposed IDS architecture provides a more compact and effective immunity system against different types of network attacks, with an average detection accuracy of 99.63% and an F-measure of 0.996 when the sparsity constraint is penalized directly on the synaptic weights within the network.
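The abstract does not give the HNM equations, but the baseline it is compared against is the classic Nelder-Mead simplex search behind Matlab's fminsearch. As a hedged illustration of that baseline (not of HNM itself), scipy's Nelder-Mead implementation can be run on the Rosenbrock function, a standard unconstrained test problem of the kind such dissertations benchmark on:

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function: a standard unconstrained minimization test problem.
def rosenbrock(x):
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

x0 = np.zeros(4)  # illustrative starting point, not from the dissertation
# scipy's Nelder-Mead is the Python analogue of Matlab's fminsearch,
# the traditional NM baseline that HNM is compared against.
res = minimize(rosenbrock, x0, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
print(res.x)  # should approach the minimizer (1, 1, 1, 1)
```

HNM's contribution is to replace the single (n+1)-vertex simplex used here with a sequence of triangular simplexes that optimize components individually.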

    An Enhanced Design of Sparse Autoencoder for Latent Features Extraction Based on Trigonometric Simplexes for Network Intrusion Detection Systems

    Get PDF
    Despite successful contributions to network intrusion detection, where machine learning algorithms and deep networks learn the boundaries between normal traffic and network attacks, detecting varied attacks with high performance remains challenging. In this paper, we propose a novel mathematical model for the further development of robust, reliable, and efficient software for practical intrusion detection applications. In the present work, we are concerned with tuning optimal hyperparameters for high-performance sparse autoencoders that optimize features and classify normal and abnormal traffic patterns. The proposed framework allows the parameters of the back-propagation learning algorithm to be tuned with respect to the performance and architecture of the sparse autoencoder through a sequence of trigonometric simplex designs. These hyperparameters are the number of nodes in the hidden layer, the learning rate of the hidden layer, and the learning rate of the output layer. Because different layers of the autoencoder are characterized by different learning rates in the proposed framework, better results are expected in extracting features and adapting to various levels of the learning hierarchy. The idea is that each layer's learning rate is a dimension in a multidimensional space; hence, a vector of adaptive learning rates is maintained across the layers of the network to reduce the time the network needs to learn the mapping towards a combination of enhanced features and the optimal synaptic weights in its layers for a given problem. The suggested framework is tested on CICIDS2017, a reliable intrusion detection dataset that covers the common, up-to-date intrusions and cyber-attacks.
Experimental results demonstrate that the proposed architecture for intrusion detection yields superior performance compared to recently published algorithms in terms of classification accuracy and F-measure. https://doi.org/10.3390/electronics902025
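The three tuned hyperparameters (hidden-layer size, hidden-layer learning rate, output-layer learning rate) can be sketched in a toy autoencoder where each layer is updated with its own learning rate. This is a minimal illustration of per-layer learning rates, with random stand-in data and hypothetical hyperparameter values; it is not the paper's SAE or its sparsity term:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))  # stand-in data; the paper uses CICIDS2017 traffic features

# The three hyperparameters the trigonometric simplex search tunes
# (values here are illustrative, not the paper's tuned values):
n_hidden = 4      # number of nodes in the hidden layer
lr_hidden = 0.05  # learning rate of the hidden layer
lr_output = 0.01  # learning rate of the output layer

W1 = rng.normal(scale=0.1, size=(8, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, 8))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse():
    return float(np.mean((sigmoid(X @ W1) @ W2 - X) ** 2))

mse0 = mse()
for _ in range(500):
    H = sigmoid(X @ W1)   # latent features
    err = H @ W2 - X      # reconstruction error
    g2 = (H.T @ err) / len(X)
    g1 = (X.T @ ((err @ W2.T) * H * (1 - H))) / len(X)
    # Back-propagation: each layer uses its own learning rate, i.e. the
    # "vector of adaptive learning rates" described in the abstract.
    W2 -= lr_output * g2
    W1 -= lr_hidden * g1

print(mse0, mse())  # reconstruction error drops after training
```

In the paper, these three values are the coordinates the trigonometric simplex search moves through; here they are simply fixed to show where each one enters the update rule.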

    A Dynamic Clustering Algorithm for Object Tracking and Localization in WSN

    Get PDF
    A Wireless Sensor Network (WSN) is an assemblage of cooperative sensor nodes deployed in an environment to monitor an event of interest. One of the most limiting factors, however, is the energy constraint on each node; a trade-off against this factor is therefore required when designing a network in which events are reported, tracked, or visualized. In this paper, two object tracking techniques used in Wireless Sensor Networks based on clustering algorithms are combined to perform multiple functions in the proposed algorithm. The benefit of clustering algorithms is that the detecting node in a cluster reports an event to the Cluster Head (CH) node in response to a query, and the CH then sends all the collected information to the sink or base station. This reduces energy consumption and the required communication bandwidth. Furthermore, the algorithm is highly scalable and prolongs the lifetime of the network.
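The energy saving from reporting through cluster heads can be sketched with the standard first-order radio model, where transmission cost grows with the square of distance. All numbers and the radio constants below are illustrative assumptions, not values from the paper:

```python
# Hypothetical field: 100 nodes grouped into 5 clusters.
num_nodes, num_clusters = 100, 5
d_to_ch, d_to_sink = 10.0, 100.0  # metres; illustrative distances

def tx_energy(bits, dist, e_elec=50e-9, e_amp=100e-12):
    # First-order radio model: electronics cost + amplifier cost * d^2.
    return bits * (e_elec + e_amp * dist ** 2)

bits = 2000  # one report packet
# Flat network: every detecting node transmits straight to the sink.
flat = num_nodes * tx_energy(bits, d_to_sink)
# Clustered: nodes report to their nearby CH, and each CH forwards one
# aggregated packet to the sink, as described in the abstract.
clustered = (num_nodes * tx_energy(bits, d_to_ch)
             + num_clusters * tx_energy(bits, d_to_sink))
print(flat, clustered)  # clustered reporting is markedly cheaper
```

Because the amplifier term dominates over long distances, replacing many long-range transmissions with short hops plus a few aggregated CH-to-sink transmissions is what reduces energy consumption and bandwidth.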

    Machine Learning Approaches for Flow-Based Intrusion Detection Systems

    Get PDF
    In cybersecurity, machine/deep learning approaches can predict and detect threats before they result in major security incidents. The design and performance of an effective machine learning (ML) based Intrusion Detection System (IDS) depend upon the selected attributes and the classifier. This project considers multi-class classification for the Aegean Wi-Fi Intrusion Dataset (AWID), whose classes represent 17 types of IEEE 802.11 MAC Layer attacks. The proposed work extracts four attribute sets of 32, 10, 7, and 5 attributes, respectively. The classifiers achieve high accuracy with minimal false positive rates, and the presented work outperforms previous related work in terms of the number of classes, attributes, and accuracy. The proposed work achieves a maximum accuracy of 99.64% for Random Forest with the supplied test set and 99.99% using the 10-fold cross-validation approach for Random Forest and J48.
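The 10-fold cross-validation protocol with a Random Forest can be sketched with scikit-learn. Since AWID is not bundled here, a synthetic multi-class dataset with 5 attributes stands in for the real data (the real task has 17 attack classes):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for AWID: 5 informative attributes, several classes.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=5,
                           n_redundant=0, n_classes=4, n_clusters_per_class=1,
                           random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
# 10-fold cross-validation, mirroring the evaluation protocol above.
scores = cross_val_score(clf, X, y, cv=10)
print(scores.mean())
```

The same protocol applies unchanged once the real AWID attribute sets (32, 10, 7, or 5 attributes) are substituted for the synthetic matrix.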

    Towards Efficient Features Dimensionality Reduction for Network Intrusion Detection on Highly Imbalanced Traffic

    Get PDF
    The performance of an IDS is significantly improved when its features are more discriminative and representative. This research effort reduces the CICIDS2017 dataset's feature dimensions from 81 to 10 while maintaining a high accuracy of 99.6% in multi-class and binary classification. Furthermore, we propose a Multi-Class Combined performance metric, CombinedMc, with respect to class distribution, which compares various multi-class and binary classification systems by incorporating FAR, DR, Accuracy, and class-distribution parameters. In addition, we developed a uniform-distribution-based balancing approach to handle the imbalanced distribution of the minority class instances in the CICIDS2017 network intrusion dataset.
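The idea of a class-distribution-weighted combination of DR and FAR can be sketched from a confusion matrix. This is a hypothetical illustration of the idea only; the exact CombinedMc formula is defined in the paper:

```python
import numpy as np

def combined_metric(conf, weights=None):
    """Class-distribution-weighted combination of DR and (1 - FAR).

    conf: confusion matrix (rows = true class, cols = predicted class).
    A hypothetical sketch of the idea, not the paper's exact CombinedMc.
    """
    conf = np.asarray(conf, dtype=float)
    support = conf.sum(axis=1)
    if weights is None:
        weights = support / support.sum()  # weight by class distribution
    total = conf.sum()
    score = 0.0
    for c in range(conf.shape[0]):
        tp = conf[c, c]
        fn = support[c] - tp
        fp = conf[:, c].sum() - tp
        tn = total - tp - fn - fp
        dr = tp / (tp + fn) if tp + fn else 0.0   # detection rate
        far = fp / (fp + tn) if fp + tn else 0.0  # false alarm rate
        score += weights[c] * 0.5 * (dr + (1.0 - far))
    return score

# A perfect classifier on an imbalanced 2-class problem scores 1.0;
# errors on the minority class lower the score despite high accuracy.
print(combined_metric([[90, 0], [0, 10]]))
print(combined_metric([[90, 0], [10, 0]]))
```

Weighting per-class DR and FAR by the class distribution is what lets such a metric penalize classifiers that ignore minority attack classes while still posting high overall accuracy.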

    Features Dimensionality Reduction Approaches for Machine Learning Based Network Intrusion Detection

    Get PDF
    The security of networked systems has become a critical universal issue that affects individuals, enterprises, and governments. The rate of attacks against networked systems has increased dramatically, and the tactics used by attackers continue to evolve. Intrusion detection is one of the solutions to these attacks. A common and effective approach for designing Intrusion Detection Systems (IDS) is machine learning. The performance of an IDS is significantly improved when its features are more discriminative and representative. This study uses two feature dimensionality reduction approaches: (i) Auto-Encoder (AE), an instance of deep learning, and (ii) Principal Component Analysis (PCA). The resulting low-dimensional features from both techniques are then used to build various classifiers, such as Random Forest (RF), Bayesian Network, Linear Discriminant Analysis (LDA), and Quadratic Discriminant Analysis (QDA), for designing an IDS. The experimental findings with low-dimensional features in binary and multi-class classification show better performance in terms of Detection Rate (DR), F-Measure, False Alarm Rate (FAR), and Accuracy. This research effort reduces the CICIDS2017 dataset's feature dimensions from 81 to 10 while maintaining a high accuracy of 99.6% in multi-class and binary classification. Furthermore, we propose a Multi-Class Combined performance metric, CombinedMc, with respect to class distribution, which compares various multi-class and binary classification systems by incorporating FAR, DR, Accuracy, and class-distribution parameters. In addition, we developed a uniform-distribution-based balancing approach to handle the imbalanced distribution of the minority class instances in the CICIDS2017 network intrusion dataset. http://dx.doi.org/10.3390/electronics803032
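The 81-to-10 reduction pipeline can be sketched with the PCA branch of the study: reduce to 10 components, then train a classifier on the low-dimensional features. A synthetic 81-feature dataset stands in for CICIDS2017, which is not bundled here:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for CICIDS2017 (81 features).
X, y = make_classification(n_samples=2000, n_features=81, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Reduce 81 features to 10 principal components, as in the study;
# PCA is fitted on the training split only to avoid leakage.
pca = PCA(n_components=10).fit(X_tr)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(pca.transform(X_tr), y_tr)
acc = accuracy_score(y_te, clf.predict(pca.transform(X_te)))
print(acc)
```

The AE branch follows the same pattern, with the 10-unit bottleneck activations of a trained autoencoder replacing `pca.transform`.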

    Machine Learning Based Feature Reduction for Network Intrusion Detection

    Get PDF
    The security of networked systems has become a critical universal issue. The rate of attacks against networked systems has increased dramatically, and the tactics used by attackers continue to evolve. Intrusion detection is one of the solutions to these attacks. A common and effective approach for designing Intrusion Detection Systems (IDS) is machine learning. The performance of an IDS is significantly improved when its features are more discriminative and representative. This study uses two feature dimensionality reduction approaches: (i) Auto-Encoder (AE), an instance of deep learning, and (ii) Principal Component Analysis (PCA). The resulting low-dimensional features from both techniques are then used to build various classifiers, such as Random Forest (RF), Bayesian Network, Linear Discriminant Analysis (LDA), and Quadratic Discriminant Analysis (QDA), for designing an IDS. The experimental findings with low-dimensional features in binary and multi-class classification show better performance in terms of Detection Rate (DR), F-Measure, False Alarm Rate (FAR), and Accuracy. This research effort reduces the CICIDS2017 dataset's feature dimensions from 81 to 10 while maintaining a high accuracy of 99.6%. Furthermore, we propose a Multi-Class Combined performance metric, CombinedMc, with respect to class distribution, which compares various multi-class and binary classification systems by incorporating FAR, DR, Accuracy, and class-distribution parameters. In addition, we developed a uniform-distribution-based balancing approach to handle the imbalanced distribution of the minority class instances in the CICIDS2017 network intrusion dataset.

    An Enhanced Design of Sparse Autoencoder for Latent Features Extraction Based on Trigonometric Simplexes for Network Intrusion Detection Systems

    No full text
    Despite successful contributions to network intrusion detection, where machine learning algorithms and deep networks learn the boundaries between normal traffic and network attacks, detecting varied attacks with high performance remains challenging. In this paper, we propose a novel mathematical model for the further development of robust, reliable, and efficient software for practical intrusion detection applications. In the present work, we are concerned with tuning optimal hyperparameters for high-performance sparse autoencoders that optimize features and classify normal and abnormal traffic patterns. The proposed framework allows the parameters of the back-propagation learning algorithm to be tuned with respect to the performance and architecture of the sparse autoencoder through a sequence of trigonometric simplex designs. These hyperparameters are the number of nodes in the hidden layer, the learning rate of the hidden layer, and the learning rate of the output layer. Because different layers of the autoencoder are characterized by different learning rates in the proposed framework, better results are expected in extracting features and adapting to various levels of the learning hierarchy. The idea is that each layer's learning rate is a dimension in a multidimensional space; hence, a vector of adaptive learning rates is maintained across the layers of the network to reduce the time the network needs to learn the mapping towards a combination of enhanced features and the optimal synaptic weights in its layers for a given problem. The suggested framework is tested on CICIDS2017, a reliable intrusion detection dataset that covers the common, up-to-date intrusions and cyber-attacks.
Experimental results demonstrate that the proposed architecture for intrusion detection yields superior performance compared to recently published algorithms in terms of classification accuracy and F-measure.

    Incorporating Derivative-Free Convexity with Trigonometric Simplex Designs for Learning-Rate Estimation of Stochastic Gradient-Descent Method

    No full text
    This paper proposes a novel mathematical theory of adaptation to the convexity of loss functions, based on the definition of the condense-discrete convexity (CDC) method. The developed theory is of immense value in stochastic settings and is applied to the well-known stochastic gradient-descent (SGD) method. Changing the definition of convexity affects the exploration of the learning-rate schedule used in the SGD method, and therefore the convergence rate of the solution used to measure the effectiveness of deep networks. In our methodology, the convexity method CDC and the learning rate are directly related through the difference operator. In addition, we incorporate the developed theory of adaptation with trigonometric simplex (TS) designs to explore different learning-rate schedules for the weight and bias parameters within the network. Experiments confirm that using the new definition of convexity to explore learning-rate schedules makes the optimization more effective in practice and has a strong effect on the training of deep neural networks.
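The abstract's link between convexity and the difference operator can be illustrated with the standard discrete-convexity test: a function is discretely convex where its second difference is non-negative. This is the textbook definition only; the paper's condense-discrete convexity (CDC) refines it, and that refinement is not reproduced here:

```python
def second_difference(f, x, h=1):
    # Discrete analogue of the second derivative via the difference operator:
    # D2 f(x) = f(x + h) - 2 f(x) + f(x - h).
    return f(x + h) - 2 * f(x) + f(x - h)

def is_discretely_convex(f, points, h=1):
    """True if every second difference of f on `points` is non-negative.

    Standard discrete convexity; the paper's CDC definition may differ.
    """
    return all(second_difference(f, x, h) >= 0 for x in points)

print(is_discretely_convex(lambda x: x * x, range(-5, 6)))   # True
print(is_discretely_convex(lambda x: -x * x, range(-5, 6)))  # False
```

In the paper's setting, such difference-operator information about the loss along the training trajectory is what drives the choice of learning-rate schedule.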