17 research outputs found

    Dispatching-Rule Variants Algorithms for Used Spaces of Storage Supports

    No full text
    This paper addresses the fair distribution of several files of different sizes across several storage supports. Given several storage supports and different files, we search for a method that produces an appropriate backup: one that guarantees a fair distribution of the big data (files), where fairness concerns how the used space is spread across the storage supports. The problem is to find a fair method that stores all files on the available storage supports, where each file is characterized by its size. We propose several fairness methods that seek to minimize the gap between the used spaces of all storage supports. Several algorithms are developed to solve the proposed problem, and an experimental study shows the performance of these algorithms.
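    Below is a minimal sketch of one plausible dispatching rule for this problem: a largest-first greedy assignment that always places the next file on the least-loaded storage support. The file sizes, support count, and function name are illustrative assumptions, not taken from the paper.

    import heapq

    def dispatch_files(file_sizes, num_supports):
        """Assign files to supports, greedily minimizing the used-space gap."""
        # Place the largest files first; this tends to tighten the final gap.
        ordered = sorted(file_sizes, reverse=True)
        # Min-heap of (used_space, support_index).
        supports = [(0, i) for i in range(num_supports)]
        heapq.heapify(supports)
        assignment = [[] for _ in range(num_supports)]
        for size in ordered:
            used, idx = heapq.heappop(supports)   # least-loaded support
            assignment[idx].append(size)
            heapq.heappush(supports, (used + size, idx))
        loads = [sum(files) for files in assignment]
        return assignment, max(loads) - min(loads)  # the gap to be minimized

    # Illustrative data: six files, three storage supports.
    assignment, gap = dispatch_files([70, 50, 40, 30, 20, 10], num_supports=3)
    print(assignment, gap)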

    Ensemble Model for Network Intrusion Detection System Based on Bagging Using J48

    No full text
    Technology is advancing daily with the growth of the web, artificial intelligence (AI), and the big data generated by machines across industries. All of this opens a gateway for cybercrime, making network security a challenging task, and the development of network intrusion detection (NID) systems faces many challenges. Computer systems are becoming increasingly vulnerable to attack as a result of the rise in cybercrime, the availability of vast amounts of data on the internet, and increased network connectivity; creating a system with no vulnerabilities is not theoretically possible. Previous studies have developed various approaches to this issue, each with its strengths and weaknesses, yet there remains a need for lower variance and improved accuracy. To this end, this study proposes an ensemble model based on Bagging with the J48 decision tree. The proposed model outperforms the other employed models in terms of accuracy. Outcomes are assessed via accuracy, recall, precision, and F-measure; the overall average accuracy achieved by the proposed model is 83.73%.
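    As an illustration of the bagging-with-decision-tree idea, here is a hedged scikit-learn sketch. J48 is Weka's C4.5 implementation; an entropy-criterion scikit-learn tree is only a rough stand-in for it, and the synthetic dataset is an assumption in place of a real intrusion-detection dataset.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.metrics import accuracy_score, precision_recall_fscore_support
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Stand-in for an intrusion-detection dataset (features -> attack/normal).
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Bagging: train many trees on bootstrap resamples, then majority-vote.
    model = BaggingClassifier(
        DecisionTreeClassifier(criterion="entropy"),  # C4.5-like base learner
        n_estimators=25,
        random_state=0,
    )
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)

    # The same metrics the study reports: accuracy, precision, recall, F-measure.
    acc = accuracy_score(y_te, pred)
    prec, rec, f1, _ = precision_recall_fscore_support(y_te, pred, average="binary")
    print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")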

    An enhanced multilevel secure data dissemination approximate solution for future networks.

    No full text
    Sensitive data, such as financial, personal, or classified governmental information, must be protected throughout its life cycle. This paper studies the problem of safeguarding transmitted data based on data categorization techniques. The research uses a novel routine, a new meta-heuristic, to enhance a data-categorization-based traffic classification technique in which private data is classified into multiple confidentiality levels. As a result, two packets belonging to the same confidentiality level cannot be transmitted through the two routers simultaneously, ensuring a high level of data protection. Such a problem is NP-hard (non-deterministic polynomial-time hard); therefore, a scheduling algorithm is applied to minimize the total transmission time over the two considered routers. To measure the proposed scheme's performance, two types of distribution, uniform and binomial, were used to generate packet transmission time datasets. The experimental results show that the most efficient algorithm is the Best-Random Algorithm, recording 0.028 s with an average gap of less than 0.001 in 95.1% of cases compared to all proposed algorithms. In addition, it is compared to the best algorithm proposed in the literature, the Modified Decreasing Estimated-Transmission Time algorithm (MDETA). The results show that the Best-Random Algorithm is the best in 100% of cases, while MDETA reaches the best results in only 48%.
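    The sketch below illustrates a best-of-random heuristic in the spirit of the Best-Random Algorithm; the paper's exact procedure may differ. To respect the constraint, all packets of one confidentiality level are kept on the same router, so no two same-level packets can ever be transmitted simultaneously. The packet data and function name are illustrative assumptions.

    import random
    from collections import defaultdict

    def best_of_random(packets, iterations=1000, seed=0):
        """packets: list of (confidentiality_level, transmission_time)."""
        rng = random.Random(seed)
        # Aggregate total transmission time per confidentiality level.
        level_time = defaultdict(float)
        for level, t in packets:
            level_time[level] += t
        levels = list(level_time)
        best = float("inf")
        for _ in range(iterations):
            load = [0.0, 0.0]
            for level in levels:
                load[rng.randrange(2)] += level_time[level]  # random router
            best = min(best, max(load))  # makespan = total transmission time
        return best

    packets = [(1, 3.0), (1, 2.0), (2, 4.0), (2, 1.0), (3, 5.0), (4, 2.5)]
    print(best_of_random(packets))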

    Scheduling algorithms for data-protection based on security-classification constraints to data-dissemination

    No full text
    Communication networks have played a vital role in changing people's lives. However, the rapid advancement of digital technologies has exposed many drawbacks of current inter-networking technology. Data leakages severely threaten information privacy and security and can jeopardize individual and public life. This research investigates a private-network model that can decrease the number of data leakages. A two-router private network model is designed, in which the two routers manage the classification levels of the transmitted network packets. In addition, various algorithmic techniques, dispatching rules and a grouping method, are proposed to solve the resulting scheduling problem: schedule packets through the routers under a security-classification constraint that forbids the simultaneous transmission of two packets belonging to the same security classification level. The studied problem is NP-hard, and eight algorithms are proposed to minimize the total transmission time. A comparison between the proposed algorithms and those in the literature shows the performance of the proposed scheme through experimentation. Four classes of instances are generated; on these classes, the experimental results show that the best-performing algorithm is the best-classification groups' algorithm in 89.1% of cases, with an average gap of 0.001. In addition, on a benchmark of instances based on a real dataset, the best-classification groups' algorithm is the best in 88.6% of cases, with an average gap of less than 0.001.
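    A hedged sketch of the grouping method's flavor: packets are grouped by security classification level, and the groups are dispatched largest-first to the less-loaded of the two routers. This is an assumed reading of the best-classification groups' algorithm, not the paper's exact pseudocode.

    from collections import defaultdict

    def schedule_by_groups(packets):
        """packets: list of (classification_level, transmission_time)."""
        groups = defaultdict(float)
        for level, t in packets:
            groups[level] += t  # one group per classification level
        load = [0.0, 0.0]
        # Dispatching rule: largest group first, onto the less-loaded router.
        for total in sorted(groups.values(), reverse=True):
            load[load.index(min(load))] += total
        return max(load)  # total transmission time (makespan)

    packets = [(1, 3.0), (1, 2.0), (2, 4.0), (2, 1.0), (3, 5.0), (4, 2.5)]
    print(schedule_by_groups(packets))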
