
    Reinforced Intrusion Detection Using Pursuit Reinforcement Competitive Learning

    Today, information technology is growing rapidly, and information can be obtained much more easily. This raises new problems, one of which is unauthorized access to systems. We need a reliable network security system that is resistant to a variety of attacks. Therefore, an Intrusion Detection System (IDS) is required to overcome the problem of intrusions. Much research has been done on intrusion detection using classification methods. Classification methods have high precision, but it takes effort to determine a classification model appropriate to the problem. In this paper, we propose a new reinforced approach to detect intrusions with on-line clustering using Reinforcement Learning. Reinforcement Learning is a new paradigm in machine learning which involves interaction with the environment; it works with a reward and punishment mechanism to reach a solution. We apply Reinforcement Learning to the intrusion detection problem, considering competitive learning using Pursuit Reinforcement Competitive Learning (PRCL). Based on the experimental results, PRCL can detect intrusions in real time with high accuracy (99.816% for DoS, 95.015% for Probe, 94.731% for R2L and 99.373% for U2R) and high speed (44 ms). The proposed approach can help network administrators to detect intrusions, so that the computer network security system becomes more reliable.
    Keywords: Intrusion Detection System, On-Line Clustering, Reinforcement Learning, Unsupervised Learning
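    The reward-driven competitive update the abstract describes can be sketched as winner-take-all clustering where the winning prototype is pulled toward the input on reward and pushed away on punishment. This is a minimal illustrative sketch, not the authors' implementation: the prototype count, learning rate, toy data, and always-positive reward are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def prcl_step(prototypes, x, reward, lr=0.1):
    """One PRCL-style update: the winner moves toward the input on
    reward (+) and away from it on punishment (-)."""
    winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    sign = 1.0 if reward > 0 else -1.0
    prototypes[winner] += sign * lr * (x - prototypes[winner])
    return winner

# toy stream of 2-D "connection feature" vectors around two centres
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(1.0, 0.1, (50, 2))])
protos = rng.normal(0.5, 0.5, (2, 2))
for x in data:
    prcl_step(protos, x, reward=+1)  # assume positive reward each step
```

    Because the update touches only the winning prototype per sample, the method runs online over a stream, which matches the real-time claim in the abstract.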

    Dynamic distributed clustering in wireless sensor networks via Voronoi tessellation control

    This paper presents two dynamic and distributed clustering algorithms for Wireless Sensor Networks (WSNs). Clustering approaches are used in WSNs to improve the network lifetime and scalability by balancing the workload among the clusters. Each cluster is managed by a cluster head (CH) node. The first algorithm requires the CH nodes to be mobile: by dynamically varying the CH node positions, the algorithm is proved to converge to a specific partition of the mission area, the generalised Voronoi tessellation, in which the loads of the CH nodes are balanced. Conversely, if the CH nodes are fixed, a weighted Voronoi clustering approach is proposed with the same load-balancing objective: a reinforcement learning approach is used to dynamically vary the mission space partition by controlling the weights of the Voronoi regions. Numerical simulations are provided to validate the approaches.
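    The fixed-CH case can be illustrated with an additively weighted Voronoi (power-diagram) assignment: each node joins the CH minimising the squared distance minus that CH's weight, so raising a weight enlarges that CH's region and hence its load. The geometry, weight values, and node layout below are assumptions for illustration, not the paper's controller.

```python
import numpy as np

def weighted_voronoi_assign(nodes, ch_positions, weights):
    """Assign each sensor node to the CH minimising the weighted
    distance ||x - c_i||^2 - w_i (additively weighted Voronoi)."""
    d2 = ((nodes[:, None, :] - ch_positions[None, :, :]) ** 2).sum(-1)
    return np.argmin(d2 - weights[None, :], axis=1)

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 1, (200, 2))
chs = np.array([[0.25, 0.5], [0.75, 0.5]])

loads = np.bincount(weighted_voronoi_assign(nodes, chs, np.zeros(2)),
                    minlength=2)
# increasing CH 0's weight grows its Voronoi region and its load
loads2 = np.bincount(weighted_voronoi_assign(nodes, chs,
                                             np.array([0.5, 0.0])),
                     minlength=2)
```

    A learning controller like the paper's would adjust the weight vector step by step until the per-CH loads equalise; here the weights are set by hand just to show the effect.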

    Scaling Ant Colony Optimization with Hierarchical Reinforcement Learning Partitioning

    This paper merges hierarchical reinforcement learning (HRL) with ant colony optimization (ACO) to produce an HRL ACO algorithm capable of generating solutions for large domains. This paper describes two specific implementations of the new algorithm: the first a modification to Dietterich’s MAXQ-Q HRL algorithm, the second a hierarchical ant colony system algorithm. These implementations generate faster results, with little to no significant change in the quality of solutions for the tested problem domains. The application of ACO to the MAXQ-Q algorithm replaces the reinforcement learning method, Q-learning, with the modified ant colony optimization method, Ant-Q. This algorithm, MAXQ-AntQ, converges to solutions not significantly different from MAXQ-Q in 88% of the time. This paper then transfers HRL techniques to the ACO domain and the traveling salesman problem (TSP). To apply HRL to ACO, a hierarchy must be created for the TSP. A data clustering algorithm creates these subtasks, with an ACO algorithm to solve the individual and complete problems. This paper tests two clustering algorithms, k-means and G-means. The results demonstrate that the algorithm with data clustering produces solutions 20 times faster, with a 5-10% decrease in solution quality due to the effects of clustering.
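    The decomposition step can be sketched as follows: cluster the cities into subtasks, then solve each cluster as an independent subtour. For brevity this sketch uses plain k-means and a greedy nearest-neighbour subtour solver in place of G-means and ACO; the city count, cluster count, and solver are all illustrative assumptions.

```python
import numpy as np

def kmeans_partition(cities, k, iters=25, seed=0):
    """Partition TSP cities into k subtasks (stand-in for the paper's
    k-means/G-means hierarchy-building step)."""
    rng = np.random.default_rng(seed)
    centres = cities[rng.choice(len(cities), k, replace=False)].copy()
    for _ in range(iters):
        labels = ((cities[:, None] - centres[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = cities[labels == j].mean(0)
    return labels

def nearest_neighbour_tour(pts):
    """Greedy subtour inside one cluster (stand-in for per-cluster ACO)."""
    unvisited = list(range(1, len(pts)))
    tour = [0]
    while unvisited:
        last = pts[tour[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(pts[i] - last))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

rng = np.random.default_rng(2)
cities = rng.uniform(0, 1, (60, 2))
labels = kmeans_partition(cities, 3)
subtours = [nearest_neighbour_tour(cities[labels == j])
            for j in range(3) if (labels == j).any()]
```

    The speedup reported in the abstract comes from exactly this shrinkage: each subproblem is a fraction of the full instance, at the cost of tour quality lost at cluster boundaries.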

    Content-Aware User Clustering and Caching in Wireless Small Cell Networks

    In this paper, the problem of content-aware user clustering and content caching in wireless small cell networks is studied. In particular, a service delay minimization problem is formulated, aiming at optimally caching contents at the small cell base stations (SCBSs). To solve the optimization problem, we decouple it into two interrelated subproblems. First, a clustering algorithm is proposed grouping users with similar content popularity to associate similar users to the same SCBS, when possible. Second, a reinforcement learning algorithm is proposed to enable each SCBS to learn the popularity distribution of contents requested by its group of users and optimize its caching strategy accordingly. Simulation results show that by correlating the different popularity patterns of different users, the proposed scheme is able to reduce the service delay by 42% and 27%, while achieving a higher offloading gain of up to 280% and 90%, respectively, compared to random caching and unclustered learning schemes.
    Comment: In the IEEE 11th International Symposium on Wireless Communication Systems (ISWCS) 201
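    The second subproblem can be illustrated with a simple frequency-learning cache: each SCBS counts requests from its user cluster and keeps the currently most popular contents cached. This is a stand-in for the paper's reinforcement learning rule; the catalogue size, cache capacity, and Zipf-like request model are assumptions.

```python
import random
from collections import Counter

class SCBSCache:
    """A small cell base station that learns content popularity from
    its cluster's requests and caches the top-`capacity` contents."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = Counter()
        self.cache = set()

    def request(self, content):
        self.counts[content] += 1
        hit = content in self.cache
        # re-derive the cache as the most-requested contents so far
        self.cache = {c for c, _ in self.counts.most_common(self.capacity)}
        return hit

random.seed(3)
catalogue = list(range(50))
# Zipf-like popularity: low-indexed contents are requested far more often
weights = [1 / (i + 1) for i in catalogue]
scbs = SCBSCache(capacity=5)
hits = sum(scbs.request(random.choices(catalogue, weights)[0])
           for _ in range(2000))
hit_rate = hits / 2000
```

    As the counts converge to the cluster's true popularity distribution, the hit rate approaches the total probability mass of the top-capacity contents, which is why grouping users with similar tastes per SCBS (the first subproblem) raises the achievable caching gain.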