
    Performance Improvement of AODV in Wireless Networks using Reinforcement Learning Algorithms

    This paper investigates the application of reinforcement learning (RL) techniques to enhance the performance of the Ad hoc On-Demand Distance Vector (AODV) routing protocol in mobile ad hoc networks (MANETs). MANETs are self-configuring networks of mobile nodes that communicate without a centralized infrastructure. AODV is widely used in MANETs because its reactive nature reduces overhead and conserves energy. This research explores three popular reinforcement learning algorithms, SARSA, Q-Learning, and Deep Q-Network (DQN), to optimize the AODV protocol's routing decisions. The RL agents learn optimal routing paths by interacting with the network environment, considering factors such as link quality, node mobility, and traffic load. Experiments conducted in network simulators evaluate the performance improvements achieved by the proposed RL-based enhancements. The results demonstrate significant gains in several performance metrics, including reduced end-to-end delay, increased packet delivery ratio, and improved throughput. Furthermore, the RL-based approaches adapt to dynamic network conditions, ensuring efficient routing even in highly mobile and unpredictable MANET scenarios. This study offers valuable insights into harnessing RL techniques to improve the efficiency and reliability of routing protocols in mobile ad hoc networks.
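    The abstract does not give the exact update rule; as a rough illustration of how a Q-Learning agent can score next hops, the sketch below follows the classic Q-routing pattern, in which each node maintains Q(neighbor, destination), an estimate of the cost of delivering via that neighbor. All names, costs, and parameter values here are illustrative assumptions, not taken from the paper.

```python
# Minimal Q-routing-style update (illustration only; parameters assumed).
ALPHA = 0.5   # learning rate: how strongly new experience overrides the estimate
GAMMA = 0.9   # discount factor for the downstream cost estimate

def update_q(q, neighbor, dest, link_cost, best_next):
    """One Q-learning step: Q <- Q + alpha * (target - Q),
    where target = immediate link cost + discounted best remaining cost."""
    target = link_cost + GAMMA * best_next
    q[(neighbor, dest)] += ALPHA * (target - q[(neighbor, dest)])
    return q[(neighbor, dest)]

# Node A's table: estimated cost to destination D via neighbors B and C.
q = {("B", "D"): 5.0, ("C", "D"): 3.0}
# A forwarded via B toward D over a link costing 1.0; B reports its best
# remaining cost to D as 2.0, so A revises its estimate for B downward.
new_estimate = update_q(q, "B", "D", link_cost=1.0, best_next=2.0)
```

After the update the estimate via B moves from 5.0 toward the observed target 1.0 + 0.9 * 2.0 = 2.8, landing at 3.9; repeated interactions drive the table toward the true path costs.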

    Mobilized ad-hoc networks: A reinforcement learning approach

    Research in mobile ad-hoc networks has focused on situations in which nodes have no control over their movements. We investigate an important but overlooked domain in which nodes do have control over their movements. Reinforcement learning methods can be used to control both packet routing decisions and node mobility, dramatically improving the connectivity of the network. We first motivate the problem by presenting theoretical bounds for the connectivity improvement of partially mobile networks, and then present superior empirical results under a variety of scenarios in which the mobile nodes in our ad-hoc network are embedded with adaptive routing policies and learned movement policies.

    Reinforcing Reachable Routes

    This paper studies the evaluation of routing algorithms from the perspective of reachability routing, where the goal is to determine all paths between a sender and a receiver. Reachability routing is becoming relevant with the changing dynamics of the Internet and the emergence of low-bandwidth wireless/ad-hoc networks. We make the case for reinforcement learning as the framework of choice to realize reachability routing, within the confines of the current Internet infrastructure. The setting of the reinforcement learning problem offers several advantages, including loop resolution, multi-path forwarding capability, cost-sensitive routing, and minimizing state overhead, while maintaining the incremental spirit of current backbone routing algorithms. We identify research issues in reinforcement learning applied to the reachability routing problem to achieve a fluid and robust backbone routing framework. This paper also presents the design, implementation, and evaluation of a new reachability routing algorithm that uses a model-based approach to achieve cost-sensitive multi-path forwarding; performance assessment of the algorithm in various troublesome topologies shows consistently superior performance over classical reinforcement learning algorithms. The paper is targeted toward practitioners seeking to implement a reachability routing algorithm.
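    The paper's model-based algorithm is not reproduced here; one common way to realize cost-sensitive multi-path forwarding is to turn per-next-hop cost estimates into forwarding probabilities with a Boltzmann (softmax) rule, so traffic is split across all viable paths with a bias toward cheaper ones. The hop names and costs below are assumptions for illustration.

```python
import math

def forwarding_probabilities(costs, temperature=1.0):
    """Map per-next-hop path costs to forwarding probabilities:
    lower cost -> higher probability, via exp(-cost / temperature).
    A high temperature spreads traffic more evenly across paths."""
    weights = {hop: math.exp(-c / temperature) for hop, c in costs.items()}
    total = sum(weights.values())
    return {hop: w / total for hop, w in weights.items()}

# Three candidate next hops toward some destination (assumed costs).
probs = forwarding_probabilities({"B": 1.0, "C": 2.0, "D": 2.0})
```

Every next hop keeps a nonzero share, which preserves multi-path reachability, while the cheapest hop carries the largest share of the traffic.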

    A Novel Cryptography-Based Multipath Routing Protocol for Wireless Communications

    Mobile Ad-hoc Networks (MANETs) make communication dependable and seamless in heterogeneous, dynamic, low-power, and lossy networks. The Routing Protocol for Low-Power and Lossy Networks (RPL) was designed to make routing in such networks more efficient, yet it can suffer from degraded packet transmission rates and latency under different types of traffic. RPL establishes paths between resource-constrained nodes using the standard objective functions OF0 and MRHOF; these objective functions shorten network lifetime because poor routing decisions increase the computation required to establish routes between nodes in a heterogeneous low-power lossy network (LLN). Conventional MANETs are also subject to a range of security issues, and addressing them requires striking a good balance among speed, memory, and storage. This article presents a security algorithm for MANETs that employ the RPL routing protocol. The constructed network uses optimization-based deep reinforcement learning (ClonQlearn) for route creation, and an improved network security algorithm is applied once a route has been established. The suggested method relies on a lightweight scheme that serves for both encryption and decryption, using elliptic-curve cryptography (ClonQlearn+ECC) with random key generation driven by the reinforcement learning agent (ClonQlearn). The simulation study showed that the proposed ClonQlearn+ECC method improved network performance over the status quo, demonstrating secure data transmission along with higher network speed.
The proposed ClonQlearn+ECC improved packet delivery ratio by 8-10%, throughput by 7-13%, end-to-end delay by 5-10%, and power-usage variation by 3-7%.
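    The ClonQlearn+ECC construction itself is not given in the abstract; as background on the elliptic-curve half of the scheme, the sketch below shows a Diffie-Hellman-style key agreement built from scalar multiplication on a curve, which is the standard way two nodes derive a shared secret with ECC. The tiny curve y^2 = x^3 + 2x + 2 (mod 17) and the secret values are textbook-style illustration parameters, not secure choices and not the paper's method.

```python
# Toy elliptic-curve Diffie-Hellman over y^2 = x^3 + 2x + 2 (mod 17).
# Illustration only: real deployments use standardized curves and large primes.
P_MOD = 17
A = 2  # curve coefficient a in y^2 = x^3 + a*x + b

def ec_add(p, q):
    """Add two curve points; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # p + (-p) = infinity
    if p == q:  # point doubling: slope = (3x^2 + a) / (2y)
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:       # point addition: slope = (y2 - y1) / (x2 - x1)
        m = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (m * m - x1 - x2) % P_MOD
    y3 = (m * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k, point):
    """Compute k * point by double-and-add."""
    result, addend = None, point
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result

G = (5, 1)  # generator of the toy group
alice_secret, bob_secret = 3, 9  # assumed private scalars
alice_pub = scalar_mult(alice_secret, G)
bob_pub = scalar_mult(bob_secret, G)
# Each side multiplies the other's public point by its own secret;
# both arrive at the same shared point a*b*G.
shared_a = scalar_mult(alice_secret, bob_pub)
shared_b = scalar_mult(bob_secret, alice_pub)
```

In a scheme like the one described, an RL agent could periodically drive regeneration of such key material; that coupling is the paper's contribution and is not modeled here.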

    Quality of Service Issues for Reinforcement Learning Based Routing Algorithm for Ad-Hoc Networks

    Mobile ad-hoc networks are dynamic, decentralized, and autonomous networks, and many routing algorithms have been proposed for them. Modeling Quality of Service (QoS) requirements on such algorithms, which traditionally have certain limitations, is an important problem. To model this scenario we consider SAMPLE, a reinforcement learning routing algorithm that promises to deal effectively with congestion under high traffic load. As it is natural for nodes in ad-hoc networks to move in groups, we consider various group mobility models; the Pursue Mobility Model, with its superior mobility metrics, exhibits better performance. At the data link layer we consider IEEE 802.11e, a MAC layer with provisions to support QoS. Because mobile ad-hoc networks are constrained by resources such as energy and bandwidth, nodes must be induced to cooperate despite their incentive to behave selfishly. Thus, in this paper we propose a cooperation scheme with a moderately punishing algorithm based on game theory. The proposed algorithm, in synchronization with SAMPLE, yields better results on IEEE 802.11e.
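    The abstract does not detail the punishment algorithm; the sketch below illustrates the general flavor of "moderate" punishment with a generous tit-for-tat forwarding rule, where a node retaliates against a neighbor that recently dropped its packets but forgives with some probability so cooperation can recover. The forgiveness parameter and names are assumptions.

```python
import random

FORGIVENESS = 0.3  # assumed probability of forwarding despite a recent drop

def decide_forward(neighbor_cooperated_last, rng):
    """Generous tit-for-tat: always reciprocate cooperation; after a
    defection, still forward with probability FORGIVENESS."""
    if neighbor_cooperated_last:
        return True
    return rng.random() < FORGIVENESS

rng = random.Random(42)
# A cooperative neighbor is always served.
always = all(decide_forward(True, rng) for _ in range(100))
# A defecting neighbor is served only occasionally (about 30% of the time).
occasional = sum(decide_forward(False, rng) for _ in range(10000))
```

Pure tit-for-tat (FORGIVENESS = 0) punishes maximally but lets a single loss lock two nodes into mutual defection; a moderate forgiveness probability trades some short-term throughput for long-term cooperation.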

    Reinforcement Learning for Routing in Cognitive Radio Ad Hoc Networks

    Cognitive radio (CR) enables unlicensed users (or secondary users, SUs) to sense for and exploit underutilized licensed spectrum owned by the licensed users (or primary users, PUs). Reinforcement learning (RL) is an artificial intelligence approach that enables a node to observe, learn, and make appropriate decisions on action selection in order to maximize network performance. Routing enables a source node to search for a least-cost route to its destination node. While there have been increasing efforts to enhance the traditional RL approach for routing in wireless networks, this research area remains largely unexplored in the domain of routing in CR networks. This paper applies RL to routing and investigates through simulation the effects of various features of RL, namely the reward function, exploitation and exploration, and the learning rate. New approaches and recommendations are proposed to enhance these features in order to improve the network performance brought about by RL in routing. Simulation results show that the reward function, exploitation and exploration, and learning rate must be well regulated, and that the new approaches proposed in this paper improve SUs' network performance without significantly jeopardizing PUs' network performance, specifically SUs' interference to PUs.
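    The abstract does not specify the exploitation-exploration mechanism; a common baseline is the epsilon-greedy rule sketched below, where epsilon controls how often a secondary user tries a random route instead of the one with the best current Q-value. The route names and Q-values are illustrative assumptions.

```python
import random

def epsilon_greedy(q_values, epsilon, rng):
    """With probability epsilon pick a random route (exploration);
    otherwise pick the route with the highest Q-value (exploitation)."""
    if rng.random() < epsilon:
        return rng.choice(sorted(q_values))  # sorted() gives a stable list
    return max(q_values, key=q_values.get)

rng = random.Random(0)
q = {"route_via_ch1": 0.2, "route_via_ch2": 0.8}
# epsilon = 0 exploits only: the best-valued route is always chosen.
greedy_pick = epsilon_greedy(q, epsilon=0.0, rng=rng)
# epsilon = 1 explores only: over many trials every route gets sampled.
explored = {epsilon_greedy(q, epsilon=1.0, rng=rng) for _ in range(50)}
```

In practice epsilon is often decayed over time, so SUs explore aggressively while their Q-values are unreliable and settle into exploitation as estimates converge, which is one way the regulation the paper calls for can be realized.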