
    Performance Improvement of AODV in Wireless Networks using Reinforcement Learning Algorithms

    This paper investigates the application of reinforcement learning (RL) techniques to enhance the performance of the Ad hoc On-Demand Distance Vector (AODV) routing protocol in mobile ad hoc networks (MANETs). MANETs are self-configuring networks consisting of mobile nodes that communicate without the need for a centralized infrastructure. AODV is a widely used routing protocol in MANETs due to its reactive nature, which reduces overhead and conserves energy. This research explores three popular RL algorithms, SARSA, Q-Learning, and Deep Q-Network (DQN), to optimize the AODV protocol's routing decisions. The RL agents are trained to learn optimal routing paths by interacting with the network environment, considering factors such as link quality, node mobility, and traffic load. Experiments are conducted using network simulators to evaluate the performance improvements achieved by the proposed RL-based enhancements. The results demonstrate significant gains across several performance metrics, including reduced end-to-end delay, increased packet delivery ratio, and improved throughput. Furthermore, the RL-based approaches adapt to dynamic network conditions, ensuring efficient routing even in highly mobile and unpredictable MANET scenarios. This study offers valuable insights into harnessing RL techniques to improve the efficiency and reliability of routing protocols in mobile ad hoc networks.
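
    To make the Q-Learning variant concrete, the sketch below applies tabular Q-learning to next-hop selection, with the state as the current node and the action as the neighbour chosen to forward to. The reward shaping and all identifiers are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: tabular Q-learning for next-hop selection in an AODV-style
# router. State = current node, action = neighbour to forward to.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = defaultdict(float)  # Q[(node, next_hop)] -> estimated routing value

def choose_next_hop(node, neighbours):
    """Epsilon-greedy choice among the node's current neighbours."""
    if random.random() < EPSILON:
        return random.choice(neighbours)
    return max(neighbours, key=lambda n: Q[(node, n)])

def q_update(node, next_hop, reward, next_hop_neighbours):
    """Standard Q-learning backup after observing one forwarding step.
    The reward could combine link quality, delay, and congestion, e.g.
    reward = -delay - beta * queue_length (an assumed shaping)."""
    best_next = max((Q[(next_hop, n)] for n in next_hop_neighbours), default=0.0)
    Q[(node, next_hop)] += ALPHA * (reward + GAMMA * best_next - Q[(node, next_hop)])
```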

    Design an Improved Trust-based Quality of Service Aware Routing in Cognitive Mobile Ad-Hoc Network

    Mobile ad hoc networks (MANETs) are wireless networks that can be configured at will. They have no infrastructure or centralized control, so they are suitable only for provisional communications. In a network with dynamic topology and constrained resources, ensuring QoS and security is challenging, and the dynamic nature of MANETs makes them more susceptible to attacks. Conventional security measures such as cryptographic techniques demand significant memory, processing speed, and transmission bandwidth, which MANET nodes often lack; consequently, these methods are unsuitable for identifying malicious behaviour or self-centered nodes. Instead, malicious, selfish, or malfunctioning nodes can be identified with a trust method, which calculates how much trust exists between nodes. This paper proposes an improved trust-based QoS-aware routing protocol (I-TQAR) to compute trust in MANETs. Three important performance metrics are considered for result validation: delay, throughput, and packet delivery ratio (PDR). I-TQAR offers significantly improved performance in all areas compared to the existing TQR and TQOR protocols.
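
    As a rough illustration of how such a trust metric can be computed, the sketch below blends first-hand observation (a neighbour's packet-forwarding ratio) with neighbours' recommendations. The weights, the forwarding-ratio metric, and the threshold are assumptions standing in for whatever exact formulation I-TQAR uses.

```python
# Illustrative direct/indirect trust combination, as commonly used in
# trust-based MANET routing. Not the paper's exact I-TQAR model.

def direct_trust(forwarded, received):
    """First-hand trust: fraction of packets a neighbour actually
    forwarded out of those handed to it (0.5 = neutral prior)."""
    return forwarded / received if received else 0.5

def combined_trust(direct, recommendations, w_direct=0.7):
    """Blend own observation with neighbours' recommendations."""
    indirect = sum(recommendations) / len(recommendations) if recommendations else 0.5
    return w_direct * direct + (1 - w_direct) * indirect

# A node would then exclude neighbours whose trust falls below some
# threshold (say 0.4) when computing QoS-aware routes.
trust = combined_trust(direct_trust(forwarded=80, received=100), [0.9, 0.6])
print(round(trust, 3))  # 0.7*0.80 + 0.3*0.75 = 0.785
```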

    Developing Intelligent Routing Algorithm over SDN: Reusable Reinforcement Learning Approach

    Traffic routing is vital for the proper functioning of the Internet. As users and network traffic increase, researchers try to develop adaptive and intelligent routing algorithms that can fulfill various QoS requirements. Reinforcement Learning (RL) based routing algorithms have shown better performance than traditional approaches. We developed a QoS-aware, reusable RL routing algorithm, RLSR-Routing, over SDN. During the learning process, our algorithm ensures loop-free path exploration. While finding the path for one traffic demand (a source-destination pair with a certain amount of traffic), RLSR-Routing learns the overall network QoS status, which can be used to speed up convergence when finding paths for other traffic demands. By adapting Segment Routing, our algorithm achieves flow-based source packet routing and reduces the communication required between the SDN controller and the data plane. Our algorithm shows better performance in terms of load balancing than traditional approaches. It also converges faster than the non-reusable RL approach when finding paths for multiple traffic demands.
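
    The sketch below shows one simple way to guarantee loop-free exploration during learning, in the spirit of the abstract: a node already on the partial path is never revisited. The graph representation, Q-value store, and epsilon value are illustrative assumptions, not RLSR-Routing's actual design.

```python
# Loop-free epsilon-greedy path exploration sketch.
import random

def explore_path(graph, src, dst, q_values, epsilon=0.2):
    """graph: dict node -> list of neighbours;
    q_values: dict (node, neighbour) -> learned value.
    Returns a loop-free path from src to dst, or None on a dead end."""
    path, visited, cur = [src], {src}, src
    while cur != dst:
        candidates = [n for n in graph[cur] if n not in visited]  # loop-freedom
        if not candidates:
            return None  # dead end; caller can retry or penalise this prefix
        if random.random() < epsilon:
            nxt = random.choice(candidates)  # explore
        else:
            nxt = max(candidates, key=lambda n: q_values.get((cur, n), 0.0))  # exploit
        path.append(nxt)
        visited.add(nxt)
        cur = nxt
    return path

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(explore_path(graph, "A", "D", q_values={}))  # e.g. ['A', 'B', 'D']
```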

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.

    Approximating optimal Broadcast in Wireless Mesh Networks with Machine Learning

    With the growth of IoT, efficient broadcast is required for many applications, yet current protocols use primitive mechanisms based on heuristics. Multi-agent reinforcement learning is applied to approximate optimal broadcast in Wireless Mesh Networks. One of the proposed fully distributed algorithms, using Bayesian Neural Networks, outperforms MORE multicast and BATMAN, improving airtime by up to 20% and end-to-end delay by up to 30%, and satisfying timeout constraints in over 97% of cases.
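
    The sketch below illustrates only the shape of the per-node decision such a learned policy replaces: each agent weighs the estimated coverage benefit of rebroadcasting against its airtime cost. The Gaussian Thompson-style sample here is a crude stand-in, under stated assumptions, for the paper's Bayesian Neural Network, and all parameters are invented for illustration.

```python
# Per-agent rebroadcast decision sketch: rebroadcast only if the sampled
# coverage benefit for still-uncovered neighbours beats the airtime cost.
import random

def should_rebroadcast(neighbours, already_covered, mu=0.5, sigma=0.3, cost=1.0):
    uncovered = [n for n in neighbours if n not in already_covered]
    # Thompson-style sample from an assumed Gaussian posterior over the
    # per-neighbour benefit of this node's transmission.
    benefit = sum(random.gauss(mu, sigma) for _ in uncovered)
    return benefit > cost

print(should_rebroadcast(["n1", "n2", "n3", "n4"], already_covered={"n1"}))
```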

    Recent Advances in Cellular D2D Communications

    Device-to-device (D2D) communication has attracted a great deal of attention from researchers in recent years. It is a promising technique for offloading local traffic from cellular base stations by allowing local devices in physical proximity to communicate directly with each other. Furthermore, through relaying, D2D is also a promising approach to enhancing service coverage at cell edges or in black spots. However, there are many challenges to realizing the full benefits of D2D. For one, minimizing the interference between legacy cellular and D2D users operating in underlay mode is still an active research issue. With 5th generation (5G) communication systems expected to be the main data carrier for the Internet-of-Things (IoT) paradigm, the potential role of D2D and its scalability to support massive IoT devices and their machine-centric (as opposed to human-centric) communications need to be investigated. New challenges have also arisen from new enabling technologies for D2D communications, such as non-orthogonal multiple access (NOMA) and blockchain technologies, which call for new solutions. This edited book presents a collection of ten chapters, including one review and nine original research works, addressing many of the aforementioned challenges and beyond.

    Resource Allocation in Next Generation Mobile Networks

    The increasing heterogeneity of the mobile network infrastructure, together with the explosively growing demand for bandwidth-hungry services with diverse quality of service (QoS) requirements, leads to a degradation in the performance of traditional networks. To address this issue in next-generation mobile networks (NGMN), various technologies such as software-defined networking (SDN), network function virtualization (NFV), mobile edge/cloud computing (MEC/MCC), non-terrestrial networks (NTN), and edge ML are essential. Towards this direction, the optimal allocation and management of heterogeneous network resources to achieve the required low latency, energy efficiency, high reliability, and enhanced coverage and connectivity is a key challenge to be solved urgently. In this dissertation, we address four critical and challenging resource allocation problems in NGMN and propose efficient solutions to tackle them. In the first part, we address the network slice resource provisioning problem in NGMN for delivering the wide range of services promised by 5G systems and beyond, including enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (URLLC), and massive machine-type communication (mMTC). Network slicing is one of the major solutions needed to meet the differentiated service requirements of NGMN under one common network infrastructure. Towards robust mobile network slicing, we propose a novel approach for end-to-end (E2E) resource allocation in a realistic scenario with uncertainty in slices' demands, using stochastic programming. The effectiveness of our proposed methodology is validated through simulations. Despite the significant benefits that network slicing brings to the management and performance of NGMN, the real-time response required by many emerging delay-sensitive applications, such as autonomous driving, remote health, and smart manufacturing, necessitates the integration of multi-access edge computing (MEC) into network slicing for 5G networks and beyond. To this end, we discuss a novel collaborative cloud-edge-local computation offloading scheme in the next two parts of this dissertation. The first of these studies the problem from the perspective of the infrastructure provider and shows the effectiveness of the proposed approach in addressing the rising number of latency-sensitive services and improving energy efficiency, which has become a primary concern in NGMN. Moreover, taking the perspective of the application (higher layer), we propose a novel framework for the optimal reservation of resources by applications, resulting in significant resource savings and reduced cost. The proposed method utilizes application-specific resource coupling relationships modeled using linear regression analysis. We further improve this approach by using Reinforcement Learning to automatically derive resource coupling functions in dynamic environments. Enhanced connectivity and coverage are other key objectives of NGMN. In this regard, unmanned aerial vehicles (UAVs) have been extensively utilized to provide wireless connectivity in rural and under-developed areas, enhance network capacity, and provide support for peaks or unexpected surges in user demand. The popularity of UAVs in such scenarios is mainly owing to their fast deployment, cost-efficiency, and superior communication performance resulting from line-of-sight (LoS)-dominated wireless channels.
    In the fifth part of this dissertation, we formulate the problem of aerial platform resource allocation and traffic routing in multi-UAV relaying systems wherein UAVs are deployed as flying base stations. Our proposed solution is shown to improve the supported traffic with minimum deployment cost. Moreover, the new breed of intelligent devices and applications, such as UAVs, AR/VR, remote health, and autonomous vehicles, requires a paradigm shift from traditional cloud-based learning to distributed, low-latency, and reliable ML at the network edge. To this end, Federated Learning (FL) has been proposed as a new learning scheme that enables devices to collaboratively learn a shared model while keeping the training data local. However, the performance of FL is significantly affected by security threats such as data and model poisoning attacks. Towards reliable edge learning, in the last part of this dissertation, we propose trust as a metric to measure the trustworthiness of FL agents and thereby enhance the reliability of FL.
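
    The sketch below illustrates the resource-coupling idea from the abstract: fit a linear model that predicts one resource an application needs (here, bandwidth) from another it reserves (here, CPU), then reserve along the fitted line rather than over-provisioning each resource independently. The data, variable names, and headroom factor are invented for illustration; the dissertation's exact model may differ.

```python
# Linear-regression resource coupling sketch.
import numpy as np

# Assumed historical measurements: (cpu_cores, bandwidth_mbps) per app instance.
cpu = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
bw = np.array([110.0, 205.0, 310.0, 390.0, 510.0])

# Ordinary least squares fit: bw ~= a * cpu + b
a, b = np.polyfit(cpu, bw, deg=1)

def coupled_bandwidth(cpu_request, headroom=1.1):
    """Bandwidth to reserve for a given CPU request, with a safety margin."""
    return headroom * (a * cpu_request + b)

print(coupled_bandwidth(2.5))  # Mbps to reserve alongside 2.5 cores
```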

    Link Scheduling in UAV-Aided Networks

    Unmanned Aerial Vehicles (UAVs), or drones, are a type of low-altitude aerial mobile vehicle. They can be integrated into existing networks, e.g., cellular, Internet of Things (IoT), and satellite networks. Moreover, they can leverage existing cellular or Wi-Fi infrastructure to communicate with one another. A popular application of UAVs is to deploy them as mobile base stations and/or relays to assist terrestrial wireless communications. Another application is data collection, whereby they act as mobile sinks for wireless sensor networks or sensor devices operating in IoT networks. Advantageously, UAVs are cost-effective, and they are able to establish line-of-sight links, which help improve data rates. A key concern, however, is that uplink communications to a UAV may be limited, where it is only able to receive from one device at a time. Further, ground devices, such as those in IoT networks, may have limited energy, which limits their transmit power. To this end, there are three promising approaches to address these concerns: (i) trajectory optimization, (ii) link scheduling, and (iii) equipping UAVs with a Successive Interference Cancellation (SIC) radio. Hence, this thesis considers data collection in UAV-aided, TDMA-based, SIC-equipped wireless networks. Its main aim is to develop novel link schedulers to schedule uplink communications to a SIC-capable UAV. In particular, it considers two types of networks: (i) one-tier UAV communications networks, where a SIC-enabled rotary-wing UAV collects data from multiple ground devices, and (ii) Space-Air-Ground Integrated Networks (SAGINs), where a SIC-enabled rotary-wing UAV offloads data collected from ground devices to a swarm of CubeSats. A CubeSat then downloads its data to a terrestrial gateway. Compared to one-tier UAV communications networks, SAGINs are able to provide wide coverage and seamless connectivity to ground devices in remote and/or sparsely populated areas.
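
    The sketch below shows why SIC lets a scheduler pack several uplinks into one slot: the receiver decodes the strongest device first, subtracts its signal, and repeats, so a set of devices can share a slot only if every decoding step clears a SINR threshold. This is the standard SIC feasibility check; the power values, noise level, and threshold are illustrative assumptions, not the thesis's parameters.

```python
# SIC feasibility check for a candidate set of uplink transmissions.

def sic_feasible(received_powers, noise=1e-9, sinr_min=2.0):
    """received_powers: received signal powers (W) of devices sharing a slot.
    Returns True if all can be decoded via successive interference cancellation."""
    remaining = sorted(received_powers, reverse=True)  # decode strongest first
    while remaining:
        signal = remaining.pop(0)
        interference = sum(remaining)  # weaker, not-yet-decoded devices
        if signal / (interference + noise) < sinr_min:
            return False  # this decoding step fails; the set is not schedulable
    return True

print(sic_feasible([5e-8, 1e-8, 2e-9]))  # can these three share one uplink slot?
```

    A link scheduler would then search for the largest feasible sets of ground devices per slot, subject to each device's transmit-power budget.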