3 research outputs found

    Resource Allocation in Uplink NOMA-IoT Networks: A Reinforcement-Learning Approach

    Non-orthogonal multiple access (NOMA) exploits the power domain to enhance connectivity for the Internet of Things (IoT). Because communication channels vary over time, dynamic user clustering is a promising method to increase the throughput of NOMA-IoT networks. This paper develops an intelligent resource allocation scheme for uplink NOMA-IoT communications. To maximise the average sum rate, this work designs an efficient optimization approach based on two reinforcement learning algorithms, namely deep reinforcement learning (DRL) and SARSA-learning. For light traffic, SARSA-learning is used to explore the safest resource allocation policy at low cost. For heavy traffic, DRL is used to handle the large state space that the traffic introduces. With the aid of the considered approach, this work addresses two main problems of fair resource allocation in NOMA techniques: 1) allocating users dynamically and 2) balancing resource blocks against network traffic. We analytically demonstrate that the rate of convergence is inversely proportional to network size. Numerical results show that: 1) compared with the optimal benchmark scheme, the proposed DRL and SARSA-learning algorithms have lower complexity with acceptable accuracy, and 2) NOMA-enabled IoT networks outperform conventional orthogonal multiple access based IoT networks in terms of system throughput.
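The light-traffic branch of the abstract relies on tabular SARSA. A minimal sketch of an on-policy SARSA update for a toy resource-block assignment task is below; the environment, state/action spaces, and reward function are hypothetical stand-ins, not the paper's NOMA-IoT model.

```python
import random

N_STATES, N_ACTIONS = 4, 3          # toy traffic states and resource blocks
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # assumed learning hyperparameters

def toy_sum_rate(state, action):
    # Hypothetical reward: high when the chosen resource block
    # suits the traffic state, mimicking a good user-cluster fit.
    return 1.0 if action == state % N_ACTIONS else 0.1

def eps_greedy(Q, s):
    # Epsilon-greedy action selection over the Q-table row for state s.
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[s][a])

def train(episodes=2000, steps=20, seed=0):
    random.seed(seed)
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s = random.randrange(N_STATES)
        a = eps_greedy(Q, s)
        for _ in range(steps):
            r = toy_sum_rate(s, a)
            s2 = random.randrange(N_STATES)   # channel varies over time
            a2 = eps_greedy(Q, s2)
            # On-policy SARSA update: bootstraps from the action
            # actually taken in the next state, not the greedy one.
            Q[s][a] += ALPHA * (r + GAMMA * Q[s2][a2] - Q[s][a])
            s, a = s2, a2
    return Q

Q = train()
# Greedy policy recovered from the learned Q-table.
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)
```

The same update rule scales to the heavy-traffic case by replacing the Q-table with a neural-network approximator, which is where the paper's DRL branch takes over.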

    A fuzzy-logic based adaptive data rate scheme for energy-efficient LoRaWAN communication

    Long Range Wide Area Network (LoRaWAN) technology is rapidly expanding as a technology with long-distance connectivity, low power consumption, low data rates, and a large number of end devices (EDs) that connect to the Internet of Things (IoT) network. Because heterogeneous applications have varying Quality of Service (QoS) requirements, energy is expended as the EDs communicate with applications. The LoRaWAN Adaptive Data Rate (ADR) mechanism manages resource allocation to optimize energy efficiency, but its performance gradually deteriorates in dense networks, and various studies have sought to improve it. In this paper, we propose a fuzzy-logic based adaptive data rate (FL-ADR) scheme for energy-efficient LoRaWAN communication. The scheme is implemented on the network server (NS), which receives sensor data from the EDs via the gateway (GW) node and computes network parameters (such as the spreading factor and transmission power) to optimize the energy consumption of the EDs in the network. The performance of the algorithm is evaluated in ns-3 using a multi-gateway LoRa network with EDs sending data packets at various intervals. Our simulation results are analyzed and compared to the traditional ADR and the ns-3 ADR. The proposed FL-ADR outperforms both the traditional ADR algorithm and the ns-3 ADR, minimizing the interference rate and energy consumption.
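The core of an FL-ADR-style scheme is a fuzzy inference step on the server that maps link quality to a spreading-factor adjustment. A minimal sketch is below; the membership functions, rule base, and thresholds are illustrative assumptions, not the paper's tuned design.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fl_adr_step(snr_margin_db):
    # Fuzzify the SNR margin into three linguistic terms (assumed ranges).
    low  = tri(snr_margin_db, -20.0, -10.0, 0.0)
    ok   = tri(snr_margin_db, -5.0, 0.0, 5.0)
    high = tri(snr_margin_db, 0.0, 10.0, 20.0)
    # Assumed rule base: low margin -> raise SF (+1, more robust link),
    # ok -> keep (0), high margin -> lower SF (-1, faster data rate).
    # Defuzzify with a weighted average of the rule outputs.
    num = low * (+1) + ok * 0 + high * (-1)
    den = low + ok + high
    return 0 if den == 0 else round(num / den)

print(fl_adr_step(12.0))   # high margin: lower the SF  -> -1
print(fl_adr_step(0.0))    # adequate margin: no change -> 0
print(fl_adr_step(-12.0))  # low margin: raise the SF   -> 1
```

In a real deployment the NS would run such a step per ED after aggregating recent uplink SNR reports, and apply a similar rule set to the transmit-power command.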

    Resource Allocation in Wireless Powered IoT Networks

    In this paper, efficient resource allocation for the uplink transmission of wireless powered IoT networks is investigated. We adopt LoRa technology as an example in the IoT network, but this work remains applicable to other communication technologies. Allocating limited resources, such as spectrum and energy, among a massive number of users faces critical challenges. We first consider grouping wireless powered IoT users into the available channels and then investigate power allocation for users grouped in the same channel to improve the network throughput. Specifically, the user grouping problem is formulated as a many-to-one matching game, obtained by treating IoT users and channels as selfish players belonging to two disjoint sets, each focused on maximizing its own utility. We then propose an efficient channel allocation algorithm (ECAA) with low complexity for user grouping. Additionally, a Markov Decision Process (MDP) is used to model the unpredictable energy arrivals and channel-condition uncertainty at each user, and a power allocation algorithm is proposed to maximize the cumulative network throughput over a finite horizon of time slots. By doing so, channel access and dynamic power allocation decisions can be made locally at each IoT user. Numerical results demonstrate that the proposed ECAA achieves near-optimal performance and is superior to random channel assignment, yet has much lower computational complexity. Moreover, simulations show that the distributed power allocation policy for each user achieves better performance than a centralized offline scheme.
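The many-to-one matching between users and channels described above can be sketched with a deferred-acceptance procedure: users propose to channels in preference order, and each channel keeps only its best users up to a quota. The preference lists and quota below are illustrative assumptions, not the paper's ECAA utilities.

```python
def many_to_one_match(user_prefs, channel_prefs, quota):
    """Deferred acceptance: users propose; each channel keeps its top `quota` users."""
    # Precompute each channel's ranking of users (lower index = preferred).
    rank = {c: {u: i for i, u in enumerate(p)} for c, p in channel_prefs.items()}
    nxt = {u: 0 for u in user_prefs}          # next channel index each user will try
    matched = {c: [] for c in channel_prefs}
    free = list(user_prefs)                   # users not yet accepted anywhere
    while free:
        u = free.pop()
        if nxt[u] >= len(user_prefs[u]):
            continue                          # user exhausted its preference list
        c = user_prefs[u][nxt[u]]
        nxt[u] += 1
        matched[c].append(u)
        matched[c].sort(key=lambda x: rank[c][x])   # channel's utility order
        if len(matched[c]) > quota:
            free.append(matched[c].pop())     # channel rejects its worst proposer
    return matched

# Hypothetical 3-user, 2-channel instance with at most 2 users per channel.
user_prefs = {"u1": ["c1", "c2"], "u2": ["c1", "c2"], "u3": ["c1", "c2"]}
channel_prefs = {"c1": ["u3", "u1", "u2"], "c2": ["u2", "u1", "u3"]}
print(many_to_one_match(user_prefs, channel_prefs, quota=2))
# -> {'c1': ['u3', 'u1'], 'c2': ['u2']}
```

The abstract's second stage, per-user power control under random energy arrivals, would then run an MDP policy on top of this grouping; that stage is not sketched here.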