
    Performance Analysis of FD-NOMA-based Decentralized V2X Systems

    To meet the requirements of massive device connectivity, diverse quality-of-service (QoS) demands, varying transmit rates, and ultra-reliable low-latency communication (URLLC) in vehicle-to-everything (V2X) communications, we introduce a full-duplex non-orthogonal multiple access (FD-NOMA)-based decentralized V2X system model. We then classify V2X communications into two scenarios and derive their exact capacity expressions. To avoid the computational complexity of the exponential integral functions involved, we derive approximate closed-form expressions with arbitrarily small errors. Numerical results confirm the validity of our derivations. Our analysis shows that the accuracy of the approximate expressions is controlled by the division of π/2 in the urban, crowded scenario and by the truncation point T in the suburban, remote scenario. Numerical results further show that: 1) increasing the number of V2X devices, the NOMA power, and the Rician factor yields better capacity performance; 2) the effect of FD-NOMA is determined by the FD self-interference and the channel noise; and 3) FD-NOMA achieves better latency performance than competing schemes.
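The capacity gains the abstract attributes to NOMA power allocation can be illustrated with a minimal two-user power-domain NOMA rate calculation. This is a generic textbook sketch, not the paper's FD-NOMA capacity derivation: the function name, the flat-fading channel model, and all numeric values are illustrative assumptions.

```python
import math

def noma_rates(p_total, a_near, h_near, h_far, noise):
    """Achievable rates (bit/s/Hz) for a two-user power-domain NOMA pair.

    The far (weak) user receives the larger power share 1 - a_near; the
    near (strong) user removes the far user's signal via successive
    interference cancellation (SIC) before decoding its own.
    Hypothetical sketch: flat fading, perfect SIC, unit bandwidth.
    """
    a_far = 1.0 - a_near
    g_near, g_far = abs(h_near) ** 2, abs(h_far) ** 2
    # Far user decodes its own signal, treating the near user's as noise.
    r_far = math.log2(1 + a_far * p_total * g_far /
                      (a_near * p_total * g_far + noise))
    # Near user applies SIC first, so only thermal noise remains.
    r_near = math.log2(1 + a_near * p_total * g_near / noise)
    return r_near, r_far

# Toy channels: stronger gain for the near user, weaker for the far user.
r_n, r_f = noma_rates(p_total=1.0, a_near=0.2, h_near=1.0, h_far=0.5, noise=0.01)
print(r_n, r_f)
```

Raising `p_total` (the NOMA power) increases both SINR terms, which is consistent with the abstract's observation that more NOMA power yields better capacity.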

    Machine Learning Empowered Resource Allocation for NOMA Enabled IoT Networks

    The Internet of things (IoT) is one of the main use cases of ultra-massive machine-type communications (umMTC), which aims to connect large-scale short-packet sensors or devices in sixth-generation (6G) systems. This rapid increase in connected devices requires efficient utilization of limited spectrum resources. To this end, non-orthogonal multiple access (NOMA) is considered a promising solution due to its potential for massive connectivity over the same time/frequency resource block (RB). IoT users have distinct characteristics, such as sporadic transmission, long battery life cycles, minimum data-rate requirements, and diverse QoS requirements. Given these characteristics, it is necessary for NOMA-enabled IoT networks to allocate resources appropriately and efficiently. Moreover, because they lack 1) learning capabilities, 2) scalability, 3) low complexity, and 4) long-term resource optimization, conventional optimization approaches are not suitable for IoT networks with time-varying communication channels and dynamic network access. This thesis provides machine learning (ML) based resource allocation methods to optimize long-term resources for IoT users according to their characteristics and the dynamic environment. First, we design a tractable framework based on model-free reinforcement learning (RL) for downlink NOMA IoT networks to allocate resources dynamically. More specifically, we use actor-critic deep reinforcement learning (ACDRL) to improve the sum rate of IoT users. This model can optimize resource allocation for different users in a dynamic, multi-cell scenario. The state space in the proposed framework is based on the three-dimensional association among multiple IoT users, multiple base stations (BSs), and multiple sub-channels.
To find the optimal resource allocation for the sum-rate maximization problem and to better explore the dynamic environment, this work uses the instantaneous data rate as the reward. The proposed ACDRL algorithm is scalable and handles different network loads. The proposed ACDRL-D and ACDRL-C algorithms outperform DRL and RL in terms of convergence speed and data rate by 23.5% and 30.3%, respectively. Additionally, the proposed scheme achieves a higher sum rate than orthogonal multiple access (OMA). Second, alongside sum-rate maximization, energy efficiency (EE) is a key problem, especially for applications where battery replacement is costly or difficult. For example, sensors with different QoS requirements may be deployed in radioactive areas, hidden in walls, or embedded in pressurized pipes. For such scenarios, energy cooperation schemes are required. To maximize the EE of different IoT users, i.e., grant-free (GF) and grant-based (GB) users, in an uplink NOMA network, we propose an RL-based semi-centralized optimization framework. In particular, this work applies the proximal policy optimization (PPO) algorithm for GB users, while a multi-agent deep Q-network, aided by a relay node, is used to optimize the EE of GF users. Numerical results demonstrate that the proposed algorithm increases the EE of GB users compared to random and fixed power-allocation methods. Moreover, the results show that the EE of GF users surpasses that of the benchmark scheme (convex optimization). Furthermore, we show that the number of GB users is strongly correlated with the EE of both user types.
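The reward design described above (instantaneous data rate as the RL reward for sub-channel allocation) can be sketched with a deliberately simplified tabular, bandit-style Q-learning loop. This is a stand-in for intuition only, not the thesis's ACDRL, PPO, or multi-agent DQN frameworks; the toy rate function and all hyperparameters are assumptions.

```python
import random

def q_learning_subchannels(num_users, num_channels, rate_fn,
                           episodes=500, alpha=0.1, eps=0.1):
    """Sketch: learn a per-user sub-channel choice from reward feedback.

    Each user keeps a Q-value per sub-channel, explores epsilon-greedily,
    and updates a running estimate using the instantaneous data rate as
    the reward (mirroring the reward choice in the abstract).
    """
    q = [[0.0] * num_channels for _ in range(num_users)]
    for _ in range(episodes):
        for user in range(num_users):
            # Epsilon-greedy exploration over sub-channels.
            if random.random() < eps:
                ch = random.randrange(num_channels)
            else:
                ch = max(range(num_channels), key=lambda c: q[user][c])
            reward = rate_fn(user, ch)  # instantaneous data rate
            # Running-average (bandit-style) Q update.
            q[user][ch] += alpha * (reward - q[user][ch])
    # Final greedy assignment per user.
    return [max(range(num_channels), key=lambda c: q[u][c])
            for u in range(num_users)]

random.seed(0)
# Hypothetical rate function: user u's best sub-channel is u % 2.
assignment = q_learning_subchannels(3, 2, lambda u, c: 1.0 if c == u % 2 else 0.1)
print(assignment)
```

A deep RL agent replaces the Q-table with a neural network over the three-dimensional user/BS/sub-channel state, but the reward-driven exploration loop is the same shape.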
Third, we develop an efficient model-free backscatter communication (BAC) approach for a simultaneous downlink and uplink NOMA system, jointly optimizing the transmit power of downlink IoT users and the reflection coefficients of uplink backscatter devices using a reinforcement learning algorithm, namely soft actor-critic (SAC). With the advantage of entropy regularization, the SAC agent learns to explore and exploit the dynamic BAC-NOMA network efficiently. Numerical results show the superiority of the proposed algorithm over the conventional optimization approach in terms of the average sum rate of uplink backscatter devices. We show that a network with multiple downlink users obtains a higher reward over a large number of iterations. Moreover, the proposed algorithm outperforms the benchmark scheme and BAC with OMA in terms of sum rate across different self-interference coefficients, noise levels, QoS requirements, and cell radii.
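The reflection coefficient being optimized above has a simple rate interpretation: a backscatter device reflects a fraction of the incident carrier power, so its uplink SNR scales with that fraction and with both hop channel gains. A minimal sketch under a basic link-budget assumption (function name, channel values, and noise figure are all illustrative, not from the thesis):

```python
import math

def bac_uplink_rate(p_carrier, refl_coeff, h_forward, h_backscatter, noise):
    """Uplink rate sketch for a single backscatter device.

    The device reflects a fraction refl_coeff (in [0, 1]) of the incident
    carrier power, so the received backscatter SNR scales with the
    reflection coefficient and the forward and backscatter channel gains.
    Hypothetical model: single device, no NOMA interference term.
    """
    snr = (refl_coeff * p_carrier
           * abs(h_forward) ** 2 * abs(h_backscatter) ** 2 / noise)
    return math.log2(1 + snr)

# Rate grows monotonically with the reflection coefficient (toy numbers).
rates = [bac_uplink_rate(1.0, eta, 0.8, 0.6, 0.01) for eta in (0.2, 0.5, 0.9)]
print(rates)
```

In the full BAC-NOMA setting the SAC agent must trade this gain off against the interference a larger reflection causes to other users, which is why a learned policy rather than a fixed coefficient is used.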