Joint QoS-Aware Scheduling and Precoding for Massive MIMO Systems via Deep Reinforcement Learning
The rapid development of mobile networks has intensified demand for high-data-rate, low-latency, and high-reliability applications in fifth-generation (5G) and beyond-5G (B5G) networks. Massive multiple-input multiple-output (MIMO) technology is essential to realizing this vision, but it must be coordinated with resource management functions to deliver a high-quality user experience. Although conventional cross-layer adaptation algorithms can schedule and allocate network resources, the complexity of the resulting rules grows quickly with diverse quality of service (QoS) requirements and B5G features. In this work, we consider a joint user scheduling, antenna allocation, and precoding problem in a massive MIMO system. Instead of assigning resources such as the number of antennas directly, the allocation process is cast as a deep reinforcement learning (DRL) based dynamic algorithm selection problem, which simplifies Markov decision process (MDP) modeling and policy training. Specifically, the proposed utility function integrates QoS requirements and constraints into a long-term, system-wide objective that matches the MDP return, and a componentized action structure with action embedding incorporates the resource management process into the model. Simulations show 7.2% and 12.5% more satisfied users than static algorithm selection and related works, respectively, under demanding scenarios.
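The dynamic algorithm-selection idea can be sketched as follows. This is a minimal, hypothetical illustration only: a tabular Q-learning agent stands in for the deep RL policy of the abstract, the candidate scheduler names are invented, and the scalar reward is a stand-in for the paper's QoS-based utility function.

```python
import random

# Hypothetical sketch: instead of assigning antennas or precoders directly,
# the agent picks WHICH scheduling algorithm to run in each transmission
# interval. Tabular Q-learning replaces the deep RL policy for brevity.

ALGORITHMS = ["round_robin", "max_snr", "proportional_fair"]  # candidate rules

class AlgorithmSelector:
    def __init__(self, n_states, eps=0.1, alpha=0.5, gamma=0.9):
        # Q-value for every (network state, scheduling algorithm) pair.
        self.q = {(s, a): 0.0 for s in range(n_states) for a in ALGORITHMS}
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, state):
        # Epsilon-greedy choice of a scheduling algorithm.
        if random.random() < self.eps:
            return random.choice(ALGORITHMS)
        return max(ALGORITHMS, key=lambda a: self.q[(state, a)])

    def update(self, s, a, reward, s_next):
        # Standard Q-learning update; the reward would be the number of
        # QoS-satisfied users (the paper's long-term utility) per interval.
        best_next = max(self.q[(s_next, b)] for b in ALGORITHMS)
        td_target = reward + self.gamma * best_next
        self.q[(s, a)] += self.alpha * (td_target - self.q[(s, a)])
```

In a toy environment where one algorithm consistently satisfies more users, the selector's greedy policy converges to that algorithm, which is the essence of selecting rules rather than raw resources.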
Twin Delayed DDPG based Dynamic Power Allocation for Mobility in IoRT
The internet of robotic things (IoRT) is a modern, fast-evolving technology applied across many socio-economic domains, connecting user equipment (UE) for communication and data transfer. To ensure quality of service (QoS) in IoRT applications, radio resources must be employed and allocated efficiently among UEs, for example through transmit power allocation (PA), interference management, and throughput maximization. Traditionally, resource allocation has been formulated as optimization problems solved with mathematical programming techniques; however, these problems are generally nonconvex and NP-hard. In this paper, we address one of the most crucial challenges in radio resource management, the transmit power of an antenna (PA), under an interfering multiple-access channel (IMAC). In addition, UEs move naturally, which directly affects the channel condition between the remote radio head (RRH) and the UE; we therefore consider two well-known UE mobility models, i) random walk and ii) modified Gauss-Markov (GM), making the simulation environment more realistic and complex. We propose a data-driven, model-free, continuous-action deep reinforcement learning algorithm, twin delayed deep deterministic policy gradient (TD3), which combines policy gradients, actor-critic methods, and double deep Q-learning (DDQL). It optimizes the PA for i) stationary UEs, ii) UEs moving according to the random walk model, and iii) UEs moving according to the modified GM model. Simulation results show that the proposed TD3 method outperforms model-based techniques such as weighted MMSE (WMMSE) and fractional programming (FP), as well as model-free algorithms such as deep Q-network (DQN) and DDPG, in terms of average sum-rate.
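Two of TD3's defining ingredients, target policy smoothing and clipped double Q-learning, can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: the critics and target policy are caller-supplied callables standing in for neural networks, and the hyperparameter names follow common TD3 conventions rather than the paper.

```python
import random

def td3_target(reward, next_state, critic1, critic2, target_policy,
               gamma=0.99, noise_std=0.2, noise_clip=0.5):
    """Compute the TD3 learning target for one transition.

    Illustrates two TD3 ingredients named in the abstract:
    * target policy smoothing -- clipped Gaussian noise on the target action;
    * clipped double Q-learning -- take the minimum of the twin critics,
      which curbs the value overestimation seen with a single critic.
    """
    # Smooth the target action with clipped exploration noise.
    noise = max(-noise_clip, min(noise_clip, random.gauss(0.0, noise_std)))
    a_next = target_policy(next_state) + noise
    # Clipped double-Q: pessimistic minimum over the twin target critics.
    q_min = min(critic1(next_state, a_next), critic2(next_state, a_next))
    return reward + gamma * q_min
```

In the full algorithm this target trains both critics, while the actor is updated at a delayed (less frequent) cadence, the third ingredient behind the "twin delayed" name.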