    Resource Allocation Using Gradient Boosting Aided Deep Q-Network for IoT in C-RANs

    In this paper, we investigate dynamic resource allocation (DRA) problems for the Internet of Things (IoT) in real-time cloud radio access networks (C-RANs), combining gradient boosting approximation and deep reinforcement learning to address the following two major problems. Firstly, in C-RANs, the decision-making process of resource allocation is time-consuming and computationally expensive, which motivates us to use an approximation method, i.e., the gradient boosting decision tree (GBDT), to approximate the solutions of the second-order cone programming (SOCP) problem. Moreover, considering the innumerable states in real-time C-RAN systems, we employ a deep reinforcement learning framework, i.e., the deep Q-network (DQN), to generate a robust policy that controls the status of remote radio heads (RRHs). We propose a GBDT-based DQN framework for the DRA problem, in which the heavy computation required to solve SOCP problems is cut down and considerable power consumption is saved across the whole C-RAN system. We demonstrate that the generated policy is error-tolerant even if the gradient boosting regression does not strictly satisfy the constraints of the original problem. Comparisons between the proposed method and existing baseline methods confirm the advantages of our method.
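
A minimal sketch of the idea described above, under assumptions not taken from the paper: the GBDT is trained on stand-in data to mimic the transmit power an SOCP solver would return for a given RRH status and user demand, and a simple tabular Q-learning loop (standing in for the full deep Q-network) chooses which RRHs to switch on or off so as to reduce total power. All names, dimensions, and the reward shaping below are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical sketch: shapes, costs, and reward shaping are assumptions,
# not the authors' exact formulation.
N_RRH = 4          # number of remote radio heads
STATE_DIM = N_RRH  # state: current on/off status of each RRH (simplified)

# 1) GBDT approximates the SOCP solution: it maps (RRH status, user demand)
#    to the transmit power an SOCP solver would return. Stand-in labels are
#    used here in place of actual solver outputs.
rng = np.random.default_rng(0)
X_train = rng.random((500, STATE_DIM + 1))                   # [status..., demand]
y_train = X_train[:, :-1].sum(axis=1) + 2.0 * X_train[:, -1]  # stand-in labels
gbdt = GradientBoostingRegressor().fit(X_train, y_train)

def approx_power(status, demand):
    """Surrogate for solving the SOCP: predicted transmit power."""
    features = np.append(status, demand).reshape(1, -1)
    return float(gbdt.predict(features)[0])

# 2) A small Q-learning loop over RRH on/off actions; the paper's DQN would
#    replace this table with a neural network plus experience replay.
P_RRH_ON = 1.0                           # assumed static power cost per active RRH
q = np.zeros((2 ** N_RRH, 2 * N_RRH))    # Q-values over (status, toggle action)

def encode(status):
    """Map an RRH on/off vector to a table index."""
    return int("".join(str(int(s)) for s in status), 2)

status = np.ones(N_RRH, dtype=int)
for step in range(2000):
    demand = rng.random()
    s = encode(status)
    # Epsilon-greedy action: choose an RRH and whether to turn it on or off.
    a = int(rng.integers(2 * N_RRH)) if rng.random() < 0.1 else int(q[s].argmax())
    rrh, turn_on = divmod(a, 2)
    status[rrh] = turn_on
    # Reward: save static plus (approximated) transmit power; penalize
    # switching every RRH off, as no user could then be served.
    power = P_RRH_ON * status.sum() + approx_power(status, demand)
    reward = -power - (10.0 if status.sum() == 0 else 0.0)
    s_next = encode(status)
    q[s, a] += 0.1 * (reward + 0.95 * q[s_next].max() - q[s, a])
```

The key saving in this sketch is that `approx_power` replaces a per-step SOCP solve with a single GBDT prediction, so the reinforcement-learning loop can evaluate candidate RRH configurations cheaply at every step.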