    Energy Minimization in D2D-Assisted Cache-Enabled Internet of Things: A Deep Reinforcement Learning Approach

    Mobile edge caching (MEC) and device-to-device (D2D) communications are two promising technologies for relieving traffic overload in the Internet of Things. Previous works usually investigate them separately, using MEC for traffic offloading and D2D for information transmission. In this article, a joint framework combining MEC and cache-enabled D2D communications is proposed to minimize the energy cost of system traffic transmission, where file popularity and user preference serve as the caching criteria for small base stations (SBSs) and user devices, respectively. Under this framework, we propose a novel caching strategy in which a Markov decision process models users' requesting behaviors, and a scheme based on reinforcement learning (RL) reveals file popularity as well as users' preferences. In particular, a Q-learning algorithm and a deep Q-network algorithm are applied to user devices and the SBS, respectively, owing to the different complexities of their state spaces. To save transmission energy, users acquire part of the requested traffic through D2D communications, depending on the cached contents and the user distribution. Taking memory limits, D2D-available files, and state transitions into account, the proposed RL algorithms enable user devices and the SBS to prefetch the optimal files while learning, which reduces the energy cost significantly. Simulation results demonstrate that the proposed RL-based algorithm achieves superior energy savings over existing methods under various conditions.
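
    The paper's code is not reproduced here, so the following minimal Python sketch only illustrates the device-side tabular Q-learning idea described in the abstract, under assumed simplifications: a single-slot device cache, a Zipf-like request distribution standing in for user preference, and fixed energy costs (E_D2D for a local/D2D cache hit, E_SBS for an SBS fetch). All names, parameters, and reward definitions below are hypothetical, not the authors' exact formulation.

```python
import numpy as np

# Illustrative sketch only: state = index of the currently cached file,
# action = file to prefetch next, reward = negative energy cost of serving
# the next request. These modeling choices are assumptions.

rng = np.random.default_rng(0)

N_FILES = 5                # catalog size (assumed)
E_D2D, E_SBS = 1.0, 5.0    # assumed energy costs: cache/D2D hit vs. SBS fetch

# Zipf-like request popularity (assumed), standing in for user preference.
popularity = 1.0 / np.arange(1, N_FILES + 1)
popularity /= popularity.sum()

Q = np.zeros((N_FILES, N_FILES))   # Q[cached_file, prefetch_action]
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration

state = 0  # initially cache file 0
for step in range(20000):
    # epsilon-greedy choice of which file to prefetch into the cache
    if rng.random() < eps:
        action = int(rng.integers(N_FILES))
    else:
        action = int(np.argmax(Q[state]))

    request = rng.choice(N_FILES, p=popularity)  # next user request
    # Reward is negative energy: cheap if the request hits the prefetched
    # file, expensive if it must be fetched from the SBS.
    reward = -E_D2D if request == action else -E_SBS

    next_state = action  # the single-slot cache now holds the prefetched file
    Q[state, action] += alpha * (
        reward + gamma * Q[next_state].max() - Q[state, action]
    )
    state = next_state

print("Learned prefetch choice per cached file:", Q.argmax(axis=1))
# Under the Zipf popularity above, the agent converges to always caching
# file 0, the most popular one.
```

    At the SBS, where the abstract notes the state is more complex, the same update would be impractical as a table; there the paper applies a deep Q-network, i.e., the tabular Q above is replaced by a neural function approximator while the learning loop stays conceptually the same.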