    A Delay-Aware Caching Algorithm for Wireless D2D Caching Networks

    Recently, wireless caching techniques have been studied to meet tighter delay requirements and to offload traffic away from peak periods. By storing parts of the popular files at mobile users, each user can find some of its requested files in its own cache or in the caches of its neighbors. In the latter case, when a user receives files from its neighbors, device-to-device (D2D) communication is enabled. D2D communication underlaid with cellular networks is also a new paradigm for the upcoming 5G wireless systems. By allowing a pair of adjacent D2D users to communicate directly, D2D communication can achieve higher throughput, better energy efficiency, and lower traffic delay. In this work, we propose an efficient caching algorithm for D2D-enabled cellular networks that minimizes the average transmission delay. Instead of searching over all possible caching policies, the algorithm greedily selects, in each loop, the user-file pair that yields the largest delay improvement, building up a caching policy with very low transmission delay and high throughput. The algorithm is also extended to a more general scenario in which the distributions of the fading coefficients and the values of the system parameters may change over time. Numerical results verify the superiority of the proposed algorithm over a naive baseline in which all users simply cache their favorite files.
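    As a concrete illustration of the greedy search just described, here is a minimal Python sketch. The interface is hypothetical, not the paper's formulation: users, files, the per-user capacity cache_size, and a callable avg_delay(policy) that evaluates the network-wide average transmission delay of a caching policy all stand in for the authors' system model.

    import itertools

    def greedy_caching(users, files, cache_size, avg_delay):
        # Greedy placement: in each loop, cache the single (user, file)
        # pair that yields the largest reduction in average delay.
        policy = {u: set() for u in users}          # user -> set of cached files
        current = avg_delay(policy)
        while True:
            best_pair, best_delay = None, current
            for u, f in itertools.product(users, files):
                if f in policy[u] or len(policy[u]) >= cache_size:
                    continue                        # infeasible pair
                policy[u].add(f)
                d = avg_delay(policy)               # delay if (u, f) were cached
                policy[u].remove(f)
                if d < best_delay:
                    best_pair, best_delay = (u, f), d
            if best_pair is None:                   # no pair improves delay further
                return policy
            policy[best_pair[0]].add(best_pair[1])
            current = best_delay

    Compared with exhaustive search over all caching policies, each loop costs only one delay evaluation per feasible user-file pair, which is what makes this style of algorithm practical for larger networks.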

    Energy Minimization in D2D-Assisted Cache-Enabled Internet of Things: A Deep Reinforcement Learning Approach

    Mobile edge caching (MEC) and device-to-device (D2D) communications are two promising technologies for relieving traffic overload in the Internet of Things. Previous works usually investigate them separately, using MEC for traffic offloading and D2D for information transmission. In this article, a joint framework consisting of MEC and cache-enabled D2D communications is proposed to minimize the energy cost of system-wide traffic transmission, where file popularity and user preference are the critical criteria for small base stations (SBSs) and user devices, respectively. Under this framework, we propose a novel caching strategy in which a Markov decision process models the requesting behavior, and a reinforcement learning (RL) scheme reveals both the popularity of files and users' preferences. In particular, a Q-learning algorithm and a deep Q-network algorithm are applied to the user devices and the SBS, respectively, owing to the different complexities of their state spaces. To reduce the energy cost of traffic transmission, users acquire part of the traffic through D2D communications, based on the cached contents and the user distribution. Taking memory limits, D2D-available files, and state changes into consideration, the proposed RL algorithm enables user devices and the SBS to prefetch the optimal files while learning, which reduces the energy cost significantly. Simulation results demonstrate the superior energy-saving performance of the proposed RL-based algorithm over existing methods under various conditions.
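    The device-side learning step can be sketched in a few lines of Python. This is a minimal tabular Q-learning update under assumed ingredients, not the paper's exact design: a hashable state (for example, an encoding of the current cache contents and the incoming request), actions that pick a file to prefetch, and a reward equal to the negative energy cost of serving the next request; the hyperparameters are illustrative.

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration

    # Q[state][file]: estimated return of prefetching this file in this state.
    Q = defaultdict(lambda: defaultdict(float))

    def choose_file(state, candidate_files):
        # Epsilon-greedy selection over the files the device may prefetch.
        if random.random() < EPSILON:
            return random.choice(candidate_files)
        return max(candidate_files, key=lambda f: Q[state][f])

    def q_update(state, action, reward, next_state, candidate_files):
        # One tabular Q-learning step; reward is the negative energy
        # cost incurred when the next request is served.
        best_next = max((Q[next_state][f] for f in candidate_files), default=0.0)
        Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

    On the SBS side, the paper replaces the table with a deep Q-network applying the same update rule, since the SBS state space is too large to enumerate.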