
    Mobility Increases the Data Offloading Ratio in D2D Caching Networks

    Caching at mobile devices, accompanied by device-to-device (D2D) communications, is one promising technique to accommodate the exponentially increasing mobile data traffic. While most previous works ignored user mobility, some recent works have taken it into account. However, the duration of user contact times has been ignored, making it difficult to explicitly characterize the effect of mobility. In this paper, we adopt the alternating renewal process to model the durations of both the contact and inter-contact times, and investigate how the caching performance is affected by mobility. The data offloading ratio, i.e., the proportion of requested data that can be delivered via D2D links, is taken as the performance metric. We first approximate the distribution of the communication time for a given user by a beta distribution through moment matching. With this approximation, an accurate expression for the data offloading ratio is derived. For the homogeneous case, where the average contact and inter-contact times of different user pairs are identical, we prove that the data offloading ratio increases with the user moving speed, assuming that the transmission rate remains the same. Simulation results are provided to show the accuracy of the approximation and to validate the effect of user mobility.
    Comment: 6 pages, 5 figures, accepted to IEEE Int. Conf. Commun. (ICC), Paris, France, May 2017
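    For intuition, the moment-matching step described in the abstract can be sketched numerically: simulate an alternating renewal process for one user pair, fit a beta distribution to the in-contact fraction of the observation window by matching its first two moments, and read an offloading ratio off the beta tail. This is a rough illustration only; the exponential contact/inter-contact durations, the all-or-nothing delivery assumption, and every parameter value (mean_contact, mean_inter, needed_fraction) are assumptions made for the sketch, not the paper's exact model.

```python
# Sketch: beta approximation of the communication time via moment matching.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def contact_fraction(T, mean_contact, mean_inter, rng):
    """Fraction of the window [0, T] that one user pair spends in contact,
    simulated as an alternating renewal process with exponential durations."""
    # Start in contact with the stationary probability of the renewal process.
    in_contact = rng.random() < mean_contact / (mean_contact + mean_inter)
    t, total = 0.0, 0.0
    while t < T:
        d = rng.exponential(mean_contact if in_contact else mean_inter)
        if in_contact:
            total += min(d, T - t)
        t += d
        in_contact = not in_contact
    return total / T

# Illustrative parameters (not taken from the paper).
T, mean_contact, mean_inter = 100.0, 2.0, 10.0
samples = np.array([contact_fraction(T, mean_contact, mean_inter, rng)
                    for _ in range(5000)])

# Method of moments for Beta(a, b): match the sample mean and variance.
mu, var = samples.mean(), samples.var()
common = mu * (1.0 - mu) / var - 1.0
a, b = mu * common, (1.0 - mu) * common

# Approximate offloading ratio: probability that the in-contact time is long
# enough to deliver the requested file (all-or-nothing delivery, for simplicity).
needed_fraction = 0.15          # hypothetical: file size / (rate * T)
offloading_ratio = stats.beta.sf(needed_fraction, a, b)
print(f"beta fit: a={a:.2f}, b={b:.2f}, offloading ratio ~ {offloading_ratio:.3f}")
```

    The method-of-moments fit reproduces the sample mean and variance exactly, which is the same matching criterion the abstract describes; the paper's derivation of the offloading ratio is more refined than the single tail probability used here.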

    Dynamic Coded Caching in Wireless Networks

    We consider distributed and dynamic caching of coded content at small base stations (SBSs) in an area served by a macro base station (MBS). Specifically, content is encoded using a maximum distance separable (MDS) code and cached according to a time-to-live (TTL) cache eviction policy, which allows coded packets to be removed from the caches at periodic times. Mobile users requesting a particular content download coded packets from SBSs within communication range. If additional packets are required to decode the file, these are downloaded from the MBS. We formulate an optimization problem that is efficiently solved numerically, providing TTL caching policies that minimize the overall network load. We demonstrate that distributed coded caching using TTL caching policies can offer significant reductions in network load when request arrivals are bursty. We show how the distributed coded caching problem with TTL caching policies can be analyzed as a specific single-cache convex optimization problem. Our problem encompasses static caching and the single cache as special cases. We prove that, interestingly, static caching is optimal under a Poisson request process, and that for a single cache the optimization problem has a surprisingly simple solution.
    Comment: To appear in IEEE Transactions on Communications
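    As a rough sketch of the static-caching special case (which the abstract states is optimal under Poisson request arrivals), the network-load minimization can be posed as a small linear program over the fraction x_f of each MDS-coded file cached at an SBS: a user reaching m SBSs recovers min(1, m*x_f) of file f from the caches and fetches the remainder from the MBS. The Zipf popularity model, all parameter values, and the linear-program formulation below are illustrative assumptions, not the paper's TTL formulation.

```python
# Sketch: static caching of MDS-coded packets as a small linear program.
import numpy as np
from scipy.optimize import linprog

F, m, cache_budget = 50, 3, 5.0            # files, SBSs per user, cache size (in files)
zipf_s = 0.8                               # assumed Zipf popularity exponent
pop = np.arange(1, F + 1) ** (-zipf_s)
pop /= pop.sum()                           # request probabilities p_f

# Caching more than 1/m of a file per SBS is never useful, so with the bound
# x_f <= 1/m the expected MBS load  sum_f p_f * (1 - m * x_f)  is linear in x.
res = linprog(
    c=-m * pop,                                   # minimize -m * sum_f p_f * x_f
    A_ub=np.ones((1, F)), b_ub=[cache_budget],    # per-SBS cache capacity
    bounds=[(0.0, 1.0 / m)] * F,
    method="highs",
)
x = res.x
mbs_load = float(pop @ (1 - m * x))        # expected fraction fetched from the MBS
print(f"expected MBS load per request: {mbs_load:.3f}")
print(f"files fully offloaded to SBS caches: {(x >= 1/m - 1e-9).sum()} of {F}")
```

    Under this simplification the optimum is simply to cache the most popular files up to x_f = 1/m each until the budget is exhausted; the paper's actual problem additionally optimizes TTL eviction timers, which is where the gains under bursty (non-Poisson) arrivals come from.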