261 research outputs found

    A Transfer Learning Approach for Cache-Enabled Wireless Networks

    Locally caching contents at the network edge constitutes one of the most disruptive approaches in 5G wireless networks. Reaping the benefits of edge caching hinges on solving a myriad of challenges, such as how, what and when to strategically cache contents subject to storage constraints, traffic load, unknown spatio-temporal traffic demands and data sparsity. Motivated by this, we propose a novel transfer learning-based caching procedure carried out at each small cell base station. This is done by exploiting the rich contextual information (i.e., users' content viewing history, social ties, etc.) extracted from device-to-device (D2D) interactions, referred to as the source domain. This prior information is incorporated in the so-called target domain, where the goal is to optimally cache strategic contents at the small cells as a function of storage, estimated content popularity, traffic load and backhaul capacity. It is shown that the proposed approach overcomes the notorious data sparsity and cold-start problems, yielding significant gains in terms of users' quality-of-experience (QoE) and backhaul offloading, with gains reaching up to 22% in a setting consisting of four small cell base stations.
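
    A minimal sketch of the transfer-learning idea above, assuming a simple convex blend of a source-domain popularity prior with sparse target-domain observations; the function names, the weight alpha and the toy numbers are illustrative, not the paper's actual estimator:

```python
import numpy as np

def blended_popularity(source_prior, target_counts, alpha=0.7):
    """Estimate content popularity by transferring a source-domain prior
    (e.g. D2D viewing history) into a sparse target domain.

    source_prior:  popularity scores extracted from the source domain
    target_counts: observed request counts at the small cell (often sparse)
    alpha:         weight on the transferred prior (hypothetical knob)
    """
    target = target_counts / max(target_counts.sum(), 1)  # normalize; guard empty history
    prior = source_prior / source_prior.sum()
    return alpha * prior + (1 - alpha) * target

def cache_top_k(popularity, capacity):
    """Greedily cache the most popular contents subject to storage capacity."""
    return np.argsort(popularity)[::-1][:capacity]

# Cold start: almost no target observations, so the transferred prior
# dominates the caching decision instead of leaving it undefined.
prior = np.array([0.5, 0.3, 0.15, 0.05])
counts = np.array([0, 1, 0, 0])
cached = cache_top_k(blended_popularity(prior, counts), capacity=2)
```

This illustrates only the data-sparsity/cold-start mechanism; the paper's target-domain optimization additionally accounts for traffic load and backhaul capacity.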

    A template-based sub-optimal content distribution for D2D content sharing networks

    We propose Templatized Elastic Assignment (TEA), a light-weight scheme for mobile cooperative caching networks. It consists of two components: (1) one to calculate a sub-optimal distribution for each situation, and (2) fine-grained ID management by base stations (BSs) to achieve the calculated distribution. The former is modeled from the finding that the desirable distribution, plotted on a semilog graph, forms a downward straight line whose slope and Y-intercept depend on the bias of requests and the total cache capacity, respectively. The latter is inspired by the identifier (ID)-based scheme, which ties devices and content by a randomly associated ID. TEA achieves the calculated distribution with IDs by using annotations from BSs, which are preliminarily calculated from the template at a fine-grained density of devices. Moreover, such fine-grained management secondarily standardizes the cached content among multiple densities and enables the reuse of content in devices from other BSs. Evaluation results indicate that our scheme (1) reduces 8.3 times more traffic than LFU and achieves almost the same traffic reduction as a genetic algorithm, (2) cuts 45 hours of computation down to a few seconds, and (3) avoids at most 70% of content replacement across multiple BSs.
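
    The semilog straight-line finding above can be sketched as a template: the number of cached copies decays exponentially with content rank, the slope set by the request bias and the intercept fixed by the total cache capacity. The exact parameterization below is an assumption for illustration, not TEA's actual formula:

```python
import math

def template_distribution(num_contents, total_capacity, slope):
    """Template for the number of cached copies per content rank.

    On a semilog graph the desirable distribution is a downward straight
    line, i.e. copies(rank) is exponential in rank.  `slope` plays the
    role of the request bias; the Y-intercept is fixed here by scaling
    so the copies sum to the total cache capacity (illustrative choice).
    """
    raw = [math.exp(-slope * r) for r in range(num_contents)]
    scale = total_capacity / sum(raw)        # set the intercept from capacity
    return [scale * x for x in raw]

dist = template_distribution(num_contents=5, total_capacity=100, slope=0.5)
```

By construction, successive log-differences of `dist` are constant (the straight line on the semilog plot) and the copies sum to the capacity budget.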

    Joint Optimization of Caching Placement and Trajectory for UAV-D2D Networks

    With the exponential growth of data traffic in wireless networks, edge caching has been regarded as a promising solution to offload data traffic and alleviate backhaul congestion, where contents can be cached by an unmanned aerial vehicle (UAV) and user terminals (UTs) with local data storage. In this article, a cooperative caching architecture of UAV and UTs with scalable video coding (SVC) is proposed, which provides high-transmission-rate content delivery and personalized video viewing qualities in hotspot areas. In the proposed cache-enabling UAV-D2D networks, we formulate a joint optimization problem of UT caching placement, UAV trajectory, and UAV caching placement to maximize the cache utility. To solve this challenging mixed-integer nonlinear programming problem, the optimization problem is decomposed into three sub-problems. Specifically, we obtain the UT caching placement by a many-to-many swap matching algorithm, then obtain the UAV trajectory and UAV caching placement by approximate convex optimization and dynamic programming, respectively. Finally, we propose a low-complexity iterative algorithm for the formulated optimization problem to improve the system capacity, fully utilize the cache space resource, and provide diverse delivery qualities for video traffic. Simulation results reveal that: i) the proposed cooperative caching architecture of UAV and UTs obtains larger cache utility than cache-enabling UAV networks with the same data storage capacity and radio resources; ii) compared with the benchmark algorithms, the proposed algorithm improves cache utility and reduces the backhaul offloading ratio effectively.
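
    The UAV caching placement sub-problem solved by dynamic programming can be illustrated as a 0/1 knapsack over contents: maximize cache utility subject to the UAV's storage capacity. This framing, and all sizes and utilities below, are illustrative assumptions rather than the paper's exact formulation:

```python
def uav_cache_dp(sizes, utilities, capacity):
    """0/1 knapsack DP: choose which contents the UAV caches to maximize
    total utility under its storage capacity (a sketch of the
    dynamic-programming sub-problem mentioned in the abstract)."""
    n = len(sizes)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]            # option 1: skip content i-1
            if sizes[i - 1] <= c:                  # option 2: cache content i-1
                best[i][c] = max(best[i][c],
                                 best[i - 1][c - sizes[i - 1]] + utilities[i - 1])
    # Backtrack to recover which contents were chosen.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            chosen.append(i - 1)
            c -= sizes[i - 1]
    return best[n][capacity], sorted(chosen)

value, picks = uav_cache_dp(sizes=[2, 3, 4], utilities=[3, 4, 8], capacity=5)
```

In the paper this step alternates with the UT placement (swap matching) and trajectory (approximate convex optimization) sub-problems inside the iterative algorithm.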

    Caching deployment algorithm based on user preference in device-to-device networks

    In cache-enabled D2D communication networks, the cache space in a mobile terminal is relatively small compared with the huge amount of multimedia content. As such, the strategy for caching diverse contents in multiple cache-enabled mobile terminals, namely caching deployment, has a substantial impact on network performance. In this paper, a user preference aware caching deployment algorithm is proposed for D2D caching networks. First, based on the concept of user preference, the definition of user interest similarity is given, which can be used to evaluate the similarity of user preferences. Then, the content cache utility of a mobile terminal is defined by taking into consideration the communication coverage of this mobile terminal and the user interest similarity of its adjacent mobile terminals. The logarithmic utility maximization problem for caching deployment is formulated. Subsequently, we relax the logarithmic utility maximization problem and obtain a low-complexity near-optimal solution via the dual decomposition method. The convergence of the proposed caching deployment algorithm is validated by simulation results. Compared with the existing caching placement methods, the proposed algorithm achieves significant improvements in cache hit ratio, content access delay and traffic offloading gain.
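
    The relax-and-dual-decompose step can be sketched for a simplified log-utility allocation: maximize sum_i w_i * log(x_i) subject to sum_i x_i <= C, solved by per-terminal maximizers coordinated through subgradient updates of a dual price. The weights (standing in for interest-similarity scores) and the step size below are illustrative:

```python
def dual_cache_allocation(weights, capacity, steps=2000, lr=0.01):
    """Subgradient dual decomposition for a log-utility cache allocation:
        maximize  sum_i w_i * log(x_i)   s.t.  sum_i x_i <= capacity.
    Each terminal maximizes its own Lagrangian term given the dual price
    lam, and lam is adjusted until the capacity constraint is met."""
    lam = 1.0
    for _ in range(steps):
        x = [w / lam for w in weights]   # per-terminal maximizer: d/dx (w log x - lam x) = 0
        lam = max(lam + lr * (sum(x) - capacity), 1e-6)  # raise price if over capacity
    return [w / lam for w in weights]

alloc = dual_cache_allocation(weights=[3.0, 1.0], capacity=8.0)
```

For this concave problem the known closed-form optimum is x_i = C * w_i / sum_j w_j, so the iteration should approach [6, 2] here, which makes the sketch easy to sanity-check.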

    Efficient Traffic Management Algorithms for the Core Network using Device-to-Device Communication and Edge Caching

    The exponentially growing number of communicating devices and the need for faster, more reliable and more secure communication are becoming major challenges for the current mobile communication architecture. More connected devices mean higher bandwidth demand and stricter Quality of Service (QoS) requirements, which bring new challenges in terms of resource and traffic management. Traffic offloading to the edge has been introduced to tackle this demand explosion, letting the core network offload some of the contents to the edge to reduce traffic congestion. Device-to-Device (D2D) communication and edge caching have been proposed as promising solutions for offloading data. D2D communication refers to a communication infrastructure in which users in proximity communicate with each other directly. D2D communication improves overall spectral efficiency; however, it introduces additional interference into the system. To enable D2D communication, efficient resource allocation must be introduced to minimize the interference in the system, which also benefits the system in terms of bandwidth efficiency. In the first part of this thesis, a low-complexity resource allocation algorithm using stable matching is proposed to optimally assign appropriate uplink resources to devices in order to minimize interference among D2D and cellular users. Edge caching has recently been introduced as a modification of the caching scheme in the core network, which enables a cellular Base Station (BS) to keep copies of contents in order to better serve users and enhance Quality of Experience (QoE). However, enabling BSs to cache data at the edge of the network brings new challenges, especially in deciding which contents should be cached and how. Since users in the same cell may share similar content needs, this spatio-temporal correlation can be exploited in favor of the caching system; it is referred to as local content popularity. Content popularity is the most important factor in the caching scheme, helping the BSs cache appropriate data in order to serve users more efficiently. In the edge caching scheme, the BS does not know the users' request pattern in advance. To overcome this bottleneck, content popularity prediction using a Markov Decision Process (MDP) is proposed in the second part of this thesis to let the BS know which data should be cached in each time-slot. By using the proposed scheme, core network access requests can be significantly reduced, and the scheme performs better than caching based on historical data under both stable and unstable content popularity.
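
    A much-simplified stand-in for the MDP-based popularity prediction (a plain Markov chain without actions or rewards, with add-one smoothing; the states, history and function names are illustrative) might look like:

```python
import numpy as np

def fit_transitions(state_seq, num_states):
    """Estimate a popularity transition matrix from an observed sequence of
    per-slot popularity states (add-one smoothing avoids zero rows)."""
    P = np.ones((num_states, num_states))
    for s, s_next in zip(state_seq, state_seq[1:]):
        P[s, s_next] += 1
    return P / P.sum(axis=1, keepdims=True)   # normalize rows to probabilities

def predict_next(P, current_state):
    """Most likely popularity state in the next time-slot."""
    return int(np.argmax(P[current_state]))

# Illustrative history: a content that oscillates between 'cold' (0) and
# 'hot' (1); the learned chain predicts the next slot from the current one.
history = [0, 1, 0, 1, 0, 1, 0, 1]
P = fit_transitions(history, num_states=2)
nxt = predict_next(P, current_state=1)
```

A BS could then cache the contents whose predicted next-slot state is 'hot', rather than ranking purely by historical counts; the full MDP in the thesis additionally models caching actions and their rewards.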

    User preference aware caching deployment for device-to-device caching networks

    Content caching in device-to-device (D2D) cellular networks can be utilized to improve content delivery efficiency and reduce the traffic load of cellular networks. In such cache-enabled D2D cellular networks, how to cache diverse contents in multiple cache-enabled mobile terminals, namely, the caching deployment, has a substantial impact on the network performance. In this paper, a user preference aware caching deployment algorithm is proposed for D2D caching networks. First, the definition of user interest similarity is given based on user preference. Then, the content cache utility of a mobile terminal is defined by taking into consideration the transmission coverage region of this mobile terminal and the user interest similarity of its adjacent mobile terminals. A general cache utility maximization problem with joint caching deployment and cache space allocation is formulated, into which a logarithmic utility function is integrated. In doing so, the caching deployment and the cache space allocation can be decoupled by equal cache space allocation. Subsequently, we relax the logarithmic utility maximization problem and obtain a low-complexity near-optimal solution via a dual decomposition method. Compared with the existing caching placement methods, the proposed algorithm achieves significant improvements in cache hit ratio, content access delay, and traffic offloading gain.