
    Centralized coded caching for heterogeneous lossy requests

    Centralized coded caching of popular contents is studied for users with heterogeneous distortion requirements, corresponding to diverse processing and display capabilities of mobile devices. Users' distortion requirements are assumed to be fixed and known, while their particular demands are revealed only after the placement phase. Modeling each file in the database as an independent and identically distributed Gaussian vector, the minimum delivery rate that can satisfy any demand combination within the corresponding distortion target is studied. The optimal delivery rate is characterized for the special case of two users and two files for any pair of distortion requirements. For the general setting with multiple users and files, a layered caching and delivery scheme, which exploits the successive refinability of Gaussian sources, is proposed. This scheme caches each content in multiple layers, and it is optimized by solving two subproblems: lossless caching of each layer with heterogeneous cache capacities, and allocation of available caches among layers. The delivery rate minimization problem for each layer is solved numerically, while two schemes, called the proportional cache allocation (PCA) and ordered cache allocation (OCA), are proposed for cache allocation. These schemes are compared with each other and the cut-set bound through numerical simulations.
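    The abstract does not spell out the PCA rule, but a natural reading is that each user's cache is split among the successive-refinement layers in proportion to some per-layer weight, such as the layer rate. The Python sketch below illustrates that interpretation only; the function name, the proportional-to-rate weighting, and the example numbers are assumptions, not the paper's specification.

        # Hypothetical sketch of proportional cache allocation (PCA) across
        # successive-refinement layers; the weighting rule is an assumption.
        def pca_allocation(cache_sizes, layer_rates):
            """Split each user's cache budget among layers proportionally to the
            layers' rates (illustrative choice, not necessarily the paper's rule)."""
            total = sum(layer_rates)
            # allocation[k][l]: cache (in bits) that user k devotes to layer l
            return [[m * r / total for r in layer_rates] for m in cache_sizes]

        # Example: three users with heterogeneous caches, two refinement layers
        caches = [4.0, 8.0, 16.0]   # cache capacities (MF bits per user)
        layers = [1.0, 0.5]         # per-file rates of base and refinement layers
        print(pca_allocation(caches, layers))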

    Fundamental Limits of Coded Caching: Improved Delivery Rate-Cache Capacity Trade-off

    A centralized coded caching system, consisting of a server delivering N popular files, each of size F bits, to K users through an error-free shared link, is considered. It is assumed that each user is equipped with a local cache memory with capacity MF bits, and contents can be proactively cached into these caches over a low traffic period; however, without the knowledge of the user demands. During the peak traffic period each user requests a single file from the server. The goal is to minimize the number of bits delivered by the server over the shared link, known as the delivery rate, over all user demand combinations. A novel coded caching scheme for the cache capacity of M = (N-1)/K is proposed. It is shown that the proposed scheme achieves a smaller delivery rate than the existing coded caching schemes in the literature when K > N >= 3. Furthermore, we argue that the delivery rate of the proposed scheme is within a constant multiplicative factor of 2 of the optimal delivery rate for cache capacities 1/K <= M <= (N-1)/K, when K > N >= 3. Comment: To appear in IEEE Transactions on Communications
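    For context, the baseline centralized scheme of Maddah-Ali and Niesen attains delivery rate R = K(1 - M/N) / (1 + KM/N) at the corner points M = tN/K, t = 0, 1, ..., K, and improvements such as the one claimed above are measured against curves of this kind. The sketch below evaluates that well-known baseline for an example with K > N >= 3; it is not an implementation of the proposed scheme, and the parameter values are illustrative.

        from fractions import Fraction

        def baseline_rate(K, N, t):
            """Delivery rate of the Maddah-Ali--Niesen centralized scheme at the
            corner point M = t*N/K (t an integer in 0..K). This is the known
            reference scheme, not the improved scheme proposed in the paper."""
            M = Fraction(t * N, K)
            R = Fraction(K) * (1 - M / N) / (1 + K * M / N)
            return M, R

        K, N = 5, 3   # an example with K > N >= 3, as in the abstract
        for t in range(K + 1):
            M, R = baseline_rate(K, N, t)
            print(f"M = {M}: R = {R} (~{float(R):.3f})")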

    Coded Caching for a Large Number of Users

    Information theoretic analysis of a coded caching system is considered, in which a server with a database of N equal-size files, each F bits long, serves K users. Each user is assumed to have a local cache that can store M files, i.e., capacity of MF bits. Proactive caching to user terminals is considered, in which the caches are filled by the server in advance during the placement phase, without knowing the user requests. Each user requests a single file, and all the requests are satisfied simultaneously through a shared error-free link during the delivery phase. First, centralized coded caching is studied assuming both the number and the identity of the active users in the delivery phase are known by the server during the placement phase. A novel group-based centralized coded caching (GBC) scheme is proposed for a cache capacity of M = N/K. It is shown that this scheme achieves a smaller delivery rate than all the known schemes in the literature. The improvement is then extended to a wider range of cache capacities through memory-sharing between the proposed scheme and other known schemes in the literature. Next, the proposed centralized coded caching idea is exploited in the decentralized setting, in which the identities of the users that participate in the delivery phase are assumed to be unknown during the placement phase. It is shown that the proposed decentralized caching scheme also achieves a delivery rate smaller than the state-of-the-art. Numerical simulations are also presented to corroborate our theoretical results.
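    The memory-sharing argument mentioned above splits the library and every cache into two parts and runs a different scheme on each part, so any point on the line segment joining two achievable (M, R) pairs is also achievable. The sketch below states that standard argument; the numeric points are purely illustrative and are not taken from the paper.

        def memory_sharing_rate(M1, R1, M2, R2, M):
            """Achievable delivery rate at a cache size M1 <= M <= M2, obtained by
            operating a fraction alpha of the files/caches under the scheme
            achieving (M1, R1) and the rest under the scheme achieving (M2, R2)."""
            assert M1 <= M <= M2 and M1 < M2
            alpha = (M2 - M) / (M2 - M1)
            return alpha * R1 + (1 - alpha) * R2

        # Illustrative numbers only (not from the paper): interpolate between a
        # point at M = N/K and a neighbouring achievable point.
        print(memory_sharing_rate(M1=0.6, R1=2.0, M2=1.2, R2=1.5, M=0.9))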

    Cache-Aided Delivery Networks with Correlated Content in a Shared Cache Framework

    Internet traffic is growing exponentially due to the penetration of powerful internet-connected devices and cutting-edge technologies. Additionally, the rise in internet usage has coincided with a shift in the nature of data traffic from voice-based to content-based usage, putting significant stress on delivery networks. Despite the infrastructural advancements in communication networks over the past few years, content delivery networks (CDNs) still face challenges in keeping up with the high delivery data rates and suffer from the imbalanced network load between off-peak hours and peak hours. In this regard, content caching has emerged as an efficient technique to combat the high delivery data rates and maintain a balanced network load while improving the quality of service (QoS) by storing some popular content close to the end users. Caching networks operate in two phases: the placement phase, carried out during off-peak hours before users reveal their demands, and the delivery phase, which takes place when users' demands are revealed to the server during peak hours. As the server is unaware of the demands during the placement phase, this phase must be designed carefully to minimize the delivery rate regardless of the content requested during peak hours.
    This dissertation studies cache-aided delivery networks with correlated content in a shared cache framework. A shared cache framework is beneficial in current and next-generation wireless networks as it provides a local cache to all users within small base stations (SBSs), relieving strain on the backhaul. Furthermore, in many practical applications the library of a caching network consists of content with a high degree of similarity; therefore, the similarity among library content can also be exploited to reduce the delivery rate in such networks. In this dissertation, we look at the proposed caching network from an information-theoretic perspective and formulate it as a distributed source coding problem with side information at the decoder. The critical question then arises as to what should be cached as side information to efficiently reduce the delivery rate of the network. To answer this question, we propose an automatic clustering scheme using artificial intelligence (AI)-based optimization techniques to identify the selected side information for the entire library. We comprehensively evaluate the performance of the general clustering framework in a separate chapter by considering different datasets and distance measures.
    The general clustering framework enables us to develop two novel clustering schemes as part of the placement phase of the proposed caching networks under different settings throughout this study, considering both the similarity and the popularity of the library content. Upon identifying the selected side information for such networks, the next question to be answered is how the side information should be placed into the caches and, consequently, what the delivery strategy for this placement scheme should be. We answer these questions by considering three different caching networks: first, a network with a single shared cache under lossy caching; next, a network with multiple shared caches under uniform popularity; and finally, a network with multiple shared caches under non-uniform preferences. In these networks, we address the placement and delivery strategy to show the trade-off between the delivery rate and the memory size of the system.
    We calculate the peak and expected rates of the studied networks by considering the rate-distortion function and the caching strategy. We also introduce an optimal library partitioning, formulated to minimize the peak delivery rate in the system. The performance analysis and extensive simulations of the proposed solution confirm that our scheme provides a considerable boost in network efficiency compared to legacy caching schemes, owing to our problem formulation and the careful extraction of side information during the placement phase.
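    The peak-rate computation referred to above rests on a rate-distortion function; for the i.i.d. Gaussian source model common in this line of work, R(D) = (1/2) log2(sigma^2 / D) bits per sample for 0 < D <= sigma^2. The sketch below evaluates that standard formula; the variance and distortion targets are illustrative assumptions, not values from the dissertation.

        import math

        def gaussian_rate_distortion(sigma2, D):
            """Rate-distortion function of an i.i.d. Gaussian source with variance
            sigma2 under squared-error distortion: R(D) = 0.5 * log2(sigma2 / D)
            bits per sample for 0 < D <= sigma2, and 0 beyond sigma2."""
            if D >= sigma2:
                return 0.0
            return 0.5 * math.log2(sigma2 / D)

        # Illustrative variance and distortion targets (assumed values):
        sigma2 = 1.0
        for D in (1.0, 0.5, 0.25, 0.1):
            print(f"D = {D}: R(D) = {gaussian_rate_distortion(sigma2, D):.3f} bits/sample")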