188 research outputs found

    Fundamental Limits of Caching in Wireless D2D Networks

    We consider a wireless Device-to-Device (D2D) network where communication is restricted to be single-hop. Users make arbitrary requests from a finite library of files and have pre-cached information on their devices, subject to a per-node storage capacity constraint. A similar problem has already been considered in an "infrastructure" setting, where all users receive a common multicast (coded) message from a single omniscient server (e.g., a base station having all the files in the library) through a shared bottleneck link. In this work, we consider a D2D "infrastructure-less" version of the problem. We propose a caching strategy based on deterministic assignment of subpackets of the library files, and a coded delivery strategy where the users send linearly coded messages to each other in order to collectively satisfy their demands. We also consider a random caching strategy, which is more suitable for a fully decentralized implementation. Under certain conditions, both approaches can achieve the information-theoretic outer bound within a constant multiplicative factor. In our previous work, we showed that a caching D2D wireless network with one-hop communication, random caching, and uncoded delivery achieves the same throughput scaling law as the infrastructure-based coded multicasting scheme, in the regime of a large number of users and files in the library. This shows that the spatial reuse gain of the D2D network is order-equivalent to the coded multicasting gain of single base station transmission. It is therefore natural to ask whether these two gains are cumulative, i.e., whether a D2D network with both local communication (spatial reuse) and coded multicasting can provide an improved scaling law. Somewhat counterintuitively, we show that these gains do not cumulate (in terms of throughput scaling law).
    Comment: 45 pages, 5 figures, Submitted to IEEE Transactions on Information Theory. This is the extended version of the conference (ITW) paper arXiv:1304.585
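    As a rough illustration of the kind of deterministic subpacket assignment this abstract refers to, the sketch below implements the standard combinatorial placement from the coded caching literature, assuming K users, a library of N files, a per-node cache of M files, and t = KM/N an integer; the paper's actual subpacketization and D2D delivery phase are more involved, and all names and parameters here are illustrative.

```python
from itertools import combinations

def deterministic_placement(K, N, M):
    """Illustrative combinatorial subpacket placement (a sketch, not the paper's exact scheme).

    Each of the N files is split into C(K, t) subpackets, t = K*M/N, with each
    subpacket labelled by a t-subset of users; user k caches every subpacket
    whose label contains k.  Returns a dict: user -> set of (file, label).
    """
    t = K * M // N                            # assumes K*M/N is an integer
    labels = list(combinations(range(K), t))
    cache = {k: set() for k in range(K)}
    for f in range(N):
        for lab in labels:
            for k in lab:
                cache[k].add((f, lab))
    return cache

# Example: 4 users, 4 files, room for M = 2 files per device.
cache = deterministic_placement(K=4, N=4, M=2)
# Each user stores C(K-1, t-1) = 3 of the C(K, t) = 6 subpackets of every file,
# i.e. a fraction M/N = 1/2 of each file, which meets the storage constraint.
print(len(cache[0]), "subpackets cached by user 0")   # -> 12
```

    In the D2D delivery phase described above, users in small groups would then exchange linearly coded (e.g., XORed) subpackets so that each group member recovers what it is missing; the exact grouping and coding are as specified in the paper, not here.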

    Caching and Coded Multicasting: Multiple Groupcast Index Coding

    The capacity of caching networks has received considerable attention in the past few years. A particularly studied setting is the case of a single server (e.g., a base station) and multiple users, each of which caches segments of files in a finite library. Each user requests one (whole) file in the library and the server sends a common coded multicast message to satisfy all users at once. The problem consists of finding the smallest possible codeword length to satisfy such requests. In this paper we consider the generalization to the case where each user places $L \geq 1$ requests. The obvious naive scheme consists of applying $L$ times the order-optimal scheme for a single request, obtaining a linear-in-$L$ scaling of the multicast codeword length. We propose a new achievable scheme based on multiple groupcast index coding that achieves a significant gain over the naive scheme. Furthermore, through an information-theoretic converse we find that the proposed scheme is approximately optimal within a constant factor of (at most) 18.
    Comment: 5 pages, 1 figure, to appear in GlobalSIP14, Dec. 201
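    To make the "multiple groupcast" viewpoint concrete, here is a minimal sketch that builds the side-information structure of the index coding problem induced by the caches and the $L$ requests per user; the construction, names, and toy data are our own illustration, not the paper's achievable scheme.

```python
def index_coding_instance(caches, requests):
    """Build the induced index-coding instance (illustrative sketch).

    caches[k]  : set of packets already stored at user k
    requests[k]: list of packets user k wants (L >= 1 of them)
    Vertices are (user, wanted packet) pairs; a directed edge u -> v means the
    user behind u already caches the packet wanted at v, i.e. it can use that
    packet as side information when decoding a coded transmission.
    """
    wants = [(k, p) for k, pkts in enumerate(requests)
             for p in pkts if p not in caches[k]]
    edges = {(u, v) for u in wants for v in wants
             if u != v and v[1] in caches[u[0]]}
    return wants, edges

caches   = [{"A1"}, {"B1"}, {"C1"}]
requests = [["B1", "C1"], ["A1", "C1"], ["A1", "B1"]]   # L = 2 requests each
wants, edges = index_coding_instance(caches, requests)
print(len(wants), "demanded packets,", len(edges), "side-information edges")
# The naive baseline would serve the L request rounds separately; exploiting
# these edges jointly across all requests is what the groupcast scheme targets.
```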

    Fundamental Limits of Distributed Caching in D2D Wireless Networks

    We consider a wireless Device-to-Device (D2D) network where communication is restricted to be single-hop, users make arbitrary requests from a finite library of possible files, and user devices cache information in the form of linear combinations of packets from the files in the library (coded caching). We consider the combined effect of coding in the caching and delivery phases, achieving a "coded multicast gain", and of spatial reuse due to local short-range D2D communication. Somewhat counterintuitively, we show that the coded multicast gain and the spatial reuse gain do not cumulate, in terms of the throughput scaling laws. In particular, the spatial reuse gain shown in our previous work on uncoded random caching and the coded multicast gain shown in this paper yield the same scaling-law behavior, but no further scaling-law gain can be achieved by using both coded caching and D2D spatial reuse.
    Comment: 5 pages, 3 figures, submitted to ITW 201
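    As a back-of-the-envelope illustration of why the gains do not cumulate at the scaling-law level, the sketch below compares two normalized delivery-load expressions commonly used in this line of work: the single-server coded multicast load K(1-M/N)/(1+KM/N) and a D2D coded delivery load (N/M)(1-M/N). Both expressions are assumptions borrowed from the broader coded-caching literature rather than formulas stated in this abstract; the point is only that both approach N/M - 1 for large K, i.e. the 1/M gain appears once.

```python
def rate_server_coded(K, N, M):
    # Single-server coded multicast load (Maddah-Ali/Niesen-type expression).
    return K * (1 - M / N) / (1 + K * M / N)

def rate_d2d_coded(N, M):
    # D2D coded delivery load assumed here for illustration (needs K*M >= N).
    return (N / M) * (1 - M / N)

N, M = 1000, 100
for K in (50, 200, 1000):
    print(K, round(rate_server_coded(K, N, M), 2), round(rate_d2d_coded(N, M), 2))
# Both columns tend to N/M - 1 = 9 as K grows, consistent with the claim that
# the caching gain shows up once, whichever mechanism provides it.
```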

    Caching with Unknown Popularity Profiles in Small Cell Networks

    A heterogeneous network is considered where the base stations (BSs), small base stations (SBSs) and users are distributed according to independent Poisson point processes (PPPs). The SBS nodes are assumed to possess high storage capacity and to form a distributed caching network. Popular data files are stored in the local caches of the SBSs, so that users can download the desired files from one of the SBSs in the vicinity, subject to availability. The offloading loss is captured via a cost function that depends on a random caching strategy proposed in this paper. The cost function depends on the popularity profile, which is, in general, unknown. In this work, the popularity profile is estimated at the BS using the available instantaneous demands from the users in a time interval $[0,\tau]$. This is then used to find an estimate of the cost function, from which the optimal random caching strategy is devised. The main results of this work are the following. First, it is shown that the waiting time $\tau$ needed to achieve an $\epsilon>0$ difference between the achieved and optimal costs is finite, provided the user density is greater than a predefined threshold. In this case, $\tau$ is shown to scale as $N^2$, where $N$ is the support of the popularity profile. Second, a transfer learning-based approach is proposed to obtain an estimate of the popularity profile used to compute the empirical cost function. A condition is derived under which the proposed transfer learning-based approach performs better than the random caching strategy.
    Comment: 6 pages, Proceedings of IEEE Global Communications Conference, 201
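    A minimal sketch of the estimate-then-optimize pipeline described above, under simplifying assumptions: demands observed over a finite window give an empirical popularity profile, and a deliberately simple illustrative random-caching rule is derived from the estimate. The paper's actual cost function, its optimization, and the $N^2$ waiting-time analysis are not reproduced here.

```python
import random
from collections import Counter

def estimate_popularity(observed_requests, N):
    # Empirical popularity profile from the demands seen in the window [0, tau].
    counts = Counter(observed_requests)
    total = max(len(observed_requests), 1)
    return [counts.get(f, 0) / total for f in range(N)]

def random_caching_rule(p_hat, cache_slots):
    # Illustrative rule only: cache each file independently with probability
    # proportional to its estimated popularity, scaled to the cache budget.
    scale = cache_slots / max(sum(p_hat), 1e-12)
    return [min(1.0, scale * p) for p in p_hat]

# Toy example with a Zipf(1)-like true profile over N = 20 files.
N = 20
weights = [1.0 / (f + 1) for f in range(N)]
true_p = [w / sum(weights) for w in weights]
demands = random.choices(range(N), weights=true_p, k=500)   # demands in the window
p_hat = estimate_popularity(demands, N)
print([round(q, 2) for q in random_caching_rule(p_hat, cache_slots=5)[:5]])
```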

    Caching Eliminates the Wireless Bottleneck in Video Aware Wireless Networks


    A Survey on Caching in Distributed Small Cell Networks

    The exponential growth of mobile devices such as smartphones and tablets, coupled with the proliferation of online social networks, has considerably increased the traffic in cellular networks. In contrast to classical cellular traffic, which was based only on voice and audio communications, recent technologies have resulted in bandwidth-intensive services such as video streaming and video conferencing that further increase the traffic among users. This traffic surge affects the capacity of existing wireless networks, which makes it difficult to ensure the high quality-of-service (QoS) required by cellular services. To cope with the limited capacity of existing cellular networks and keep up with the strict QoS requirements, in terms of data rate and tolerable application-specific delays, a new generation of wireless networks has emerged. To achieve the requirements of this new generation and provide efficient infrastructure support for this data deluge, several research challenges must be addressed and solved. In this paper, a survey of the literature on small cell networks in distributed environments is presented, with a focus on caching as a means to improve performance. Related work on caching in distributed small cell networks is also presented.

    A Learning-Based Approach to Caching in Heterogenous Small Cell Networks

    A heterogeneous network with base stations (BSs), small base stations (SBSs) and users distributed according to independent Poisson point processes is considered. SBS nodes are assumed to possess high storage capacity and to form a distributed caching network. Popular files are stored in local caches of SBSs, so that a user can download the desired files from one of the SBSs in its vicinity. The offloading loss is captured via a cost function that depends on the random caching strategy proposed here. The popularity profile of cached content is unknown and estimated using instantaneous demands from users within a specified time interval. An estimate of the cost function is obtained, from which an optimal random caching strategy is devised. The training time to achieve an $\epsilon>0$ difference between the achieved and optimal costs is finite provided the user density is greater than a predefined threshold, and scales as $N^2$, where $N$ is the support of the popularity profile. A transfer learning-based approach to improve this estimate is proposed. The training time is reduced when the popularity profile is modeled using a parametric family of distributions; the delay is independent of $N$ and scales linearly with the dimension of the distribution parameter.
    Comment: 12 pages, 5 figures, published in IEEE Transactions on Communications, 2016. arXiv admin note: text overlap with arXiv:1504.0363
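    A minimal sketch of the parametric idea mentioned above, assuming (purely for illustration) a one-parameter Zipf family for the popularity profile: fitting a single exponent instead of all $N$ popularity values is what lets the training burden track the parameter dimension rather than $N$. The family, the grid-search fit, and all names below are our assumptions, not the paper's estimator.

```python
import math
import random

def zipf_pmf(alpha, N):
    # Assumed parametric family: Zipf with exponent alpha over N files.
    w = [1.0 / (f + 1) ** alpha for f in range(N)]
    Z = sum(w)
    return [x / Z for x in w]

def fit_zipf_exponent(demands, N, grid=None):
    # One-dimensional maximum-likelihood fit by grid search over the exponent.
    grid = grid or [0.1 * i for i in range(1, 31)]
    def loglik(alpha):
        p = zipf_pmf(alpha, N)
        return sum(math.log(p[d]) for d in demands)
    return max(grid, key=loglik)

N, alpha_true = 1000, 0.8
demands = random.choices(range(N), weights=zipf_pmf(alpha_true, N), k=2000)
print("fitted exponent:", fit_zipf_exponent(demands, N))   # close to 0.8
```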