
    Device-to-Device Secure Coded Caching

    This paper studies device-to-device (D2D) coded caching with information-theoretic security guarantees. A broadcast network consisting of a server, which has a library of files, and end users equipped with cache memories is considered. Information-theoretic confidentiality guarantees are imposed on the files. The server populates the end-user caches, after which D2D communications deliver the requested files. Accordingly, we require that a user must not have access to files it did not request, i.e., secure caching. First, a centralized coded caching scheme is provided by jointly optimizing the cache placement and delivery policies. Next, a decentralized coded caching scheme is developed that does not require knowledge of the number of active users during the caching phase. Both schemes utilize non-perfect secret sharing and one-time pad keying to guarantee secure caching. Furthermore, the proposed schemes provide secure delivery as a side benefit, i.e., any external entity that overhears the transmitted signals during the delivery phase cannot obtain any information about the database files. The proposed schemes yield achievable upper bounds on the minimum delivery sum rate. Lower bounds on the required transmission sum rate are also derived using cut-set arguments, establishing the multiplicative gap between the lower and upper bounds. Numerical results indicate that the gap vanishes with increasing memory size. Overall, the work demonstrates the effectiveness of D2D communications in cache-aided systems even when confidentiality constraints are imposed at the participating nodes and against external eavesdroppers.
    Comment: 12 pages, 5 figures, under review
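    The one-time pad keying that both schemes rely on can be illustrated with a minimal two-user sketch (a hypothetical toy, not the paper's full non-perfect secret-sharing construction): the server caches an already-padded share of user 1's subfile at user 2, so user 2 can relay it over the D2D link without learning its content, and only user 1, which holds the pad, can unmask it.

    ```python
    import secrets

    def xor(a: bytes, b: bytes) -> bytes:
        # bitwise XOR of two equal-length byte strings
        assert len(a) == len(b)
        return bytes(x ^ y for x, y in zip(a, b))

    # Hypothetical placement: the server draws a one-time pad for user 1 and
    # caches only the padded share at user 2, so the cached bits alone reveal
    # nothing about the file (secure caching).
    subfile_1 = b"\x10" * 8                # subfile user 1 will request
    key_1 = secrets.token_bytes(8)         # pad cached only by user 1
    share_at_2 = xor(subfile_1, key_1)     # padded share cached at user 2

    # Delivery: user 2 relays the padded share over the D2D link. Neither
    # user 2 nor an external eavesdropper learns subfile_1; user 1 strips
    # its pad to recover the subfile (secure delivery as a side benefit).
    assert xor(share_at_2, key_1) == subfile_1
    ```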

    Towards Practical File Packetizations in Wireless Device-to-Device Caching Networks

    We consider wireless device-to-device (D2D) caching networks with single-hop transmissions. Previous work has demonstrated that caching and coded multicasting can significantly increase per-user throughput. However, the state-of-the-art coded caching schemes for D2D networks are generally impractical because content files are partitioned into a number of packets that grows exponentially with the number of users when both library and memory sizes are fixed. In this paper, we present two combinatorial approaches to D2D coded caching network design with reduced packetization and the desired throughput gain over conventional uncoded unicasting. The first approach uses a "hypercube" design, where each user caches a "hyperplane" in this hypercube and the intersections of "hyperplanes" represent coded multicasting codewords. In addition, we extend the hypercube approach to a decentralized design. The second approach uses the Ruzsa-Szemerédi graph to define the cache placement. Disjoint matchings on this graph represent coded multicasting codewords. Both approaches yield an exponential reduction in packetization while providing a per-user throughput that is comparable to the state-of-the-art designs in the literature. Furthermore, we apply spatial reuse to the new D2D network designs to further reduce the required packetization and significantly improve per-user throughput for some parameter regimes.
    Comment: 32 pages, 5 figures
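    The hypercube placement described above can be sketched concretely (an illustrative toy with hypothetical side length m and dimension d, not the paper's complete scheme): each file is split into m**d packets indexed by grid points, and the user identified by (dimension i, value v) caches the "hyperplane" of packets whose i-th coordinate equals v, so packetization grows as m**d rather than exponentially in the number of users.

    ```python
    from itertools import product

    m, d = 3, 2                                           # hypothetical parameters
    packets = list(product(range(m), repeat=d))           # m**d packets per file
    users = [(i, v) for i in range(d) for v in range(m)]  # K = d*m users

    def cache(user):
        # user (i, v) caches the hyperplane {p : p[i] == v}
        i, v = user
        return {p for p in packets if p[i] == v}

    # Each user stores a 1/m fraction of every file.
    assert all(len(cache(u)) == m ** (d - 1) for u in users)

    # Caches of users on different dimensions intersect in exactly one packet;
    # these intersections index the coded-multicast codewords.
    assert cache((0, 1)) & cache((1, 2)) == {(1, 2)}
    ```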

    Device-to-Device Coded Caching with Distinct Cache Sizes

    This paper considers a cache-aided device-to-device (D2D) system where the users are equipped with cache memories of different sizes. During low-traffic hours, a server places content in the users' cache memories, knowing that the files requested by the users during peak-traffic hours will have to be delivered by D2D transmissions only. The worst-case D2D delivery load is minimized by jointly designing the uncoded cache placement and linear coded D2D delivery. Next, a novel lower bound on the D2D delivery load with uncoded placement is proposed and used to explicitly characterize the minimum D2D delivery load (MD2DDL) with uncoded placement for several cases of interest. In particular, having characterized the MD2DDL for equal cache sizes, it is shown that the same delivery load can be achieved in a network with users of unequal cache sizes, provided that the smallest cache size exceeds a certain threshold. The MD2DDL is also characterized in the small cache size regime, the large cache size regime, and the three-user case. Comparisons of the server-based delivery load with the D2D delivery load are provided. Finally, connections and mathematical parallels between cache-aided D2D systems and coded distributed computing (CDC) systems are discussed.
    Comment: 30 pages, 5 figures, submitted to IEEE Transactions on Communications, Mar. 201
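    For the equal-cache baseline behind such server-versus-D2D comparisons, the classical worst-case loads can be computed directly. The sketch below uses the standard equal-cache formulas from the prior coded-caching literature (an assumption for illustration, not this paper's distinct-cache-size characterization), for K users, a library of N files, and per-user cache size M with t = KM/N an integer and KM >= N.

    ```python
    def server_delivery_load(K: int, M: float, N: int) -> float:
        # classical server-based worst-case delivery load: K(1 - M/N) / (1 + KM/N)
        t = K * M / N
        return K * (1 - M / N) / (1 + t)

    def d2d_delivery_load(K: int, M: float, N: int) -> float:
        # classical D2D worst-case delivery load: (N/M)(1 - M/N), for K*M >= N
        return (N / M) * (1 - M / N)

    # Example: K = 4 users, N = 4 files, caches of M = 2 files each.
    # The D2D load exceeds the server-based load, since users can only
    # transmit what they have cached.
    print(server_delivery_load(4, 2, 4))  # 0.666...
    print(d2d_delivery_load(4, 2, 4))     # 1.0
    ```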

    Coded Caching for Broadcast Networks with User Cooperation

    In this paper, we investigate the transmission delay of cache-aided broadcast networks with user cooperation. Novel coded caching schemes are proposed for both centralized and decentralized caching settings, by efficiently exploiting time and cache resources and creating parallel data delivery at the server and users. We derive a lower bound on the transmission delay and show that the proposed centralized coded caching scheme is order-optimal in the sense that it achieves a constant multiplicative gap from the lower bound. Our decentralized coded caching scheme is also order-optimal when each user's cache size is larger than the threshold $N(1-\sqrt[K-1]{1/(K+1)})$ (approaching 0 as $K\to\infty$), where $K$ is the total number of users and $N$ is the size of the file library. Moreover, for both the centralized and decentralized caching settings, our schemes obtain an additional cooperation gain offered by user cooperation and an additional parallel gain offered by parallel transmission among the server and users. It is shown that, in order to reduce the transmission delay, the number of users transmitting in parallel should be chosen appropriately according to the users' cache size; always letting more users transmit in parallel can cause high transmission delay.
    Comment: 43 pages, 5 figures
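    The cache-size threshold quoted above can be checked numerically; the function below is a direct transcription of the abstract's expression and confirms that the threshold shrinks toward 0 as the number of users K grows.

    ```python
    def cache_threshold(N: float, K: int) -> float:
        # threshold N(1 - (1/(K+1))**(1/(K-1))) above which the decentralized
        # scheme is order-optimal
        return N * (1 - (1 / (K + 1)) ** (1 / (K - 1)))

    # The threshold decreases toward 0 as K grows (normalized library N = 1).
    for K in (2, 10, 100):
        print(K, cache_threshold(1.0, K))
    ```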

    Optimal Throughput-Outage Analysis of Cache-Aided Wireless Multi-Hop D2D Networks: Derivations of Scaling Laws

    Cache-aided wireless device-to-device (D2D) networks have demonstrated promising performance improvements for video distribution compared to conventional distribution methods. Understanding the fundamental scaling behavior of such networks is thus of paramount importance. However, existing scaling laws for multi-hop networks have not been shown to be optimal even for the case of Zipf popularity distributions (the gaps between upper and lower bounds are not constant); furthermore, there are no scaling law results for such networks for the more practical case of a Mandelbrot-Zipf (MZipf) popularity distribution. In this work we therefore investigate the throughput-outage performance of cache-aided wireless D2D networks adopting multi-hop communications, with an MZipf popularity distribution for file requests and users distributed according to a Poisson point process. We propose an achievable content caching and delivery scheme and analyze its performance. By showing that the achievable performance matches the proposed outer bound in scaling order, the optimal scaling law is obtained. Furthermore, since the Zipf distribution is a special case of the MZipf distribution, the optimal scaling law for networks with a Zipf popularity distribution is also obtained, which closes the gap in the literature.
    Comment: A condensed version of this paper will be submitted to IEEE Transactions on Communications
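    The Mandelbrot-Zipf popularity model referenced above assigns file f a probability proportional to (f + q)**(-gamma) and reduces to the plain Zipf distribution when the shift q = 0; a minimal sketch (parameter values are illustrative):

    ```python
    def mzipf_pmf(F: int, gamma: float, q: float) -> list[float]:
        # Mandelbrot-Zipf popularity over files f = 1..F: P(f) ∝ (f + q)**(-gamma)
        w = [(f + q) ** (-gamma) for f in range(1, F + 1)]
        s = sum(w)
        return [x / s for x in w]

    # Popularities form a valid, monotonically decreasing distribution.
    pmf = mzipf_pmf(F=1000, gamma=0.8, q=20.0)
    assert abs(sum(pmf) - 1.0) < 1e-9
    assert pmf == sorted(pmf, reverse=True)

    # q = 0 recovers the plain Zipf distribution P(f) ∝ f**(-gamma).
    zipf = mzipf_pmf(F=1000, gamma=0.8, q=0.0)
    assert zipf[0] > pmf[0]  # the Zipf head is more concentrated
    ```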