10 research outputs found
Optimizing MDS Coded Caching in Wireless Networks with Device-to-Device Communication
We consider the caching of content on mobile devices in a dense wireless network using maximum distance separable (MDS) codes. We focus on an area, served by a base station (BS), where mobile devices move around according to a random mobility model. Users requesting a particular file download coded packets from caching devices within a communication range, using device-to-device communication. If additional packets are required to decode the file, these are downloaded from the BS. We analyze the device mobility and derive a good approximation of the distribution of caching devices within the communication range of mobile devices at any given time. We then optimize the MDS codes to minimize the network load under a cache size constraint and show that using optimized MDS codes results in a significantly lower network load compared to caching the most popular files. We further show numerically that caching coded packets of each file on all mobile devices, i.e., maximal spreading, is optimal.
Lifted MDS Codes over Finite Fields
MDS codes are elegant constructions in coding theory and have many important
applications in cryptography, network coding, distributed data storage,
communication systems, etc. In this study, a method is given by which MDS
codes are lifted to a higher finite field. The presented method preserves the
minimum distance, creating an MDS code over the extension field from an MDS
code over $F_p$.
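For intuition on the MDS property itself (any $k$ of $n$ coded symbols suffice to recover the message), here is a minimal Reed-Solomon-style construction over a prime field. This illustrates a generic MDS code, not the paper's lifting method; the field size 257 and the evaluation points are arbitrary choices.

```python
P = 257  # prime field size; the code length n must satisfy n <= P

def eval_lagrange(pairs, x):
    """Evaluate the unique degree-<k polynomial through `pairs` at x (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(pairs):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pairs):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # modular inverse
    return total

def mds_encode(message, n):
    """(n, k) MDS encoding: the message symbols are the polynomial's values
    at 0..k-1; the coded symbols are its values at 0..n-1 (systematic)."""
    base = list(enumerate(message))
    return [eval_lagrange(base, x) for x in range(n)]

def mds_decode(packets, k):
    """Recover the message from ANY k coded symbols given as (index, value)."""
    return [eval_lagrange(packets[:k], x) for x in range(k)]

msg = [10, 20, 30]                 # k = 3 symbols in GF(257)
code = mds_encode(msg, n=6)        # any 3 of the 6 symbols recover msg
got = mds_decode([(5, code[5]), (2, code[2]), (4, code[4])], k=3)
print(got)  # [10, 20, 30]
```

Erasing any $n-k$ symbols leaves a decodable code, which is exactly the property the caching papers in this list exploit.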
Centralized Coded Caching with User Cooperation
In this paper, we consider the coded-caching broadcast network with user
cooperation, where a server connects with multiple users and the users can
cooperate with each other through a cooperation network. We propose a
centralized coded caching scheme based on a new deterministic placement
strategy and a parallel delivery strategy. It is shown that the new scheme
optimally allocates the communication loads on the server and users, obtaining
a cooperation gain and a parallel gain that greatly reduce the transmission
delay. Furthermore, we show that the number of users who send information in
parallel should decrease when the users' caching size increases. In other
words, letting more users send information in parallel could be harmful.
Finally, we derive a constant multiplicative gap between the lower bound and
upper bound on the transmission delay, which proves that our scheme is order
optimal.
Comment: 9 pages, submitted to ITW201
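The baseline such cooperation schemes build on is the classical centralized placement-and-delivery scheme of Maddah-Ali and Niesen (server-only delivery, no user cooperation). A self-contained sketch of that baseline, with subfiles modeled as random integers and coding as XOR:

```python
from itertools import combinations
import random

def man_coded_caching(N, K, M, demands, seed=0):
    """Classical centralized coded caching: placement over t-subsets,
    delivery as one XOR per (t+1)-subset of users."""
    t = K * M // N
    assert K * M % N == 0, "this sketch needs t = KM/N to be an integer"
    rng = random.Random(seed)
    subsets = list(combinations(range(K), t))
    # Each file f is split into one subfile per t-subset S of users.
    files = {f: {S: rng.getrandbits(32) for S in subsets} for f in range(N)}
    # Placement: user u caches subfile (f, S) for every S containing u.
    cache = {u: {(f, S): files[f][S] for f in range(N) for S in subsets if u in S}
             for u in range(K)}
    # Delivery: for each (t+1)-subset T, XOR the subfiles wanted inside T.
    transmissions = []
    for T in combinations(range(K), t + 1):
        x = 0
        for u in T:
            x ^= files[demands[u]][tuple(v for v in T if v != u)]
        transmissions.append((T, x))
    # Decoding check: each user in T peels off cached subfiles to get its own.
    for T, x in transmissions:
        for u in T:
            rest = x
            for v in T:
                if v != u:
                    rest ^= cache[u][(demands[v], tuple(w for w in T if w != v))]
            assert rest == files[demands[u]][tuple(v for v in T if v != u)]
    return len(transmissions), len(subsets)  # load = transmissions / subfiles

sent, per_file = man_coded_caching(N=4, K=4, M=2, demands=[0, 1, 2, 3])
print(sent, per_file)  # 4 6: C(4,3) XOR transmissions of subfiles of size 1/C(4,2)
```

The cooperation and parallel-delivery gains described in the abstract come from letting users, not just the server, transmit some of these coded messages.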
Private Information Retrieval in Wireless Coded Caching
We consider private information retrieval (PIR) in a content delivery scenario where, to reduce the backhaul usage, data is cached using maximum distance separable codes in a number of small-cell base stations (SBSs). We present a PIR protocol that allows the user to retrieve files of different popularities from the network without revealing the identity of the desired file to curious SBSs that potentially collaborate. We formulate an optimization problem to optimize the content placement and the number of queries of the protocol such that the backhaul rate is minimized. We further prove that, contrary to the case of no PIR, uniform content placement is optimal. Compared to a recently proposed protocol by Kumar et al., the presented protocol achieves a reduced backhaul rate.
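The protocol above targets MDS-coded SBS caches; for intuition on what PIR guarantees, the classical two-server XOR scheme shows how a query can hide the requested index from each (non-colluding) server:

```python
import secrets

def pir_two_server(database, want):
    """Classical 2-server PIR: send a uniformly random index set to server 1
    and the same set with `want` toggled to server 2. Each server alone sees
    a uniformly random set; the XOR of the two answers is database[want]."""
    n = len(database)
    q1 = {i for i in range(n) if secrets.randbits(1)}  # query to server 1
    q2 = q1 ^ {want}                                   # query to server 2
    a1 = 0
    for i in q1:
        a1 ^= database[i]                              # server 1's answer
    a2 = 0
    for i in q2:
        a2 ^= database[i]                              # server 2's answer
    return a1 ^ a2                                     # XOR over {want} only

db = [0x11, 0x22, 0x33, 0x44]
print(pir_two_server(db, 2) == db[2])  # True
```

The coded-caching setting generalizes this idea: queries must look statistically identical regardless of the desired file, even when several SBSs compare notes.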
Dynamic Coded Caching in Wireless Networks
We consider distributed and dynamic caching of coded content at small base
stations (SBSs) in an area served by a macro base station (MBS). Specifically,
content is encoded using a maximum distance separable code and cached according
to a time-to-live (TTL) cache eviction policy, which allows coded packets to be
removed from the caches at periodic times. Mobile users requesting a particular
content download coded packets from SBSs within communication range. If
additional packets are required to decode the file, these are downloaded from
the MBS. We formulate an optimization problem that is efficiently solved
numerically, providing TTL caching policies minimizing the overall network
load. We demonstrate that distributed coded caching using TTL caching policies
can offer significant reductions in terms of network load when request arrivals
are bursty. We show how the distributed coded caching problem utilizing TTL
caching policies can be analyzed as a specific single cache, convex
optimization problem. Our problem encompasses static caching and the single
cache as special cases. We prove that, interestingly, static caching is optimal
under a Poisson request process, and that for a single cache the optimization
problem has a surprisingly simple solution.
Comment: To appear in IEEE Transactions on Communication
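A minimal sketch of the TTL eviction idea from this abstract: each cached batch of coded packets carries an expiry time and is dropped once its TTL elapses, and a request is served partly from the cache and partly from the MBS. The class and refill policy here are illustrative, not the paper's optimized policy.

```python
import heapq

class TTLCodedCache:
    def __init__(self):
        self.expiry = []   # min-heap of (expire_at, file_id)
        self.packets = {}  # file_id -> number of cached coded packets

    def place(self, file_id, num_packets, now, ttl):
        """Cache num_packets coded packets of a file until now + ttl."""
        self.packets[file_id] = num_packets
        heapq.heappush(self.expiry, (now + ttl, file_id))

    def evict_expired(self, now):
        while self.expiry and self.expiry[0][0] <= now:
            _, file_id = heapq.heappop(self.expiry)
            self.packets.pop(file_id, None)

    def serve(self, file_id, k, now):
        """Return (from_cache, from_mbs): packets served locally vs. the
        k - m shortfall that must be downloaded from the MBS."""
        self.evict_expired(now)
        m = self.packets.get(file_id, 0)
        return min(m, k), max(0, k - m)

cache = TTLCodedCache()
cache.place(file_id=7, num_packets=4, now=0.0, ttl=10.0)
print(cache.serve(7, k=10, now=5.0))   # (4, 6): 4 local, 6 from the MBS
print(cache.serve(7, k=10, now=12.0))  # (0, 10): packets expired
```

The optimization in the paper chooses the TTLs and packet counts so that the expected MBS shortfall, summed over the request process, is minimized.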
A Survey on Applications of Cache-Aided NOMA
Contrary to orthogonal multiple-access (OMA), non-orthogonal multiple-access (NOMA) schemes can serve a pool of users without exploiting the scarce frequency or time domain resources. This is useful in meeting future network requirements (5G and beyond systems), such as low latency, massive connectivity, user fairness, and high spectral efficiency. On the other hand, content caching restricts duplicate data transmission by storing popular contents in advance at the network edge, which reduces data traffic. In this survey, we focus on cache-aided NOMA-based wireless networks, which can reap the benefits of both caching and NOMA; switching to NOMA from OMA enables cache-aided networks to push additional files to content servers in parallel and improve the cache hit probability. Beginning with the fundamentals of cache-aided NOMA technology, we summarize the performance goals of cache-aided NOMA systems, present the associated design challenges, and categorize the recent related literature based on their application verticals. Concomitant standardization activities and open research challenges are highlighted as well.
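For background on the NOMA side of this survey, the standard two-user power-domain downlink rates under successive interference cancellation (SIC) can be computed as follows; the channel gains, power split, and noise level are illustrative values, not from the survey.

```python
import math

def noma_rates(p, a_near, g_near, g_far, n0=1.0):
    """Achievable rates (bits/s/Hz) for two-user power-domain NOMA.
    The far (weak) user decodes its signal treating the near user's as
    noise; the near (strong) user cancels the far user's signal via SIC."""
    a_far = 1.0 - a_near  # most of the power goes to the weak user
    r_far = math.log2(1 + a_far * p * g_far / (a_near * p * g_far + n0))
    r_near = math.log2(1 + a_near * p * g_near / n0)  # interference-free after SIC
    return r_near, r_far

r_near, r_far = noma_rates(p=10.0, a_near=0.2, g_near=2.0, g_far=0.5)
print(round(r_near, 3), round(r_far, 3))  # 2.322 1.585
```

Both users are served on the same resource block, which is the property that lets cache-aided NOMA push extra coded content in parallel, as the survey discusses.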