The Subset Assignment Problem for Data Placement in Caches
We introduce the subset assignment problem, in which items of varying sizes are placed in a set of bins with limited capacity. Items can be replicated and placed in any subset of the bins. Each (item, subset) pair has an associated cost. Not assigning an item to any of the bins is not free in general and can potentially be the most expensive option. The goal is to minimize the total cost of assigning items to subsets without exceeding the bin capacities. This problem is motivated by the design of caching systems composed of banks of memory with varying cost/performance specifications. The ability to replicate a data item in more than one memory bank can benefit the overall performance of the system through a faster recovery time in the event of a memory failure. For this setting, the number n of data objects (items) is very large and the number d of memory banks (bins) is a small constant (on the order of 3 or 4). Therefore, the goal is to determine an optimal assignment in time whose dependence on n is as small as possible. The integral version of this problem is NP-hard, since it is a generalization of the knapsack problem. We focus on an efficient solution to the LP relaxation, as the number of fractionally assigned items will be at most d. If the data objects are small with respect to the size of the memory banks, the effect of excluding the fractionally assigned data items from the cache will be small. We give an algorithm that solves the LP relaxation and runs in time O(binom(3^d, d+1) · poly(d) · n log(n) log(nC) log(Z)), where Z is the maximum item size and C is the maximum storage cost.
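The combinatorial core of the problem can be illustrated with a tiny brute-force solver for the integral version, which is feasible only for toy instances, consistent with the NP-hardness noted above. All instance data and names below are hypothetical, not taken from the paper:

```python
from itertools import product, combinations

# Toy instance: d = 2 memory banks (bins) with capacities; n = 3 items.
capacities = [4, 3]
sizes = [2, 2, 1]

def subsets(d):
    """All 2^d subsets of the bins, including the empty subset."""
    bins = range(d)
    return [frozenset(c) for r in range(d + 1) for c in combinations(bins, r)]

SUBS = subsets(2)

# cost[i][S]: cost of assigning item i to bin subset S. Leaving an item
# uncached (empty subset) is allowed but is the most expensive option,
# and wider replication is cheaper here, matching the motivation above.
cost = [
    {S: (10 if not S else 5 - len(S)) for S in SUBS}
    for _ in sizes
]

def brute_force(capacities, sizes, cost):
    """Exhaustively solve the integral subset assignment problem.

    Exponential in n, so only usable for toy instances; the paper instead
    targets the LP relaxation, where at most d items end up fractional.
    """
    best = None
    for assign in product(SUBS, repeat=len(sizes)):
        load = [0] * len(capacities)
        for i, S in enumerate(assign):
            for b in S:  # a replicated item consumes space in every chosen bin
                load[b] += sizes[i]
        if all(load[b] <= capacities[b] for b in range(len(capacities))):
            total = sum(cost[i][S] for i, S in enumerate(assign))
            if best is None or total < best[0]:
                best = (total, assign)
    return best
```

On this instance the optimum replicates the small item across both banks while the two large items each occupy a single bank.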
Finite Length Analysis of Caching-Aided Coded Multicasting
In this work, we study a noiseless broadcast link serving users whose requests arise from a library of files. Every user is equipped with a cache that can hold a fixed number of files. It has been shown that by splitting all the files into packets and placing individual packets in a random, independent manner across all the caches, the number of file transmissions required for any set of demands from the library can be kept far below what uncoded delivery would need. The achievable delivery scheme involves linearly combining packets of different files following a greedy clique cover solution to the underlying index coding problem. This remarkable multiplicative gain of random placement and coded delivery has been established in the asymptotic regime, when the number of packets per file scales to infinity.
In this work, we initiate the finite-length analysis of random caching schemes, where the number of packets is a bounded function of the system parameters. Specifically, we show that existing random placement and clique cover delivery schemes that achieve optimality in the asymptotic regime retain only a limited multiplicative gain when the number of packets is sub-exponential. Further, for any clique cover based coded delivery and a large class of random caching schemes that includes the existing ones, we derive a lower bound on the number of packets required to attain a target multiplicative gain. We exhibit a random placement and an efficient clique cover based coded delivery scheme that approximately achieves this lower bound. We also provide tight concentration results showing that the number of transmissions concentrates very well around its average (over the random caching involved), requiring only a number of packets polynomial in the remaining parameters.
Comment: A shorter version appeared in the 52nd Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2014
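The random placement step discussed above can be sketched directly: each file is split into F packets and every cache stores each packet independently with probability M/N. The parameter names N, K, F, M below are my notation, not necessarily the paper's:

```python
import random

def random_placement(N, K, F, M, rng):
    """Random independent packet placement.

    N files, K users, F packets per file; each user's cache holds
    M files' worth of packets in expectation, since every packet is
    cached independently with probability M/N.
    """
    caches = [set() for _ in range(K)]
    for f in range(N):
        for p in range(F):
            for k in range(K):
                if rng.random() < M / N:
                    caches[k].add((f, p))
    return caches

rng = random.Random(0)  # fixed seed so the sketch is reproducible
caches = random_placement(N=4, K=3, F=1000, M=2, rng=rng)
# With M/N = 1/2, each user caches roughly half of all N*F packets.
fractions = [len(c) / (4 * 1000) for c in caches]
```

In the finite-length regime studied above, F is exactly the quantity whose scaling determines how much of the asymptotic coded gain survives.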
Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff
Replicating or caching popular content in memories distributed across the
network is a technique to reduce peak network loads. Conventionally, the main
performance gain of this caching was thought to result from making part of the
requested data available closer to end users. Instead, we recently showed that
a much more significant gain can be achieved by using caches to create
coded-multicasting opportunities, even for users with different demands,
through coding across data streams. These coded-multicasting opportunities are
enabled by careful content overlap at the various caches in the network,
created by a central coordinating server.
In many scenarios, such a central coordinating server may not be available,
raising the question of whether this multicasting gain can still be achieved in a more
decentralized setting. In this paper, we propose an efficient caching scheme,
in which the content placement is performed in a decentralized manner. In other
words, no coordination is required for the content placement. Despite this lack
of coordination, the proposed scheme is nevertheless able to create
coded-multicasting opportunities and achieves a rate close to that of the optimal centralized scheme.
Comment: To appear in IEEE/ACM Transactions on Networking
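As I recall the peak-rate expressions from this line of work (the formulas below are my reconstruction, in files transmitted per unit file size, and should be checked against the papers), the decentralized scheme stays within a small factor of the centralized one:

```python
def centralized_rate(K, M, N):
    """Centralized coded caching rate, K users, cache size M, library size N.

    R = K * (1 - M/N) * 1 / (1 + K*M/N), defined at M in {0, N/K, 2N/K, ...}.
    """
    return K * (1 - M / N) / (1 + K * M / N)

def decentralized_rate(K, M, N):
    """Decentralized (random-placement) coded caching rate, 0 < M <= N.

    R = (N/M - 1) * (1 - (1 - M/N)**K).
    """
    return (N / M - 1) * (1 - (1 - M / N) ** K)

# Example: K = 20 users, N = 20 files, caches of M = 5 files.
# Uncoded delivery would need K * (1 - M/N) = 15 file transmissions;
# both coded schemes need about 2.5-3, and the decentralized rate is
# only slightly above the centralized one despite having no coordination.
rc = centralized_rate(20, 5, 20)
rd = decentralized_rate(20, 5, 20)
```

The closeness of `rd` to `rc` across parameter ranges is what "order-optimal" in the title refers to.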
Fundamental Limits of Caching
Caching is a technique to reduce peak traffic rates by prefetching popular
content into memories at the end users. Conventionally, these memories are used
to deliver requested content in part from a locally cached copy rather than
through the network. The gain offered by this approach, which we term local
caching gain, depends on the local cache size (i.e., the memory available at
each individual user). In this paper, we introduce and exploit a second,
global, caching gain not utilized by conventional caching schemes. This gain
depends on the aggregate global cache size (i.e., the cumulative memory
available at all users), even though there is no cooperation among the users.
To evaluate and isolate these two gains, we introduce an
information-theoretic formulation of the caching problem focusing on its basic
structure. For this setting, we propose a novel coded caching scheme that
exploits both local and global caching gains, leading to a multiplicative
improvement in the peak rate compared to previously known schemes. In
particular, the improvement can be on the order of the number of users in the
network. Moreover, we argue that the performance of the proposed scheme is
within a constant factor of the information-theoretic optimum for all values of
the problem parameters.
Comment: To appear in IEEE Transactions on Information Theory
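The global caching gain can be seen in the standard two-user toy example (the file names and byte strings below are mine): with two files and caches each holding one file's worth of content, a single coded transmission serves both users' missing halves, where uncoded delivery would need two:

```python
# N = 2 files A, B, each split into two halves; K = 2 users, cache size M = 1.
A1, A2 = b"Aaaa", b"aaaA"
B1, B2 = b"Bbbb", b"bbbB"

def xor(x, y):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(a ^ b for a, b in zip(x, y))

# Placement (coordinated overlap): user 1 caches (A1, B1), user 2 caches (A2, B2).
# Demands: user 1 wants file A (missing A2); user 2 wants file B (missing B1).
broadcast = xor(A2, B1)          # one coded transmission serves both users
assert xor(broadcast, B1) == A2  # user 1 decodes using its cached B1
assert xor(broadcast, A2) == B1  # user 2 decodes using its cached A2
```

The local caching gain alone halves each user's need; the coded broadcast then halves the load again, and the improvement grows with the number of users.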
Edge-Caching Wireless Networks: Performance Analysis and Optimization
Edge-caching has received much attention as an efficient technique to reduce
delivery latency and network congestion during peak-traffic times by bringing
data closer to end users. Existing works usually design caching algorithms
separately from physical layer design. In this paper, we analyse edge-caching
wireless networks by taking into account the caching capability when designing
the signal transmission. Particularly, we investigate multi-layer caching where
both base station (BS) and users are capable of storing content data in their
local cache, and analyse the performance of edge-caching wireless networks under two notable caching strategies, uncoded and coded. Firstly, we propose a coded caching strategy that applies to arbitrary cache sizes. The
required backhaul and access rates are derived as a function of the BS and user
cache size. Secondly, closed-form expressions for the system energy efficiency
(EE) corresponding to the two caching methods are derived. Based on the derived
formulas, the system EE is maximized via precoding vector design and optimization while satisfying a predefined user request rate. Thirdly, two
optimization problems are proposed to minimize the content delivery time for
the two caching strategies. Finally, numerical results are presented to verify
the effectiveness of the two caching methods.
Comment: To appear in IEEE Trans. Wireless Commun.
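As a schematic illustration only (this load model is an assumption of mine, not the closed-form expressions derived in the paper), a two-layer cache with storage at both the BS and the users reduces backhaul load multiplicatively:

```python
def loads(mu, mb, request_rate=1.0):
    """Schematic access/backhaul load for a two-layer cache under uncoded placement.

    mu: fraction of the library cached at each user;
    mb: fraction of the library cached at the base station.
    A request missing the user cache hits the access link; if it also
    misses the BS cache, it additionally hits the backhaul.
    """
    access = (1 - mu) * request_rate
    backhaul = (1 - mu) * (1 - mb) * request_rate
    return access, backhaul

# Growing either cache layer monotonically shrinks the backhaul load,
# which is the lever the delivery-time and EE optimizations act on.
access, backhaul = loads(mu=0.5, mb=0.5)
```

The paper's actual rate expressions additionally depend on the coded/uncoded strategy and on the physical-layer precoding, which this sketch deliberately omits.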
Fundamental Limits of Caching with Secure Delivery
Caching is emerging as a vital tool for alleviating the severe capacity
crunch in modern content-centric wireless networks. The main idea behind
caching is to store parts of popular content in end-users' memory and leverage
the locally stored content to reduce peak data rates. By jointly designing
content placement and delivery mechanisms, recent works have shown order-wise
reduction in transmission rates in contrast to traditional methods. In this
work, we consider the secure caching problem with the additional goal of
minimizing information leakage to an external wiretapper. The fundamental cache
memory vs. transmission rate trade-off for the secure caching problem is
characterized. Rather surprisingly, these results show that security can be
introduced at a negligible cost, particularly for a large number of files and
users. It is also shown that the rate achieved by the proposed caching scheme
with secure delivery is within a constant multiplicative factor from the
information-theoretic optimal rate for almost all parameter values of practical
interest.
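The low cost of securing delivery can be sketched by one-time-padding the coded transmission with a key pre-placed in the legitimate users' caches, so a wiretapper observing the broadcast learns nothing. This is a simplification of the scheme; names, sizes, and the fixed key bytes are mine:

```python
def xor(x, y):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(a ^ b for a, b in zip(x, y))

# Two-user setup as in the coded caching example: user 1 is missing A2
# and has B1 cached; user 2 is missing B1 and has A2 cached.
A2, B1 = b"aaaA", b"Bbbb"

# The key is stored in both legitimate caches during the placement phase
# (in the real scheme, pads are placed alongside coded file fragments).
key = bytes([0x5C, 0xA1, 0x7E, 0x33])

coded = xor(A2, B1)          # coded-multicast payload, as in the non-secure scheme
broadcast = xor(coded, key)  # one-time pad: the wiretapper sees only this
# Each legitimate user strips the key, then decodes with its cached half:
assert xor(xor(broadcast, key), B1) == A2  # user 1 recovers A2
assert xor(xor(broadcast, key), A2) == B1  # user 2 recovers B1
```

Since one key secures a transmission that already serves many users at once, the per-user overhead of the pad shrinks as the number of users and files grows, consistent with the negligible-cost claim above.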