Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff
Replicating or caching popular content in memories distributed across the
network is a technique to reduce peak network loads. Conventionally, the main
performance gain of this caching was thought to result from making part of the
requested data available closer to end users. Instead, we recently showed that
a much more significant gain can be achieved by using caches to create
coded-multicasting opportunities, even for users with different demands,
through coding across data streams. These coded-multicasting opportunities are
enabled by careful content overlap at the various caches in the network,
created by a central coordinating server.
In many scenarios, such a central coordinating server may not be available,
raising the question if this multicasting gain can still be achieved in a more
decentralized setting. In this paper, we propose an efficient caching scheme,
in which the content placement is performed in a decentralized manner. In other
words, no coordination is required for the content placement. Despite this lack
of coordination, the proposed scheme is nevertheless able to create
coded-multicasting opportunities and achieves a rate close to that of the
optimal centralized scheme.
Comment: To appear in IEEE/ACM Transactions on Networking
Fundamental Limits of Caching
Caching is a technique to reduce peak traffic rates by prefetching popular
content into memories at the end users. Conventionally, these memories are used
to deliver requested content in part from a locally cached copy rather than
through the network. The gain offered by this approach, which we term local
caching gain, depends on the local cache size (i.e., the memory available at
each individual user). In this paper, we introduce and exploit a second,
global, caching gain not utilized by conventional caching schemes. This gain
depends on the aggregate global cache size (i.e., the cumulative memory
available at all users), even though there is no cooperation among the users.
To evaluate and isolate these two gains, we introduce an
information-theoretic formulation of the caching problem focusing on its basic
structure. For this setting, we propose a novel coded caching scheme that
exploits both local and global caching gains, leading to a multiplicative
improvement in the peak rate compared to previously known schemes. In
particular, the improvement can be on the order of the number of users in the
network. Moreover, we argue that the performance of the proposed scheme is
within a constant factor of the information-theoretic optimum for all values of
the problem parameters.
Comment: To appear in IEEE Transactions on Information Theory
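The smallest instance of a coded caching scheme, two files and two users each caching half the library, makes the global gain concrete. The sketch below uses symbolic subfile labels of my own choosing and models the XOR transmission as a label set:

```python
# Toy instance: 2 files (A, B), 2 users, cache size M = 1 file each.
# Split each file into halves; user u caches half u of every file.
A = ("A1", "A2"); B = ("B1", "B2")
cache = {1: {"A1", "B1"}, 2: {"A2", "B2"}}

# Demands: user 1 wants A, user 2 wants B.  The server XORs the two
# missing halves into ONE half-file-sized transmission (modeled
# symbolically as the set of XORed labels): peak rate 1/2 instead of 1.
tx = {"A2", "B1"}

# Each user cancels the part it already caches and keeps the rest.
recovered1 = (tx - cache[1]).pop()   # user 1 strips B1 -> gets A2
recovered2 = (tx - cache[2]).pop()   # user 2 strips A2 -> gets B1
assert recovered1 == "A2" and recovered2 == "B1"
assert cache[1] | {recovered1} >= {"A1", "A2"}   # user 1 now has all of A
```

With purely local caching, each user would still be missing half a file and the server would transmit a full file's worth; the coded transmission halves that, and the factor grows with the number of users.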
Private Function Retrieval
The widespread use of cloud computing services raises the question of how one
can delegate processing tasks to untrusted distributed parties without
breaching the privacy of one's data and algorithms. Motivated by the algorithm
privacy concerns in a distributed computing system, in this paper, we introduce
the private function retrieval (PFR) problem, where a user wishes to
efficiently retrieve a linear function of K messages from N
non-communicating replicated servers while keeping the function hidden from
each individual server. The goal is to find a scheme with minimum communication
cost. To characterize the fundamental limits of the communication cost, we
define the capacity of the PFR problem as the size of the message that can be
privately retrieved (which is the size of one file) normalized to the required
downloaded information bits. We first show that for the PFR problem with K
messages, N servers, and a linear function with binary coefficients, the
capacity is C = (1 + 1/N + ... + 1/N^(K-1))^(-1). Interestingly, this
is the capacity of retrieving one of K messages from N servers while
keeping the index of the requested message hidden from each individual server,
the problem known as private information retrieval (PIR). Then, we extend the
proposed achievable scheme to the case of an arbitrary number of servers and
coefficients in the field GF(q) with arbitrary q, and obtain
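The PIR capacity expression, C = (1 + 1/N + ... + 1/N^(K-1))^(-1) for K messages and N servers, which the abstract reports also characterizes PFR with binary coefficients, is easy to evaluate numerically. The function name below is my own:

```python
def pir_capacity(K: int, N: int) -> float:
    # C = (1 + 1/N + ... + 1/N^(K-1))^(-1): useful information bits per
    # downloaded bit when privately retrieving one of K messages from N
    # replicated, non-communicating servers.
    return 1.0 / sum(N ** -k for k in range(K))

# With a single server, the user must download everything: capacity 1/K.
assert pir_capacity(3, 1) == 1 / 3
# With more servers the overhead shrinks and capacity approaches 1.
assert abs(pir_capacity(2, 2) - 2 / 3) < 1e-12
```

The capacity increases in N and decreases in K, matching the intuition that replication across servers is what makes hiding the request (or the function) cheap.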
Coded Caching for Delay-Sensitive Content
Coded caching is a recently proposed technique that achieves significant
performance gains for cache networks compared to uncoded caching schemes.
However, this substantial coding gain is attained at the cost of large delivery
delay, which is not tolerable in delay-sensitive applications such as video
streaming. In this paper, we identify and investigate the tradeoff between the
performance gain of coded caching and the delivery delay. We propose a
computationally efficient caching algorithm that provides the gains of coding
and respects delay constraints. The proposed algorithm achieves the optimum
performance for large delay, but still offers major gains for small delay.
These gains are demonstrated in a practical setting with a video-streaming
prototype.
Comment: 9 pages