Speeding up Future Video Distribution via Channel-Aware Caching-Aided Coded Multicast
Future Internet usage will be dominated by the consumption of a rich variety
of online multimedia services accessed from an exponentially growing number of
multimedia capable mobile devices. As such, future Internet designs will be
challenged to provide solutions that can deliver bandwidth-intensive,
delay-sensitive, on-demand video-based services over increasingly crowded,
bandwidth-limited wireless access networks. One of the main reasons for the
bandwidth stress facing wireless network operators is the difficulty of exploiting
the multicast nature of the wireless medium when wireless users or access
points rarely experience the same channel conditions or access the same content
at the same time. In this paper, we present and analyze a novel wireless video
delivery paradigm based on the combined use of channel-aware caching and coded
multicasting that allows simultaneously serving multiple cache-enabled
receivers that may be requesting different content and experiencing different
channel conditions. To this end, we reformulate the caching-aided coded
multicast problem as a joint source-channel coding problem and design an
achievable scheme that preserves the cache-enabled multiplicative throughput
gains of the error-free scenario,by guaranteeing per-receiver rates unaffected
by the presence of receivers with worse channel conditions.Comment: 11 pages,6 figures,to appear in IEEE JSAC Special Issue on Video
Distribution over Future Interne
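The core mechanism behind caching-aided coded multicast can be illustrated with a toy two-receiver example. This is a minimal sketch with made-up files and a deliberately complementary cache placement; it ignores the channel-aware and joint source-channel aspects of the actual scheme:

```python
# Toy caching-aided coded multicast: two receivers request different
# files, but each has cached the other file, so a single XOR-coded
# multicast message serves both simultaneously.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Library of two equal-length files (contents made up for illustration).
file_A = b"AAAA"
file_B = b"BBBB"

# Placement phase: receiver 1 requests A but caches B; receiver 2
# requests B but caches A.
cache_1 = file_B
cache_2 = file_A

# Delivery phase: the sender multicasts one coded message to both.
coded = xor_bytes(file_A, file_B)

# Each receiver cancels its cached file to recover its own request.
decoded_1 = xor_bytes(coded, cache_1)  # recovers file_A
decoded_2 = xor_bytes(coded, cache_2)  # recovers file_B

assert decoded_1 == file_A and decoded_2 == file_B
```

One transmission of four bytes replaces two separate unicasts, which is the multiplicative gain the abstract refers to; the paper's contribution is preserving this gain when the receivers see unequal channel qualities.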
Updating Content in Cache-Aided Coded Multicast
Motivated by applications to delivery of dynamically updated, but correlated
data in settings such as content distribution networks, and distributed file
sharing systems, we study a single source multiple destination network coded
multicast problem in a cache-aided network. We focus on models where the caches
are primarily located near the destinations, and where the source has no cache.
The source observes a sequence of correlated frames, and is expected to do
frame-by-frame encoding with no access to prior frames. We present a novel
scheme that shows how the caches can be advantageously used to decrease the
overall cost of multicast, even though the source encodes without access to
past data. Our cache design and update scheme works with any choice of network
code designed for a corresponding cache-less network, is largely decentralized,
and works for an arbitrary network. We study a convex relaxation of the
optimization problem that results from the overall cost function; its solution
determines the rate allocation and caching strategies. Numerous simulation
results are presented to substantiate the theory developed.
Comment: To appear in IEEE Journal on Selected Areas in Communications: Special Issue on Caching for Communication Systems and Network
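One way to see how destination-side caches can cut multicast cost even when the source cannot access past frames is syndrome (coset) coding against receiver side information. The sketch below is a hypothetical illustration using the Hamming(7,4) parity-check matrix, not the paper's actual scheme: the source sends only a 3-bit syndrome of a 7-bit frame, and a destination whose cached (stale) copy differs in at most one bit recovers the new frame exactly.

```python
# Syndrome-based update sketch: the source transmits 3 bits instead of 7,
# never seeing the destination's cached frame, which mirrors frame-by-frame
# encoding with no access to prior frames.

H = [  # parity-check matrix of the Hamming(7,4) code; column j encodes j+1
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(bits):
    """3-bit syndrome H·x over GF(2) of a 7-bit frame."""
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def decode(synd, cached):
    """Recover the new frame from its syndrome plus a cached frame that
    differs from it in at most one bit position."""
    s_ref = syndrome(cached)
    diff = [a ^ b for a, b in zip(synd, s_ref)]  # syndrome of the difference
    pos = diff[0] * 4 + diff[1] * 2 + diff[2]    # 0 means frames are equal
    out = list(cached)
    if pos:
        out[pos - 1] ^= 1                        # flip the differing bit
    return out

cached_frame = [1, 0, 1, 1, 0, 0, 1]  # stale frame held at the destination
new_frame    = [1, 0, 0, 1, 0, 0, 1]  # current frame, one bit changed
assert decode(syndrome(new_frame), cached_frame) == new_frame
```

The correlation assumption (at most one differing bit) is of course artificial, but it shows the principle: the cache turns a 7-bit multicast into a 3-bit one without any coordination from the source.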
Fundamental Limits of Caching in Wireless D2D Networks
We consider a wireless Device-to-Device (D2D) network where communication is
restricted to be single-hop. Users make arbitrary requests from a finite
library of files and have pre-cached information on their devices, subject to a
per-node storage capacity constraint. A similar problem has already been
considered in an "infrastructure" setting, where all users receive a common
multicast (coded) message from a single omniscient server (e.g., a base station
having all the files in the library) through a shared bottleneck link. In this
work, we consider a D2D "infrastructure-less" version of the problem. We
propose a caching strategy based on deterministic assignment of subpackets of
the library files, and a coded delivery strategy where the users send linearly
coded messages to each other in order to collectively satisfy their demands. We
also consider a random caching strategy, which is more suitable to a fully
decentralized implementation. Under certain conditions, both approaches can
achieve the information-theoretic outer bound within a constant multiplicative
factor. In our previous work, we showed that a caching D2D wireless network
with one-hop communication, random caching, and uncoded delivery, achieves the
same throughput scaling law of the infrastructure-based coded multicasting
scheme, in the regime of large number of users and files in the library. This
shows that the spatial reuse gain of the D2D network is order-equivalent to the
coded multicasting gain of single base station transmission. It is therefore
natural to ask whether these two gains are cumulative, i.e., if a D2D network
with both local communication (spatial reuse) and coded multicasting can
provide an improved scaling law. Somewhat counterintuitively, we show that
these gains do not cumulate (in terms of throughput scaling law).
Comment: 45 pages, 5 figures, submitted to IEEE Transactions on Information Theory. This is the extended version of the conference (ITW) paper arXiv:1304.585
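A minimal sketch of D2D coded delivery with three users and made-up subpacket contents. The pair-indexed placement mirrors the deterministic subpacketization idea (each subpacket is cached by exactly two users), but this is an illustration rather than the paper's exact achievable scheme:

```python
# Three users request files A, B, C respectively. Each file is split into
# three subpackets indexed by pairs of users; users i and j both cache
# subpacket {i, j} of every file. Two local XOR transmissions then satisfy
# all three demands, with no base station involved.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

P = frozenset  # subpacket index = the pair of users caching it
sub = {
    "A": {P({1, 2}): b"\x01", P({1, 3}): b"\x02", P({2, 3}): b"\x03"},
    "B": {P({1, 2}): b"\x04", P({1, 3}): b"\x05", P({2, 3}): b"\x06"},
    "C": {P({1, 2}): b"\x07", P({1, 3}): b"\x08", P({2, 3}): b"\x09"},
}
demand = {1: "A", 2: "B", 3: "C"}
# User u is missing exactly the subpacket of its file cached by the others.
missing = {u: P(set(demand) - {u}) for u in demand}

# Delivery: each transmission XORs subpackets the sender holds in cache.
t1 = xor(sub["B"][P({1, 3})], sub["C"][P({1, 2})])  # sent by user 1
t2 = xor(sub["A"][P({2, 3})], sub["C"][P({1, 2})])  # sent by user 2

# Decoding: each user cancels the term it already caches.
got_1 = xor(t2, sub["C"][P({1, 2})])  # user 1 recovers A_{23}
got_2 = xor(t1, sub["C"][P({1, 2})])  # user 2 recovers B_{13}
got_3 = xor(t1, sub["B"][P({1, 3})])  # user 3 recovers C_{12}

assert got_1 == sub["A"][missing[1]]
assert got_2 == sub["B"][missing[2]]
assert got_3 == sub["C"][missing[3]]
```

Each coded transmission is useful to two receivers at once, which is the coded multicasting gain; that the transmissions are sent by nearby devices rather than a base station is the spatial reuse aspect discussed in the abstract.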
Fundamental Limits of Wireless Caching Under Mixed Cacheable and Uncacheable Traffic
We consider cache-aided wireless communication scenarios where each user
requests both a file from an a-priori generated cacheable library (referred to
as 'content'), and an uncacheable 'non-content' message generated at the start
of the wireless transmission session. This scenario is easily found in
real-world wireless networks, where the two types of traffic coexist and share
limited radio resources. We focus on single-transmitter, single-antenna
wireless networks with cache-aided receivers, where the wireless channel is
modelled by a degraded Gaussian broadcast channel (GBC). For this setting, we
study the delay-rate trade-off, which characterizes the content delivery time
and non-content communication rates that can be achieved simultaneously. We
propose a scheme based on the separation principle, which isolates the coded
caching and multicasting problem from the physical layer transmission problem.
We show that this separation-based scheme is sufficient for achieving an
information-theoretically order optimal performance, up to a multiplicative
factor of 2.01 for the content delivery time, when working in the generalized
degrees of freedom (GDoF) limit. We further show that the achievable
performance is near-optimal after relaxing the GDoF limit, up to an additional
additive factor of 2 bits per dimension for the non-content rates. A key
insight emerging from our scheme is that in some scenarios a considerable
amount of non-content traffic can be communicated while maintaining the
minimum content delivery time achieved in the absence of non-content messages,
courtesy of 'topological holes' arising from asymmetries in wireless channel
gains.
Comment: Accepted for publication in the IEEE Transactions on Information Theory
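The separation principle can be made concrete with a back-of-the-envelope computation: the coded caching layer produces a multicast load of K(1 - M/N)/(1 + KM/N) file units (the standard Maddah-Ali and Niesen expression for K users, cache size M, library size N), which the physical layer then delivers at the rate sustainable by the weakest receiver. The numbers below are assumed purely for illustration:

```python
def coded_caching_load(K: int, M: float, N: int) -> float:
    # Multicast load in file units for K users, per-user cache size M,
    # library of N files (uncoded placement, coded delivery).
    return K * (1 - M / N) / (1 + K * M / N)

# Separation: delivery time = coded-caching load / weakest receiver's rate.
K, M, N = 4, 1, 4
load = coded_caching_load(K, M, N)   # 4 * (3/4) / 2 = 1.5 file units
weakest_rate = 0.5                   # assumed rate of the worst user
delivery_time = load / weakest_rate  # 3.0 time units
```

This is exactly the layering the abstract describes: the caching/multicasting problem fixes the load, the channel fixes the rate, and the two are optimized independently at the stated constant-factor cost in optimality.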
On the Average Performance of Caching and Coded Multicasting with Random Demands
For a network with one sender, multiple receivers (users) and a library of
possible messages (files), caching side information at the users makes it
possible to satisfy arbitrary simultaneous demands by sending a common
(multicast) coded message.
In the worst-case demand setting, explicit deterministic and random caching
strategies and explicit linear coding schemes have been shown to be order
optimal. In this work, we consider the same scenario where the user demands are
random i.i.d., according to a Zipf popularity distribution. In this case, we
pose the problem in terms of the minimum average number of equivalent message
transmissions. We present a novel decentralized random caching placement and a
coded delivery scheme which are shown to achieve order-optimal performance. As
a matter of fact, this is the first order-optimal result for the caching and
coded multicasting problem in the case of random demands.
Comment: 5 pages, 3 figures, to appear in ISWCS 201
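To get a feel for the random-demand setting, the sketch below samples i.i.d. Zipf demands and a decentralized, popularity-biased random placement, then counts the naive uncoded multicast cost as a baseline. All parameters are assumed for illustration, not taken from the paper:

```python
# Zipf demand sampling and decentralized random caching placement.
import random

def zipf_pmf(m: int, alpha: float) -> list:
    # Zipf popularity over m files with exponent alpha.
    w = [r ** -alpha for r in range(1, m + 1)]
    z = sum(w)
    return [x / z for x in w]

m, n, M, alpha = 100, 20, 10, 0.8  # files, users, cache size, Zipf exponent
pmf = zipf_pmf(m, alpha)
rng = random.Random(0)

# i.i.d. Zipf demands, one requested file index per user.
demands = rng.choices(range(m), weights=pmf, k=n)

# Decentralized placement: each user independently fills its cache with M
# distinct files, drawn with popularity bias and no coordination.
def random_cache() -> set:
    cache = set()
    while len(cache) < M:
        cache.add(rng.choices(range(m), weights=pmf, k=1)[0])
    return cache

caches = [random_cache() for _ in range(n)]

# Baseline cost without coding: one transmission per distinct file that at
# least one of its requesters has not cached.
uncached = {d for u, d in enumerate(demands) if d not in caches[u]}
print(f"{len(uncached)} uncoded transmissions needed")
```

The abstract's scheme replaces this uncoded baseline with coded delivery over the cached subpackets, and the analysis shows the resulting average number of transmissions is order-optimal under the Zipf demand model.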