18 research outputs found
Caching and Coded Multicasting: Multiple Groupcast Index Coding
The capacity of caching networks has received considerable attention in the
past few years. A particularly studied setting is the case of a single server
(e.g., a base station) and multiple users, each of which caches segments of
files in a finite library. Each user requests one (whole) file in the library
and the server sends a common coded multicast message to satisfy all users at
once. The problem consists of finding the smallest possible codeword length to
satisfy such requests. In this paper we consider the generalization to the case
where each user places multiple requests. The obvious naive scheme consists of
applying the order-optimal single-request scheme once per request, obtaining a
multicast codeword length that scales linearly with the number of requests per
user. We propose a new
achievable scheme based on multiple groupcast index coding that achieves a
significant gain over the naive scheme. Furthermore, through an information
theoretic converse we find that the proposed scheme is approximately optimal
within a constant factor.
Comment: 5 pages, 1 figure, to appear in GlobalSIP14, Dec. 2014
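The coded multicast idea the abstract builds on can be made concrete with a toy sketch of a Maddah-Ali–Niesen-style placement and delivery scheme (the function name, byte-string files, and parameters below are illustrative assumptions, not the paper's notation): each file is split into subfiles indexed by t-subsets of users, each user caches the subfiles whose index contains it, and one XOR is multicast per (t+1)-subset of users.

```python
from itertools import combinations

def man_coded_delivery(files, demands, t):
    """Toy sketch of Maddah-Ali--Niesen-style coded delivery.
    files: dict name -> bytes (equal lengths, divisible by C(K, t));
    demands[k]: file name requested by user k;
    t = K*M/N (integer): user k caches subfiles whose index set contains k.
    Returns a list of (user_group, xor_payload) multicast transmissions."""
    K = len(demands)
    subsets = list(combinations(range(K), t))  # subfile index sets
    n = len(subsets)

    def split(data):  # placement: C(K, t) equal subfiles per file
        m = len(data) // n
        return {s: data[i * m:(i + 1) * m] for i, s in enumerate(subsets)}

    parts = {name: split(data) for name, data in files.items()}
    transmissions = []
    # delivery: one XOR per (t+1)-subset of users; user k's missing subfile
    # is indexed by the other members of the group, who all cache it
    for group in combinations(range(K), t + 1):
        xored = bytes(len(next(iter(files.values()))) // n)
        for k in group:
            idx = tuple(u for u in group if u != k)
            piece = parts[demands[k]][idx]
            xored = bytes(a ^ b for a, b in zip(xored, piece))
        transmissions.append((group, xored))
    return transmissions
```

With K = 3 users, t = 1, and distinct demands, three XOR transmissions of one subfile each replace the naive unicast of two uncached subfiles per user; each user cancels the cached pieces to recover its own missing subfile.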
On Caching with More Users than Files
Caching appears to be an efficient way to reduce peak hour network traffic
congestion by storing some content at the user's cache without knowledge of
later demands. Recently, Maddah-Ali and Niesen proposed a two-phase, placement
and delivery phase, coded caching strategy for centralized systems (where
coordination among users is possible in the placement phase), and for
decentralized systems. This paper investigates the same setup under the further
assumption that the number of users is larger than the number of files. By
using the same uncoded placement strategy of Maddah-Ali and Niesen, a novel
coded delivery strategy is proposed to profit from the multicasting
opportunities that arise because a file may be demanded by multiple users. The
proposed delivery method is proved to be optimal under the constraint of
uncoded placement for centralized systems with two files; moreover, it is shown
to outperform known caching strategies for both centralized and decentralized
systems.
Comment: 6 pages, 3 figures, submitted to ISIT 201
Correlation-Aware Distributed Caching and Coded Delivery
Cache-aided coded multicast leverages side information at wireless edge
caches to efficiently serve multiple groupcast demands via common multicast
transmissions, leading to load reductions that are proportional to the
aggregate cache size. However, the increasingly unpredictable and personalized
nature of the content that users consume challenges the efficiency of existing
caching-based solutions in which only exact content reuse is explored. This
paper generalizes the cache-aided coded multicast problem to a source
compression with distributed side information problem that specifically
accounts for the correlation among the content files. It is shown how joint
file compression during the caching and delivery phases can provide load
reductions that go beyond those achieved with existing schemes. This is
accomplished through a lower bound on the fundamental rate-memory trade-off as
well as a correlation-aware achievable scheme, shown to significantly
outperform state-of-the-art correlation-unaware solutions, while approaching
the limiting rate-memory trade-off.
Comment: In proceedings of the IEEE Information Theory Workshop (ITW), 201
Speeding up Future Video Distribution via Channel-Aware Caching-Aided Coded Multicast
Future Internet usage will be dominated by the consumption of a rich variety
of online multimedia services accessed from an exponentially growing number of
multimedia capable mobile devices. As such, future Internet designs will be
challenged to provide solutions that can deliver bandwidth-intensive,
delay-sensitive, on-demand video-based services over increasingly crowded,
bandwidth-limited wireless access networks. One of the main reasons for the
bandwidth stress facing wireless network operators is the difficulty of exploiting
the multicast nature of the wireless medium when wireless users or access
points rarely experience the same channel conditions or access the same content
at the same time. In this paper, we present and analyze a novel wireless video
delivery paradigm based on the combined use of channel-aware caching and coded
multicasting that allows simultaneously serving multiple cache-enabled
receivers that may be requesting different content and experiencing different
channel conditions. To this end, we reformulate the caching-aided coded
multicast problem as a joint source-channel coding problem and design an
achievable scheme that preserves the cache-enabled multiplicative throughput
gains of the error-free scenario, by guaranteeing per-receiver rates unaffected
by the presence of receivers with worse channel conditions.
Comment: 11 pages, 6 figures, to appear in IEEE JSAC Special Issue on Video
Distribution over Future Internet
Distortion-Memory Tradeoffs in Cache-Aided Wireless Video Delivery
Mobile network operators are considering caching as one of the strategies to
keep up with the increasing demand for high-definition wireless video
streaming. By prefetching popular content into memory at wireless access points
or end user devices, requests can be served locally, relieving strain on
expensive backhaul. In addition, using network coding allows the simultaneous
serving of distinct cache misses via common coded multicast transmissions,
resulting in significantly larger load reductions compared to those achieved
with conventional delivery schemes. However, prior work does not exploit the
properties of video and simply treats content as fixed-size files that users
would like to fully download. Our work is motivated by the fact that video can
be coded in a scalable fashion and that the decoded video quality depends on
the number of layers a user is able to receive. Using a Gaussian source model,
caching and coded delivery methods are designed to minimize the squared error
distortion at end user devices. Our work is general enough to consider
heterogeneous cache sizes and video popularity distributions.
Comment: To appear in Allerton 2015, Proceedings of the 53rd Annual Allerton
Conference on Communication, Control, and Computing
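The distortion side of the trade-off above can be sketched numerically. For a Gaussian source under squared-error distortion, the distortion-rate function is D(R) = σ²·2^(−2R), and the Gaussian source is successively refinable, so a user's distortion depends only on the total rate of the scalable layers it receives. The unit variance and 0.5 bit/sample per layer below are assumptions for illustration, not values from the paper.

```python
def gaussian_distortion(total_rate_bits, variance=1.0):
    """Squared-error distortion-rate function of a Gaussian source:
    D(R) = variance * 2**(-2R), with R in bits per sample."""
    return variance * 2.0 ** (-2.0 * total_rate_bits)

# distortion as a user accumulates refinement layers of 0.5 bit/sample each:
# each extra layer halves the distortion (0 layers -> 1.0, 1 -> 0.5, 2 -> 0.25)
layer_rate = 0.5
for layers in range(4):
    print(layers, gaussian_distortion(layers * layer_rate))
```

This is why cache and delivery resources in such a model translate directly into received layers, and hence into an exponential decay of end-user distortion.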
Cache-Aided Coded Multicast for Correlated Sources
The combination of edge caching and coded multicasting is a promising
approach to improve the efficiency of content delivery over cache-aided
networks. The global caching gain resulting from content overlap distributed
across the network in current solutions is limited due to the increasingly
personalized nature of the content consumed by users. In this paper, the
cache-aided coded multicast problem is generalized to account for the
correlation among the network content by formulating a source compression
problem with distributed side information. A correlation-aware achievable
scheme is proposed and an upper bound on its performance is derived. It is
shown that considerable load reductions can be achieved, compared to
state-of-the-art correlation-unaware schemes, when caching and delivery phases
specifically account for the correlation among the content files.
Comment: In proceedings of the IEEE International Symposium on Turbo Codes and
Iterative Information Processing (ISTC), 201
Broadcast Caching Networks with Two Receivers and Multiple Correlated Sources
The correlation among the content distributed across a cache-aided broadcast
network can be exploited to reduce the delivery load on the shared wireless
link. This paper considers a two-user three-file network with correlated
content, and studies its fundamental limits for the worst-case demand. A class
of achievable schemes based on a two-step source coding approach is proposed.
Library files are first compressed using Gray-Wyner source coding, and then
cached and delivered using a combination of correlation-unaware cache-aided
coded multicast schemes. The second step is interesting in its own right and
considers a multiple-request caching problem, whose solution requires coding in
the placement phase. A lower bound on the optimal peak rate-memory trade-off is
derived, which is used to evaluate the performance of the proposed scheme. It
is shown that for symmetric sources the two-step strategy achieves the lower
bound for large cache capacities, and it is within half of the joint entropy of
two of the sources conditioned on the third source for all other cache sizes.
Comment: In Proceedings of the Asilomar Conference on Signals, Systems and
Computers, Pacific Grove, California, November 201
Optimization for Networks and Object Recognition
The present thesis explores two different application areas of combinatorial optimization. The work presented is indeed twofold, since it deals with two distinct problems: one related to data transfer in networks, the other to object recognition.

Caching is an essential technique to improve throughput and latency in a vast variety of applications. The core idea is to duplicate content in memories distributed across the network, which can then be exploited to deliver requested content with less congestion and delay. In particular, it has been shown that the use of caching together with smart offloading strategies in a RAN composed of evolved NodeBs (eNBs), APs (e.g., WiFi), and UEs can significantly reduce backhaul traffic and service latency.

The traditional role of cache memories is to deliver the maximal amount of requested content locally rather than from a remote server. While this approach is optimal for single-cache systems, it has recently been shown to be, in general, significantly suboptimal for systems with multiple caches (i.e., cache networks), since it allows only an additive caching gain; cache memories should instead be used to enable a multiplicative caching gain. Recent studies have shown that storing different portions of the content across the wireless network caches and capitalizing on the spatial reuse of device-to-device (D2D) communications, or exploiting globally cached information in order to multicast coded messages simultaneously useful to a large number of users, enables a global caching gain.

We focus on the case of a single server (e.g., a base station) and multiple users, each of which caches segments of files in a finite library. Each user requests one (whole) file in the library and the server sends a common coded multicast message to satisfy all users at once. The problem consists of finding the smallest possible codeword length to satisfy such requests.
To solve this problem we present two achievable caching and coded delivery schemes and one correlation-aware caching scheme, each based on a heuristic polynomial-time coloring algorithm.

Automatic object recognition has become, over the last decades, a central topic in artificial intelligence research, with a significant burst over the last few years with the advent of the deep learning paradigm. In this context, the objective of the work discussed in the last two chapters of this thesis is to improve the performance of a natural-image classifier by introducing into the loop knowledge coming from the real world, expressed in terms of the probability of a set of spatial relations between the objects in the images. In other words, the framework presented in this work aims at integrating the output of standard classifiers on different image parts with some domain knowledge, encoded in a probabilistic ontology.
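The heuristic coloring step underlying the coded delivery schemes above can be illustrated with a generic largest-first greedy coloring of a conflict graph (a standard heuristic, sketched here as an assumption about the flavor of the algorithm, not the thesis's exact procedure): vertices are requested packets, edges mark pairs that cannot be coded together, and each color class becomes one coded multicast transmission, so the number of colors bounds the codeword length.

```python
def greedy_coloring(adjacency):
    """Largest-first greedy coloring heuristic (polynomial time).
    adjacency: dict vertex -> set of conflicting vertices (symmetric).
    Returns dict vertex -> color; same-color vertices are conflict-free
    and can be combined into a single coded transmission."""
    # color high-degree (most-constrained) vertices first
    order = sorted(adjacency, key=lambda v: len(adjacency[v]), reverse=True)
    color = {}
    for v in order:
        used = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in used:  # smallest color not used by a neighbor
            c += 1
        color[v] = c
    return color
```

Greedy coloring gives no optimality guarantee in general, but it runs in polynomial time and in cache-network conflict graphs it captures most of the multicasting opportunities that an optimal (NP-hard) chromatic-number computation would find.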