Coded Caching for Delay-Sensitive Content
Coded caching is a recently proposed technique that achieves significant
performance gains for cache networks compared to uncoded caching schemes.
However, this substantial coding gain is attained at the cost of large delivery
delay, which is not tolerable in delay-sensitive applications such as video
streaming. In this paper, we identify and investigate the tradeoff between the
performance gain of coded caching and the delivery delay. We propose a
computationally efficient caching algorithm that provides the gains of coding
and respects delay constraints. The proposed algorithm achieves the optimum
performance for large delay, but still offers major gains for small delay.
These gains are demonstrated in a practical setting with a video-streaming
prototype.
Comment: 9 pages
Cost-aware caching: optimizing cache provisioning and object placement in ICN
Caching is frequently used by Internet Service Providers as a viable
technique to reduce the latency perceived by end users, while jointly
offloading network traffic. While the cache hit-ratio is generally considered
in the literature to be the dominant performance metric for such systems, in
this paper we argue that a critical aspect has so far been neglected.
Adopting a radically different perspective, we explicitly account for the cost
of content retrieval, i.e., the cost associated with the external bandwidth
needed by an ISP to retrieve the contents requested by its customers.
Interestingly, we discover that classical cache provisioning techniques that
maximize cache efficiency (i.e., the hit-ratio) lead to suboptimal solutions
with higher overall cost. To show this mismatch, we propose two optimization
models that either minimize the overall costs or maximize the hit-ratio,
jointly providing cache sizing, object placement and path selection. We
formulate a polynomial-time greedy algorithm to solve the two problems and
analytically prove its optimality. We provide numerical results and show that
significant cost savings are attainable via a cost-aware design.
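The greedy idea behind such cost-aware placement can be illustrated with a minimal sketch: rank objects by the retrieval cost they save per unit of cache space, then fill the cache in that order. The field names (`rate`, `retrieval_cost`, `size`) and the density ranking are illustrative assumptions, not the paper's exact formulation.

```python
def greedy_placement(objects, cache_budget):
    """Hedged sketch of a cost-aware greedy placement.

    Each object is a dict with hypothetical fields:
      rate           -- request rate (requests per unit time)
      retrieval_cost -- external-bandwidth cost paid per cache miss
      size           -- storage footprint of the object

    Objects are ranked by saved cost per unit of storage and packed
    greedily until the cache budget is exhausted.
    """
    ranked = sorted(
        objects,
        key=lambda o: o["rate"] * o["retrieval_cost"] / o["size"],
        reverse=True,
    )
    placed, used = [], 0
    for o in ranked:
        if used + o["size"] <= cache_budget:
            placed.append(o)
            used += o["size"]
    return placed
```

A hit-ratio-maximizing variant would rank by `rate / size` alone; the cost term is exactly what makes the two objectives diverge, as the abstract observes.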
Joint Service Placement and Request Routing in Multi-cell Mobile Edge Computing Networks
The proliferation of innovative mobile services such as augmented reality,
networked gaming, and autonomous driving has spurred a growing need for
low-latency access to computing resources that cannot be met solely by existing
centralized cloud systems. Mobile Edge Computing (MEC) is expected to be an
effective solution to meet the demand for low-latency services by enabling the
execution of computing tasks at the network-periphery, in proximity to
end-users. While a number of recent studies have addressed the problem of
determining the execution of service tasks and the routing of user requests to
corresponding edge servers, the focus has primarily been on the efficient
utilization of computing resources, neglecting the fact that non-trivial
amounts of data need to be stored to enable service execution, and that many
emerging services exhibit asymmetric bandwidth requirements. To fill this gap,
we study the joint optimization of service placement and request routing in
MEC-enabled multi-cell networks with multidimensional
(storage-computation-communication) constraints. We show that this problem
generalizes several problems in the literature and propose an algorithm that
achieves close-to-optimal performance using randomized rounding. Evaluation
results demonstrate that our approach can effectively utilize the available
resources to maximize the number of requests served by low-latency edge cloud
servers.
Comment: IEEE Infocom 201
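The randomized-rounding step mentioned above can be sketched generically: after solving a fractional (LP-relaxed) placement, each placement variable is rounded to 0 or 1 by sampling with its fractional value as the probability. This is a hedged illustration of the general technique only; the paper's actual procedure, including how capacity violations are handled, may differ.

```python
import random

def randomized_rounding(fractional, seed=0):
    """Round a fractional placement (service -> probability of
    placing that service at an edge server) to an integral one by
    sampling each variable independently.

    Generic randomized-rounding sketch, not the paper's exact
    (possibly dependent) rounding procedure; a full algorithm would
    also repair any storage/computation capacity violations.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    return {svc: int(rng.random() < x) for svc, x in fractional.items()}
```

Variables already at 0 or 1 in the fractional solution are preserved exactly, which is why such schemes lose little relative to the LP optimum in expectation.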
Proactive multi-tenant cache management for virtualized ISP networks
The content delivery market has mainly been dominated by large Content Delivery Networks (CDNs) such as Akamai and Limelight. However, CDN traffic exerts considerable pressure on Internet Service Provider (ISP) networks. Recently, ISPs have begun deploying so-called Telco CDNs, which have many advantages, such as reduced ISP network bandwidth utilization and improved Quality of Service (QoS) by bringing content closer to the end-user. Virtualization of storage and networking resources can enable the ISP to simultaneously lease its Telco CDN infrastructure to multiple third parties, opening up new business models and revenue streams. In this paper, we propose a proactive cache management system for ISP-operated multi-tenant Telco CDNs. The associated algorithm optimizes content placement and server selection across tenants and users, based on predicted content popularity and the geographical distribution of requests. Based on a Video-on-Demand (VoD) request trace of a leading European telecom operator, the presented algorithm is shown to reduce bandwidth usage by 17% compared to the traditional Least Recently Used (LRU) caching strategy, both inside the network and on the ingress links, while at the same time offering enhanced load balancing capabilities. Increasing the prediction accuracy is shown to have the potential to further improve bandwidth efficiency by up to 79%.
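The Least Recently Used (LRU) baseline against which the proactive algorithm is compared can be sketched in a few lines; this is a standard textbook LRU, not code from the paper.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: on a miss the object is fetched and
    inserted, and the least recently used object is evicted once
    capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order tracks recency

    def request(self, key):
        """Return True on a cache hit, False on a miss."""
        if key in self.store:
            self.store.move_to_end(key)  # mark as most recently used
            return True
        self.store[key] = None  # fetch over the ingress link, then cache
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return False
```

Because LRU is purely reactive, every first request for an object crosses the ingress links; the proactive, popularity-predicting placement described above is what yields the reported 17% bandwidth reduction over this baseline.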
Fundamental Limits of Caching
Caching is a technique to reduce peak traffic rates by prefetching popular
content into memories at the end users. Conventionally, these memories are used
to deliver requested content in part from a locally cached copy rather than
through the network. The gain offered by this approach, which we term local
caching gain, depends on the local cache size (i.e., the memory available at
each individual user). In this paper, we introduce and exploit a second,
global, caching gain not utilized by conventional caching schemes. This gain
depends on the aggregate global cache size (i.e., the cumulative memory
available at all users), even though there is no cooperation among the users.
To evaluate and isolate these two gains, we introduce an
information-theoretic formulation of the caching problem focusing on its basic
structure. For this setting, we propose a novel coded caching scheme that
exploits both local and global caching gains, leading to a multiplicative
improvement in the peak rate compared to previously known schemes. In
particular, the improvement can be on the order of the number of users in the
network. Moreover, we argue that the performance of the proposed scheme is
within a constant factor of the information-theoretic optimum for all values of
the problem parameters.
Comment: To appear in IEEE Transactions on Information Theory
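The two gains described above can be made concrete with the well-known closed-form rate for this setting, with K users, a library of N files, and a per-user cache of M files. The sketch below evaluates the expressions directly; memory sharing for non-integer K*M/N is omitted for brevity.

```python
def coded_caching_rate(K, N, M):
    """Peak delivery rate (in file units) of the centralized coded
    caching scheme: the uncoded rate K*(1 - M/N) reduced further by
    the global caching gain factor 1 + K*M/N (assumes K*M/N is an
    integer; other M are handled by memory sharing, omitted here)."""
    return K * (1 - M / N) / (1 + K * M / N)

def uncoded_rate(K, N, M):
    """Conventional uncoded caching: only the local gain (1 - M/N)
    applies, so the rate scales linearly with the number of users."""
    return K * (1 - M / N)
```

For K = N = 20 and M = 10, the coded rate is 10/11 of a file versus 10 files uncoded: a factor 1 + K*M/N = 11 improvement, which grows with the number of users, matching the abstract's claim that the gain can be on the order of the number of users.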
Efficient Proactive Caching for Supporting Seamless Mobility
We present a distributed proactive caching approach that exploits user
mobility information to decide where to proactively cache data to support
seamless mobility, while efficiently utilizing cache storage using a congestion
pricing scheme. The proposed approach is applicable to the case where objects
have different sizes and to a two-level cache hierarchy, for both of which the
proactive caching problem is hard. Additionally, our modeling framework
considers the case where the delay is independent of the requested data object
size and the case where the delay is a function of the object size. Our
evaluation results show how various system parameters influence the delay gains
of the proposed approach, which achieves robust and good performance relative
to an oracle and an optimal scheme for a flat cache structure.
Comment: 10 pages, 9 figures
Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff
Replicating or caching popular content in memories distributed across the
network is a technique to reduce peak network loads. Conventionally, the main
performance gain of this caching was thought to result from making part of the
requested data available closer to end users. Instead, we recently showed that
a much more significant gain can be achieved by using caches to create
coded-multicasting opportunities, even for users with different demands,
through coding across data streams. These coded-multicasting opportunities are
enabled by careful content overlap at the various caches in the network,
created by a central coordinating server.
In many scenarios, such a central coordinating server may not be available,
raising the question of whether this multicasting gain can still be achieved in a more
decentralized setting. In this paper, we propose an efficient caching scheme,
in which the content placement is performed in a decentralized manner. In other
words, no coordination is required for the content placement. Despite this lack
of coordination, the proposed scheme is nevertheless able to create
coded-multicasting opportunities and achieves a rate close to the optimal
centralized scheme.
Comment: To appear in IEEE/ACM Transactions on Networking
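In the decentralized scheme, each user independently caches a random M/N fraction of every file, and coded-multicasting opportunities arise from the resulting overlaps. The expected delivery rate admits a standard closed form, sketched below (same K users, N files, cache size M files as above; M > 0).

```python
def decentralized_rate(K, N, M):
    """Expected delivery rate (in file units) of decentralized coded
    caching, where every user independently caches a uniformly random
    M/N fraction of each file (M > 0). Standard closed-form expression:
    K*(1 - M/N) * (N/(K*M)) * (1 - (1 - M/N)**K)."""
    p = M / N  # fraction of each file cached at every user
    return K * (1 - p) * (N / (K * M)) * (1 - (1 - p) ** K)
```

For K = N = 20 and M = 10 this evaluates to just under one file, versus 10/11 for the centralized scheme and 10 for uncoded delivery, illustrating the abstract's claim that uncoordinated placement loses little relative to the optimal centralized scheme.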