Coded Load Balancing in Cache Networks
We consider the load balancing problem in a cache network consisting of
storage-enabled servers that form a distributed content delivery system.
Previously proposed load balancing solutions cannot perfectly balance
requests among servers, which is a critical issue in practical networks.
Therefore, in this paper, we investigate a coded cache content placement where
coded chunks of the original files are stored on servers according to the file
popularity distribution. In our scheme, upon each request arrival in the
delivery phase, enough coded chunks are dispatched from the nearest servers to
the request origin so that the requested file can be decoded.
Here, we show that if requests arrive randomly at servers, the
proposed scheme results in the maximum load of in the network. This
result is shown to be valid under various assumptions for the underlying
network topology. Our results should be compared to the maximum load of two
baseline schemes, namely, nearest replica and power of two choices strategies,
which are and , respectively. This
finding shows that using coding results in a considerable load balancing
performance improvement, without compromising communication cost performance.
This is confirmed by extensive simulations, in non-asymptotic regimes as well.
Comment: The paper is 12 pages and contains 8 figures
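The chunk-dispatch idea can be illustrated with a toy simulation. This is an assumption-laden sketch, not the paper's scheme: servers sit on a ring, files are MDS-coded so that any k distinct coded chunks suffice to decode, and each request pulls one chunk from each of the k nearest servers instead of hitting a single replica:

```python
import random
from collections import Counter

def max_load(num_servers, num_requests, k, coded, seed=0):
    """Compare coded chunk dispatch against a single-replica baseline.

    Servers sit on a ring and each request arrives at a uniformly
    random server.  Coded: any k distinct coded chunks decode the file
    (MDS-style), so the k nearest servers each serve a 1/k fraction.
    Uncoded: the single nearest server carries the whole request.
    """
    rng = random.Random(seed)
    load = Counter()
    for _ in range(num_requests):
        origin = rng.randrange(num_servers)
        if coded:
            for hop in range(k):  # spread the request over k neighbours
                load[(origin + hop) % num_servers] += 1 / k
        else:
            load[origin] += 1
    return max(load.values())
```

With the same arrival sequence, each coded server load is a k-window average of the uncoded arrival counts, so the coded maximum load can never exceed the single-replica one.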
Fundamental Limits of Stochastic Shared Caches Networks
The work establishes the exact performance limits of stochastic coded caching
when users share a bounded number of cache states, and when the association
between users and caches is random. Under the premise that more balanced
user-to-cache associations perform better than unbalanced ones, our work
provides a statistical analysis of the average performance of such networks,
identifying in closed form the exact optimal average delivery time. To
insightfully capture this delay, we derive easy-to-compute closed-form
analytical bounds that prove tight in the limit of a large number of
cache states. In the scenario where delivery involves users, we conclude
that the multiplicative performance deterioration due to randomness -- as
compared to the well-known deterministic uniform case -- can be unbounded and
can scale as at
, and that this scaling vanishes when
. To alleviate this adverse effect of
cache-load imbalance, we consider various load balancing methods, and show that
employing proximity-bounded load balancing with an ability to choose from
neighboring caches, the aforementioned scaling reduces to , while when
the proximity constraint is removed, the scaling is of a much slower order
. The above analysis is extensively
validated numerically.
Comment: 40 pages, 12 figures
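The cache-load imbalance that drives this delay deterioration is a balls-into-bins effect. A minimal sketch (my illustration, not the paper's analysis) of a random user-to-cache association:

```python
import random
from collections import Counter

def association_loads(num_users, num_caches, seed=0):
    """Assign each user a uniformly random cache state and return the
    per-cache load profile (classic balls into bins)."""
    rng = random.Random(seed)
    hits = Counter(rng.randrange(num_caches) for _ in range(num_users))
    return [hits.get(c, 0) for c in range(num_caches)]

loads = association_loads(num_users=10_000, num_caches=100)
# Ratio of the worst cache load to the uniform (deterministic) load:
imbalance = max(loads) / (10_000 / 100)
```

An `imbalance` above 1 quantifies the multiplicative deterioration relative to the deterministic uniform association, in the spirit of the well-known maximum-load asymptotics for balls into bins.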
Cooperative Edge Caching in User-Centric Clustered Mobile Networks
With files proactively stored at base stations (BSs), mobile edge caching
enables direct content delivery without remote file fetching, which can reduce
the end-to-end delay while relieving backhaul pressure. To effectively utilize
the limited cache size in practice, cooperative caching can be leveraged to
exploit caching diversity, by allowing users served by multiple base stations
under the emerging user-centric network architecture. This paper explores
delay-optimal cooperative edge caching in large-scale user-centric mobile
networks, where the content placement and cluster size are optimized based on
the stochastic information of network topology, traffic distribution, channel
quality, and file popularity. Specifically, a greedy content placement
algorithm is proposed based on the optimal bandwidth allocation, which can
achieve (1-1/e)-optimality with linear computational complexity. In addition,
the optimal user-centric cluster size is studied, and a condition constraining
the maximal cluster size is presented in explicit form, which reflects the
tradeoff between caching diversity and spectrum efficiency. Extensive
simulations are conducted for analysis validation and performance evaluation.
Numerical results demonstrate that the proposed greedy content placement
algorithm can reduce the average file transmission delay by up to 50% compared
with the non-cooperative and hit-ratio-maximal schemes. Furthermore, the
optimal clustering is also discussed considering the influences of different
system parameters.
Comment: IEEE TM
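A hedged sketch of the greedy idea, illustrative only: the paper's algorithm also couples placement with optimal bandwidth allocation, which is omitted here. The expected hit ratio over user clusters is monotone submodular in the set of (BS, file) placements, which is what yields the (1-1/e) guarantee:

```python
from itertools import product

def greedy_placement(popularity, num_bss, cache_size, clusters):
    """Greedy (BS, file) placement maximizing the expected cache-hit ratio.

    clusters: one tuple of BS indices per user group; a group scores a
    hit if any BS in its cluster stores the requested file.  The hit
    ratio is a monotone submodular function of the placement, so
    greedy selection achieves (1 - 1/e)-optimality.
    """
    placed = {b: set() for b in range(num_bss)}

    def hit_ratio():
        total = 0.0
        for cluster in clusters:
            stored = set().union(*(placed[b] for b in cluster))
            total += sum(popularity[f] for f in stored)
        return total / len(clusters)

    while True:
        base = hit_ratio()
        best, best_gain = None, 0.0
        for b, f in product(range(num_bss), range(len(popularity))):
            if f in placed[b] or len(placed[b]) >= cache_size:
                continue
            placed[b].add(f)              # tentatively place the file
            gain = hit_ratio() - base     # marginal hit-ratio gain
            placed[b].remove(f)
            if gain > best_gain:
                best, best_gain = (b, f), gain
        if best is None:                  # no placement improves the ratio
            break
        placed[best[0]].add(best[1])
    return placed
```

This brute-force marginal-gain version is written for clarity; it recomputes the objective per candidate, whereas a linear-complexity implementation would exploit the objective's structure.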
Cache-Aided Interference Channels
Over the past decade, the bulk of wireless traffic has shifted from speech to
content. This shift creates the opportunity to cache part of the content in
memories closer to the end users, for example in base stations. Most of the
prior literature focuses on the reduction of load in the backhaul and core
networks due to caching, i.e., on the benefits caching offers for the wireline
communication link between the origin server and the caches. In this paper, we
are instead interested in the benefits caching can offer for the wireless
communication link between the caches and the end users.
To quantify the gains of caching for this wireless link, we consider an
interference channel in which each transmitter is equipped with an isolated
cache memory. Communication takes place in two phases, a content placement
phase followed by a content delivery phase. The objective is to design both the
placement and the delivery phases to maximize the rate in the delivery phase in
response to any possible user demands. Focusing on the three-user case, we show
that through careful joint design of these phases, we can reap three distinct
benefits from caching: a load balancing gain, an interference cancellation
gain, and an interference alignment gain. In our proposed scheme, load
balancing is achieved through a specific file splitting and placement,
producing a particular pattern of content overlap at the caches. This overlap
allows us to implement interference cancellation. Further, it allows us to create
several virtual transmitters, each transmitting a part of the requested
content, which increases interference-alignment possibilities.
Comment: 17 pages, presented in part at ISIT 201
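One concrete placement that produces pairwise content overlap of this kind (a simplified hypothetical pattern, not the paper's exact file splitting): split every file into three subfiles and let transmitter i cache subfiles i and i+1 mod 3, so each subfile is available at exactly two transmitters:

```python
def overlap_placement(files, num_txs=3):
    """Split each file f into num_txs subfiles (f, 0..num_txs-1).

    Transmitter i caches subfiles i and (i+1) mod num_txs, so every
    subfile lives at exactly two caches -- the kind of overlap that
    enables interference cancellation and virtual transmitters.
    """
    caches = {i: set() for i in range(num_txs)}
    for f in files:
        for i in range(num_txs):
            caches[i].add((f, i))
            caches[i].add((f, (i + 1) % num_txs))
    return caches
```

Because two transmitters share each subfile, either can cancel the other's known interference, and jointly they act as extra virtual transmitters for alignment.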
Survey of Search and Replication Schemes in Unstructured P2P Networks
P2P computing raises challenging issues in various areas of computer science. The
widely used decentralized unstructured P2P systems are ad hoc in nature and
present a number of research challenges. In this paper, we provide a
comprehensive theoretical survey of various state-of-the-art search and
replication schemes in unstructured P2P networks for file-sharing applications.
The classifications of search and replication techniques and their advantages
and disadvantages are briefly explained. Finally, the various issues on
searching and replication for unstructured P2P networks are discussed.
Comment: 39 pages, 5 figures
Caching in Combination Networks: A Novel Delivery by Leveraging the Network Topology
Maddah-Ali and Niesen (MAN) in 2014 surprisingly showed that it is possible
to serve an arbitrarily large number of cache-equipped users with a constant
number of transmissions by using coded caching in shared-link broadcast
networks. This paper studies the tradeoff between the user's cache size and the
file download time for combination networks, where users with caches
communicate with the servers through intermediate relays. Motivated by the
so-called separation approach, it is assumed that placement and multicast
message generation are done according to the original MAN scheme, regardless
of the network topology. The main contribution of this paper is the design of a
novel two-phase delivery scheme that, by accounting for the network topology,
outperforms schemes available in the literature. The key idea is to create
additional (compared to MAN) multicasting opportunities: in the first phase
coded messages are sent with the goal of increasing the amount of `side
information' at the users, which is then leveraged during the second phase. The
download time with the novel scheme is shown to be proportional to 1/H (with H
being the number of relays) and to be order-optimal under the constraint of
uncoded placement for some parameter regimes.
Comment: 5 pages, 2 figures, submitted to ISIT 201
Storage, Communication, and Load Balancing Trade-off in Distributed Cache Networks
We consider load balancing in a network of caching servers delivering
contents to end users. Randomized load balancing via the so-called power of two
choices is a well-known approach in parallel and distributed systems. In this
framework, we investigate the tension between storage resources, communication
cost, and load balancing performance. To this end, we propose a randomized load
balancing scheme which simultaneously considers cache size limitation and
proximity in the server redirection process.
In contrast to the classical power of two choices setup, since the memory
limitation and the proximity constraint cause correlation in the server
selection process, we may not benefit from the power of two choices. However,
we prove that in certain regimes of problem parameters, our scheme results in
the maximum load of order (here is the network size).
This is an exponential improvement compared to the scheme which assigns each
request to the nearest available replica. Interestingly, the extra
communication cost incurred by our proposed scheme, compared to the nearest
replica strategy, is small. Furthermore, our extensive simulations show that
the trade-off trend does not depend on the network topology and library
popularity profile details.
Comment: This is the journal version of our earlier work [arXiv:1610.05961]
presented at the International Parallel & Distributed Processing Symposium
(IPDPS), 2017. This manuscript is 15 pages and contains 15 figures
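A toy sketch of proximity-constrained power of two choices, under simplifying assumptions of my own (servers on a ring, every server able to serve every request, redirection distance as the communication-cost proxy):

```python
import random

def proximity_two_choices(num_servers, num_requests, window, seed=0):
    """Each request samples two candidate servers within `window` hops
    of its random origin and joins the less loaded one; the hop count
    of the chosen server is charged as communication cost."""
    rng = random.Random(seed)
    load = [0] * num_servers
    comm_cost = 0
    for _ in range(num_requests):
        origin = rng.randrange(num_servers)
        d1 = rng.randint(-window, window)
        d2 = rng.randint(-window, window)
        s1 = (origin + d1) % num_servers
        s2 = (origin + d2) % num_servers
        if load[s1] <= load[s2]:
            load[s1] += 1
            comm_cost += abs(d1)
        else:
            load[s2] += 1
            comm_cost += abs(d2)
    return max(load), comm_cost
```

Because both candidates lie in the same small window, their loads are correlated, which is precisely the deviation from the classical power-of-two-choices analysis that the correlation issue above refers to.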
A Novel Communication Cost Aware Load Balancing in Content Delivery Networks using Honeybee Algorithm
Modern web services rely on Content Delivery Networks (CDNs) to efficiently
deliver contents to end users. In order to minimize the experienced
communication cost, it is necessary to send the end user's requests to the
nearest servers. However, it is shown that this naive method causes some
servers to get overloaded. Similarly, when distributing the requests to avoid
overloading, the communication cost increases. This is a well-known trade-off
between communication cost and load balancing in CDNs.
In this work, by introducing a new meta-heuristic algorithm, we try to
optimize this trade-off, that is, to have less-loaded servers at lower
experienced communication cost. This trade-off is even better managed when we
optimize the way servers update their information of each other's load. The
proposed scheme, based on the honeybee algorithm, is an adaptation of the bees
algorithm, which is known for solving continuous optimization problems. Our
proposed version for CDNs is a combination of a request redirecting method and
a server information update algorithm.
To evaluate the suggested method in a large-scale network, we leveraged our
newly developed CDN simulator which takes into account all the important
network parameters in the scope of our problem. The simulation results show
that our proposed scheme achieves a better trade-off between the communication
cost and load balancing in CDNs, compared to previously proposed schemes.
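A minimal sketch of the two ingredients named above, request redirection plus periodic load-information updates (parameters and structure are my hypothetical simplification; the actual honeybee scheme is more elaborate): most requests exploit the least-loaded server according to a possibly stale view, while a scout fraction explores at random:

```python
import random

def honeybee_sim(num_servers=10, num_requests=500,
                 explore_prob=0.1, update_period=20, seed=0):
    """Forager requests go to the least-loaded server in a periodically
    refreshed (hence stale) load view; scout requests pick a random
    server, bees-algorithm style.  Returns the final load vector."""
    rng = random.Random(seed)
    load = [0] * num_servers
    view = list(load)  # servers' advertised loads, refreshed periodically
    for t in range(num_requests):
        if t % update_period == 0:
            view = list(load)  # information-update step
        if rng.random() < explore_prob:
            s = rng.randrange(num_servers)                     # scout
        else:
            s = min(range(num_servers), key=view.__getitem__)  # forager
        load[s] += 1
    return load
```

Shortening `update_period` improves balance at the cost of more update traffic, which is the trade-off a server-information update algorithm has to manage.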
Paging with Multiple Caches
Modern content delivery networks consist of one or more back-end servers
which store the entire content catalog, assisted by multiple front-end servers
with limited storage and service capacities located near the end-users.
Appropriate replication of content on the front-end servers is key to maximize
the fraction of requests served by the front-end servers. Motivated by this, a
multiple cache variant of the classical single cache paging problem is studied,
which is referred to as the Multiple Cache Paging (MCP) problem. In each
time-slot, a batch of content requests arrives and has to be served by a bank
of caches, and each cache can serve exactly one request. If a content is not
found in the bank, it is fetched from the back-end server, one currently
stored content is ejected, and a fault is counted. As in the classical paging
problem, the goal is to minimize the total number of faults. The competitive
ratio of any online algorithm for the MCP problem is shown to be unbounded for
arbitrary input, thus concluding that the MCP problem is fundamentally
different from the classical paging problem. Consequently, a stochastic
arrivals setting is considered, where requests arrive according to a
known/unknown stochastic process. It is shown that near-optimal performance
can be achieved with simple policies that require no coordination across the caches.
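The fault model can be written out in a minimal form. This sketch uses a single shared LRU bank as an uncoordinated baseline, not the paper's near-optimal policy, and abstracts away the one-request-per-cache service constraint:

```python
from collections import OrderedDict

def mcp_faults(batches, bank_size):
    """Count faults for a shared LRU bank serving request batches.

    A request for a cached content is a hit; otherwise the content is
    fetched from the back-end server, the least-recently-used content
    is ejected when the bank is full, and one fault is counted.
    """
    bank = OrderedDict()
    faults = 0
    for batch in batches:
        for item in batch:
            if item in bank:
                bank.move_to_end(item)   # refresh recency on a hit
            else:
                faults += 1
                if len(bank) >= bank_size:
                    bank.popitem(last=False)  # eject the LRU content
                bank[item] = True
    return faults
```

For example, serving the batches `[["a", "b"], ["a", "c"], ["b"]]` with a bank of size 2 incurs four faults: both initial fetches, then "c" ejecting "b", then "b" ejecting "a".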
Improved Approximation of Storage-Rate Tradeoff for Caching with Multiple Demands
Caching at the network edge has emerged as a viable solution for alleviating
the severe capacity crunch in modern content centric wireless networks by
leveraging network load-balancing in the form of localized content storage and
delivery. In this work, we consider a cache-aided network where the cache
storage phase is assisted by a central server and users can demand multiple
files at each transmission interval. To service these demands, we consider two
delivery models - centralized content delivery where user demands at each
transmission interval are serviced by the central server via multicast
transmissions; and device-to-device (D2D) assisted distributed delivery
where users multicast to each other in order to service file demands. For such
cache-aided networks, we present new results on the fundamental cache storage
vs. transmission rate tradeoff. Specifically, we develop a new technique for
characterizing information theoretic lower bounds on the storage-rate tradeoff
and show that the new lower bounds are strictly tighter than cut-set bounds
from literature. Furthermore, using the new lower bounds, we establish the
optimal storage-rate tradeoff to within a constant multiplicative gap. We show
that, for multiple demands per user, achievable schemes based on repetition of
schemes for single demands are order-optimal under both delivery models.
Comment: Extended version of a submission to IEEE Trans. on Communication