A Low-Complexity Approach to Distributed Cooperative Caching with Geographic Constraints
We consider caching in cellular networks in which each base station is
equipped with a cache that can store a limited number of files. The popularity
of the files is known and the goal is to place files in the caches such that
the probability that a user at an arbitrary location in the plane will find the
file that she requires in one of the covering caches is maximized.
We develop distributed asynchronous algorithms for deciding which contents to
store in which cache. Such cooperative algorithms require communication only
between caches with overlapping coverage areas and can operate in an asynchronous
manner. The development of the algorithms is principally based on an
observation that the problem can be viewed as a potential game. Our basic
algorithm is derived from the best response dynamics. We demonstrate that the
complexity of each best response step is independent of the number of files,
linear in the cache capacity and linear in the maximum number of base stations
that cover a certain area. Then, we show that the overall algorithm complexity
for a discrete cache placement is polynomial in both network size and catalog
size. In practical examples, the algorithm converges in just a few iterations.
Also, in most cases of interest, the basic algorithm finds the best Nash
equilibrium corresponding to the global optimum. We provide two extensions of
our basic algorithm based on stochastic and deterministic simulated annealing
which find the global optimum.
Finally, we demonstrate the hit probability evolution on real and synthetic
networks numerically and show that our distributed caching algorithm performs
significantly better than storing the most popular content, probabilistic
content placement, and Multi-LRU caching policies.
Comment: 24 pages, 9 figures, presented at SIGMETRICS'1
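The best-response dynamics described in this abstract can be sketched on a toy network. Everything below (the three-station topology, the popularity values, and the identical-interest utility standing in for the game's potential function) is an illustrative assumption, not the paper's implementation:

```python
from itertools import combinations

N_FILES = 6
CACHE_SIZE = 2
POPULARITY = [0.35, 0.25, 0.15, 0.12, 0.08, 0.05]  # known file popularities

# coverage[u]: set of base stations whose caches user region u can reach
coverage = [{0}, {0, 1}, {1}, {1, 2}, {2}]
N_BS = 3

def hit_probability(caches):
    """P(a random request is served by some covering cache), with user
    regions taken as equally likely."""
    total = 0.0
    for region in coverage:
        reachable = set().union(*(caches[b] for b in region))
        total += sum(POPULARITY[f] for f in reachable)
    return total / len(coverage)

def best_response(caches, b):
    """Best response of base station b: the CACHE_SIZE files that maximize
    the hit probability with all other caches held fixed."""
    best, best_val = caches[b], hit_probability(caches)
    for cand in combinations(range(N_FILES), CACHE_SIZE):
        trial = dict(caches)
        trial[b] = set(cand)
        val = hit_probability(trial)
        if val > best_val:  # strict improvement only, so the loop terminates
            best, best_val = set(cand), val
    return best

# start from the "most popular everywhere" placement and iterate to a
# Nash equilibrium; each step strictly raises the potential
caches = {b: {0, 1} for b in range(N_BS)}
changed = True
while changed:
    changed = False
    for b in range(N_BS):
        new = best_response(caches, b)
        if new != caches[b]:
            caches[b] = new
            changed = True
```

On this toy instance the equilibrium diversifies the caches across overlapping coverage areas and beats the "most popular everywhere" baseline, mirroring the qualitative claim of the abstract.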
A Literature Survey of Cooperative Caching in Content Distribution Networks
Content distribution networks (CDNs), which serve to deliver web objects
(e.g., documents, applications, music, and video), have seen tremendous
growth since their emergence. To minimize the retrieval delay experienced by a
user requesting a web object, caching strategies are often applied:
contents are replicated at edges of the network, closer to the user,
so that the network distance between the user and the object is reduced. In
this literature survey, the evolution of caching is studied. A recent research
paper [15] in the field of large-scale caching for CDNs was chosen as the
anchor paper, which serves as a guide to the topic. Research studies after and
relevant to the anchor paper are also analyzed to better evaluate the
statements and results of the anchor paper and, more importantly, to obtain an
unbiased view of large-scale collaborative caching systems as a whole.
Comment: 5 pages, 5 figures
Modeling and Analysis of Content Caching in Wireless Small Cell Networks
Network densification with small cell base stations is a promising solution
to satisfy future data traffic demands. However, increasing small cell base
station density alone does not ensure better users' quality-of-experience and
incurs high operational expenditures. Therefore, content caching on different
network elements has been proposed as a means of offloading the backhaul by
caching strategic contents at the network edge, thereby reducing latency. In
this paper, we investigate cache-enabled small cells in which we model and
characterize the outage probability, defined as the probability of not
satisfying users' requests over a given coverage area. We analytically derive a
closed form expression of the outage probability as a function of
signal-to-interference ratio, cache size, small cell base station density and
threshold distance. By assuming the distribution of base stations as a Poisson
point process, we derive the probability of finding a specific content within a
threshold distance and the optimal small cell base station density that
achieves a given target cache hit probability. Furthermore, simulations are
performed to validate the analytical model.
Comment: accepted for publication, IEEE ISWCS 201
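The probability of finding a content within a threshold distance under a Poisson point process has a simple closed form via the void probability of a thinned process. The sketch below assumes a Zipf popularity profile and that every small cell base station caches the same top files; these modeling choices are illustrative, not taken from the paper:

```python
import math

def zipf_popularity(n_files, gamma):
    """Normalized Zipf popularity profile over n_files (assumed model)."""
    weights = [k ** -gamma for k in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def prob_content_within(lmbda, radius, p_cached):
    """P(at least one SBS caching the file lies within `radius`):
    the in-range cachers form a thinned PPP of intensity lmbda * p_cached,
    so the void probability is exp(-lmbda * p_cached * pi * radius^2)."""
    return 1.0 - math.exp(-lmbda * p_cached * math.pi * radius ** 2)

def density_for_target(target, radius, p_cached):
    """Smallest SBS density achieving a target hit probability
    (inverse of the expression above)."""
    return -math.log(1.0 - target) / (p_cached * math.pi * radius ** 2)

pop = zipf_popularity(100, 0.8)
cache_size = 10
p_req_cached = sum(pop[:cache_size])   # request falls in the cached set
# density needed so that a cached file is found within 0.2 km w.p. 0.9
lam = density_for_target(0.9, radius=0.2, p_cached=1.0)
hit = p_req_cached * prob_content_within(lam, 0.2, 1.0)
```

The overall hit probability factors into "the request is for a cached file" times "a caching SBS is within the threshold distance", which is the structure the abstract's closed-form expression exploits.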
On the Interplay Between Edge Caching and HARQ in Fog-RAN
In a Fog Radio Access Network (Fog-RAN), edge caching is combined with
cloud-aided transmission in order to compensate for the limited hit probability
of the caches at the base stations (BSs). Unlike the wired scenarios
studied in the networking literature, in which entire files are typically
cached, recent research has suggested that fractional caching at the BSs of a
wireless system can be beneficial. This paper investigates the benefits of
fractional caching in a scenario with a cloud processor connected via a
wireless fronthaul link to a BS, which serves a number of mobile users on a
wireless downlink channel using orthogonal spectral resources. The fronthaul
and downlink channels occupy orthogonal frequency bands. The end-to-end
delivery latency for given requests of the users depends on the HARQ processes
run on the two links to counteract fading-induced outages. An analytical
framework based on theory of Markov chains with rewards is provided that
enables the optimization of fractional edge caching at the BSs. Numerical
results demonstrate meaningful advantages for fractional caching due to the
interplay between caching and HARQ transmission. The gains are observed in the
typical case in which the performance is limited by the wireless downlink
channel and the file popularity distribution is not too skewed.
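A crude stand-in for the Markov-chain-with-rewards analysis: if each HARQ round on a link fails independently with a fixed outage probability, the number of rounds until success is geometric, and fractional caching scales down only the fronthaul term. The independence assumption and all parameter values below are illustrative, not the paper's model:

```python
def expected_rounds(p_outage, max_rounds=None):
    """Mean number of HARQ rounds until success for per-round outage
    probability p_outage (geometric distribution). If max_rounds is set,
    the process gives up after that many attempts and the mean number of
    attempts is sum_{k=1..K} p_outage^(k-1)."""
    if max_rounds is None:
        return 1.0 / (1.0 - p_outage)
    return sum(p_outage ** (k - 1) for k in range(1, max_rounds + 1))

def expected_latency(frac_cached, p_out_fh, p_out_dl, slot=1.0):
    """End-to-end delivery latency in slots: the cached fraction goes
    straight to the downlink, while the remaining (1 - frac_cached)
    must first traverse the wireless fronthaul."""
    fronthaul = (1.0 - frac_cached) * expected_rounds(p_out_fh)
    downlink = expected_rounds(p_out_dl)
    return slot * (fronthaul + downlink)
```

Sweeping `frac_cached` between 0 and 1 under different outage pairs reproduces, in caricature, the interplay the abstract describes: caching helps most when the fronthaul outage dominates, while a downlink-limited system sees smaller gains.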
Cooperative Local Caching under Heterogeneous File Preferences
Local caching is an effective scheme for leveraging the memory of the mobile
terminal (MT) and short-range communications to save bandwidth usage and
reduce download delay in cellular communication systems. Specifically,
the MTs first cache in their local memories in off-peak hours and then exchange
the requested files with each other in the vicinity during peak hours. However,
prior works largely overlook MTs' heterogeneity in file preferences and their
selfish behaviours. In this paper, we practically categorize the MTs into
different interest groups according to the MTs' preferences. Each group of MTs
aims to increase the probability of successful file discovery from the
neighbouring MTs (from the same or different groups). Hence, we define the
groups' utilities as the probability of successfully discovering the file in
the neighbouring MTs, which should be maximized by deciding the caching
strategies of different groups. By modelling MTs' mobilities as homogeneous
Poisson point processes (HPPPs), we analytically characterize MTs' utilities in
closed form. We first consider the fully cooperative case where a centralizer
helps all groups to make caching decisions. We formulate the problem as a
weighted-sum utility maximization problem, through which the maximum utility
trade-offs of different groups are characterized. Next, we study two benchmark
cases under selfish caching, namely, partial and no cooperation, with and
without inter-group file sharing, respectively. The optimal caching
distributions for these two cases are derived. Finally, numerical examples are
presented to compare the utilities under different cases and show the
effectiveness of the fully cooperative local caching compared to the two
benchmark cases.
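The closed-form utilities under HPPP mobility can be illustrated with two interest groups: the in-range MTs caching a given file form a thinned Poisson process, so the discovery probability is one minus a void probability. The densities, preferences, and caching vectors below are toy assumptions, not values from the paper:

```python
import math

RANGE = 0.1                                    # file-exchange radius (km)
DENSITY = {"A": 50.0, "B": 30.0}               # MTs per km^2 per group
PREF = {"A": [0.7, 0.2, 0.1],                  # file preferences per group
        "B": [0.1, 0.2, 0.7]}
CACHE = {"A": [0.8, 0.2, 0.0],                 # caching probabilities per group
         "B": [0.0, 0.2, 0.8]}

def discovery_prob(file_idx):
    """P(at least one in-range MT caches the file): in-range cachers of
    the file form a thinned HPPP, so the void probability is
    exp(-expected number of in-range cachers)."""
    mean = sum(DENSITY[g] * CACHE[g][file_idx] for g in DENSITY)
    mean *= math.pi * RANGE ** 2
    return 1.0 - math.exp(-mean)

def group_utility(g):
    """Probability that an MT of group g successfully discovers its
    requested file among neighbouring MTs (same or different group)."""
    return sum(p * discovery_prob(f) for f, p in enumerate(PREF[g]))

# weighted-sum objective the centralizer would maximize over CACHE
weighted = 0.5 * group_utility("A") + 0.5 * group_utility("B")
```

In the fully cooperative case of the abstract, the centralizer would optimize the `CACHE` vectors to maximize `weighted`; varying the weights traces out the utility trade-off between the groups.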
Proactive Caching for Energy-Efficiency in Wireless Networks: A Markov Decision Process Approach
Content caching in wireless networks provides a substantial opportunity to
trade off low cost memory storage with energy consumption, yet finding the
optimal causal policy with low computational complexity remains a challenge.
This paper models the Joint Pushing and Caching (JPC) problem as a Markov
Decision Process (MDP) and provides a solution to determine the optimal
randomized policy. A novel approach to decouple the influence from buffer
occupancy and user requests is proposed to turn the high-dimensional
optimization problem into three low-dimensional ones. Furthermore, a
non-iterative algorithm to solve one of the sub-problems is presented,
exploiting a structural property we identified, termed "generalized
monotonicity", which significantly reduces the computational complexity. The
resulting policy attains performance close to theoretical bounds from
non-practical policies, while benefiting from higher time efficiency than the
unadapted MDP solution.
Comment: 6 pages, 6 figures, submitted to IEEE International Conference on
Communications 201
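A toy version of the Joint Pushing and Caching MDP can be solved with plain value iteration, i.e., the unadapted baseline the paper improves on. The buffer model, cost values, and Bernoulli request process here are illustrative assumptions:

```python
B = 4                              # user buffer (cache) capacity in files
P_REQ = 0.6                        # per-slot probability of a user request
PUSH_COST = [0.0, 1.0, 2.5, 4.5]   # convex energy cost of pushing 0..3 files
MISS_COST = 5.0                    # energy cost of serving a request reactively
GAMMA = 0.95                       # discount factor

def step(s, a):
    """Immediate push cost and (prob, next_state, extra_cost) outcomes
    when pushing `a` files at buffer level `s`; a request consumes one
    buffered file if available, otherwise it is a costly cache miss."""
    s2 = min(B, s + a)
    if s2 > 0:
        request = (P_REQ, s2 - 1, 0.0)       # served proactively
    else:
        request = (P_REQ, 0, MISS_COST)      # reactive miss
    no_request = (1.0 - P_REQ, s2, 0.0)
    return PUSH_COST[a], [request, no_request]

def q_value(s, a, V):
    cost, outcomes = step(s, a)
    return cost + sum(p * (extra + GAMMA * V[ns]) for p, ns, extra in outcomes)

# plain value iteration to (near) convergence
V = [0.0] * (B + 1)
for _ in range(500):
    V = [min(q_value(s, a, V) for a in range(len(PUSH_COST)))
         for s in range(B + 1)]

# greedy policy w.r.t. the converged value function
policy = [min(range(len(PUSH_COST)), key=lambda a: q_value(s, a, V))
          for s in range(B + 1)]
```

The state space here is one-dimensional, so brute-force iteration is cheap; the decoupling and non-iterative sub-problem solution in the abstract matter precisely when the joint buffer-and-request state makes this direct approach too expensive.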