Cost-Effective Cache Deployment in Mobile Heterogeneous Networks
This paper investigates one of the fundamental issues in cache-enabled
heterogeneous networks (HetNets): how many cache instances should be deployed
at different base stations, in order to provide guaranteed service in a
cost-effective manner. Specifically, we consider two-tier HetNets with
hierarchical caching, where the most popular files are cached at small cell
base stations (SBSs) while the less popular ones are cached at macro base
stations (MBSs). For a given network cache deployment budget, the cache sizes
for MBSs and SBSs are optimized to maximize network capacity while satisfying
the file transmission rate requirements. As cache sizes of MBSs and SBSs affect
the traffic load distribution, inter-tier traffic steering is also employed for
load balancing. Based on stochastic geometry analysis, the optimal cache sizes
for MBSs and SBSs are obtained; they are threshold-based with respect to the
cache budget in networks constrained by SBS backhauls. Simulation results are
provided to evaluate the proposed schemes and demonstrate their application to
cost-effective network deployment.
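The threshold structure of the optimal allocation can be illustrated with a minimal sketch. This is not the paper's stochastic-geometry optimization; the function name, the per-unit cache costs, and the backhaul-imposed per-SBS threshold `sbs_cap` are all hypothetical placeholders standing in for quantities the paper derives analytically.

```python
# Illustrative sketch only: split a total cache budget between the SBS and
# MBS tiers. SBS caches (holding the most popular files) are filled first,
# but only up to a backhaul-imposed threshold; the remainder of the budget
# goes to MBS caches. All parameters are hypothetical.

def allocate_cache(budget, sbs_cost, mbs_cost, n_sbs, n_mbs, sbs_cap):
    """Return (per-SBS cache size, per-MBS cache size) for a given budget.

    sbs_cap models the threshold beyond which extra SBS cache no longer
    helps because the SBS backhaul becomes the bottleneck.
    """
    # Fill SBS caches up to the backhaul-imposed threshold.
    sbs_size = min(budget / (n_sbs * sbs_cost), sbs_cap)
    # Spend whatever budget remains on MBS caches.
    remaining = budget - sbs_size * n_sbs * sbs_cost
    mbs_size = remaining / (n_mbs * mbs_cost)
    return sbs_size, mbs_size
```

With a small budget the threshold never binds and everything goes to the SBS tier; past the threshold, additional budget flows to the MBS tier, mirroring the threshold-based behavior described above.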
Soft Cache Hits and the Impact of Alternative Content Recommendations on Mobile Edge Caching
Caching popular content at the edge of future mobile networks has been widely
considered in order to alleviate the impact of the data tsunami on both the
access and backhaul networks. A number of interesting techniques have been
proposed, including femto-caching and "delayed" or opportunistic cache access.
Nevertheless, the majority of these approaches suffer from the rather limited
storage capacity of the edge caches, compared to the tremendous and rapidly
increasing size of the Internet content catalog. We propose to depart from the
assumption of hard cache misses, common in most existing works, and consider
"soft" cache misses, where if the original content is not available, an
alternative content that is locally cached can be recommended. Given that
Internet content consumption is increasingly entertainment-oriented, we believe
that related content could often lead to complete or at least partial user
satisfaction, without the need to retrieve the original content over expensive
links. In this paper, we formulate the problem of optimal edge caching with
soft cache hits, in the context of delayed access, and analyze the expected
gains. We then show, using synthetic and real datasets of related video
content, that promising caching gains can be achieved in practice.
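The soft-cache-hit idea can be sketched in a few lines. This is an illustrative model, not the paper's formulation: the acceptance probabilities in `accept` (how likely a user requesting content i is satisfied by cached alternative j) are a hypothetical input, and the independence assumption across alternatives is a simplification.

```python
# Illustrative sketch: expected hit rate with "soft" cache hits. A request
# for content i is a hard hit if i is cached; otherwise it still counts
# (probabilistically) as a soft hit if the user accepts some cached related
# item. `accept[(i, j)]` is a hypothetical acceptance probability.

def soft_hit_rate(popularity, cache, accept):
    """popularity: {content: request probability}; cache: set of cached
    contents; accept: {(requested, cached): acceptance probability}."""
    rate = 0.0
    for i, p in popularity.items():
        if i in cache:
            rate += p  # hard hit: the original content is cached
        else:
            # Soft hit: probability that at least one cached alternative
            # satisfies the user (independence assumed for simplicity).
            miss = 1.0
            for j in cache:
                miss *= 1.0 - accept.get((i, j), 0.0)
            rate += p * (1.0 - miss)
    return rate
```

Under this model, an optimizer choosing what to cache would weigh not only each item's own popularity but also how well it substitutes for uncached items, which is exactly why soft hits enlarge the effective catalog coverage of a small edge cache.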
Using Grouped Linear Prediction and Accelerated Reinforcement Learning for Online Content Caching
Proactive caching is an effective way to alleviate peak-hour traffic
congestion by prefetching popular contents at the wireless network edge.
Maximizing caching efficiency, however, requires knowledge of the content
popularity profile, which is often unavailable in advance. In this paper, we first
propose a new linear prediction model, named grouped linear model (GLM) to
estimate future content requests based on historical data. Unlike many
existing works that assume a static content popularity profile, our model
can adapt to the temporal variation of content popularity in practical
systems caused by the arrival of new contents and the dynamics of user preference.
Based on the predicted content requests, we then propose a reinforcement
learning approach with model-free acceleration (RLMA) for online cache
replacement by taking into account both the cache hits and replacement cost.
This approach accelerates the learning process in non-stationary environment by
generating imaginary samples for Q-value updates. Numerical results based on
real-world traces show that the proposed prediction and learning based online
caching policy outperforms all considered existing schemes.
Comment: 6 pages, 4 figures, ICC 2018 workshop
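A grouped linear predictor of the kind described can be sketched as follows. This is a simplified reading, not the paper's exact GLM: the grouping rule, the history window length, and the plain least-squares fit are all assumptions made for illustration.

```python
# Illustrative sketch of a grouped linear model (GLM) for request
# prediction: contents assigned to the same group share one set of linear
# coefficients over their recent request history. Grouping, window size,
# and the least-squares fit are simplifying assumptions.

import numpy as np

def fit_glm(history, groups, window=3):
    """history: (T, N) array of per-slot request counts for N contents.
    groups: length-N list of group ids. Returns per-group coefficients."""
    coefs = {}
    for g in set(groups):
        X, y = [], []
        # Pool training samples from every content in the group.
        for n in (i for i, gi in enumerate(groups) if gi == g):
            for t in range(window, history.shape[0]):
                X.append(history[t - window:t, n])
                y.append(history[t, n])
        coefs[g] = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)[0]
    return coefs

def predict_next(history, groups, coefs, window=3):
    # Predict next-slot requests for each content from its recent counts.
    return np.array([history[-window:, n] @ coefs[groups[n]]
                     for n in range(history.shape[1])])
```

The predicted request counts would then drive the cache-replacement agent; the acceleration idea (generating imaginary samples for Q-value updates, in the spirit of Dyna-style model-free acceleration) would sit on top of these predictions.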