Using Grouped Linear Prediction and Accelerated Reinforcement Learning for Online Content Caching
Proactive caching is an effective way to alleviate peak-hour traffic
congestion by prefetching popular contents at the wireless network edge.
Maximizing caching efficiency, however, requires knowledge of the content
popularity profile, which is often unavailable in advance. In this paper, we first
propose a new linear prediction model, named grouped linear model (GLM) to
estimate future content requests based on historical data. Unlike many
existing works that assume a static content popularity profile, our model
adapts to the temporal variation of content popularity in practical
systems, caused by the arrival of new contents and the dynamics of user preferences.
Based on the predicted content requests, we then propose a reinforcement
learning approach with model-free acceleration (RLMA) for online cache
replacement by taking into account both the cache hits and replacement cost.
This approach accelerates the learning process in a non-stationary environment by
generating imaginary samples for Q-value updates. Numerical results based on
caching policy outperforms all considered existing schemes.
Comment: 6 pages, 4 figures, ICC 2018 workshop
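As a rough, hypothetical sketch of the grouped-prediction idea (the function name and the sliding-window feature construction below are our own simplifications, not the paper's GLM specification): contents in a group share a single linear model that maps recent request counts to the next count.

```python
import numpy as np

def predict_requests(history, group_ids, window=3):
    """Hypothetical sketch of grouped linear prediction: contents in the
    same group share one linear model mapping the last `window` request
    counts to the next count. Illustrative only, not the paper's exact GLM."""
    n_contents, T = history.shape
    preds = np.zeros(n_contents)
    for g in np.unique(group_ids):
        members = np.where(group_ids == g)[0]
        # Pool sliding-window (features, target) pairs from all group members.
        X, y = [], []
        for m in members:
            for t in range(T - window):
                X.append(history[m, t:t + window])
                y.append(history[m, t + window])
        # One least-squares fit per group (min-norm solution if underdetermined).
        w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
        # Predict each member's next request count from its most recent window.
        preds[members] = history[members, -window:] @ w
    return preds
```

Grouping matters when contents in a group share temporal dynamics but each individual history is too sparse to fit a model on its own.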
Caching with Unknown Popularity Profiles in Small Cell Networks
A heterogeneous network is considered where the base stations (BSs), small
base stations (SBSs), and users are distributed according to independent Poisson
point processes (PPPs). The SBS nodes are assumed to possess high storage
capacity and to form a distributed caching network. Popular data files are
stored in the local caches of the SBSs, so that users can download the desired
files from one of the SBSs in the vicinity, subject to availability. The
offloading-loss is captured via a cost function that depends on a random
caching strategy proposed in this paper. The cost function depends on the
popularity profile, which is, in general, unknown. In this work, the popularity
profile is estimated at the BS using the available instantaneous demands from
the users in a given time interval. This is then used to find an estimate
of the cost function from which the optimal random caching strategy is devised.
The main results of this work are the following. First, it is shown that the
waiting time to achieve an ε difference between the achieved
and optimal costs is finite, provided the user density is greater than a
predefined threshold. In this case, the waiting time is shown to scale with N,
where N is the support size of the popularity profile. Secondly, a transfer
learning-based approach is proposed to obtain an estimate of the popularity
profile used to compute the empirical cost function. A condition is derived
under which the proposed transfer learning-based approach performs better than
the random caching strategy.
Comment: 6 pages, Proceedings of IEEE Global Communications Conference, 201
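A minimal sketch of the estimation step described above, assuming the popularity profile is taken as empirical request frequencies and the offloading loss of a random caching policy is the expected cache-miss probability; both simplifications are ours, not the paper's exact cost function.

```python
from collections import Counter

def estimate_popularity(requests, n_files):
    """Empirical popularity profile: fraction of observed demands per file,
    collected over one observation interval."""
    counts = Counter(requests)
    total = len(requests)
    return [counts.get(f, 0) / total for f in range(n_files)]

def offloading_cost(popularity, cache_probs):
    """Expected offloading loss under independent random caching:
    a request for file f misses the local caches with probability (1 - p_f)."""
    return sum(q * (1 - p) for q, p in zip(popularity, cache_probs))
```

With the estimated profile in hand, the caching probabilities can then be chosen to minimize this empirical cost, which is the optimization the abstract refers to.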
A Learning-Based Approach to Caching in Heterogenous Small Cell Networks
A heterogeneous network with base stations (BSs), small base stations (SBSs),
and users distributed according to independent Poisson point processes is
considered. SBS nodes are assumed to possess high storage capacity and to form
a distributed caching network. Popular files are stored in local caches of
SBSs, so that a user can download the desired files from one of the SBSs in its
vicinity. The offloading-loss is captured via a cost function that depends on
the random caching strategy proposed here. The popularity profile of cached
content is unknown and estimated using instantaneous demands from users within
a specified time interval. An estimate of the cost function is obtained from
which an optimal random caching strategy is devised. The training time to
achieve an ε difference between the achieved and optimal costs is
finite provided the user density is greater than a predefined threshold, and
scales with N, where N is the support size of the popularity profile. A transfer
learning-based approach to improve this estimate is proposed. The training time
is reduced when the popularity profile is modeled using a parametric family of
distributions; the delay is independent of N and scales linearly with the
dimension of the distribution parameter.
Comment: 12 pages, 5 figures, published in IEEE Transactions on Communications, 2016. arXiv admin note: text overlap with arXiv:1504.0363
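The reduction from N popularity values to a low-dimensional parameter can be illustrated with a hypothetical Zipf fit, a common one-parameter popularity family; the abstract does not name the parametric family it uses, so this is purely an assumed example.

```python
import math
from collections import Counter

def fit_zipf_exponent(requests, n_files, grid=None):
    """Hypothetical sketch: instead of estimating all N popularity values,
    fit a one-parameter Zipf model p_f ∝ 1/f^s by maximum likelihood over a
    grid of candidate exponents. The number of fitted parameters (here one)
    is independent of N. Requests are file ranks 1..n_files."""
    if grid is None:
        grid = [0.1 * i for i in range(1, 31)]  # candidate exponents 0.1..3.0
    counts = Counter(requests)
    best_s, best_ll = None, float("-inf")
    for s in grid:
        z = sum(1.0 / (f ** s) for f in range(1, n_files + 1))  # normalizer
        # Log-likelihood of the observed counts under exponent s.
        ll = sum(c * (-s * math.log(f) - math.log(z)) for f, c in counts.items())
        if ll > best_ll:
            best_s, best_ll = s, ll
    return best_s
```

Estimating one exponent instead of N probabilities is what makes the training time independent of the profile's support size.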
Energy Efficiency in Cache Enabled Small Cell Networks With Adaptive User Clustering
Using a network of cache enabled small cells, traffic during peak hours can
be reduced considerably through proactively fetching the content that is most
probable to be requested. In this paper, we explore the impact of
proactive caching on an important metric for future-generation networks,
namely, energy efficiency (EE). We argue that exploiting the correlation in
user content popularity profiles, in addition to the spatial distribution of
users with comparable request patterns, can considerably improve
the achievable energy efficiency of the network. The problem of
optimizing EE is decoupled into two related subproblems. The first one
addresses the issue of content popularity modeling. While most existing works
assume similar popularity profiles for all users in the network, we consider an
alternative caching framework in which users are clustered according to their
content popularity profiles. In order to showcase the utility of the proposed
clustering scheme, we use a statistical model selection criterion, namely
the Akaike information criterion (AIC). Using stochastic geometry, we derive a
closed-form expression of the achievable EE and we find the optimal active
small cell density vector that maximizes it. The second subproblem investigates
the impact of exploiting the spatial distribution of users with comparable
request patterns. Considering a snapshot of the network, we formulate a
combinatorial optimization problem that optimizes content placement
so that the transmission power used is minimized. Numerical results show that
the clustering scheme considerably improves the cache hit probability,
and consequently the EE, compared with an unclustered approach. Simulations also
show that the small base station allocation algorithm improves both the
energy efficiency and the hit probability.
Comment: 30 pages, 5 figures, submitted to Transactions on Wireless Communications (15-Dec-2016)
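AIC-based model selection of the kind the abstract invokes can be sketched as follows; the candidate models and numbers below are hypothetical, chosen only to show how AIC trades fit quality against the number of parameters (e.g., one shared popularity profile versus per-cluster profiles).

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2 ln L. Lower is better."""
    return 2 * n_params - 2 * log_likelihood

def select_model(candidates):
    """candidates: list of (name, log_likelihood, n_params) tuples.
    Returns the name of the model with the lowest AIC."""
    return min(candidates, key=lambda c: aic(c[1], c[2]))[0]
```

A clustered popularity model carries more parameters, so AIC only prefers it when the improvement in likelihood outweighs that extra complexity.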
Coded Caching for a Large Number of Users
An information-theoretic analysis of a coded caching system is considered, in
which a server with a database of N equal-size files, each F bits long, serves
K users. Each user is assumed to have a local cache that can store M files,
i.e., capacity of MF bits. Proactive caching to user terminals is considered,
in which the caches are filled by the server in advance during the placement
phase, without knowing the user requests. Each user requests a single file, and
all the requests are satisfied simultaneously through a shared error-free link
during the delivery phase.
First, centralized coded caching is studied assuming both the number and the
identity of the active users in the delivery phase are known by the server
during the placement phase. A novel group-based centralized coded caching (GBC)
scheme is proposed for a cache capacity of M = N/K. It is shown that this
scheme achieves a smaller delivery rate than all the known schemes in the
literature. The improvement is then extended to a wider range of cache
capacities through memory-sharing between the proposed scheme and other known
schemes in the literature. Next, the proposed centralized coded caching idea is
exploited in the decentralized setting, in which the identities of the users
that participate in the delivery phase are assumed to be unknown during the
placement phase. It is shown that the proposed decentralized caching scheme
also achieves a delivery rate smaller than the state-of-the-art. Numerical
simulations are also presented to corroborate our theoretical results.
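For context, the delivery rate of the well-known centralized coded caching baseline of Maddah-Ali and Niesen, against which improvements such as the proposed GBC scheme are measured, can be computed as follows; this sketch shows the baseline only, not the GBC scheme itself.

```python
def mn_delivery_rate(N, K, M):
    """Delivery rate (in file units) of the Maddah-Ali–Niesen centralized
    coded caching scheme for N files, K users, and cache size M files,
    assuming t = K*M/N is an integer (corner points of the scheme)."""
    return K * (1 - M / N) / (1 + K * M / N)

def uncoded_rate(N, K, M):
    """Uncoded baseline: each user must be sent the uncached (1 - M/N)
    fraction of its requested file separately."""
    return K * (1 - M / N)
```

At the cache size M = N/K studied in the abstract, the coded scheme already halves the uncoded rate; the proposed GBC scheme is reported to reduce it further.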