Content Popularity Prediction Towards Location-Aware Mobile Edge Caching
Mobile edge caching enables content delivery within the radio access network,
which effectively alleviates the backhaul burden and reduces response time. To
fully exploit edge storage resources, the most popular contents should be
identified and cached. Observing that user demands on certain contents vary
greatly at different locations, this paper devises location-customized caching
schemes to maximize the total content hit rate. Specifically, a linear model is
used to estimate the future content hit rate. For the case where the model
noise is zero-mean, a ridge regression based online algorithm with positive
perturbation is proposed. Regret analysis indicates that the proposed algorithm
asymptotically approaches the optimal caching strategy in the long run. When
the noise structure is unknown, an H∞ filter based online algorithm
is further proposed by taking a prescribed threshold as input, which guarantees
prediction accuracy even under the worst-case noise process. Both online
algorithms require no training phases, and hence are robust to the time-varying
user demands. The underlying causes of estimation errors of both algorithms are
numerically analyzed. Moreover, extensive experiments on a real-world dataset
are conducted to validate the applicability of the proposed algorithms. It is
demonstrated that these algorithms can be applied to scenarios with different
noise features and are able to make adaptive caching decisions, achieving a
content hit rate comparable to that of the hindsight-optimal strategy.
Comment: to appear in IEEE Trans. Multimedia
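The ridge-regression step described above can be sketched in a few lines. Everything here (the single demand feature, the per-content model, all names) is an illustrative assumption, not the paper's exact algorithm:

```python
# Sketch: estimate each content's future hit rate with an online
# ridge-regression update (zero-mean noise case), then cache the top-k.

class OnlineRidge:
    """Per-content ridge regression with one feature (e.g. recent demand)."""
    def __init__(self, lam=1.0):
        self.lam = lam   # ridge regularizer
        self.sxx = 0.0   # running sum of x*x
        self.sxy = 0.0   # running sum of x*y

    def update(self, x, y):
        # fold a new (feature, observed hits) pair into the sufficient stats
        self.sxx += x * x
        self.sxy += x * y

    def predict(self, x):
        # closed-form 1-D ridge estimate: theta = Sxy / (lam + Sxx)
        theta = self.sxy / (self.lam + self.sxx)
        return theta * x

def choose_cache(models, features, k):
    """Cache the k contents with the highest predicted hit rate."""
    ranked = sorted(models, key=lambda c: models[c].predict(features[c]),
                    reverse=True)
    return set(ranked[:k])

# toy usage: content "a" trends upward, "b" stays flat
models = {"a": OnlineRidge(), "b": OnlineRidge()}
for t in range(1, 6):
    models["a"].update(t, 2.0 * t)   # observed hits grow with demand signal
    models["b"].update(t, 1.0)
cached = choose_cache(models, {"a": 6.0, "b": 6.0}, k=1)  # → {"a"}
```

Because the model maintains only running sums, each update is O(1) and no training phase is needed, matching the online flavor of the abstract.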
Jointly Optimal Routing and Caching for Arbitrary Network Topologies
We study a problem of fundamental importance to ICNs, namely, minimizing
routing costs by jointly optimizing caching and routing decisions over an
arbitrary network topology. We consider both source routing and hop-by-hop
routing settings. The respective offline problems are NP-hard. Nevertheless, we
show that there exist polynomial-time approximation algorithms producing
solutions within a constant factor of the optimal. We also present
distributed, adaptive algorithms with the same approximation guarantees. We
simulate our adaptive algorithms over a broad array of different topologies.
Our algorithms reduce routing costs by several orders of magnitude compared to
prior art, including algorithms optimizing caching under fixed routing.
Comment: This is the extended version of the paper "Jointly Optimal Routing
and Caching for Arbitrary Network Topologies", appearing in the 4th ACM
Conference on Information-Centric Networking (ICN 2017), Berlin, Sep. 26-28,
2017
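A classic device behind constant-factor guarantees of this kind is greedy selection on a submodular cost-savings function. The following is a hedged sketch of that generic idea, not the paper's actual algorithms; the request paths and costs are made-up assumptions:

```python
# Greedy cache placement under a capacity constraint: repeatedly add the
# item with the largest marginal routing-cost saving. When the saving
# function is monotone submodular (as coverage-style savings are), greedy
# achieves a constant-factor approximation.

def greedy_placement(items, gain, capacity):
    """Pick up to `capacity` items, always taking the largest marginal gain."""
    placed = set()
    while len(placed) < capacity:
        best, best_gain = None, 0.0
        for it in items:
            if it in placed:
                continue
            g = gain(placed | {it}) - gain(placed)  # marginal saving of `it`
            if g > best_gain:
                best, best_gain = it, g
        if best is None:   # nothing adds positive saving; stop early
            break
        placed.add(best)
    return placed

# toy saving function: a request is served from cache if ANY item on its
# path is cached; savings add up over served requests (hence submodular)
requests = {("a",): 5.0, ("b",): 3.0, ("a", "c"): 2.0}

def gain(placed):
    return sum(cost for path, cost in requests.items()
               if any(i in placed for i in path))

chosen = greedy_placement({"a", "b", "c"}, gain, capacity=2)  # → {"a", "b"}
```

Caching "a" already serves the ("a", "c") request, so "c" contributes no marginal saving and greedy correctly prefers "b" for the second slot.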
Cache Serializability: Reducing Inconsistency in Edge Transactions
Read-only caches are widely used in cloud infrastructures to reduce access
latency and load on backend databases. Operators view coherent caches as
impractical at genuinely large scale, so many client-facing caches are updated
asynchronously by best-effort pipelines. Existing solutions that
support cache consistency are inapplicable to this scenario since they require
a round trip to the database on every cache transaction.
Existing incoherent cache technologies are oblivious to transactional data
access, even if the backend database supports transactions. We propose T-Cache,
a novel caching policy for read-only transactions in which inconsistency is
tolerable (won't cause safety violations) but undesirable (has a cost). T-Cache
improves cache consistency despite asynchronous and unreliable communication
between the cache and the database. We define cache-serializability, a variant
of serializability that is suitable for incoherent caches, and prove that with
unbounded resources T-Cache implements this new specification. With limited
resources, T-Cache allows the system manager to choose a trade-off between
performance and consistency.
Our evaluation shows that T-Cache detects many inconsistencies with only
nominal overhead. We use synthetic workloads to demonstrate the efficacy of
T-Cache when data accesses are clustered and its adaptive reaction to workload
changes. With workloads based on real-world topologies, T-Cache detects
43-70% of the inconsistencies and increases the rate of consistent transactions
by 33-58%.
Comment: Ittay Eyal, Ken Birman, Robbert van Renesse, "Cache Serializability:
Reducing Inconsistency in Edge Transactions," Distributed Computing Systems
(ICDCS), IEEE 35th International Conference on, June 29 to July 2, 2015
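The notion of a read-only transaction fitting one snapshot can be illustrated with a toy interval test: the reads are mutually consistent iff the validity intervals of the versions read overlap. This is a simplified model assumed for illustration, not T-Cache's actual mechanism:

```python
# Each cached version carries the interval [created, superseded) during
# which it was the current value of its item. A read-only transaction's
# reads belong to one database snapshot iff those intervals intersect,
# i.e. there was some instant at which every version read was current.

INF = float("inf")

def consistent(read_versions):
    """read_versions: list of (created_ts, superseded_ts) per item read."""
    lo = max(created for created, _ in read_versions)
    hi = min(superseded for _, superseded in read_versions)
    return lo < hi   # a common instant of validity exists

# x's old version was superseded at t=5, yet we also read y created at t=7:
stale_mix = [(1, 5), (7, INF)]   # no snapshot contains both -> inconsistent
fresh_mix = [(1, INF), (7, INF)] # both current from t=7 on -> consistent
```

A check of this shape needs only version metadata shipped with the asynchronous invalidation stream, which is why detection (as opposed to prevention) can avoid a round trip to the database.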
Fog-enabled Edge Learning for Cognitive Content-Centric Networking in 5G
By caching content at network edges close to the users, content-centric
networking (CCN) has been considered a means to enable efficient content
retrieval and distribution in fifth-generation (5G) networks. Due to the volume,
velocity, and variety of data generated by various 5G users, an urgent and
strategic issue is how to elevate the cognitive ability of the CCN to realize
context-awareness, timely response, and traffic offloading for 5G applications.
In this article, we envision that the fundamental work of designing a cognitive
CCN (C-CCN) for the upcoming 5G is to exploit fog computing to
associatively learn and control the states of edge devices (such as phones,
vehicles, and base stations) and in-network resources (computing, networking,
and caching). Moreover, we propose a fog-enabled edge learning (FEL) framework
for C-CCN in 5G, which can aggregate the idle computing resources of the
neighbouring edge devices into virtual fogs to handle the heavy, delay-sensitive
learning tasks. By leveraging artificial intelligence (AI) to jointly process
sensed environmental data, handle the massive content statistics, and enforce
mobility control at network edges, the FEL makes
it possible for mobile users to cognitively share their data over the C-CCN in
5G. To validate the feasibility of the proposed framework, we design two
FEL-advanced cognitive services for C-CCN in 5G: 1) personalized network
acceleration, and 2) enhanced mobility management. We also present simulations
to show FEL's efficiency in serving mobile users' delay-sensitive content
retrieval and distribution in 5G.
Comment: Submitted to IEEE Communications Magazine, under review, Feb. 09, 201
Energy Efficiency in Cache Enabled Small Cell Networks With Adaptive User Clustering
Using a network of cache enabled small cells, traffic during peak hours can
be reduced considerably through proactively fetching the content that is most
probable to be requested. In this paper, we aim at exploring the impact of
proactive caching on an important metric for future generation networks,
namely, energy efficiency (EE). We argue that exploiting the correlation in
user content popularity profiles, in addition to the spatial distribution of
users with comparable request patterns, can considerably improve the
achievable energy efficiency of the network. In this paper, the problem of
optimizing EE is decoupled into two related subproblems. The first one
addresses the issue of content popularity modeling. While most existing works
assume similar popularity profiles for all users in the network, we consider an
alternative caching framework in which users are clustered according to their
content popularity profiles. In order to showcase the utility of the proposed
clustering scheme, we use a statistical model selection criterion, namely
Akaike information criterion (AIC). Using stochastic geometry, we derive a
closed-form expression of the achievable EE and we find the optimal active
small cell density vector that maximizes it. The second subproblem investigates
the impact of exploiting the spatial distribution of users with comparable
request patterns. After considering a snapshot of the network, we formulate a
combinatorial optimization problem to optimize content placement such that the
transmission power used is minimized. Numerical results show that the
clustering scheme considerably improves the cache hit probability and,
consequently, the EE compared with an unclustered approach. Simulations also
show that the small base station allocation algorithm improves the energy
efficiency and hit probability.
Comment: 30 pages, 5 figures, submitted to Transactions on Wireless
Communications (15-Dec-2016)
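The AIC comparison the abstract alludes to can be sketched as follows; the Gaussian likelihood and the toy demand data are assumptions for illustration, not the paper's exact popularity model:

```python
# Model selection with the Akaike information criterion (AIC = 2k - 2 ln L):
# compare "one popularity profile for all users" against "per-cluster
# profiles" and keep whichever model scores the LOWER AIC.
import math

def gaussian_log_lik(xs):
    """Max log-likelihood of xs under a Gaussian with fitted mean/variance."""
    n = len(xs)
    mu = sum(xs) / n
    var = max(sum((x - mu) ** 2 for x in xs) / n, 1e-9)  # guard var > 0
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def aic(groups, params_per_group=2):
    """Sum the fitted log-likelihood over clusters; k counts (mean, var)."""
    k = params_per_group * len(groups)
    log_l = sum(gaussian_log_lik(g) for g in groups)
    return 2 * k - 2 * log_l

# two clearly separated user-demand clusters
cluster_a = [1.0, 1.1, 0.9, 1.0]
cluster_b = [5.0, 5.2, 4.8, 5.0]
single = aic([cluster_a + cluster_b])   # one shared popularity profile
split = aic([cluster_a, cluster_b])     # per-cluster profiles
# split < single: AIC favors clustering despite the extra parameters
```

The extra parameters of the clustered model are penalized through k, so AIC only prefers the split when the likelihood gain outweighs that penalty, which is exactly the trade-off a criterion-based clustering scheme relies on.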