Towards Wi-Fi AP-Assisted Content Prefetching for On-Demand TV Series: A Reinforcement Learning Approach
The emergence of smart Wi-Fi APs (Access Points), which are equipped with large
storage space, opens a new research area on how to utilize these resources at
the edge of the network to improve users' quality of experience (QoE) (e.g., a
short startup delay and smooth playback). One important research interest in
this area is content prefetching, which predicts and accurately fetches content
ahead of users' requests to shift traffic away from peak periods.
In practice, however, users' differing video-watching patterns and varying
network connection status lead to a time-varying server load, which makes the
content prefetching problem challenging. To
understand this challenge, this paper first performs a large-scale measurement
study on users' AP connection and TV series watching patterns using
real-traces. Then, based on the obtained insights, we formulate the content
prefetching problem as a Markov Decision Process (MDP). The objective is to
strike a balance between the increased prefetching and storage cost incurred by
incorrect prediction and the reduced content download delay because of
successful prediction. A learning-based approach is proposed to solve this
problem, and three other algorithms are adopted as baselines. In particular,
first, we investigate the performance lower bound by using a random algorithm,
and the upper bound by using an ideal offline approach. Then, we present a
heuristic algorithm as another baseline. Finally, we design a reinforcement
learning algorithm that is more practical for online operation. Through
extensive trace-based experiments, we demonstrate the performance gain of our
design. Remarkably, our learning-based algorithm achieves better precision and
hit ratio (e.g., 80%), with about 70% (resp. 50%) cost savings compared to
the random (resp. heuristic) algorithm.
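The abstract above formulates prefetching as an MDP trading prefetching/storage cost against saved download delay, solved with reinforcement learning. The paper's actual states, actions, and rewards are not given here, so the following is a minimal hypothetical sketch: tabular Q-learning where the state is whether the user watched the previous episode of a series, the action is whether to prefetch the next one, and the reward charges an assumed prefetch cost against an assumed delay saving.

```python
import random

# Hypothetical sketch only: the state space, dynamics, and reward values
# below are illustrative assumptions, not the paper's formulation.

ACTIONS = ["skip", "prefetch"]

def reward(action, requested, prefetch_cost=1.0, delay_saving=3.0):
    """Trade off prefetch/storage cost against saved download delay."""
    if action == "prefetch":
        return delay_saving - prefetch_cost if requested else -prefetch_cost
    return 0.0  # skipping costs nothing but saves no delay

def train(episodes=5000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # State: 1 if the user watched the previous episode, else 0.
    q = {(s, a): 0.0 for s in (0, 1) for a in ACTIONS}
    state = 0
    for _ in range(episodes):
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        # Assumed dynamics: a user who watched the last episode requests
        # the next one with high probability (a binge-watching pattern).
        p_request = 0.8 if state == 1 else 0.2
        requested = rng.random() < p_request
        r = reward(action, requested)
        next_state = 1 if requested else 0
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # Standard Q-learning update.
        q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
        state = next_state
    return q

q = train()
```

Under these assumed numbers, prefetching pays off only for users in a binge pattern (expected reward 0.8 x 3 - 1 > 0), which the learned Q-values reflect.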
Assessing the Feasibility of Web-Request Prediction Models on Mobile Platforms
Prefetching web pages is a well-studied solution to reduce network latency by
predicting users' future actions based on their past behaviors. However, such
techniques are largely unexplored on mobile platforms. Today's privacy
regulations make it infeasible to explore prefetching with the usual strategy
of amassing large amounts of data over long periods and constructing
conventional, "large" prediction models. Our work is based on the observation
that this may not be necessary: Given previously reported mobile-device usage
trends (e.g., repetitive behaviors in brief bursts), we hypothesized that
prefetching should work effectively with "small" models trained on mobile-user
requests collected during much shorter time periods. To test this hypothesis,
we constructed a framework for automatically assessing prediction models, and
used it to conduct an extensive empirical study based on over 15 million HTTP
requests collected from nearly 11,500 mobile users during a 24-hour period,
resulting in over 7 million models. Our results demonstrate the feasibility of
prefetching with small models on mobile platforms, directly motivating future
work in this area. We further introduce several strategies for improving
prediction models while reducing the model size. Finally, our framework
provides the foundation for future explorations of effective prediction models
across a range of usage scenarios.Comment: MOBILESoft 202