Live Prefetching for Mobile Computation Offloading
Conventional designs of mobile computation offloading fetch user-specific
data to the cloud before computing, an approach called offline prefetching.
However, offline prefetching can result in excessive fetching of large volumes
of data and place heavy loads on radio-access networks. To solve this problem,
this paper proposes the novel technique of live prefetching, which seamlessly
integrates task-level computation prediction and prefetching into the
cloud-computing process of a large program with numerous tasks. The technique
avoids excessive fetching but retains the feature of leveraging prediction to
reduce the program runtime and mobile transmission energy. By modeling the
tasks in an offloaded program as a stochastic sequence, stochastic optimization
is applied to design fetching policies to minimize mobile energy consumption
under a deadline constraint. The policies enable real-time control of the
prefetched-data sizes of candidates for future tasks. For slow fading, the
optimal policy is derived and shown to have a threshold-based structure,
selecting candidate tasks for prefetching and controlling their prefetched data
based on their likelihoods. The result is extended to design close-to-optimal
prefetching policies for fast-fading channels. Compared with fetching without
prediction, live prefetching is shown theoretically to always achieve a
reduction in mobile energy consumption.
Comment: To appear in IEEE Trans. on Wireless Communications
Improving Mobile Video Streaming with Mobility Prediction and Prefetching in Integrated Cellular-WiFi Networks
We present and evaluate a procedure that utilizes mobility and throughput
prediction to prefetch video streaming data in integrated cellular and WiFi
networks. The effective integration of such heterogeneous wireless technologies
will be significant for supporting high performance and energy efficient video
streaming in ubiquitous networking environments. Our evaluation is based on
trace-driven simulation considering empirical measurements and shows how
various system parameters influence the performance, in terms of the number of
paused video frames and the energy consumption; these parameters include the
number of video streams, the mobile, WiFi, and ADSL backhaul throughput, and
the number of WiFi hotspots. Also, we assess the procedure's robustness to time
and throughput variability. Finally, we present our initial prototype that
implements the proposed approach.
Comment: 7 pages, 15 figures
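The core decision such a procedure must make can be sketched as follows. This is a minimal illustration with assumed numbers, not the paper's algorithm: given a mobility prediction of when the next WiFi hotspot will be reached, estimate how much video data must be prefetched over cellular so the playout buffer does not drain (and pause frames) before WiFi offloading becomes available.

```python
# Illustrative sketch (hypothetical parameters): bridge the gap between
# the current buffer level and the predicted time until WiFi contact.

def cellular_prefetch_bytes(buffer_bytes, playback_rate_bps,
                            seconds_until_wifi):
    """Extra bytes to fetch over cellular before WiFi is reachable."""
    needed = playback_rate_bps / 8 * seconds_until_wifi  # bytes consumed
    shortfall = needed - buffer_bytes
    return max(0, int(shortfall))

# e.g. a 1 Mbit/s stream, 2 MB already buffered, predicted WiFi
# contact in 60 s -> prefetch the remaining 5.5 MB over cellular
extra = cellular_prefetch_bytes(2_000_000, 1_000_000, 60)
```

The trace-driven evaluation in the abstract varies exactly the inputs this function takes: stream rate, available throughput, and hotspot timing.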
Flow Level QoE of Video Streaming in Wireless Networks
The Quality of Experience (QoE) of streaming service is often degraded by
frequent playback interruptions. To mitigate the interruptions, the media
player prefetches streaming contents before starting playback, at a cost of
delay. We study the QoE of streaming from the perspective of flow dynamics.
First, a framework is developed for QoE when streaming users join the network
randomly and leave after downloading completion. We compute the distribution of
prefetching delay using partial differential equations (PDEs), and the
probability generating function of playout buffer starvations using ordinary
differential equations (ODEs) for CBR streaming. Second, we extend our
framework to characterize the throughput variation caused by opportunistic
scheduling at the base station, and the playback variation of VBR streaming.
Our study reveals that flow dynamics are the fundamental cause of playback
starvation. The QoE of a streaming service is dominated by first moments, such
as the average throughput of opportunistic scheduling and the mean playback
rate, while the variances of throughput and playback rate have very limited
impact on starvation behavior.
Comment: 14 pages
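The trade-off the abstract analyzes, prefetching delay versus starvation probability, can be reproduced in a toy Monte Carlo simulation. All parameters here are assumed for illustration (exponential per-second download amounts, a CBR playout rate), and this replaces the paper's PDE/ODE analysis with brute-force simulation:

```python
# Monte Carlo sketch (assumed parameters): a larger prefetching
# threshold adds start-up delay but lowers the chance of playout-buffer
# starvation for a CBR stream under random throughput.
import random

def starvation_prob(prefetch_s, mean_thr_bps, rate_bps,
                    duration_s=120, runs=2000):
    starved = 0
    for run in range(runs):
        rng = random.Random(run)       # same draws for every threshold
        buf = prefetch_s * rate_bps    # bits buffered before playback
        for _ in range(duration_s):
            buf += rng.expovariate(1 / mean_thr_bps)  # random arrivals
            buf -= rate_bps                           # CBR playout
            if buf < 0:
                starved += 1
                break
    return starved / runs

# 1 Mbit/s stream over a link averaging 1.2 Mbit/s
p_2s = starvation_prob(2, 1.2e6, 1e6)    # 2 s of prefetching
p_10s = starvation_prob(10, 1.2e6, 1e6)  # 10 s of prefetching
```

Because each run reuses the same random draws, the 10-second threshold can never starve more often than the 2-second one, matching the qualitative behavior the abstract describes.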
Quality of experience-centric management of adaptive video streaming services : status and challenges
Video streaming applications currently dominate Internet traffic. Particularly, HTTP Adaptive Streaming (HAS) has emerged as the dominant standard for streaming videos over the best-effort Internet, thanks to its capability of matching the video quality to the available network resources. In HAS, the video client is equipped with a heuristic that dynamically decides the most suitable quality to stream the content, based on information such as the perceived network bandwidth or the video player buffer status. The goal of this heuristic is to optimize the quality as perceived by the user, the so-called Quality of Experience (QoE). Despite the many advantages brought by the adaptive streaming principle, optimizing users' QoE is far from trivial. Current heuristics are still suboptimal when sudden bandwidth drops occur, especially in wireless environments, thus leading to freezes in the video playout, the main factor influencing users' QoE. This issue is aggravated in case of live events, where the player buffer has to be kept as small as possible in order to reduce the playout delay between the user and the live signal. In light of the above, in recent years, several works have been proposed with the aim of extending the classical purely client-based structure of adaptive video streaming, in order to fully optimize users' QoE. In this article, a survey of research works on this topic is presented, together with a classification based on where the optimization takes place. This classification goes beyond client-based heuristics to investigate the usage of server- and network-assisted architectures and of new application and transport layer protocols. In addition, we outline the major challenges currently arising in the field of multimedia delivery, which are going to be of extreme relevance in future years.
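A client-side HAS heuristic of the kind the survey describes can be sketched in a few lines. The bitrate ladder, the buffer threshold, and the safety factor below are illustrative assumptions, not values from any specific player: the client picks the highest representation the estimated bandwidth can sustain, but drops to the lowest rung when the buffer runs low to avoid playout freezes.

```python
# Minimal sketch of a client-side HAS rate-adaptation heuristic
# (thresholds and safety margin are illustrative assumptions).

def select_bitrate(ladder_bps, est_bandwidth_bps, buffer_s,
                   low_buffer_s=5, safety=0.8):
    """ladder_bps: available representation bitrates, ascending."""
    if buffer_s < low_buffer_s:
        return ladder_bps[0]  # protect against freezes first
    # highest rung the (safety-discounted) bandwidth can sustain
    sustainable = [b for b in ladder_bps
                   if b <= est_bandwidth_bps * safety]
    return sustainable[-1] if sustainable else ladder_bps[0]

ladder = [500_000, 1_000_000, 2_500_000, 5_000_000]
choice = select_bitrate(ladder, est_bandwidth_bps=3_000_000, buffer_s=12)
```

Server- and network-assisted approaches, as classified in the survey, move parts of this decision (or hints for it) out of the client; the heuristic above represents only the purely client-based baseline.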
A performance model of speculative prefetching in distributed information systems
Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper investigates a complementary area that has been largely ignored: performance modelling. We use improvement in access time as the performance metric, for which we derive a formula in terms of resource parameters (time available and time required for prefetching) and speculative parameters (probabilities for next access). The performance maximization problem is expressed as a stretch knapsack problem. We develop an algorithm to maximize the improvement in access time by solving the stretch knapsack problem, using theoretically proven apparatus to reduce the search space. Integration between speculative prefetching and caching is also investigated, albeit under the assumption of equal item sizes.
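The flavor of the selection problem can be illustrated with a simple greedy heuristic. This is not the paper's stretch-knapsack algorithm, only a hypothetical sketch of the inputs it works over: each item has a probability of being accessed next and a fetch time, and the prefetcher must choose what to fetch within the idle time available so as to maximize the expected access-time saving.

```python
# Hypothetical greedy sketch of speculative-prefetch selection:
# rank items by expected saving per second of prefetching time and
# fill the available idle time. (The paper solves this exactly as a
# stretch knapsack; greedy is only an approximation.)

def choose_prefetches(items, idle_time_s):
    """items: list of (name, access_prob, fetch_time_s)."""
    ranked = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
    chosen, used = [], 0.0
    for name, prob, fetch_time in ranked:
        if used + fetch_time <= idle_time_s:
            chosen.append(name)
            used += fetch_time
    return chosen

picked = choose_prefetches(
    [("a", 0.5, 2.0), ("b", 0.3, 1.0), ("c", 0.2, 4.0)],
    idle_time_s=3.0)
```

As with the classic knapsack, the greedy ratio rule can miss the optimum, which is precisely why the paper develops search-space-reduction machinery for the exact problem.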