Analysis of Buffer Starvation with Application to Objective QoE Optimization of Streaming Services
Our purpose in this paper is to characterize buffer starvations for streaming services. The buffer is modeled as an M/M/1 queue, extended to account for bursty arrivals. When the buffer is empty, the service restarts after a certain number of packets are \emph{prefetched}. With this goal, we propose two approaches to obtain the \emph{exact distribution} of the number of buffer starvations: one based on the \emph{Ballot theorem}, the other on recursive equations. The Ballot theorem approach yields an explicit result; we extend it to the scenario with a constant playback rate using Tak\'{a}cs' Ballot theorem. The recursive approach, though not offering an explicit result, can obtain the distribution of starvations for non-i.i.d. (non-independent and identically distributed) arrival processes; an ON/OFF bursty arrival process is considered in this work. We further compute the starvation probability as a function of the number of prefetched packets for a large number of files via a fluid analysis. Among the many potential applications of starvation analysis, we show how to apply it to optimize the objective quality of experience (QoE) of media streaming by exploiting the tradeoff between startup/rebuffering delay and starvations.
Comment: 9 pages, 7 figures; IEEE Infocom 201
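As an illustrative companion to this model (not the paper's analytical derivation via the Ballot theorem), a discrete-time simulation can estimate the starvation count for given arrival/playback probabilities and a prefetching threshold. All parameter names here are assumptions for the sketch:

```python
import random

def count_starvations(n_packets, p_arrive, p_play, prefetch, seed=0):
    """Discrete-time approximation of the M/M/1 playout buffer.

    Arrivals occur with probability p_arrive per slot; playback consumes
    one packet with probability p_play per slot once started. Playback
    (re)starts only after `prefetch` packets are buffered. Returns the
    number of starvation events before n_packets have arrived.
    """
    rng = random.Random(seed)
    buffered, arrived, playing, starvations = 0, 0, False, 0
    while arrived < n_packets:
        if rng.random() < p_arrive:
            buffered += 1
            arrived += 1
        if not playing and buffered >= prefetch:
            playing = True          # prefetching threshold reached
        if playing and rng.random() < p_play:
            buffered -= 1
            if buffered == 0:       # buffer ran dry mid-playback
                playing = False
                starvations += 1
    return starvations
```

Increasing `prefetch` trades a longer startup delay for fewer starvations, which is exactly the QoE tradeoff the paper optimizes.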
Efficient Proactive Caching for Supporting Seamless Mobility
We present a distributed proactive caching approach that exploits user mobility information to decide where to proactively cache data to support seamless mobility, while efficiently utilizing cache storage through a congestion pricing scheme. The proposed approach is applicable to the case where objects have different sizes and to a two-level cache hierarchy, for both of which the proactive caching problem is hard. Additionally, our modeling framework considers both the case where the delay is independent of the requested data object's size and the case where the delay is a function of the object size. Our evaluation results show how various system parameters influence the delay gains of the proposed approach, which achieves robust and good performance relative to an oracle and to an optimal scheme for a flat cache structure.
Comment: 10 pages, 9 figures
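A minimal sketch of the idea (the paper's actual congestion-pricing mechanism and two-level hierarchy are richer than this): greedily prefetch into the candidate cells offering the best expected delay saving per unit of cache price, under a total budget. The function name, parameters, and the greedy rule are illustrative assumptions, not the paper's algorithm:

```python
def plan_proactive_cache(transition_probs, delay_saving, prices, budget):
    """Greedy proactive-caching sketch.

    transition_probs: {cell: prob. the user moves there next}
    delay_saving:     delay saved if the object is already cached there
    prices:           {cell: current congestion price of caching there}
    budget:           total cache cost we are willing to spend
    Returns the list of cells chosen for prefetching.
    """
    # Rank cells by expected saving per unit of congestion-priced cost.
    scored = sorted(
        transition_probs,
        key=lambda c: transition_probs[c] * delay_saving / prices[c],
        reverse=True,
    )
    chosen, spent = [], 0.0
    for cell in scored:
        if spent + prices[cell] <= budget:
            chosen.append(cell)
            spent += prices[cell]
    return chosen
```

Raising a cell's price as its cache fills is what makes the pricing "congestion" based: heavily used caches become less attractive targets.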
Optimizing Hypervideo Navigation Using a Markov Decision Process Approach
Interaction with hypermedia documents is a required feature for sophisticated yet flexible multimedia applications. This paper presents an innovative adaptive technique for streaming hypervideo that takes user behaviour into account. The objective is to optimize hypervideo prefetching in order to reduce the latency caused by the network. The technique is based on a model provided by a Markov Decision Process approach. The problem is solved using two methods: classical stochastic dynamic programming algorithms and reinforcement learning. Experimental results under stochastic network conditions are very promising.
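The MDP formulation can be sketched with generic value iteration, one of the classical stochastic dynamic programming methods the paper mentions; the state/action encoding suggested in the docstring is an assumption, not the paper's exact model:

```python
def value_iteration(states, actions, transit, reward, gamma=0.9, eps=1e-6):
    """Generic value iteration. For hypervideo prefetching, a state could
    encode (current scene, buffer level) and an action which linked scene
    to prefetch next -- that encoding is an illustrative assumption.

    transit(s, a) -> list of (prob, next_state); reward(s, a) -> float.
    Returns a greedy policy mapping each state to its best action.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                reward(s, a) + gamma * sum(p * V[s2] for p, s2 in transit(s, a))
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:          # value function has converged
            break
    # Extract the greedy policy with respect to the converged values.
    return {s: max(actions,
                   key=lambda a: reward(s, a)
                   + gamma * sum(p * V[s2] for p, s2 in transit(s, a)))
            for s in states}
```

Reinforcement learning, the paper's second method, solves the same Bellman equations without requiring `transit` to be known in advance.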
Patterns of Scalable Bayesian Inference
Datasets are growing not just in size but in complexity, creating a demand
for rich models and quantification of uncertainty. Bayesian methods are an
excellent fit for this demand, but scaling Bayesian inference is a challenge.
In response to this challenge, there has been considerable recent work based on
varying assumptions about model structure, underlying computational resources,
and the importance of asymptotic correctness. As a result, there is a zoo of
ideas with few clear overarching principles.
In this paper, we seek to identify unifying principles, patterns, and
intuitions for scaling Bayesian inference. We review existing work on utilizing
modern computing resources with both MCMC and variational approximation
techniques. From this taxonomy of ideas, we characterize the general principles
that have proven successful for designing scalable inference procedures and
comment on the path forward.
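One concrete pattern in this space is minibatch MCMC, of which stochastic gradient Langevin dynamics (SGLD) is a well-known representative: noisy minibatch gradients of the log posterior plus injected Gaussian noise. The toy Gaussian-mean model below is an illustrative assumption, not an example from the paper:

```python
import math
import random

def sgld_gaussian_mean(data, n_steps=2000, batch=10, step=1e-3, seed=0):
    """SGLD sketch for the toy model x_i ~ N(theta, 1), theta ~ N(0, 10).

    Each step uses an unbiased minibatch estimate of the full-data
    gradient, rescaled by n/batch, so the chain targets (approximately)
    the full posterior while touching only `batch` points per step.
    Returns the posterior-mean estimate from the second half of the chain.
    """
    rng = random.Random(seed)
    n = len(data)
    theta, samples = 0.0, []
    for _ in range(n_steps):
        mb = [data[rng.randrange(n)] for _ in range(batch)]
        # Prior gradient + rescaled minibatch likelihood gradient.
        grad = -theta / 10.0 + (n / batch) * sum(x - theta for x in mb)
        # Half-step gradient drift plus Langevin noise of matching scale.
        theta += 0.5 * step * grad + math.sqrt(step) * rng.gauss(0.0, 1.0)
        samples.append(theta)
    tail = samples[n_steps // 2:]
    return sum(tail) / len(tail)
```

The `n/batch` rescaling and the `sqrt(step)` noise scale are the two ingredients that distinguish SGLD from plain stochastic gradient ascent.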
From Traditional Adaptive Data Caching to Adaptive Context Caching: A Survey
Context data is in demand more than ever with the rapid increase in the development of context-aware Internet of Things applications. Research in context and context-awareness is being conducted to broaden its applicability in light of many practical and technical challenges. One of these challenges is maintaining performance when responding to a large number of context queries. Context Management Platforms that infer and deliver context to applications measure this problem using Quality of Service (QoS) parameters. Although caching is a proven way to improve QoS, the transiency of context, together with features such as the variability and heterogeneity of context queries, poses an additional real-time cost management problem. This paper presents a critical survey of the state of the art in adaptive data caching with the objective of developing a body of knowledge on cost- and performance-efficient adaptive caching strategies. We comprehensively survey a large number of research publications and evaluate, compare, and contrast different techniques, policies, approaches, and schemes in adaptive caching. Our critical analysis is motivated by a focus on adaptively caching context as a core research problem. A formal definition of adaptive context caching is then proposed, followed by the identified features and requirements of a well-designed, objectively optimal adaptive context caching strategy.
Comment: this paper was under review with the ACM Computing Surveys journal at the time of publication on arXiv.org
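A toy cost model conveys the flavor of the problem the survey motivates (real adaptive context caching strategies are far more elaborate); every parameter name and the linear cost structure here are illustrative assumptions:

```python
def should_cache(hit_rate_est, refresh_cost, retrieval_cost, lifetime,
                 storage_cost):
    """Cost-model sketch for caching a transient context item.

    Cache only if the expected retrieval savings over the item's short
    validity lifetime outweigh the cost of keeping it fresh plus storage.

    hit_rate_est:   expected queries per second for this context item
    refresh_cost:   cost per second to keep the transient item valid
    retrieval_cost: cost to derive/fetch the item on a cache miss
    lifetime:       seconds the item stays relevant at all
    storage_cost:   cost per second of occupying the cache
    """
    expected_saving = hit_rate_est * retrieval_cost * lifetime
    expected_cost = (refresh_cost + storage_cost) * lifetime
    return expected_saving > expected_cost
```

Transiency is what separates this from classic data caching: a context item can expire before it ever earns back its refresh cost, so the decision must be re-evaluated continuously.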
A taxonomy of web prediction algorithms
Web prefetching techniques are an attractive solution for reducing user-perceived latency. These techniques are driven by a prediction engine or algorithm that guesses the following actions of web users. A large number of prediction algorithms have been proposed since the first prefetching approach was published, although it is only over the last two or three years that they have begun to be successfully implemented in commercial products. These algorithms can be implemented in any element of the web architecture and can use a wide variety of information as input, which affects their structure, data system, computational resources, and accuracy. Knowledge of the input information, and an understanding of how it can be handled to make predictions, can help improve the design of current prediction engines and, consequently, of prefetching techniques. This paper analyzes fifty of the most relevant algorithms proposed over 15 years of prefetching research and proposes a taxonomy in which the algorithms are classified according to the input data they use. For each group, the main advantages and shortcomings are highlighted.
© 2012 Elsevier Ltd. All rights reserved. This work has been partially supported by the Spanish Ministry of Science and Innovation under Grant TIN2009-08201, Generalitat Valenciana under Grant GV/2011/002, and Universitat Politecnica de Valencia under Grant PAID-06-10/2424.
Domenech, J.; De La Ossa Perez, B. A.; Sahuquillo Borrás, J.; Gil Salinas, J. A.; Pont Sanjuan, A. (2012). A taxonomy of web prediction algorithms. Expert Systems with Applications, 39(9), 8496-8502. https://doi.org/10.1016/j.eswa.2012.01.140
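As a minimal instance of the Markov-model family such taxonomies typically include (not one of the fifty surveyed algorithms specifically), a first-order predictor learns, from past sessions, the most frequent successor of each page:

```python
from collections import defaultdict

class MarkovPredictor:
    """First-order Markov next-page predictor: predict the next page as
    the most frequent successor of the current one in past sessions.
    """

    def __init__(self):
        # counts[page][next_page] = observed transitions page -> next_page
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, session):
        """Update transition counts from one ordered browsing session."""
        for cur, nxt in zip(session, session[1:]):
            self.counts[cur][nxt] += 1

    def predict(self, page):
        """Return the most likely next page, or None if page is unseen."""
        successors = self.counts.get(page)
        if not successors:
            return None
        return max(successors, key=successors.get)
```

Higher-order variants condition on the last k pages instead of one, trading memory and training data for accuracy, which is precisely the kind of tradeoff a taxonomy by input data highlights.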
Building Internet caching systems for streaming media delivery
The proxy has been widely and successfully used to cache static Web objects fetched by a client so that subsequent clients requesting the same Web objects can be served directly from the proxy instead of from other sources far away, thus reducing the server's load, the network traffic, and the client response time. However, with the dramatic increase of streaming media objects emerging on the Internet, the existing proxy cannot deliver them efficiently due to their large sizes and clients' real-time requirements.

In this dissertation, we design, implement, and evaluate cost-effective and high-performance proxy-based Internet caching systems for streaming media delivery. Addressing the conflicting performance objectives of streaming media delivery, we first propose an efficient segment-based streaming media proxy system model. This model has guided us in designing a practical streaming proxy, called Hyper-Proxy, aimed at delivering streaming media data to clients with minimum playback jitter and a small startup latency while achieving high caching performance. Second, we have implemented Hyper-Proxy by leveraging the existing Internet infrastructure. Hyper-Proxy enables streaming service on common Web servers. The evaluation of Hyper-Proxy in both a global Internet environment and a local network environment shows that it can provide satisfying streaming performance to clients while maintaining good cache performance. Finally, to further improve streaming delivery efficiency, we propose a group of Shared Running Buffers (SRB) based proxy caching techniques to effectively utilize the proxy's memory. The SRB algorithms can significantly reduce the media server's and proxy's load and network traffic, and relieve the bottlenecks of disk bandwidth and network bandwidth.

The contributions of this dissertation are threefold: (1) we have studied several critical performance trade-offs and provided insights into Internet media content caching and delivery; our understanding further leads us to establish an effective streaming system optimization model; (2) we have designed and evaluated several efficient algorithms to support Internet streaming content delivery, including segment caching, segment prefetching, and memory locality exploitation for streaming; (3) having addressed several system challenges, we have successfully implemented a real streaming proxy system and deployed it in a large industrial enterprise.
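A small sketch of one common segment-based idea, exponentially sized segments, illustrates why segmentation helps (the abstract does not specify Hyper-Proxy's exact segmentation, so this is an assumption about the general technique, not its implementation):

```python
def make_segments(n_blocks):
    """Partition a media object of n_blocks blocks into exponentially
    sized segments (1, 2, 4, ... blocks). Early segments are small, so
    the popular prefixes of many objects fit in cache cheaply, cutting
    startup latency; later segments can be fetched or prefetched while
    the prefix plays. Returns a list of (start_block, length) pairs.
    """
    segments, start, length = [], 0, 1
    while start < n_blocks:
        length = min(length, n_blocks - start)  # clamp the final segment
        segments.append((start, length))
        start += length
        length *= 2
    return segments
```

Caching decisions are then made per segment rather than per object, so a proxy can keep the 1-block first segment of thousands of videos in the space one whole video would occupy.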