1,431 research outputs found
On Optimizing for Epidemic Live Streaming
Optimal dissemination schemes have previously been studied for peer-to-peer live streaming applications. Because live streaming is a delay-sensitive application, fine-tuning of dissemination parameters is crucial. In this paper, we investigate optimal sizing of chunks, the units of data exchange, and probe sets, the number of peers a given node probes before transmitting chunks. Chunk size can have a significant impact on diffusion rate (chunk miss ratio), diffusion delay, and overhead. The size of the probe set can also affect these metrics, primarily through the choices available for chunk dissemination. We perform extensive simulations on the so-called random-peer, latest-useful dissemination scheme. Our results show that size does matter, with the optimal size being not too small in both cases.
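As an illustration of the tunables discussed above, the round-based toy simulator below implements a random-peer, latest-useful push scheme; the peer count, grace period, and update rule are illustrative assumptions, not the paper's exact model.

```python
import random

def simulate(n_peers=200, n_chunks=100, probe_size=3, seed=0):
    """Toy round-based simulation of the random-peer, latest-useful scheme.

    Each round the source creates one chunk and pushes its newest chunk to
    one random peer; every peer then probes `probe_size` random peers and
    sends the most recent chunk that a probed peer is missing.
    Returns the chunk miss ratio over all (peer, chunk) pairs.
    """
    rng = random.Random(seed)
    have = [set() for _ in range(n_peers)]  # chunks held by each peer
    source = set()
    extra_rounds = 10  # grace period after the last chunk is created

    for t in range(n_chunks + extra_rounds):
        if t < n_chunks:
            source.add(t)
        if source:
            have[rng.randrange(n_peers)].add(max(source))
        sends = []  # collect first, apply after: synchronous rounds
        for u in range(n_peers):
            best = None  # (chunk, target) with the latest useful chunk
            for v in rng.sample(range(n_peers), probe_size):
                useful = have[u] - have[v]
                if useful and (best is None or max(useful) > best[0]):
                    best = (max(useful), v)
            if best is not None:
                sends.append(best)
        for c, v in sends:
            have[v].add(c)

    missed = sum(1 for h in have for c in range(n_chunks) if c not in h)
    return missed / (n_peers * n_chunks)
```

Sweeping `probe_size` (and the chunk granularity) in such a simulator is one way to reproduce the qualitative "size matters" observation.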
A Comprehensive Analysis of Swarming-based Live Streaming to Leverage Client Heterogeneity
Due to the lack of IP multicast support at Internet scale, over-the-top media
streams are delivered with the help of overlays as used by content delivery
networks and their peer-to-peer (P2P) extensions. In this context,
mesh/pull-based swarming plays an important role either as pure streaming
approach or in combination with tree/push mechanisms. However, the impact of
realistic client populations with heterogeneous resources is not yet fully
understood. In this technical report, we contribute to closing this gap by
mathematically analysing the most basic scheduling mechanisms, latest deadline
first (LDF) and earliest deadline first (EDF), in a continuous-time Markov chain
framework and combining them into a simple, yet powerful, mixed strategy to
leverage inherent differences in client resources. The main contributions are
twofold: (1) a mathematical framework for swarming on random graphs is proposed
with a focus on LDF and EDF strategies in heterogeneous scenarios; (2) a mixed
strategy, named SchedMix, is proposed that leverages peer heterogeneity. The
proposed strategy is shown to outperform the other two strategies
using different abstractions: a mean-field theoretic analysis of buffer
probabilities, simulations of a stochastic model on random graphs, and a
full-stack implementation of a P2P streaming system.
Comment: Technical report and supplementary material to http://ieeexplore.ieee.org/document/7497234
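The two scheduling policies, and the idea behind mixing them, can be sketched as selectors over the set of missing chunk ids (lower id = earlier playback deadline); the `schedmix` threshold rule below is a hypothetical stand-in for the paper's actual mixing strategy.

```python
def edf(missing):
    """Earliest deadline first: fill the buffer hole closest to playback."""
    return min(missing)

def ldf(missing):
    """Latest deadline first: fetch the freshest missing chunk."""
    return max(missing)

def schedmix(missing, upload_capacity, threshold=2):
    """Toy mixed strategy: peers with spare upload capacity pull fresh
    chunks (LDF) so new data is injected quickly; weak peers patch holes
    near their deadline (EDF).  The threshold rule is illustrative only."""
    return ldf(missing) if upload_capacity >= threshold else edf(missing)
```

The intuition: LDF peers act as injectors of new chunks into the swarm, while EDF peers maximize their own playback continuity; a mixed strategy assigns each role to the peers best suited for it.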
Dynamic Resource Management in Clouds: A Probabilistic Approach
Dynamic resource management has become an active area of research in the
Cloud Computing paradigm. The cost of resources varies significantly depending on
how they are configured and used. Hence, efficient management of resources is of
prime interest to both Cloud Providers and Cloud Users. In this work we suggest
a probabilistic resource provisioning approach that can be exploited as the
input of a dynamic resource management scheme. Using a Video on Demand use case
to justify our claims, we propose an analytical model, inspired by standard
models developed for epidemic spreading, to represent sudden and intense
workload variations. We show that the resulting model verifies a Large
Deviation Principle that statistically characterizes extreme rare events, such
as the ones produced by "buzz/flash crowd effects" that may cause workload
overflow in the VoD context. This analysis provides valuable insight into the
abnormal behaviors that systems can be expected to exhibit. We exploit the
information obtained from the Large Deviation Principle in the proposed Video on
Demand use case to define policies (Service Level Agreements). We believe these
policies for elastic resource provisioning and usage may be of interest to all
stakeholders in the emerging context of cloud networking.
Comment: IEICE Transactions on Communications (2012). arXiv admin note: substantial text overlap with arXiv:1209.515
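A minimal discrete-time SIR-style sketch of a buzz-driven workload, in the spirit of the epidemic-inspired model above (all parameters and the capping rule are illustrative assumptions, not the paper's calibrated model):

```python
def buzz_workload(n=10_000, beta=3e-4, gamma=0.1, i0=5, steps=100):
    """Discrete-time SIR-style buzz model: S = users who may catch the
    buzz, I = users actively requesting the video (the workload),
    R = users who lost interest.  New infections are capped by S so the
    coarse time step cannot drive populations negative."""
    s, i = n - i0, i0
    load = []
    for _ in range(steps):
        new_inf = min(beta * s * i, s)
        s, i = s - new_inf, i + new_inf - gamma * i
        load.append(i)
    return load

demand = buzz_workload()   # active-viewer curve: sharp ramp, peak, decay
peak = max(demand)         # the extreme load an SLA should anticipate
```

The sharp ramp-up and slow decay of `demand` is the kind of rare-but-extreme trajectory that a Large Deviation analysis characterizes statistically, and that a static provisioning rule would miss.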
On Resource Aware Algorithms in Epidemic Live Streaming
Epidemic-style diffusion schemes have been previously proposed for achieving
peer-to-peer live streaming. Their performance trade-offs have been deeply
analyzed for homogeneous systems, where all peers have the same upload
capacity. However, epidemic schemes designed for heterogeneous systems have not
been completely understood yet. In this report we focus on the peer selection
process and propose a generic model that encompasses a large class of
algorithms. The process is modeled as a combination of two functions, an aware
one and an agnostic one. By means of simulations, we analyze the
awareness-agnostism trade-offs on the peer selection process and the impact of
the source distribution policy in non-homogeneous networks. We highlight that
the early diffusion of a given chunk is crucial for its overall diffusion
performance, and a fairness trade-off arises between the performance of
heterogeneous peers, as a function of the level of awareness.
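The aware/agnostic combination in the peer-selection process can be sketched as follows, assuming a capacity-weighted choice as the aware component and a uniform choice as the agnostic one (the mixing parameter `awareness` is a hypothetical knob, not the paper's exact formulation):

```python
import random

def select_peer(peers, capacity, awareness=0.5, rng=random):
    """Peer-selection sketch mixing an aware and an agnostic component.
    With probability `awareness`, pick a peer weighted by its upload
    capacity (aware); otherwise pick uniformly at random (agnostic)."""
    if rng.random() < awareness:
        return rng.choices(peers, weights=[capacity[p] for p in peers])[0]
    return rng.choice(peers)
```

Setting `awareness=0` recovers a purely agnostic (random-peer) scheme, `awareness=1` a fully capacity-aware one; intermediate values trade diffusion speed against fairness across heterogeneous peers.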
Finding the Graph of Epidemic Cascades
We consider the problem of finding the graph on which an epidemic cascade
spreads, given only the times when each node gets infected. While this is a
problem of importance in several contexts -- offline and online social
networks, e-commerce, epidemiology, vulnerabilities in infrastructure networks
-- there has been very little work, analytical or empirical, on finding the
graph. Clearly, it is impossible to do so from just one cascade; our interest
is in learning the graph from a small number of cascades.
For the classic and popular "independent cascade" SIR epidemics, we
analytically establish the number of cascades required by both the global
maximum-likelihood (ML) estimator, and a natural greedy algorithm. Both results
are based on a key observation: the global graph learning problem decouples
into local problems -- one for each node. For a node of degree d, we show
that its neighborhood can be reliably found once it has been infected
O(d² log n) times (for ML on general graphs) or O(d log n) times (for greedy on
trees). We also provide a corresponding information-theoretic lower bound of
Ω(d log n); thus our bounds are essentially tight. Furthermore, if we
are given side-information in the form of a super-graph of the actual graph (as
is often the case), then the number of cascade samples required -- in all cases
-- becomes independent of the network size n.
Finally, we show that for a very general SIR epidemic cascade model, the
Markov graph of infection times is obtained via the moralization of the network
graph.
Comment: To appear in Proc. ACM SIGMETRICS/Performance 201
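A simplified greedy sketch of per-node neighborhood recovery from cascades, in the spirit of the decoupling observation above (an illustrative variant, not the paper's exact estimator):

```python
def greedy_parents(cascades, node):
    """Greedily estimate the neighborhood of `node` from cascades.

    Each cascade maps node -> infection time (absent if never infected).
    A candidate u 'explains' a cascade in which `node` got infected at
    time t if u was infected at t - 1.  Repeatedly add the candidate
    explaining the most still-unexplained cascades."""
    to_explain = [c for c in cascades if node in c and c[node] > 0]
    candidates = {u for c in to_explain for u in c if u != node}
    parents = set()
    while to_explain:
        def score(u):
            return sum(1 for c in to_explain if c.get(u) == c[node] - 1)
        best = max(candidates - parents, key=score, default=None)
        if best is None or score(best) == 0:
            break
        parents.add(best)
        to_explain = [c for c in to_explain if c.get(best) != c[node] - 1]
    return parents
```

Because the problem decouples, this routine is run independently for every node; the abstract's sample-complexity bounds say how many cascades are needed before such local estimates become reliable.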
Infective flooding in low-duty-cycle networks, properties and bounds
Flooding information is an important function in many networking applications. In some networks, such as wireless sensor networks or some ad-hoc networks, it is so essential that it dominates the performance of the entire system. Exploiting recent results on the distributed computation of the eigenvector centrality of nodes in the network graph, together with classical dynamic diffusion models on graphs, this paper derives a novel theoretical framework for efficient resource allocation to flood information in mesh networks with low duty-cycling, without the need to build a distribution tree or any other distribution overlay. Furthermore, the method requires only local computations based on each node's neighborhood. The model provides lower and upper stochastic bounds, holding with high probability, on the flooding delay averaged over all possible sources. We show that the lower bound is very close to the theoretical optimum. A simulation-based implementation allows the study of specific topologies and graph models, as well as scheduling heuristics and packet losses. Simulation experiments show that simple protocols based on our resource allocation strategy can easily achieve results very close to the theoretical minimum obtained by building optimized overlays on the network.
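A centralized sketch of the centrality-based allocation idea: compute eigenvector centrality by power iteration, then split the flooding budget proportionally (the paper computes centrality distributedly from local neighborhoods; the proportional allocation rule here is an illustrative assumption):

```python
def eigenvector_centrality(adj, iters=100, tol=1e-9):
    """Power iteration for eigenvector centrality (centralized sketch).
    adj maps node -> list of neighbours.  Iterating x + Ax instead of
    Ax shifts the spectrum and avoids oscillation on bipartite graphs."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        nxt = {v: x[v] + sum(x[u] for u in adj[v]) for v in adj}
        norm = max(nxt.values())
        nxt = {v: s / norm for v, s in nxt.items()}
        done = max(abs(nxt[v] - x[v]) for v in adj) < tol
        x = nxt
        if done:
            break
    return x

def allocate(adj, budget):
    """Give each node a share of the flooding (wake-up/transmission)
    budget proportional to its centrality -- an illustrative rule."""
    c = eigenvector_centrality(adj)
    total = sum(c.values())
    return {v: budget * c[v] / total for v in adj}
```

Highly central nodes relay more copies per unit time, which is the intuition behind allocating more of the duty-cycle budget to them.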
How to Explore a Sustainable Profit Model for Self-publishing--Based on the Perspective of New Institutional Economics
As a new media form of the new era, self-media has been developing rapidly in China. With economic globalization, continuous technological improvement, and the spread and maturation of mobile Internet technology, the self-media industry faces great opportunities and challenges. Self-media monetization channels have become increasingly diverse, and a variety of more systematic profit models have gradually taken shape, but overall profitability remains unpromising, which is closely related to high transaction costs and the lack of a management system. Based on the existing profit models of self-media, this paper explores more sustainable profit models from the perspective of new institutional economics, using transaction costs, contracts, institutional change, and other related theories, and gives policy recommendations for the problems identified.
QoE-Aware Resource Allocation For Crowdsourced Live Streaming: A Machine Learning Approach
In the last decade, with the technological advancement of mobile devices
and the revolution in wireless mobile network access, the world has witnessed an
explosion in crowdsourced live streaming. Ensuring a stable, high-quality playback
experience is essential to maximize the viewers' Quality of Experience and the
content providers' profits. This can be achieved by adopting a geo-distributed cloud
infrastructure to allocate multimedia resources as close as possible to viewers, in
order to minimize the access delay and video stalls.
Additionally, because of the instability of network conditions and the heterogeneity of
the end-users' capabilities, transcoding the original video into multiple bitrates is
required. Video transcoding is a computationally expensive process, where generally a
single cloud instance needs to be reserved to produce one single video bitrate
representation. On-demand renting of resources or inadequate resource reservation
may delay the video playback or force serving the viewers at a lower quality. On
the other hand, if resource provisioning is much higher than required, the
extra resources are wasted.
In this thesis, we introduce a prediction-driven resource allocation framework, to
maximize the QoE of viewers and minimize the resource allocation cost. First, by
exploiting the viewers' locations available in our unique dataset, we implement a
machine learning model to predict the number of viewers near each geo-distributed cloud
site. Second, based on the predicted results, which proved to be close to the actual values,
we formulate an optimization problem to proactively allocate resources in the viewers'
proximity. Additionally, we present a trade-off between the video access delay and
the cost of resource allocation.
Considering the complexity of our offline optimization and its infeasibility for
responding to the volume of viewing requests in real time, we further extend our work
by introducing a resource forecasting and reservation framework for geo-distributed cloud sites. First,
we formulate an offline optimization problem to allocate transcoding resources in the
viewers' proximity, while creating a trade-off between the network cost and the viewers'
QoE. Second, based on the optimizer's resource allocation decisions on historical live
videos, we create our time series datasets containing historical records of the optimal
resources needed at each geo-distributed cloud site. Finally, we adopt machine learning
to build our distributed time series forecasting models to proactively forecast the exact
transcoding resources needed ahead of time at each geo-distributed cloud site.
The results showed that the predicted number of transcoding resources needed at each
cloud site is close to the optimal number of transcoding resources.
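As a toy counterpart to the forecasting-and-reservation pipeline, a seasonal-naive baseline with a safety margin (both the baseline and the `headroom` factor are illustrative assumptions, far simpler than the thesis's learned models):

```python
import math

def reserve(history, season=24, headroom=1.2):
    """Per-site reservation sketch: forecast the next step from the same
    position one season ago (e.g. the same hour yesterday), falling back
    to the last observation for short histories, then over-provision by
    a safety headroom to absorb forecast error."""
    base = history[-season] if len(history) >= season else history[-1]
    return math.ceil(base * headroom)
```

In the thesis's setting, `history` would be the optimizer's per-site record of transcoding instances, and the learned forecasting models would replace the seasonal-naive `base`.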