11 research outputs found

    Avoiding Interruptions - QoE Trade-offs in Block-coded Streaming Media Applications

    We take an analytical approach to studying Quality of user Experience (QoE) for video streaming applications. First, we show that random linear network coding applied to blocks of video frames can significantly simplify the packet requests at the network layer and save resources by avoiding duplicate packet reception. Network coding allows us to model the receiver's buffer as a queue with Poisson arrivals and deterministic departures. We consider the probability of interruption in video playback as well as the number of initially buffered packets (initial waiting time) as the QoE metrics. We characterize the optimal trade-off between these metrics by providing upper and lower bounds on the minimum initial buffer size required to achieve a certain level of interruption probability for different regimes of the system parameters. Our bounds are asymptotically tight as the file size goes to infinity. Comment: Submitted to ISIT 2010 - Full version
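As an illustration of the queueing model in this abstract, the sketch below simulates a receiver buffer with Poisson packet arrivals and deterministic departures (one packet consumed per playout slot) and estimates the interruption probability for a given initial buffer size. The slotted dynamics and all function and parameter names are illustrative simplifications, not the paper's formulation.

```python
import math
import random

def sample_poisson(lam):
    # Knuth's method: count uniform draws until the running product
    # of uniforms falls below exp(-lam).
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def interruption_prob(arrival_rate, file_size, initial_buffer, trials=2000):
    """Monte Carlo estimate of the playback-interruption probability:
    packets arrive as Poisson(arrival_rate) per playout slot, one packet
    is consumed per slot, and an interruption occurs if the buffer is
    empty at a playout instant before the whole file has been played."""
    fails = 0
    for _ in range(trials):
        q = initial_buffer                      # packets currently buffered
        to_receive = file_size - initial_buffer  # packets still in flight
        played = 0
        while played < file_size:
            a = min(sample_poisson(arrival_rate), to_receive)
            q += a
            to_receive -= a
            if q == 0:                          # buffer underflow
                fails += 1
                break
            q -= 1                              # play one packet
            played += 1
    return fails / trials
```

Sweeping `initial_buffer` for arrival rates near the play rate reproduces the qualitative trade-off studied in the paper: a larger initial buffer trades start-up delay for fewer interruptions.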

    Access-Network Association Policies for Media Streaming in Heterogeneous Environments

    We study the design of media streaming applications in the presence of multiple heterogeneous wireless access methods with different throughputs and costs. Our objective is to analytically characterize the trade-off between the usage cost and the Quality of user Experience (QoE), which is represented by the probability of interruption in media playback and the initial waiting time. We model each access network as a server that provides packets to the user according to a Poisson process with a certain rate and cost. Blocks are coded using random linear codes to alleviate the duplicate packet reception problem. Users must decide how many packets to buffer before playout, and which networks to access during playout. We design, analyze and compare several control policies with a threshold structure. We formulate the problem of finding the optimal control policy as an MDP with a probabilistic constraint. We present the HJB equation for this problem by expanding the state space, and exploit it as a verification method for optimality of the proposed control law. Comment: submitted to CDC 201
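A minimal sketch of one threshold policy of the kind this abstract describes: a cheap always-on network plus a costly fast network that is switched on only while the buffer is below a threshold. The slotted model, names, and parameters are assumptions for illustration, not the paper's formulation.

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's method for sampling a Poisson random variable.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def run_threshold_policy(threshold, cheap_rate, fast_rate, fast_cost,
                         initial_buffer, file_size, seed=0):
    """Simulate one streaming session. The cheap network always delivers
    packets; the fast network (with per-slot cost `fast_cost`) is used only
    when fewer than `threshold` packets are buffered.
    Returns (interrupted, total_cost)."""
    rng = random.Random(seed)
    q = initial_buffer
    to_receive = file_size - initial_buffer
    cost = 0.0
    played = 0
    while played < file_size:
        rate = cheap_rate
        if q < threshold:
            rate += fast_rate
            cost += fast_cost        # pay for one slot of fast-network use
        a = min(sample_poisson(rate, rng), to_receive)
        q += a
        to_receive -= a
        if q == 0:
            return True, cost        # buffer underflow -> interruption
        q -= 1
        played += 1
    return False, cost
```

Sweeping `threshold` traces out the cost-versus-interruption-probability trade-off that the paper characterizes analytically.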

    Achieving the Optimal Streaming Capacity and Delay Using Random Regular Digraphs in P2P Networks

    In earlier work, we showed that it is possible to achieve O(log N) streaming delay with high probability in a peer-to-peer network, where each peer has as little as four neighbors, while achieving any arbitrary fraction of the maximum possible streaming rate. However, the constant in the O(log N) delay term becomes rather large as we get closer to the maximum streaming rate. In this paper, we design an alternative pairing and chunk dissemination algorithm that allows us to transmit at the maximum streaming rate while ensuring that all but a negligible fraction of the peers receive the data stream with O(log N) delay with high probability. The result is established by examining the properties of the graph formed by the union of two or more random 1-regular digraphs, i.e., directed graphs in which each node has incoming and outgoing degree equal to one.
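A random 1-regular digraph on N nodes is exactly a uniformly random permutation (node i points to perm[i]). The sketch below builds the union of k such digraphs and measures BFS distances, which for k = 2 are typically O(log N). This only illustrates the random object analyzed in the paper, not its pairing and chunk-dissemination algorithm.

```python
import random
from collections import deque

def union_of_permutation_digraphs(n, k, rng):
    """Union of k random 1-regular digraphs: each digraph is a random
    permutation, contributing the edge i -> perm[i] for every node i."""
    adj = [set() for _ in range(n)]
    for _ in range(k):
        perm = list(range(n))
        rng.shuffle(perm)
        for i, j in enumerate(perm):
            adj[i].add(j)
    return adj

def bfs_ecc(adj, src):
    """Return (eccentricity of src, number of nodes reachable from src)."""
    dist = {src: 0}
    dq = deque([src])
    while dq:
        u = dq.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                dq.append(v)
    return max(dist.values()), len(dist)
```

For n = 1000 and k = 2, almost every node reaches almost all others within a few times log2(n) hops, consistent with the O(log N) delay scaling.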

    A Comprehensive Analysis of Swarming-based Live Streaming to Leverage Client Heterogeneity

    Due to missing IP multicast support on an Internet scale, over-the-top media streams are delivered with the help of overlays, as used by content delivery networks and their peer-to-peer (P2P) extensions. In this context, mesh/pull-based swarming plays an important role, either as a pure streaming approach or in combination with tree/push mechanisms. However, the impact of realistic client populations with heterogeneous resources is not yet fully understood. In this technical report, we contribute to closing this gap by mathematically analysing the most basic scheduling mechanisms, latest deadline first (LDF) and earliest deadline first (EDF), in a continuous-time Markov chain framework, and combining them into a simple yet powerful mixed strategy to leverage inherent differences in client resources. The main contributions are twofold: (1) a mathematical framework for swarming on random graphs is proposed, with a focus on LDF and EDF strategies in heterogeneous scenarios; (2) a mixed strategy, named SchedMix, is proposed that leverages peer heterogeneity. SchedMix is shown to outperform the other two strategies using different abstractions: a mean-field theoretic analysis of buffer probabilities, simulations of a stochastic model on random graphs, and a full-stack implementation of a P2P streaming system. Comment: Technical report and supplementary material to http://ieeexplore.ieee.org/document/7497234
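The two basic schedulers compared in this report pick opposite ends of the playout window. A minimal sketch, where index 0 is the chunk closest to its playback deadline; the "mixed" rule here is only a stand-in for SchedMix (a biased coin flip between EDF and LDF), since the report's exact mixing rule is not reproduced:

```python
import random

def next_request(missing, policy, rng=None, p_edf=0.5):
    """Select which missing chunk to request from a neighbor.
    `missing` holds window positions of missing chunks; position 0 has the
    earliest playback deadline. EDF requests the most urgent missing chunk,
    LDF the least urgent; 'mixed' flips a biased coin between the two."""
    if policy == "EDF":
        return min(missing)
    if policy == "LDF":
        return max(missing)
    if policy == "mixed":
        rng = rng or random.Random()
        return min(missing) if rng.random() < p_edf else max(missing)
    raise ValueError("unknown policy: %s" % policy)
```

Tuning the mixing probability per client is one simple way to exploit heterogeneous client resources, which is the idea the report develops rigorously.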

    The asymptotic behavior of minimum buffer size requirements in large P2P streaming networks

    The growth of real-time content streaming over the Internet has resulted in the use of peer-to-peer (P2P) approaches for scalable content delivery. In such P2P streaming systems, each peer maintains a playout buffer of content chunks which it attempts to fill by contacting other peers in the network. The objective is to ensure that the chunk to be played out is available with high probability while keeping the buffer size small. Given that a particular peer has been selected, a \emph{policy} is a rule that suggests which chunks should be requested by the peer from other peers. We consider a number of recently suggested policies consistent with buffer minimization for a given target of skip-free playout. We first study a \emph{rarest-first} policy that attempts to obtain chunks farthest from playout, and a \emph{greedy} policy that attempts to obtain chunks nearest to playout. We show that both have similar buffer scalings (as a function of the number of peers and the target skip-free playout probability). We then study a hybrid policy which achieves order-sense improvements over both policies and can achieve order-optimal performance. We validate our results using simulations.
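The greedy and rarest-first policies can be illustrated in a toy single-peer version of the sliding-window model. This sketch (one request per slot, fixed success probability, illustrative parameter names) is far simpler than the paper's multi-peer analysis and is meant only to show the two policies' mechanics.

```python
import random

def skip_prob(policy, buf_size, fetch_success, slots=20000, rng=None):
    """Toy model: the playout buffer is a sliding window of buf_size
    slots, with buf[0] the chunk about to be played. Each time step one
    chunk request is made and succeeds with probability fetch_success.
    'greedy' requests the missing chunk nearest to playout; 'rarest'
    requests the one farthest from playout (the newest)."""
    rng = rng or random.Random(1)
    buf = [False] * buf_size
    skips = played = 0
    for _ in range(slots):
        missing = [i for i, have in enumerate(buf) if not have]
        if missing and rng.random() < fetch_success:
            idx = min(missing) if policy == "greedy" else max(missing)
            buf[idx] = True
        if not buf[0]:
            skips += 1               # chunk missed its deadline -> skip
        played += 1
        buf = buf[1:] + [False]      # window slides; newest slot is empty
    return skips / played
```

In this degenerate single-peer setting both policies skip at roughly the request-failure rate; the differences the paper analyzes emerge in the multi-peer setting, where the hybrid policy gives order-sense improvements.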

    Metrics, fundamental trade-offs and control policies for delay-sensitive applications in volatile environments

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 137-142).

With the explosion of consumer demand, media streaming will soon be the dominant type of Internet traffic. Since such applications are intrinsically delay-sensitive, conventional network control policies and coding algorithms may not be appropriate tools for data dissemination over networks. The major issue in the design and analysis of delay-sensitive applications is the notion of delay, which varies significantly across applications and time scales. We present a framework for studying the problem of media streaming in an unreliable environment, focusing on the end-user experience.

First, we take an analytical approach to study fundamental rate-delay-reliability trade-offs in the context of media streaming for a single-receiver system. We consider the probability of interruption in media playback (buffer underflow) as well as the number of initially buffered packets (initial waiting time) as the Quality of user Experience (QoE) metrics. We characterize the optimal trade-off between these metrics as a function of system parameters such as the packet arrival rate and the file size, for different channel models. For a memoryless channel, we model the receiver's queue dynamics as an M/D/1 queue. We then show that for arrival rates slightly larger than the play rate, the minimum initial buffering required to achieve a certain level of interruption probability remains bounded as the file size grows. When the arrival rate and the play rate match, the minimum initial buffer size must scale as the square root of the file size. We also study media streaming over channels with memory, modeled using Markovian arrival processes, and characterize the optimal trade-off curves for the infinite-file-size case in such Markovian environments.

Second, we generalize the results to the case of multiple servers or peers streaming to a single receiver. Random linear network coding allows us to simplify the packet selection strategies and alleviate issues such as duplicate packet reception. We show that the multi-server streaming problem over a memoryless channel can be transformed into a single-server streaming problem, for which we have characterized the QoE trade-offs.

Third, we study the design of media streaming applications in the presence of multiple heterogeneous wireless access methods with different access costs. Our objective is to analytically characterize the trade-off between usage cost and the QoE metrics. We model each access network as a server that provides packets to the user according to a Poisson process with a certain rate and cost. The user must decide how many packets to buffer before playback, and which networks to access during playback. We design, analyze and compare several control policies; in particular, we show that a simple Markov policy with a threshold structure performs best. We formulate the problem of finding the optimal control policy as a Markov Decision Process (MDP) with a probabilistic constraint, present the Hamilton-Jacobi-Bellman (HJB) equation for this problem by expanding the state space, and exploit it as a verification method for optimality of the proposed control policy.

Finally, we use the tools and techniques developed for media streaming applications in the context of power supply networks. We study the value of storage in securing the reliability of a system with uncertain supply and demand, and supply friction. We assume that storage, when available, can be used to compensate, fully or partially, for a surge in demand or loss of supply. We formulate the problem of optimal utilization of storage, with the objective of maximizing system reliability, as minimization of the expected discounted cost of blackouts over an infinite horizon. We show that when the stage cost is linear in the size of the blackout, the optimal policy is myopic in the sense that all shocks are compensated by storage up to the available storage level. However, when the stage cost is strictly convex, it may be optimal to curtail some of the demand and allow a small current blackout in the interest of maintaining a higher level of reserve to avoid a large blackout in the future. We also examine the value of storage capacity in improving the system's reliability, as well as the effects of the associated optimal policies under different stage costs on the probability distribution of blackouts.

by Ali ParandehGheibi, Ph.D.
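The myopic-optimality claim for linear stage costs can be checked in a toy finite-horizon version of the storage problem via value iteration. The model below (finite demand support, no storage recharge, an assumed discount factor of 0.9) is a deliberate simplification of the thesis model, with illustrative names throughout.

```python
def optimal_policy(S, demands, probs, cost, beta=0.9, horizon=30):
    """Finite-horizon value iteration for a toy storage model.
    State: storage level s in {0..S}. Each period a random shock d occurs
    (d in `demands` with probabilities `probs`); the controller covers
    a units from storage (0 <= a <= min(d, s)), pays cost(d - a), and
    moves to storage level s - a. Returns (values, policy) where
    policy[s][di] is the optimal coverage for state s and demand index di."""
    V = [0.0] * (S + 1)
    policy = [[0] * len(demands) for _ in range(S + 1)]
    for _ in range(horizon):
        newV = [0.0] * (S + 1)
        for s in range(S + 1):
            ev = 0.0
            for di, (d, p) in enumerate(zip(demands, probs)):
                best, best_a = float("inf"), 0
                for a in range(min(d, s) + 1):
                    q = cost(d - a) + beta * V[s - a]
                    if q < best - 1e-12:   # strict improvement only
                        best, best_a = q, a
                ev += p * best
                policy[s][di] = best_a
            newV[s] = ev
        V = newV
    return V, policy
```

With a linear stage cost, covering a shock now saves one cost unit, while holding a storage unit back saves at most a discounted unit later, so the computed policy is myopic (cover every shock up to the available storage), matching the thesis result; a strictly convex stage cost can instead make partial curtailment optimal.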