3,717 research outputs found

    QoE-Based Low-Delay Live Streaming Using Throughput Predictions

    Recently, HTTP-based adaptive streaming has become the de facto standard for video streaming over the Internet. It allows clients to dynamically adapt media characteristics to network conditions in order to ensure a high quality of experience, that is, to minimize playback interruptions while maximizing video quality at a reasonable level of quality changes. In the case of live streaming, this task becomes particularly challenging due to the latency constraints. The challenge further increases if a client uses a wireless network, where the throughput is subject to considerable fluctuations. Consequently, live streams often exhibit latencies of up to 30 seconds. In the present work, we introduce an adaptation algorithm for HTTP-based live streaming called LOLYPOP (Low-Latency Prediction-Based Adaptation) that is designed to operate with a transport latency of a few seconds. To reach this goal, LOLYPOP leverages TCP throughput predictions on multiple time scales, from 1 to 10 seconds, along with an estimate of the prediction error distribution. In addition to satisfying the latency constraint, the algorithm heuristically maximizes the quality of experience by maximizing the average video quality as a function of the number of skipped segments and quality transitions. In order to select an efficient prediction method, we studied the performance of several time series prediction methods in IEEE 802.11 wireless access networks. We evaluated LOLYPOP under a large set of experimental conditions, limiting the transport latency to 3 seconds, against a state-of-the-art adaptation algorithm from the literature called FESTIVE. We observed that the average video quality is up to a factor of 3 higher than with FESTIVE. We also observed that LOLYPOP is able to reach a broader region in the quality of experience space, and is thus better adjustable to the user profile or service provider requirements. Comment: Technical Report TKN-16-001, Telecommunication Networks Group, Technische Universitaet Berlin. This TR updated TR TKN-15-00
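    To make the prediction-based adaptation idea concrete, the sketch below shows one way a client could pick a segment bitrate from a throughput prediction discounted by a quantile of the relative prediction error, so that the expected download time stays within a latency budget. This is a minimal illustration in the spirit of the abstract, not the LOLYPOP algorithm itself; all names, parameters, and the risk model are hypothetical.

```python
# Hypothetical sketch of prediction-based bitrate selection (not the authors'
# implementation): discount the throughput prediction by a pessimistic error
# quantile and pick the highest bitrate that still fits the latency budget.

def select_bitrate(bitrates_kbps, segment_duration_s, predicted_tput_kbps,
                   error_quantile, latency_budget_s):
    """Return the highest bitrate whose segment should download in time."""
    # error_quantile = 0.25 means the prediction may overshoot by 25%
    # at the chosen confidence level (an illustrative assumption).
    safe_tput = predicted_tput_kbps * (1.0 - error_quantile)
    for bitrate in sorted(bitrates_kbps, reverse=True):
        download_time = bitrate * segment_duration_s / max(safe_tput, 1e-9)
        if download_time <= latency_budget_s:
            return bitrate
    return min(bitrates_kbps)  # fall back to the lowest quality

# Example: 2-second segments, 3-second transport latency budget.
print(select_bitrate([400, 1000, 2500, 5000], 2.0, 3200.0, 0.25, 3.0))
```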

    Accurate non-intrusive residual bandwidth estimation in WMNs

    The multi-access scheme of 802.11 wireless networks imposes difficulties in achieving predictable service quality in multi-hop networks. In such networks, the residual capacity of wireless links should be estimated for resource allocation services such as flow admission control. In this paper, we propose an accurate and non-intrusive method to estimate the residual bandwidth of an 802.11 link. Inputs from neighboring network activity measurements and from a basic collision detection mechanism are fed to the analytical model, so that the proposed algorithm calculates the maximum allowable traffic level for this link. We evaluate the efficiency of the method via OPNET simulations and show that its percentage estimation error is significantly lower than that of two other prominent estimation methods, remaining bounded between 2.5% and 7.5%. We also demonstrate that flow admission control is successfully achieved in a realistic WMN scenario. Flow control through our proposed algorithm keeps the unsatisfied traffic demand bounded and at a negligibly low level, more than an order of magnitude below that of the other two methods.
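    As a rough illustration of passive residual-bandwidth estimation from busy-time and collision measurements, a simplified calculation might look as follows. This is an assumption-laden sketch, not the analytical model proposed in the paper; the formula and parameter names are illustrative only.

```python
# Illustrative (hypothetical) passive residual-bandwidth estimate for an
# 802.11 link, assuming only channel busy-time and collision-rate
# measurements are available.

def residual_bandwidth_mbps(link_capacity_mbps, busy_fraction, collision_prob):
    """Estimate how much additional traffic the link could admit.

    busy_fraction  -- fraction of time the medium is sensed busy (0..1)
    collision_prob -- measured probability that a transmission collides (0..1)
    """
    idle_share = max(0.0, 1.0 - busy_fraction)
    # Collisions waste airtime, so discount the idle share accordingly.
    effective_share = idle_share * (1.0 - collision_prob)
    return link_capacity_mbps * effective_share

# Example: 54 Mb/s link, medium busy 60% of the time, 10% collision rate.
print(residual_bandwidth_mbps(54.0, 0.6, 0.1))
```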

    Is Our Model for Contention Resolution Wrong?

    Randomized binary exponential backoff (BEB) is a popular algorithm for coordinating access to a shared channel. With an operational history exceeding four decades, BEB is currently an important component of several wireless standards. Despite this track record, prior theoretical results indicate that under bursty traffic (1) BEB yields poor makespan and (2) superior algorithms are possible. To date, the degree to which these findings manifest in practice has not been resolved. To address this issue, we examine one of the strongest cases against BEB: n packets that simultaneously begin contending for the wireless channel. Using Network Simulator 3, we compare BEB against more recent algorithms that are inspired by it but whose makespan guarantees are superior. Surprisingly, we discover that these newer algorithms significantly underperform. Through further investigation, we identify as the culprit a flawed but common abstraction regarding the cost of collisions. Our experimental results are complemented by analytical arguments that the number of collisions -- and not solely makespan -- is an important metric to optimize. We believe that these findings have implications for the design of contention-resolution algorithms. Comment: Accepted to the 29th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2017)
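    A toy slotted simulation of randomized BEB for n stations that start contending simultaneously shows how both makespan and the number of collisions can be measured. The window sizes and channel model below are simplifying assumptions, not the ns-3 setup used in the paper.

```python
import random

# Toy slotted model of randomized binary exponential backoff (BEB): n
# stations start contending at once; a slot with exactly one sender is a
# success, a slot with several senders is a collision that doubles their
# contention windows. Parameters are illustrative, not the paper's setup.

def beb_makespan_and_collisions(n, cw_min=16, cw_max=1024, seed=0):
    """Return (slots until all n stations transmit once, collision count)."""
    rng = random.Random(seed)
    backoff = {i: rng.randrange(cw_min) for i in range(n)}
    cw = {i: cw_min for i in range(n)}
    slots, collisions = 0, 0
    while backoff:
        slots += 1
        ready = [i for i, b in backoff.items() if b == 0]
        if len(ready) == 1:
            del backoff[ready[0]]          # lone sender succeeds and leaves
        elif len(ready) > 1:
            collisions += 1                # simultaneous senders collide
            for i in ready:
                cw[i] = min(2 * cw[i], cw_max)
                backoff[i] = rng.randrange(cw[i])
        for i in backoff:
            if i not in ready:
                backoff[i] -= 1            # everyone else counts down a slot
    return slots, collisions

print(beb_makespan_and_collisions(50))
```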

    Approaching Optimal Centralized Scheduling with CSMA-based Random Access over Fading Channels

    Carrier Sense Multiple Access (CSMA) based distributed algorithms can attain the largest capacity region, just as the centralized Max-Weight policy does. Despite their capability of achieving throughput-optimality, these algorithms either incur large delay and high complexity or operate only over non-fading channels. In this letter, by assuming arbitrary back-off times, we first propose a fully distributed randomized algorithm whose performance can be pushed to that of the centralized Max-Weight policy, not only in terms of throughput but also in terms of delay, for completely connected interference networks with fading channels. Then, inspired by the proposed algorithm, we introduce an implementable distributed algorithm for practical networks with a reservation scheme. We show that the proposed practical algorithm can still achieve the performance of the centralized Max-Weight policy. Comment: accepted to IEEE Communications Letters
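    The general idea behind queue-aware CSMA can be sketched for a fully connected interference network: each link draws a random back-off whose rate grows with its backlog, so the most backlogged link tends to seize the channel, mimicking the centralized Max-Weight choice. The code below is an illustrative sketch of this principle under those assumptions, not the algorithm proposed in the letter.

```python
import math
import random

# Hypothetical queue-aware CSMA sketch: in a fully connected interference
# network, the link with the shortest back-off wins the channel, and back-off
# rates grow exponentially with queue backlog, approximating Max-Weight.

def csma_winner(queue_lengths, rng):
    """Return the index of the link that grabs the channel in this round."""
    backoffs = [rng.expovariate(math.exp(q)) for q in queue_lengths]
    return min(range(len(backoffs)), key=backoffs.__getitem__)

rng = random.Random(1)
queues = [3, 9, 1, 5]
wins = [0] * len(queues)
for _ in range(10_000):
    wins[csma_winner(queues, rng)] += 1
print(wins)  # the most backlogged link (queue 9) should win nearly every round
```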

    Modeling, Analysis and Impact of a Long Transitory Phase in Random Access Protocols

    In random access protocols, the service rate depends on the number of stations with a packet buffered for transmission. We demonstrate via numerical analysis that this state-dependent rate, together with Poisson traffic and an infinite (or large enough to be considered infinite) buffer size, may cause a high-throughput and extremely long (on the order of hours) transitory phase when traffic arrivals are right above the stability limit. We also perform an experimental evaluation to provide further insight into the characterisation of this transitory phase of the network by analysing statistical properties of its duration. Identifying the presence, as well as the characterisation, of this behaviour is crucial to avoid misprediction, which has a significant potential impact on network performance and optimisation. Furthermore, we discuss practical implications of this finding and propose a distributed and low-complexity mechanism to keep the network operating in the high-throughput phase. Comment: 13 pages, 10 figures, submitted to IEEE/ACM Transactions on Networking
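    The phenomenon can be illustrated with a toy slotted random-access simulation in which the service rate depends on the number of backlogged stations and the arrival rate sits near the saturation throughput. The access model and parameters below are illustrative assumptions, not the paper's setup.

```python
import random

# Toy slotted random access with a state-dependent service rate: backlogged
# stations transmit with a fixed probability, and a slot succeeds only if
# exactly one station attempts. Parameters are illustrative assumptions.

def simulate(num_stations=50, arrival_rate=0.36, tx_prob=0.02,
             slots=100_000, seed=0):
    rng = random.Random(seed)
    queues = [0] * num_stations
    served = 0
    for _ in range(slots):
        # Bernoulli arrivals approximating Poisson traffic of rate arrival_rate.
        for i in range(num_stations):
            if rng.random() < arrival_rate / num_stations:
                queues[i] += 1
        # Backlogged stations attempt transmission with probability tx_prob.
        attempts = [i for i in range(num_stations)
                    if queues[i] > 0 and rng.random() < tx_prob]
        if len(attempts) == 1:          # exactly one attempt -> success
            queues[attempts[0]] -= 1
            served += 1
    return served / slots, sum(queues)

print(simulate())  # (throughput per slot, residual backlog) after 100k slots
```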