18 research outputs found

    Maximizing the number of users in an interactive video-on-demand system

    Video prefetching is a technique that has been proposed for the transmission of variable-bit-rate (VBR) video over packet-switched networks. The objective of these protocols is to prefetch future frames into the customer's set-top box (STB) during periods of light load. Experimental results have shown that video prefetching is very effective and achieves much higher network utilization (and potentially a larger number of simultaneous connections) than traditional video smoothing schemes. The previously proposed prefetching algorithms, however, can only be implemented efficiently when there is a single centralized server; in a distributed environment their performance degrades considerably. In this paper we introduce a new scheme that combines smoothing with prefetching to overcome the problem of distributed prefetching. We show that our scheme performs almost as well as the centralized prefetching protocol even though it is implemented in a distributed environment. In addition, we introduce a call admission control algorithm for a fully interactive Video-on-Demand (VoD) system that builds on this concept of distributed video prefetching. Using the theory of effective bandwidths, we develop an admission control algorithm for new requests based on the user's viewing behavior and the required Quality of Service (QoS).
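    As a rough illustration of the admission test described above, the sketch below admits a new VBR stream only if the summed effective bandwidths of all streams fit on the link. This is not the paper's algorithm: the i.i.d. treatment of frame sizes, the space parameter s, the frame period, and all names are illustrative assumptions.

        # Hypothetical sketch of an effective-bandwidth admission test for VBR streams.
        # Simplification: frame sizes are treated as i.i.d.; the space parameter s stands
        # in for the strictness of the QoS (loss/delay) target.
        import math

        def effective_bandwidth(frame_bits, frame_period_s, s):
            """Estimate alpha(s) = log(E[exp(s*X)]) / (s*T) from per-frame sizes in bits."""
            mgf = sum(math.exp(s * x) for x in frame_bits) / len(frame_bits)
            return math.log(mgf) / (s * frame_period_s)   # bits per second

        def admit(new_trace, admitted_traces, link_bps, frame_period_s=0.04, s=1e-6):
            """Admit the new stream only if the summed effective bandwidths fit the link."""
            total = effective_bandwidth(new_trace, frame_period_s, s)
            total += sum(effective_bandwidth(t, frame_period_s, s) for t in admitted_traces)
            return total <= link_bps

    Larger values of s push each stream's estimate toward its peak rate (stricter QoS), while values close to zero approach the mean rate.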

    Turbo-slice-and-patch: an algorithm for metropolitan scale VBR video streaming.

    Kong Chun Wai. Thesis submitted July 2003. Thesis (M.Phil.), Chinese University of Hong Kong, 2004. Includes bibliographical references (leaves 53-54). Abstracts in English and Chinese. Contents: Acknowledgement; Abstract; Abstract (Chinese); Chapter 1, Introduction; Chapter 2, Related Works (2.1 Previous Work; 2.2 Comparison); Chapter 3, System Architecture (3.1 Transmission Scheduling; 3.2 Admission Control; 3.3 Challenges in Supporting VBR-encoded Video); Chapter 4, Priority Scheduling (4.1 Static Channel Priority (SCP); 4.2 Dynamic Channel Priority (DCP)); Chapter 5, Turbo-Slice-and-Patch (5.1 Video Pre-processing; 5.2 Bandwidth Allocation; 5.3 Three-Phase Patching; 5.4 Client Buffer Requirement); Chapter 6, Playback Continuity; Chapter 7, Performance Evaluation (7.1 Average Latency; 7.2 Client Buffer Requirement; 7.3 Choice of Parameter Rcut; 7.4 Latency versus Arrival Rate; 7.5 Server Bandwidth Comparison; 7.6 Bandwidth Partitioning); Chapter 8, Conclusions; Bibliography.

    Evaluation of unidirectional background push content download services for the delivery of television programs

    This dissertation presents background push Content Download Services as an efficient mechanism to deliver pre-produced television content over existing broadcast networks. Nowadays, network operators dedicate a considerable amount of network resources to the live delivery of television content, through both broadcast and unicast connections. This service offering responds solely to commercial requirements: content must be available anytime and anywhere. From a strictly academic point of view, however, live streaming is only a requirement for live content, not for content that has already been produced before transmission. Moreover, broadcasting is only efficient when the content is sufficiently popular. The services under study in this thesis use residual capacity in broadcast networks to push popular, pre-produced content to storage in customer premises equipment. The proposal responds only to efficiency requirements. On one hand, it creates value from network resources that would otherwise go unused. On the other hand, it delivers popular pre-produced content in the most efficient way: through broadcast download services. The results include models for the popularity and the duration of television content, valuable for any research work dealing with file-based delivery of television content. The thesis then evaluates the residual capacity available in broadcast networks through empirical studies, and uses these results in simulations that assess the performance of background push content download services in different scenarios and for different applications. The evaluation shows that this kind of service can become a great asset for the delivery of television content.
    Fraile Gil, F. (2013). Evaluation of unidirectional background push content download services for the delivery of television programs [Unpublished doctoral dissertation]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31656
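    As a loose illustration of how a popularity model might drive such a push service, the sketch below greedily fills one night's residual broadcast capacity with the most popular pre-produced programs. It is not the thesis's model; the Zipf-style popularity scores, program sizes, and capacity figure are assumptions for the example.

        # Illustrative only: rank programs by an assumed popularity score and greedily
        # fill the residual broadcast capacity of one push window.

        def schedule_push(programs, residual_bytes):
            """programs: list of (name, size_bytes, popularity), e.g. popularity = 1/rank**0.8."""
            selected, used = [], 0
            for name, size, _pop in sorted(programs, key=lambda p: p[2], reverse=True):
                if used + size <= residual_bytes:
                    selected.append(name)
                    used += size
            return selected

        # Example: 30 episodes of ~1.5 GB each, 20 GB of overnight residual capacity.
        catalogue = [(f"episode_{i:02d}", 1_500_000_000, 1 / i**0.8) for i in range(1, 31)]
        print(schedule_push(catalogue, residual_bytes=20_000_000_000))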

    Anticipatory Buffer Control and Quality Selection for Wireless Video Streaming

    Video streaming is in high demand by mobile users, as recent studies indicate. In cellular networks, however, the unreliable wireless channel leads to two major problems. Poor channel states degrade video quality and interrupt playback when a user cannot sufficiently fill the local playout buffer: buffer underruns occur. In contrast, good channel conditions cause common greedy buffering schemes to pile up very long buffers. Such over-buffering wastes expensive wireless channel capacity. To keep buffering in balance, we employ a novel approach. Assuming that we can predict data rates, we plan the quality and download time of the video segments ahead of time. This anticipatory scheduling avoids buffer underruns by downloading a large number of segments before a channel outage occurs, without wasting wireless capacity through excessive buffering. We formalize this approach as an optimization problem and derive practical heuristics for segmented video streaming protocols (e.g., HLS or MPEG DASH). Simulation results and testbed measurements show that our solution essentially eliminates playback interruptions without significantly decreasing video quality.
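    The sketch below shows one very simplified, greedy version of such an anticipatory planner, not the paper's optimization: given a per-second throughput prediction, it picks for each segment the highest bitrate that still finishes downloading before the segment's playback deadline, while never buffering more than a fixed number of seconds ahead. All constants, names, and the example prediction are illustrative assumptions.

        # Hypothetical greedy anticipatory planner for segmented streaming (e.g. DASH-like).
        SEG_DURATION = 2.0                        # seconds of video per segment
        BITRATES = [0.5e6, 1.5e6, 3.0e6, 6.0e6]   # available bitrates in bits per second
        MAX_BUFFER = 30.0                         # cap on how far ahead of playback we download
        STARTUP_DELAY = 2.0                       # playback starts this many seconds after t=0

        def download_finish(start, size_bits, predicted_bps):
            """Walk the per-second rate prediction until size_bits have been transferred."""
            t, remaining = start, size_bits
            while remaining > 0:
                slot = int(t)
                rate = predicted_bps[slot] if slot < len(predicted_bps) else BITRATES[0]
                if rate <= 0:                     # predicted outage second: nothing transferred
                    t = slot + 1.0
                    continue
                step = min(slot + 1.0 - t, remaining / rate)
                remaining -= rate * step
                t += step
            return t

        def plan(num_segments, predicted_bps):
            """Return (segment index, chosen bitrate, download finish time) per segment."""
            t, schedule = 0.0, []
            for i in range(num_segments):
                deadline = STARTUP_DELAY + i * SEG_DURATION
                t = max(t, deadline - MAX_BUFFER)  # wait rather than over-buffer
                quality = BITRATES[0]              # fall back to lowest if no rate meets the deadline
                for b in reversed(BITRATES):
                    if download_finish(t, b * SEG_DURATION, predicted_bps) <= deadline:
                        quality = b
                        break
                t = download_finish(t, quality * SEG_DURATION, predicted_bps)
                schedule.append((i, quality, t))
            return schedule

        # Example: 60 s of video over a prediction with a 10 s outage starting at t=20.
        prediction = [4e6] * 20 + [0.0] * 10 + [4e6] * 60
        print(plan(num_segments=30, predicted_bps=prediction))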

    Enabling Large-Scale Peer-to-Peer Stored Video Streaming Service with QoS Support

    This research aims to enable a large-scale, high-volume, peer-to-peer (P2P), stored-video streaming service over the Internet, such as on-line DVD rentals. P2P allows a group of dynamically organized users to cooperatively support content discovery and distribution services without needing to employ a central server. P2P has the potential to overcome the scalability issue associated with client-server based video distribution networks; however, it brings a new set of challenges. This research addresses the following five technical challenges associated with the distribution of streaming video over a P2P network: 1) allow users with limited transmit bandwidth capacity to become contributing sources, 2) support the advertisement and discovery of time-changing and time-bounded video frame availability, 3) minimize the impact of distribution source losses during video playback, 4) incorporate user mobility information in the selection of distribution sources, and 5) design a streaming network architecture that enables the above functionalities.
    To meet the above requirements, we propose a video distribution network model based on a hybrid architecture between client-server and P2P. In this model, a video is divided into a sequence of small segments, and each user executes a scheduling algorithm to determine the order, the timing, and the rate of segment retrievals from other users. The model also employs an advertisement and discovery scheme which incorporates parameters of the scheduling algorithm to allow users to share the lifetime of their video segment availability information in one advertisement and one query. An accompanying QoS scheme reduces the number of video playback interruptions when one or more distribution sources depart from the service prematurely.
    The simulation study shows that the proposed model and associated schemes greatly alleviate the bandwidth requirement of the video distribution server, especially when the number of participating users grows large. As much as 90% load reduction was observed in some experiments when compared to a traditional client-server based video distribution service. A significant reduction was also observed in the number of video presentation interruptions when the proposed QoS scheme is incorporated in the distribution process while certain percentages of distribution sources depart from the service unexpectedly.
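    The entry above describes a per-user scheduling algorithm that decides the order and timing of segment retrievals from other peers. The sketch below shows one simple way such a scheduler could look: segments are assigned in deadline order to peers that can deliver them in time, given each peer's advertised upload rate and availability window. This is not the dissertation's algorithm; the data structures, rates, and fall-back-to-server rule are assumptions for illustration.

        # Illustrative deadline-driven segment scheduler for a hybrid client-server/P2P model.
        from dataclasses import dataclass

        @dataclass
        class Peer:
            name: str
            upload_bps: float        # upload capacity the peer advertises to us
            has_until: float         # peer advertises segment availability up to this time
            busy_until: float = 0.0  # when the peer finishes its current transfer to us

        def schedule_segments(segments, peers, now=0.0):
            """segments: list of (seg_id, size_bits, playback_deadline), sorted by deadline."""
            plan, misses = [], []
            for seg_id, size, deadline in segments:
                best = None
                for p in peers:
                    start = max(now, p.busy_until)
                    finish = start + size / p.upload_bps
                    if finish <= deadline and finish <= p.has_until:
                        if best is None or finish < best[1]:
                            best = (p, finish)
                if best is None:
                    misses.append(seg_id)    # no peer can make the deadline: fetch from the server
                    continue
                peer, finish = best
                peer.busy_until = finish
                plan.append((seg_id, peer.name, finish))
            return plan, misses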

    Design and Performance Analysis of Functional Split in Virtualized Access Networks

    Emerging modular cable network architectures distribute some cable headend functions to remote nodes that are located close to the broadcast cable links reaching the cable modems (CMs) in subscriber homes and businesses. In the Remote-PHY (R-PHY) architecture, a Remote PHY Device (RPD) conducts the physical layer processing for the analog cable transmissions, while the headend runs the DOCSIS medium access control (MAC) for the upstream transmissions of the distributed CMs over the shared cable link. In contrast, in the Remote MACPHY (R-MACPHY) architecture, a Remote MACPHY Device (RMD) conducts both the physical and MAC layer processing. The objective of this dissertation is to conduct a comprehensive performance comparison of the R-PHY and R-MACPHY architectures. It also develops analytical delay models for the polling-based MAC with gated bandwidth allocation of Poisson traffic in the R-PHY and R-MACPHY architectures, and conducts extensive simulations to assess the accuracy of the analytical models and to evaluate the delay-throughput performance of the R-PHY and R-MACPHY architectures for a wide range of deployment and operating scenarios. The performance evaluation extends to the use of an Ethernet Passive Optical Network (EPON) as the transport network between the remote nodes and the headend. The results show that for long converged interconnect network (CIN) distances above 100 miles, the R-MACPHY architecture achieves significantly shorter mean upstream packet delays than the R-PHY architecture, especially for bursty traffic. The extensive comparative R-PHY and R-MACPHY evaluation can serve as a basis for the planning of modular broadcast cable-based access networks.
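    A back-of-the-envelope sketch of why CIN distance matters: in R-PHY the DOCSIS request-grant exchange has to cross the CIN between the RPD and the headend MAC, so every upstream request pays roughly one extra CIN round trip, whereas in R-MACPHY the MAC sits in the RMD and avoids that trip. The propagation figure below (about 5 microseconds per km one way in fiber) is a standard rule of thumb, not a value from the dissertation's delay model.

        # Illustrative only: extra request-grant round-trip delay in R-PHY due to the CIN.
        US_PER_KM = 5.0          # one-way fiber propagation delay, microseconds per km
        KM_PER_MILE = 1.609

        def rphy_extra_rtt_ms(cin_miles):
            """Extra round-trip delay (ms) per request-grant cycle attributable to the CIN."""
            one_way_us = cin_miles * KM_PER_MILE * US_PER_KM
            return 2 * one_way_us / 1000.0

        for miles in (10, 100, 500, 1000):
            print(f"{miles:5d} miles -> ~{rphy_extra_rtt_ms(miles):5.2f} ms per request-grant cycle")

    At 100 miles this already amounts to roughly 1.6 ms per cycle, which is consistent with the observation that R-MACPHY's delay advantage grows with CIN distance.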

    Understanding and Efficiently Servicing HTTP Streaming Video Workloads

    Live and on-demand video streaming has emerged as the most popular application on the Internet. One reason for this success is the pragmatic decision to use HTTP to deliver video content. However, while all web servers are capable of servicing HTTP streaming video workloads, web servers were not originally designed or optimized for video workloads. Web server research has concentrated on requests for small items that exhibit high locality, while video files are much larger and have a popularity distribution with a long tail of less popular content. Given the large number of servers needed to service millions of streaming video clients, there are large potential benefits from even small improvements in servicing HTTP streaming video workloads.
    To investigate how web server implementations can be improved, we require a benchmark to analyze existing web servers and test alternate implementations, but no such HTTP streaming video benchmark exists. One reason for the lack of a benchmark is that video delivery is undergoing rapid evolution, so we devise a flexible methodology and tools for creating benchmarks that can be readily adapted to changes in HTTP video streaming methods. Using our methodology, we characterize YouTube traffic from early 2011 using several published studies and implement a benchmark to replicate this workload. We then demonstrate that three different widely-used web servers (Apache, nginx and the userver) are all poorly suited to servicing streaming video workloads. We modify the userver to use asynchronous serialized aggressive prefetching (ASAP). Aggressive prefetching uses a single large disk access to service multiple small sequential requests, and serialization prevents the kernel from interleaving disk accesses, which together greatly increase throughput. Using the modified userver, we show that characteristics of the workload and server affect the best prefetch size to use, and we provide an algorithm that automatically finds a good prefetch size for a variety of workloads and server configurations.
    We conduct our own characterization of an HTTP streaming video workload, using server logs obtained from Netflix. We study this workload because, in 2015, Netflix alone accounted for 37% of peak period North American Internet traffic. Netflix clients employ DASH (Dynamic Adaptive Streaming over HTTP) to switch between different bit rates based on changes in network and server conditions. We introduce the notion of chains of sequential requests to represent the spatial locality of workloads and find that, even with DASH clients, the majority of bytes are requested sequentially. We characterize rate adaptation by separating sessions into transient, stable and inactive phases, each with distinct patterns of requests. We find that playback sessions are surprisingly stable; in aggregate, 5% of total session duration is spent in transient phases, 79% in stable and 16% in inactive phases.
    Finally, we evaluate prefetch algorithms that exploit knowledge about workload characteristics by simulating the servicing of the Netflix workload. We show that the workload can be serviced with either 13% lower hard drive utilization or 48% less system memory than a prefetch algorithm that makes no use of workload characteristics.
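    The aggressive-prefetching idea described above can be pictured with a small sketch: instead of one small disk read per sequential HTTP range request, the server reads one large aligned block under a lock (so concurrent streams do not interleave their disk accesses) and serves the following requests from memory. This is not the userver's ASAP implementation; the 4 MB block size, the in-memory cache, and the assumption that a request does not straddle a block boundary are simplifications for illustration.

        # Illustrative sketch of aggressive, serialized prefetching for sequential range requests.
        import threading

        PREFETCH_BYTES = 4 * 1024 * 1024     # assumed prefetch block size
        _disk_lock = threading.Lock()        # serialize large reads to avoid interleaved seeks
        _cache = {}                          # (path, block_start) -> bytes

        def read_range(path, offset, length):
            """Serve a byte range, prefetching one large aligned block on a cache miss.
            Assumes the requested range fits inside a single prefetch block."""
            block_start = (offset // PREFETCH_BYTES) * PREFETCH_BYTES
            key = (path, block_start)
            if key not in _cache:
                with _disk_lock:             # one large sequential read at a time
                    if key not in _cache:    # re-check after acquiring the lock
                        with open(path, "rb") as f:
                            f.seek(block_start)
                            _cache[key] = f.read(PREFETCH_BYTES)
            block = _cache[key]
            start = offset - block_start
            return block[start:start + length]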