10 research outputs found

    Analyzing the potential benefits of CDN augmentation strategies for internet video workloads

    Full text link
    Video viewership over the Internet is rising rapidly, and market predictions suggest that video will comprise over 90% of Internet traffic in the next few years. At the same time, there have been signs that the Content Delivery Network (CDN) infrastructure is being stressed by ever-increasing amounts of video traffic. To meet these growing demands, the CDN infrastructure must be designed, provisioned and managed appropriately. Federated telco-CDNs and hybrid P2P-CDNs are two content delivery infrastructure designs that have gained significant industry attention recently. In our unique dataset, consisting of 30 million video sessions spanning around two months of video viewership from two large Internet video providers, we observed several user access patterns that have important implications for these two designs: partial interest in content, regional interests, temporal shifts in peak load, and patterns in the evolution of interest. We analyze the impact of our findings on these two designs by performing a large-scale measurement study. Surprisingly, we find a significant amount of synchronous viewing behavior for Video On Demand (VOD) content, which makes the hybrid P2P-CDN approach feasible for VOD, and we suggest new strategies for CDNs to reduce their infrastructure costs. We also find that federation can reduce telco-CDN provisioning costs by as much as 95%.
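    The federation saving reported above rests on the temporal shift in peak load: each telco-CDN must provision for its own peak, while a federation only needs capacity for the peak of the combined load. A minimal sketch with hypothetical load figures (the numbers and variable names are illustrative, not from the paper's dataset):

```python
# Hourly load (arbitrary units) for two telco-CDNs whose peaks are
# offset in time (hypothetical numbers for illustration only).
cdn_east = [10, 12, 30, 80, 95, 60, 25, 15]
cdn_west = [60, 90, 70, 30, 15, 12, 20, 45]

# Standalone: each CDN provisions for its own peak.
standalone_capacity = max(cdn_east) + max(cdn_west)

# Federated: provision once for the peak of the summed load,
# which is smaller whenever the regional peaks do not coincide.
federated_capacity = max(e + w for e, w in zip(cdn_east, cdn_west))

savings = 1 - federated_capacity / standalone_capacity
print(f"standalone: {standalone_capacity}, federated: {federated_capacity}, "
      f"savings: {savings:.0%}")
```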

    Performance evaluation of caching techniques for video on demand workload in named data network

    Get PDF
    The rapidly growing use of the contemporary Internet is mainly for content distribution. This trend has driven the emergence of Information-Centric Networking (ICN) in both academia and industry. Named Data Networking (NDN) is one of the ICN architectures, and it has been highlighted as a video traffic architecture that ensures smooth communication between the requester and receiver of online video. The research problem of this study is congestion in the Video on Demand (VoD) workload caused by the frequent storing of signed content objects in local repositories, which leads to buffering problems and data packet loss. The study assesses NDN caching techniques to select the cache replacement technique best suited to dealing with congestion, and evaluates its performance. To do this, the study adopts a research process based on the Design Research Methodology (DRM) and a VoD approach in order to explain the main activities that produced the expected findings. Datasets, the Internet2 network topology and video-view statistics were gathered from the PPTV platform; a total of 221 servers is connected to the network from the same access points as in the real deployment of PPTV. An NS3 analysis of the performance metrics of the cache replacement techniques (LRU, LFU, and FIFO) for VoD in NDN, in terms of cache hit ratio, throughput, and server load, yields reasonable outcomes on the current implementation of the Internet2 topology, where nodes are distributed randomly. Based on the results, the LFU technique handles congestion best among the presented techniques.
Finally, the research finds that the cache hit ratio, throughput, and server load of LFU produce the lowest congestion rate. The researchers therefore conclude that the efficiency of the different replacement techniques needs to be investigated further in order to provide the insights necessary to implement them in a given context. This result nevertheless enriches the current understanding of how replacement techniques handle different cache sizes. Having examined the different replacement techniques and their performance, their performance characteristics also suggest a cache model that provides relatively fast running times across a broad range of embedded applications.
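    The three replacement techniques compared above can be sketched in a single simulator. The request trace, cache size and the global-frequency variant of LFU below are illustrative assumptions, not the study's setup:

```python
from collections import OrderedDict, Counter

def run(trace, capacity, policy):
    """Simulate a cache with the given eviction policy; return the hit ratio."""
    cache = OrderedDict()   # insertion order doubles as the FIFO queue
    freq = Counter()        # global access counts, used by LFU
    hits = 0
    for item in trace:
        freq[item] += 1
        if item in cache:
            hits += 1
            if policy == "LRU":
                cache.move_to_end(item)   # refresh recency on a hit
            continue
        if len(cache) >= capacity:
            if policy == "LFU":
                victim = min(cache, key=lambda k: freq[k])  # least frequent
            else:
                victim = next(iter(cache))   # LRU and FIFO evict the oldest
            del cache[victim]
        cache[item] = True
    return hits / len(trace)

# Hypothetical trace: a few popular videos plus one-off requests.
trace = ["a", "b", "a", "c", "a", "d", "b", "a", "e", "a", "b", "f", "a", "g"]
for policy in ("LRU", "LFU", "FIFO"):
    print(policy, round(run(trace, capacity=3, policy=policy), 2))
```

    On a real Zipf-like VoD workload the gap between the policies is much larger than this toy trace suggests.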

    Performance evaluation of caching placement algorithms in named data network for video on demand service

    Get PDF
    The purpose of this study is to evaluate the performance of cache placement algorithms (LCD, LCE, Prob, Pprob, Cross, Centrality, and Rand) in Named Data Networking (NDN) for Video on Demand (VoD), with the aim of increasing service quality and decreasing download time. Two stages of activities produced the outcome of the study. The first determines the causes of delay in the NDN cache algorithms used for the VoD workload. The second evaluates the seven cache placement algorithms on a cloud of video content in terms of the key performance metrics: delay time, average cache hit ratio, total reduction in network footprint, and reduction in load. NS3 simulations on the Internet2 topology were used to evaluate and analyze the findings for each algorithm and to compare the results across cache sizes of 1 GB, 10 GB, 100 GB, and 1 TB. The study shows that varied user requests for online videos lead to delay in network performance, and that delay is also caused by the sharp increase in video requests. The outcomes further show that increasing cache capacity gives the placement algorithms a significant increase in average cache hit ratio, a reduction in server load, and a larger total reduction in network footprint, which together minimize delay time. Based on the results obtained, Centrality is the worst-performing cache placement algorithm.
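    The placement decision that distinguishes algorithms such as LCE, LCD and Prob is simply which routers on the delivery path store a copy of the content. A minimal sketch (the path, probability and function name are hypothetical; Pprob, Cross, Centrality and Rand are omitted for brevity):

```python
import random

def placement(path, policy, hit_index=0, p=0.5, rng=None):
    """Return the routers on the delivery path that cache a copy.

    path      -- routers ordered from the content source toward the client
    hit_index -- position of the node that served the content (0 = origin)
    LCE (Leave Copy Everywhere) caches at every downstream node;
    LCD (Leave Copy Down) caches one hop below the hit;
    Prob caches at each downstream node independently with probability p.
    """
    rng = rng or random.Random(0)
    downstream = path[hit_index + 1:]
    if policy == "LCE":
        return list(downstream)
    if policy == "LCD":
        return downstream[:1]
    if policy == "Prob":
        return [node for node in downstream if rng.random() < p]
    raise ValueError(f"unknown policy: {policy}")

path = ["server", "r1", "r2", "r3", "client-edge"]
for policy in ("LCE", "LCD", "Prob"):
    print(policy, placement(path, policy))
```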

    ISP-friendly Peer-assisted On-demand Streaming of Long Duration Content in BBC iPlayer

    Full text link
    In search of scalable solutions, CDNs are exploring P2P support. However, the benefits of peer assistance can be limited by various obstacles such as ISP friendliness (requiring peers to be within the same ISP), bitrate stratification (the need to match peers with others needing a similar bitrate), and partial participation (some peers choosing not to redistribute content). This work relates the potential gains from peer assistance to the average number of users in a swarm and its capacity, and empirically studies the effects of these obstacles at scale, using a month-long trace of over 2 million users in London accessing BBC shows online. Results indicate that even when P2P swarms are localised within ISPs, up to 88% of traffic can be saved. Surprisingly, bitrate stratification results in 2 large sub-swarms and does not significantly affect savings. However, partial participation and the need for a minimum swarm size do affect gains. We investigate improving the gains by increasing content availability through two well-studied techniques: content bundling (combining multiple items to increase availability) and historical caching of previously watched items. Bundling proves ineffective, as the increased server traffic from larger bundles outweighs the availability benefits, but simple caching can considerably boost the traffic gains from peer assistance. Comment: In Proceedings of IEEE INFOCOM 201
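    The relation between swarm size and potential gain can be illustrated with a back-of-the-envelope model: in a swarm of n concurrent viewers, the server seeds one copy and peers could in principle serve the remaining n - 1. A sketch under that simplifying assumption (the swarm sizes are made up, and real savings also depend on peer upload capacity and participation, as the abstract notes):

```python
def potential_savings(swarms, min_swarm=2):
    """Upper-bound fraction of server traffic avoidable by peer assistance.

    swarms maps each content item to its concurrent viewer count; swarms
    below min_swarm gain nothing because a lone viewer has no peers.
    """
    total = sum(swarms.values())
    saved = sum(n - 1 for n in swarms.values() if n >= min_swarm)
    return saved / total

# Hypothetical per-show concurrent swarm sizes.
swarms = {"show_a": 50, "show_b": 3, "show_c": 1}
print(f"potential savings: {potential_savings(swarms):.0%}")
```

    The model makes clear why long-tail content with tiny swarms caps the achievable savings, which is what bundling and historical caching try to address.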

    Cognitive Video Streaming

    Get PDF
    Video-on-demand (VoD) streaming services are becoming increasingly popular due to the flexibility they give users to access their favorite video content anytime, anywhere, from a wide range of devices such as smartphones, computers and TVs. Content providers rely on highly satisfied subscribers for revenue generation, and there have been significant efforts to develop approaches for estimating the quality of experience (QoE) of VoD subscribers. A key issue is that QoE is not directly measurable; appropriate proxies need to be found for it via streaming quality-of-service (QoS) metrics, which are largely based on initial startup time, buffering delays, average bit rate and average throughput, together with other relevant factors such as the video content, user behavior and external conditions. The ultimate objective of the content provider is to elevate the QoE of all subscribers at the cost of minimal network resources, such as hardware and bandwidth. We propose a cognitive video streaming strategy that ensures subscriber QoE while utilizing minimal network resources. The proposed cognitive video streaming architecture consists of an estimation module, a prediction module and an adaptation module. We then demonstrate the prediction module of the architecture through a play-time prediction tool. For this purpose, we experiment with the applicability of different machine learning algorithms such as k-nearest neighbors, neural network regression and survival models, and develop an approach to identify the factors that contribute most to the prediction. The proposed approaches are tested on a data set provided by Comcast Cable.
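    As an illustration of the prediction module, k-nearest-neighbor regression predicts play time as the mean over the k most similar past sessions. The feature choice (startup delay, average bitrate) and all numbers below are hypothetical, not from the Comcast data set:

```python
def knn_predict(history, query, k=3):
    """Predict play time as the mean over the k sessions whose feature
    vectors are closest to the query (plain Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda fp: dist(fp[0], query))[:k]
    return sum(play for _, play in nearest) / len(nearest)

# Hypothetical sessions: (startup delay s, avg bitrate Mbps) -> play time min.
history = [((1.0, 3.0), 40.0),
           ((1.1, 2.9), 42.0),
           ((5.0, 0.5), 5.0),    # slow startup, low bitrate, early abandon
           ((0.9, 3.1), 44.0)]
print(knn_predict(history, (1.0, 3.0)))   # mean of the 3 closest sessions
```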

    A Study of a Lightweight Distributed Cooperative Caching Infrastructure for Video Streaming Services

    Get PDF
    With the spread of on-demand video delivery (Video-on-Demand, VoD) services that let users watch videos at any time, Internet traffic is growing rapidly. Internet traffic is expected to triple within five years, and video was predicted to account for over 80% of Internet traffic by 2020. The growing traffic can be absorbed by adding or upgrading network equipment such as routers and switches, but continuous expansion is required as traffic grows, which is not economical. VoD service providers generally outsource the delivery of large content to Content Delivery Network (CDN) providers. CDN providers place cache servers close to users around the world, building large-scale cache networks. Each cache server copies data as content passes through the network and reuses it, reducing traffic. Because video content is rarely updated once uploaded, cache servers can eliminate redundant transfers; in this way they reduce the number of transfers to distant origin servers and efficiently reduce Internet traffic. However, cache capacity is limited and video content is continually added, so it is impractical for a single cache server to hold all video content. Moreover, because CDN providers' cache servers are deployed at a limited number of sites, they cannot reduce the traffic of the cache servers themselves or the traffic inside the Internet Service Providers (ISPs) that carry the paths to them. As a result, network paths become congested and traffic grows. CDN providers try to expand effective cache capacity by sharing cached content among multiple cache servers, aiming at traffic reduction and load balancing. Such schemes rely on traffic engineering that controls data transfer paths, but CDN providers lack knowledge of the ISP's physical network topology and link bandwidths, making it difficult to coordinate cache servers efficiently. Therefore, in recent years ISPs have been considering building distributed cooperative cache networks by placing cache servers inside their own networks and coordinating them through traffic engineering. With this approach the same operator manages both the network and the cache servers, enabling efficient traffic reduction. Such an ISP-managed cache network inside an ISP is called a Telco-CDN. Recent studies have proposed methods for efficiently managing Telco-CDN cache servers. Typically, rules are set so that multiple cache servers hold different content, expanding effective cache capacity and reducing traffic. However, such methods do not take the access frequency of each content item into account, so load concentrates on the few cache servers holding the most popular content. Other studies formulate and solve an optimization problem for efficient content placement to realize distributed cooperative caching with a high traffic-reduction effect. However, solving the optimization problem takes a long time, while the video access pattern of a VoD service changes by roughly 20-40% per hour, so by the time the computation finishes the placement has diverged from the optimum and the traffic-reduction effect is diminished. This thesis proposes two cache control algorithms that efficiently cache the access patterns of VoD services and combines them to reduce traffic. First, it proposes a hybrid cache algorithm that combines two different caching algorithms. By mixing different algorithms across the network, or by splitting the storage of a single cache server and mixing algorithms within it, the hybrid cache efficiently absorbs rapidly changing video accesses and maintains a high traffic-reduction effect. A Least Frequently Used (LFU)-based algorithm, which retains frequently accessed content, provides a high traffic-reduction effect, while a Least Recently Used (LRU)-based algorithm, which preferentially retains recently accessed content, follows rapid changes in access patterns.
Second, the thesis proposes a distributed cooperative cache control method using color tags, which efficiently controls content placement in the cache network. This method assigns color tags to both content and cache servers and caches content only when the colors match, distributing content across servers and expanding effective cache capacity. Specifically, color tags are applied to the LFU region of the hybrid cache described above, which serves as a large distributed cooperative region, while the small LRU region caches content regardless of tags, following changes in video access patterns. Assigning more colors to more frequently accessed content shortens the hop count from users, efficiently reducing traffic not only at the content origin servers but also inside the ISP network. A lightweight management scheme for the color tags and a routing algorithm that exploits them are also proposed, achieving a high traffic-reduction effect with low computational overhead. Evaluations of the hybrid cache algorithm show that, even when newly popular content is added, the LFU-based cache region maintains a high traffic-reduction effect while the LRU-based cache region keeps the traffic reduction stable. The color-tag-based distributed cooperative cache algorithm achieves a traffic-reduction effect close to the near-optimal control computed with a genetic algorithm, while keeping the computational overhead small. Furthermore, the routing algorithm exploiting color tags achieves a 31.9% traffic reduction compared with shortest-path routing. The University of Electro-Communications, 201
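    The hybrid cache described above can be sketched as a server whose large LFU region only admits content whose color tag matches the server's color, while a small LRU region caches anything. This is a simplified illustration under assumed semantics (class name, admission rule and sizes are the editor's), not the thesis's implementation:

```python
from collections import OrderedDict, Counter

class HybridColorCache:
    """Sketch: a color-tagged LFU region plus a small catch-all LRU region."""

    def __init__(self, color, lfu_size, lru_size):
        self.color = color
        self.lfu_size, self.lru_size = lfu_size, lru_size
        self.lfu, self.lru = {}, OrderedDict()
        self.freq = Counter()

    def get(self, content_id, colors):
        """Serve a request; return True on a cache hit, False on a miss."""
        self.freq[content_id] += 1
        if content_id in self.lfu or content_id in self.lru:
            if content_id in self.lru:
                self.lru.move_to_end(content_id)   # refresh LRU recency
            return True
        # Miss: admit into the region selected by the color tags.
        if self.color in colors:                   # match -> large LFU region
            if len(self.lfu) >= self.lfu_size:
                victim = min(self.lfu, key=lambda k: self.freq[k])
                if self.freq[victim] >= self.freq[content_id]:
                    return False                   # newcomer not popular enough
                del self.lfu[victim]
            self.lfu[content_id] = True
        else:                                      # no match -> small LRU region
            if len(self.lru) >= self.lru_size:
                self.lru.popitem(last=False)       # evict least recently used
            self.lru[content_id] = True
        return False

node = HybridColorCache("red", lfu_size=2, lru_size=1)
print(node.get("a", {"red"}))    # miss, admitted to LFU region
print(node.get("a", {"red"}))    # hit
print(node.get("b", {"blue"}))   # miss, color mismatch -> LRU region
print(node.get("b", {"blue"}))   # hit
```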

    Understanding and Efficiently Servicing HTTP Streaming Video Workloads

    Get PDF
    Live and on-demand video streaming has emerged as the most popular application for the Internet. One reason for this success is the pragmatic decision to use HTTP to deliver video content. However, while all web servers are capable of servicing HTTP streaming video workloads, web servers were not originally designed or optimized for video workloads. Web server research has concentrated on requests for small items that exhibit high locality, while video files are much larger and have a popularity distribution with a long tail of less popular content. Given the large number of servers needed to service millions of streaming video clients, there are large potential benefits from even small improvements in servicing HTTP streaming video workloads. To investigate how web server implementations can be improved, we require a benchmark to analyze existing web servers and test alternate implementations, but no such HTTP streaming video benchmark exists. One reason for the lack of a benchmark is that video delivery is undergoing rapid evolution, so we devise a flexible methodology and tools for creating benchmarks that can be readily adapted to changes in HTTP video streaming methods. Using our methodology, we characterize YouTube traffic from early 2011 using several published studies and implement a benchmark to replicate this workload. We then demonstrate that three different widely-used web servers (Apache, nginx and the userver) are all poorly suited to servicing streaming video workloads. We modify the userver to use asynchronous serialized aggressive prefetching (ASAP). Aggressive prefetching uses a single large disk access to service multiple small sequential requests, and serialization prevents the kernel from interleaving disk accesses, which together greatly increase throughput. 
Using the modified userver, we show that characteristics of the workload and server affect the best prefetch size to use, and we provide an algorithm that automatically finds a good prefetch size for a variety of workloads and server configurations. We conduct our own characterization of an HTTP streaming video workload, using server logs obtained from Netflix. We study this workload because, in 2015, Netflix alone accounted for 37% of peak-period North American Internet traffic. Netflix clients employ DASH (Dynamic Adaptive Streaming over HTTP) to switch between different bit rates based on changes in network and server conditions. We introduce the notion of chains of sequential requests to represent the spatial locality of workloads and find that even with DASH clients, the majority of bytes are requested sequentially. We characterize rate adaptation by separating sessions into transient, stable and inactive phases, each with distinct patterns of requests. We find that playback sessions are surprisingly stable; in aggregate, 5% of total session duration is spent in transient phases, 79% in stable and 16% in inactive phases. Finally, we evaluate prefetch algorithms that exploit knowledge about workload characteristics by simulating the servicing of the Netflix workload. We show that the workload can be serviced with either 13% lower hard drive utilization or 48% less system memory than a prefetch algorithm that makes no use of workload characteristics.
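    The notion of chains of sequential requests can be sketched as grouping byte-range requests that start exactly where the previous one ended. A simplified illustration of that idea (the request offsets and function name are hypothetical):

```python
def chains(requests):
    """Group (offset, length) requests into chains of sequential accesses.

    A request extends the current chain when it starts exactly where the
    previous request ended; otherwise a new chain begins (simplified
    sketch of the chain notion, per-file bookkeeping omitted).
    """
    result, current = [], []
    expected = None
    for offset, length in requests:
        if expected is not None and offset != expected:
            result.append(current)     # close the chain on a seek
            current = []
        current.append((offset, length))
        expected = offset + length
    if current:
        result.append(current)
    return result

# Hypothetical byte-range requests: two sequential runs, then a seek back.
reqs = [(0, 4), (4, 4), (8, 4), (100, 4), (104, 4), (0, 4)]
print([len(c) for c in chains(reqs)])   # chain lengths
```

    Long chains justify large prefetches: one big disk access can service every request in the chain.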

    Dynamic adaptive video streaming with minimal buffer sizes

    Get PDF
    Recently, adaptive streaming has been widely adopted in video streaming services to improve the Quality-of-Experience (QoE) of video delivery over the Internet. However, state-of-the-art bitrate adaptation achieves satisfactory performance only with extensive buffering of several tens of seconds. This leads to high playback latency in video delivery, which is undesirable, especially in the context of live content with a low upper bound on the latency. Therefore, this thesis aims at pushing the application of adaptive streaming to its limit with respect to the buffer size, which is the dominant factor of the streaming latency. In this work, we first address the minimum buffer size required in adaptive streaming, which provides us with guidelines for determining a reasonably low latency for streaming systems. Then, we tackle the fundamental challenge of achieving such low-latency streaming by developing a novel adaptation algorithm that stabilizes buffer dynamics despite a small buffer size. We also present further improvements by designing a novel adaptation architecture with low-delay feedback for the bitrate selection and by optimizing the underlying transport layer to offer efficient real-time streaming. Experimental evaluations demonstrate that our approach achieves superior QoE in adaptive video streaming, especially in the particularly challenging case of low-latency streaming.
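    A generic buffer-aware bitrate selector illustrates why a small buffer forces conservative choices; this is a textbook-style heuristic, not the thesis's algorithm (the thresholds, safety factors and bitrate ladder are assumptions):

```python
def select_bitrate(buffer_s, throughput_bps, bitrates, low=1.0, high=3.0):
    """Pick the highest sustainable bitrate from an ascending ladder.

    Drop to the lowest rung when the buffer is nearly empty, and cap
    aggressiveness while the buffer is small: with little buffer there is
    no slack to absorb a throughput estimation error.
    """
    if buffer_s < low:
        return bitrates[0]                 # panic: avoid imminent rebuffering
    safety = 0.5 if buffer_s < high else 0.9   # conservative on small buffers
    feasible = [b for b in bitrates if b <= safety * throughput_bps]
    return max(feasible) if feasible else bitrates[0]

ladder = [300_000, 1_000_000, 3_000_000, 6_000_000]   # bps
for buf in (0.5, 2.0, 5.0):
    print(buf, select_bitrate(buf, 5e6, ladder))
```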