MSPlayer: Multi-Source and multi-Path LeverAged YoutubER
Online video streaming on mobile devices has become extremely popular. YouTube,
for example, reported that the share of its traffic streamed to mobile devices
soared from 6% to more than 40% over the past two years. Moreover, people
constantly seek to stream high-quality video for a better experience while
often suffering from limited bandwidth. Thanks to
the rapid deployment of content delivery networks (CDNs), popular videos are
now replicated at different sites, and users can stream videos from close-by
locations with low latencies. As mobile devices nowadays are equipped with
multiple wireless interfaces (e.g., WiFi and 3G/4G), aggregating bandwidth for
high definition video streaming has become possible.
We propose a client-based video streaming solution, MSPlayer, that takes
advantage of multiple video sources as well as multiple network paths through
different interfaces. MSPlayer reduces start-up latency and provides high
quality video streaming and robust data transport in mobile scenarios. We
experimentally demonstrate our solution on a testbed and through the YouTube
video service.
Comment: accepted to ACM CoNEXT'1
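The multi-source, multi-path idea described above can be pictured as a greedy chunk scheduler: each video chunk is assigned to whichever source/path pair is estimated to finish downloading it earliest. This is a minimal illustrative sketch, not MSPlayer's actual algorithm; the function name, the path model, and the bandwidth figures below are all assumptions.

```python
import heapq

def schedule_chunks(num_chunks, chunk_bits, paths):
    """Greedy multi-path chunk scheduler: assign each chunk to the path
    (e.g. WiFi to one CDN site, LTE to another) that would finish it
    earliest, given the path's estimated bandwidth in bits/s.
    `paths` maps path name -> bandwidth; returns (assignment, makespan)."""
    # min-heap of (time the path becomes free, path name)
    heap = [(0.0, name) for name in sorted(paths)]
    heapq.heapify(heap)
    assignment = {}
    finish = 0.0
    for chunk in range(num_chunks):
        free_at, name = heapq.heappop(heap)
        done = free_at + chunk_bits / paths[name]
        assignment[chunk] = name
        finish = max(finish, done)
        heapq.heappush(heap, (done, name))
    return assignment, finish
```

With an 8 Mb/s WiFi path and a 2 Mb/s LTE path, the scheduler naturally hands the faster path four times as many chunks, which is the bandwidth-aggregation effect the abstract describes.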
An Experimental Study of the Server-based Unfairness Solutions for the Cross-Protocol Scenario of Adaptive Streaming over HTTP/3 and HTTP/2
Since the introduction of HTTP/3, research has focused on evaluating its influence on existing adaptive streaming over HTTP (HAS). Among this research, the cross-protocol unfairness between HAS over HTTP/3 (HAS/3) and HAS over HTTP/2 (HAS/2), which arises from their unrelated transport protocols, has attracted considerable attention. It has been found that HAS/3 clients tend to request higher bitrates than HAS/2 clients because the QUIC transport obtains more bandwidth for its HAS/3 clients than TCP does for its HAS/2 clients. As the problem originates in the transport layer, server-based unfairness solutions can likely help clients overcome it. Therefore, in this paper, an experimental study of the server-based unfairness solutions for the cross-protocol scenario of HAS/3 and HAS/2 is conducted. The results show that, while the bitrate guidance solution fails to help the clients achieve fairness, the bandwidth allocation solution provides superior performance.
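The bandwidth allocation solution mentioned above amounts to the server enforcing a transport-agnostic fair share at the bottleneck, so that neither QUIC nor TCP clients can grab more than their portion. The sketch below uses max-min fairness as one plausible allocation rule; the function and its interface are illustrative assumptions, not the paper's implementation.

```python
def max_min_allocate(capacity, demands):
    """Max-min fair share of server capacity across clients, regardless
    of whether each arrives over HTTP/3 (QUIC) or HTTP/2 (TCP).
    Clients demanding less than the fair share keep their demand; the
    leftover capacity is redistributed among the remaining clients.
    `demands` maps client id -> demanded rate; returns id -> allocation."""
    alloc = {}
    remaining = capacity
    pending = dict(demands)
    while pending:
        share = remaining / len(pending)
        # clients whose demand fits under the current equal share
        satisfied = {c: d for c, d in pending.items() if d <= share}
        if not satisfied:
            for c in pending:       # everyone left gets the equal share
                alloc[c] = share
            return alloc
        for c, d in satisfied.items():
            alloc[c] = d
            remaining -= d
            del pending[c]
    return alloc
```

Because the allocation depends only on demands and capacity, a greedy HAS/3 client demanding more than its share ends up capped at the same rate as an equally demanding HAS/2 client.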
The QUIC Fix for Optimal Video Streaming
Within a few years of its introduction, QUIC has gained traction: a
significant chunk of traffic is now delivered over QUIC. The networking
community is actively engaged in debating the fairness, performance, and
applicability of QUIC for various use cases, but these debates are centered
around a narrow, common theme: how does the new reliable transport built on top
of UDP fare in different scenarios? Support for unreliable delivery in QUIC
remains largely unexplored.
The option for delivering content unreliably, as in a best-effort model,
deserves the QUIC designers' and community's attention. We propose extending
QUIC to support unreliable streams and present a simple approach for
implementation. We discuss a simple use case of video streaming---an
application that dominates the overall Internet traffic---that can leverage the
unreliable streams and potentially bring immense benefits to network operators
and content providers. To this end, we present a prototype implementation that,
by using both the reliable and unreliable streams in QUIC, outperforms both TCP
and QUIC in our evaluations.
Comment: Published at the ACM CoNEXT Workshop on the Evolution, Performance, and
Interoperability of QUIC (EPIQ)
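A minimal sketch of how a streaming application might split video data between QUIC's reliable streams and the proposed unreliable ones: keyframes, whose loss would corrupt a whole group of pictures, travel reliably, while delta frames tolerate loss and can use a best-effort stream. The frame model and the policy below are assumptions for illustration, not the paper's prototype.

```python
def classify_frames(frames):
    """Partition a group of pictures between reliable and unreliable
    QUIC streams. Losing an I-frame breaks decoding of every frame that
    references it, so I-frames go on a reliable stream; P/B frames are
    tolerable losses and can be sent best-effort (unreliably).
    `frames` is a list of frame-type chars; returns two index lists."""
    reliable, unreliable = [], []
    for i, ftype in enumerate(frames):
        # hypothetical policy: only keyframes need guaranteed delivery
        (reliable if ftype == "I" else unreliable).append(i)
    return reliable, unreliable
```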
Performance evaluation of caching techniques for video on demand workload in named data network
The rapidly growing use of the Internet today is driven mainly by content
distribution, a trend that has spurred the emergence of Information-Centric Networking (ICN) in the wider domains of academia and industry. Named Data Network (NDN) is one of the ICN architectures, and it has been emphasized as a video traffic architecture that ensures smooth communication between the requester and receiver of online video. The research problem of the current study is congestion in the Video on Demand (VoD) workload caused by the frequent storing of signed content objects in local repositories, which leads to buffering problems and data packet loss. The study assesses NDN caching techniques in order to select the cache replacement technique best suited to dealing with congestion, and evaluates its performance. To do so, the study adopts a research process based on the Design Research Methodology (DRM) and a VoD approach. Datasets, the Internet2 network topology, and video-view statistics were gathered from the PPTV platform; a total of 221 servers is connected to the network from the same access points as in the real deployment
of PPTV. An NS3 simulation analyzes the performance of the cache replacement
techniques (LRU, LFU, and FIFO) for VoD in NDN, in terms of cache hit ratio, throughput, and server load, on the Internet2 topology with randomly distributed nodes, and produces reasonable outcomes. Based on the results, the LFU technique handles congestion best among the presented
techniques: its cache hit ratio, throughput, and server load yield the lowest
congestion rate. The researcher therefore concluded that the efficiency of the different replacement techniques needs to be investigated thoroughly in order to provide the insights
necessary to implement them in a given context. This result also enriches
the current understanding of how replacement techniques handle different cache sizes. Finally, the observed performance characteristics suggest a cache model that provides relatively fast running times across a broad range of embedded applications.
Performance evaluation of caching placement algorithms in named data network for video on demand service
The purpose of this study is to evaluate the performance of caching placement algorithms
(LCD, LCE, Prob, Pprob, Cross, Centrality, and Rand) in Named Data Network (NDN) for Video on Demand (VoD). The study aims to increase service quality and to decrease download time. Two stages of activity produced the outcome of the study. The first determines the causes of delay in the NDN cache algorithms used for the VoD workload. The second evaluates the seven cache placement algorithms on a cloud of video content in terms of the key performance metrics: delay time, average cache hit ratio, total reduction in network footprint, and reduction in server load. NS3 simulations and the Internet2 topology were used to evaluate and analyze the findings for each algorithm, and to compare the results across cache sizes of 1 GB, 10 GB, 100 GB, and 1 TB. The study shows that varied user requests for online videos lead to delay in network performance, and that delay is also caused by the sharp increase in video
requests. The outcomes further lead to the conclusion that increasing cache capacity
gives the placement algorithms a significant increase in average cache hit
ratio, a reduction in server load, and a larger total reduction in network footprint, which together minimize delay time. In addition, based on the results obtained,
Centrality is the worst of the cache placement algorithms.
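Two of the placement algorithms compared above, LCE and LCD, differ only in which routers along the delivery path store a returning data packet. The sketch below illustrates that difference; the path model and function are illustrative assumptions, not the study's simulation code.

```python
def place_on_path(path, policy, hit_index):
    """Decide which routers on the delivery path cache a returning data
    packet. `path` lists router ids from the content source (index 0)
    toward the consumer; `hit_index` is where the request was satisfied.
    LCE (Leave Copy Everywhere) caches at every downstream hop, while
    LCD (Leave Copy Down) caches only one hop below the hit, letting
    popular content migrate toward consumers one request at a time."""
    downstream = path[hit_index + 1:]   # hops between hit and consumer
    if policy == "LCE":
        return list(downstream)
    if policy == "LCD":
        return downstream[:1]
    raise ValueError("unknown policy: " + policy)
```

LCE fills every cache quickly but wastes capacity on duplicates, whereas LCD spreads copies more conservatively; that trade-off is exactly what the hit-ratio and footprint metrics above measure.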