
    Multi-stream partitioning and parity rate allocation of scalable video content for efficient IPTV delivery

    We address the joint problem of clustering heterogeneous clients and allocating scalable video source rate and FEC redundancy in IPTV systems. We propose a streaming solution that delivers varying portions of the scalably encoded content to different client subsets, together with suitably selected parity data. We formulate an optimization problem in which the receivers are clustered depending on the quality of their connection, so that the average video quality in the IPTV system is maximized. We then propose a novel algorithm for optimally determining the client clusters, the source and parity rate allocation to each cluster, and the set of serving rates at which the source-plus-parity data is delivered to the clients. We implement our system through a novel design based on scalable video coding that allows for much more efficient network utilization than source versioning. Through simulations we demonstrate that the proposed solution substantially outperforms baseline IPTV schemes that multicast the same source and FEC streams to the whole client population, as is commonly done in practice today.
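The clustering step can be illustrated with a small sketch. Assuming, purely for illustration, that clients are characterized only by their download capacity and that the objective is the total served rate, a brute-force search over contiguous partitions of the sorted capacities mirrors the flavor of the optimization (the paper's actual objective is average video quality and includes FEC terms):

```python
import itertools

def cluster_rates(capacities, k):
    """Illustrative sketch: partition sorted client capacities into k
    contiguous clusters; each cluster is served at its slowest member's
    rate, and we maximize the total served rate (a stand-in for the
    paper's video-quality objective)."""
    caps = sorted(capacities)
    n = len(caps)
    best, best_rates = -1.0, None
    # choose k-1 split points between consecutive clients
    for splits in itertools.combinations(range(1, n), k - 1):
        bounds = [0, *splits, n]
        # each cluster's serving rate = capacity of its slowest client
        rates = [caps[bounds[i]] for i in range(k)]
        total = sum(rates[i] * (bounds[i + 1] - bounds[i]) for i in range(k))
        if total > best:
            best, best_rates = total, rates
    return best_rates, best

rates, utility = cluster_rates([1.0, 1.2, 3.0, 3.5, 8.0], 2)
```

With two clusters, the search isolates the fast client at 8.0 rather than dragging every cluster down to the slowest members, which is the intuition behind clustering heterogeneous receivers.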

    Error and Congestion Resilient Video Streaming over Broadband Wireless

    In this paper, error resilience is achieved by adaptive, application-layer rateless channel coding, which is used to protect H.264/Advanced Video Coding (AVC) codec data-partitioned videos. A packetization strategy is an effective tool to control error rates and, in the paper, source-coded data partitioning serves to allocate smaller packets to more important compressed video data. The scheme for doing this is applied to real-time streaming across a broadband wireless link. The advantages of rateless code rate adaptivity are then demonstrated in the paper. Because the data partitions of a video slice are each assigned to different network packets, in congestion-prone wireless networks the increased number of packets per slice and their size disparity may increase the packet loss rate from buffer overflows. As a form of congestion resilience, this paper recommends packet-size dependent scheduling as a relatively simple way of alleviating the buffer-overflow problem arising from data-partitioned packets. The paper also contributes an analysis of data partitioning and packet sizes as a prelude to considering scheduling regimes. The combination of adaptive channel coding and prioritized packetization for error resilience with packet-size dependent packet scheduling results in a robust streaming scheme specialized for broadband wireless and real-time streaming applications such as video conferencing, video telephony, and telemedicine.
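The packet-size dependent scheduling idea can be sketched as follows. The priority ordering (partition A before B before C) follows the data-partitioning importance described above, while the transmit-budget model and the concrete packet sizes are illustrative assumptions, not the paper's exact regime:

```python
import heapq

def schedule(packets, budget):
    """Sketch of packet-size dependent scheduling: under a transmit-budget
    crunch, smaller, higher-priority data-partition packets (A < B < C)
    are sent first, so any drops fall on the least important data."""
    prio = {"A": 0, "B": 1, "C": 2}
    # heap ordered by (partition priority, packet size)
    heap = [(prio[p], size, p) for p, size in packets]
    heapq.heapify(heap)
    sent = []
    while heap and budget > 0:
        _, size, p = heapq.heappop(heap)
        if size <= budget:  # skip packets that no longer fit
            budget -= size
            sent.append((p, size))
    return sent
```

For example, with a 700-byte budget the two small partition-A packets and the partition-B packet go out, while the large partition-C packet is the one sacrificed.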

    Provider-Controlled Bandwidth Management for HTTP-based Video Delivery

    Over the past few years, a revolution in video delivery technology has taken place as mobile viewers and over-the-top (OTT) distribution paradigms have significantly changed the landscape of video delivery services. For decades, high quality video was only available in the home via linear television or physical media. Though Web-based services brought video to desktop and laptop computers, the dominance of proprietary delivery protocols and codecs inhibited research efforts. The recent emergence of HTTP adaptive streaming protocols has prompted a re-evaluation of legacy video delivery paradigms and introduced new questions as to the scalability and manageability of OTT video delivery. This dissertation addresses the question of how to give content and network service providers the ability to monitor and manage large numbers of HTTP adaptive streaming clients in an OTT environment. Our early work focused on demonstrating the viability of server-side pacing schemes to produce an HTTP-based streaming server. We also investigated the ability of client-side pacing schemes to work with both commodity HTTP servers and our HTTP streaming server. Continuing our client-side pacing research, we developed our own client-side data proxy architecture, which was implemented on a variety of mobile devices and operating systems. We used the portable client architecture as a platform for investigating different rate adaptation schemes and algorithms. We then concentrated on evaluating the network impact of multiple adaptive bitrate clients competing for limited network resources, and on developing schemes for enforcing fair access to network resources. The main contribution of this dissertation is the definition of segment-level client and network techniques for enforcing class of service (CoS) differentiation between OTT HTTP adaptive streaming clients.
    We developed a segment-level network proxy architecture which works transparently with adaptive bitrate clients through the use of segment replacement. We also defined a segment-level rate adaptation algorithm which uses download aborts to enforce CoS differentiation across distributed independent clients. The segment-level abstraction more accurately models application-network interactions and highlights the difference between segment-level and packet-level time scales. Our segment-level CoS enforcement techniques provide a foundation for creating scalable managed OTT video delivery services.
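A minimal sketch of one segment-level adaptation step with CoS weighting and a download abort might look like this. The weight-scaling rule, the deadline test, and the bitrate ladder are assumptions for illustration, not the dissertation's exact algorithm:

```python
def next_action(bitrates, measured_tput, cos_weight, seg_duration=2.0):
    """Sketch of a segment-level adaptation step: scale usable throughput
    by a class-of-service weight, pick the highest sustainable bitrate,
    and signal an abort if the download would overrun the segment
    deadline (all units in kbps and seconds)."""
    target = measured_tput * cos_weight
    # highest rung not exceeding the CoS-weighted throughput
    choice = max((b for b in bitrates if b <= target), default=min(bitrates))
    est_time = choice * seg_duration / measured_tput
    abort = est_time > seg_duration  # would miss the real-time deadline
    return choice, abort
```

A premium client (weight near 1.0) keeps climbing the ladder, while a lower class (smaller weight) voluntarily caps itself, which is one way independent clients can realize CoS differentiation without coordination.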

    Adaptive delay-constrained internet media transport

    Reliable transport-layer Internet protocols do not satisfy the requirements of packetized real-time multimedia streams. This thesis motivates and defines predictable reliability as a novel, capacity-approaching transport paradigm that supports an application-specific level of reliability under a strict delay constraint. This paradigm is implemented in a new protocol design -- the Predictably Reliable Real-time Transport protocol (PRRT). To achieve the desired level of reliability predictably, proactive and reactive error control must be optimized under the application's delay constraint. Hence, predictably reliable error control relies on stochastic modeling of the protocol response to the modeled packet-loss behavior of the network path. The result of the joint modeling is periodically evaluated by a reliability control policy that validates the protocol configuration under the application constraints and under consideration of the available network bandwidth. The adaptation of the protocol parameters is formulated as a combinatorial optimization problem that is solved by a fast search algorithm incorporating explicit knowledge about the search space. Experimental evaluation of PRRT in real Internet scenarios demonstrates that predictably reliable transport meets the strict QoS constraints of high-quality audio-visual streaming applications.
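The stochastic-modeling step can be illustrated for the simplest possible case: an (n, k) erasure code on an i.i.d. loss channel, ignoring the retransmission and delay terms that PRRT also models. The search below is a deliberately simplified stand-in for the reliability control policy:

```python
from math import comb

def residual_loss(n, k, p):
    """Probability that an (n, k) erasure-coded block cannot be decoded
    on an i.i.d. loss channel with loss rate p, i.e. more than n - k of
    its n packets are lost."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

def min_parity(k, p, target, n_max=64):
    """Smallest redundancy n - k whose residual loss meets the target --
    the flavor of search a reliability control policy performs
    (simplified: no delay or retransmission terms)."""
    for n in range(k, n_max + 1):
        if residual_loss(n, k, p) <= target:
            return n - k
    return None
```

With k = 4 source packets and 10% loss, no coding leaves a 1 - 0.9^4 ≈ 34% block failure rate, and four parity packets are needed to push residual loss below 10^-3; a delay constraint would additionally cap the admissible n.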

    Adaptive robust video broadcast via satellite

    © 2016 Springer Science+Business Media New York. With increasing demand for multimedia content over channels with limited bandwidth and heavy packet losses, higher coding efficiency and stronger error resiliency are required more than ever before. Coding efficiency and error resiliency are opposing goals that require appropriate balancing. On the source-coding side, the H.264/AVC video encoder can provide high compression with strong error resiliency, while on the channel-coding side the raptor code has proven its effectiveness, requiring only modest overhead to recover lost data. This paper compares the efficiency and overhead of raptor codes and of the error resiliency techniques of video standards so that the two can be balanced for better compression and quality. The result is further improved by confining the robust stream to periods of poor channel conditions, adaptively switching between the video streams using the switching frames introduced in H.264/AVC. In this case the video stream is initially transmitted without error resiliency, assuming the channel to be completely error free, and robustness is then increased based on the channel conditions and/or user demand. The results showed that although switching can increase the peak signal-to-noise ratio in the presence of losses, its excessive repetition can be irritating to viewers. Therefore, to evaluate the perceptual quality of the video streams and to find the optimum number of switches during a session, the streams were scored by different viewers for quality of enhancement. The proposed scheme shows an increase of 3 to 4 dB in peak signal-to-noise ratio with acceptable quality of enhancement.
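The adaptive switching policy can be sketched as a simple hysteresis controller. The loss thresholds and minimum hold time below are illustrative assumptions; the hold time captures the paper's finding that excessive switching irritates viewers:

```python
def plan_switches(loss_rates, up=0.05, down=0.01, min_hold=3):
    """Sketch of adaptive stream switching: start on the efficient
    (non-robust) stream, switch to the robust stream when observed loss
    exceeds `up`, switch back when it falls below `down`, and hold each
    stream for at least `min_hold` intervals to limit switch churn."""
    stream, hold, plan = "plain", 0, []
    for p in loss_rates:
        if hold >= min_hold:  # only consider switching after the hold
            if stream == "plain" and p > up:
                stream, hold = "robust", 0
            elif stream == "robust" and p < down:
                stream, hold = "plain", 0
        plan.append(stream)
        hold += 1
    return plan

plan = plan_switches([0.0, 0.0, 0.0, 0.08, 0.09, 0.09, 0.0, 0.0, 0.0, 0.0])
```

A real deployment would switch at H.264/AVC SP/SI switching frames rather than arbitrary interval boundaries; the hysteresis gap between `up` and `down` is what keeps the number of switches per session low.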

    An Energy-efficient Live Video Coding and Communication over Unreliable Channels

    In the field of multimedia communications there exist many important applications in which live or real-time video is captured by a camera, compressed, and transmitted over a channel that can be very unreliable while, at the same time, the computational resources or battery capacity of the transmitting device are very limited. Such a scenario holds, for example, for video transmission in space missions, vehicle-to-infrastructure video delivery, multimedia wireless sensor networks, wireless endoscopy, video coding on mobile phones, high-definition wireless video surveillance, and so on. Given these restrictions, the development of efficient video coding techniques for such applications is a challenging problem. The most popular video compression standards, such as H.264/AVC, are based on the hybrid video coding concept, which is very efficient when video is encoded off-line, or non-real-time, and the pre-encoded video is played back. However, the high computational complexity of the encoding and the high sensitivity of the hybrid video bit stream to losses in the communication channel constitute a significant barrier to using these standards for the applications mentioned above. In this thesis, as an alternative to the standards, video coding based on the three-dimensional discrete wavelet transform (3-D DWT) is considered as a candidate to provide a good trade-off between encoding efficiency, computational complexity, and robustness to channel losses. Efficient tools are proposed to reduce the computational complexity of the 3-D DWT codec. These tools cover all levels of the codec's development, including adaptive binary arithmetic coding, bit-plane entropy coding, the wavelet transform, packet loss protection based on error-correction codes, and bit rate control. They can be implemented as an end-to-end solution and used directly in real-life scenarios.
    The thesis provides theoretical, simulation, and real-world results which show that the proposed 3-D DWT codec can be preferable to the standards for live video coding and communication over highly unreliable channels and/or in systems where the video encoding computational complexity or power consumption plays a critical role.
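The transform at the core of such a codec can be sketched with a one-level 3-D Haar DWT, the simplest wavelet; the thesis codec additionally layers entropy coding, bit-plane coding, rate control, and FEC on top of the transform:

```python
import numpy as np

def haar_3d(video):
    """One-level 3-D Haar DWT sketch: average/difference pairs of samples
    along time, then height, then width, splitting the cube into eight
    subbands (low-pass halves first along each axis). Each axis length
    must be even."""
    out = video.astype(float)
    for axis in range(3):
        a = np.take(out, range(0, out.shape[axis], 2), axis=axis)
        b = np.take(out, range(1, out.shape[axis], 2), axis=axis)
        out = np.concatenate(((a + b) / 2, (a - b) / 2), axis=axis)
    return out

# A constant cube compacts all energy into the low-pass (LLL) subband,
# which is why the transform is a good basis for compression.
cube = np.ones((4, 4, 4))
coeffs = haar_3d(cube)
```

Because the transform needs no motion estimation, its complexity is low and fixed per pixel, which is the property that makes it attractive for the power-constrained scenarios listed above.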

    Quality of Experience and Adaptation Techniques for Multimedia Communications

    The widespread use of multimedia services on the World Wide Web and the advances in end-user portable devices have recently increased user demands for better quality. Moreover, providing these services seamlessly and ubiquitously on wireless networks and with user mobility poses hard challenges. To meet these challenges and fulfill end-user requirements, suitable strategies need to be adopted at both the application level and the network level. At the application level, rate and quality have to be adapted to time-varying bandwidth limitations, whereas on the network side a mechanism for efficient use of network resources has to be implemented, to provide a better end-user Quality of Experience (QoE) through better Quality of Service (QoS). The work in this thesis addresses these issues by first investigating multi-stream rate adaptation techniques for Scalable Video Coding (SVC) applications aimed at a fair provision of QoE to end-users. Rate-Distortion (R-D) models for real-time and non-real-time video streaming are proposed, and a rate adaptation technique is developed to minimize, with fairness, the distortion of multiple videos with different complexities. To provide resiliency against errors, the effect of Unequal Error Protection (UXP) based on Reed-Solomon (RS) encoding with erasure correction has also been included in the proposed R-D modelling. Moreover, to improve network-level QoE support for multimedia applications sensitive to delay, jitter, and packet drops, a technique to prioritise different traffic flows using specific QoS classes within an intermediate DiffServ network integrated with a WiMAX access system is investigated. Simulations were performed to test the network under different congestion scenarios.
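The distortion-fair allocation idea can be illustrated under a deliberately simple hyperbolic R-D model D_i(R_i) = a_i / R_i, where a_i captures a video's complexity. The model form is an assumption for illustration; the thesis fits its own R-D models for real-time and non-real-time streaming:

```python
def fair_allocation(a, r_total):
    """Sketch of distortion-fair rate allocation: under D_i = a_i / R_i,
    equalizing distortion across streams subject to a total-rate budget
    gives each stream a rate proportional to its complexity a_i, and
    every stream ends at the same distortion."""
    total_a = sum(a)
    rates = [ai * r_total / total_a for ai in a]
    common_d = total_a / r_total  # the shared, equalized distortion
    return rates, common_d
```

For example, with complexities [1.0, 3.0] and a 4.0 Mbps budget, the harder video receives 3.0 Mbps and the easier one 1.0 Mbps, and both land at distortion 1.0; splitting the budget equally would instead leave the complex video three times worse off, which is the unfairness the technique avoids.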