
    Real-time interactive video streaming over lossy networks: high performance low delay error resilient algorithms

    According to Cisco's forecast, two-thirds of the world's mobile data traffic and 62 percent of consumer Internet traffic will be video by the end of 2016. However, wireless networks and the Internet are unreliable: video traffic may suffer packet loss and delay. Robust video streaming over unreliable networks such as the Internet and wireless networks is therefore of great importance. For real-time interactive applications such as video conferencing and video telephony, the allowed end-to-end delay is tight, which makes robust streaming an even harder task. In this thesis, we investigate robust video streaming for real-time interactive applications with a limited tolerated end-to-end delay. Intra macroblock refreshment is an effective tool to stop error propagation in the prediction loop of the video decoder, whereas redundant coding is a commonly used method to prevent errors from occurring in video transmission over lossy networks. This thesis proposes two schemes that jointly use intra macroblock refreshment and redundant coding: in addition to intra coding, two redundant coding methods are added to enhance the transmission robustness of the coded bitstream. The selection of error-resilient coding tools (intra coding and/or redundant coding) and the parameters for redundant coding are determined by end-to-end rate-distortion optimization. Another category of error-resilient methods uses forward error correction (FEC) codes. FEC is widely studied for protecting streamed video over unreliable networks, with Reed-Solomon (RS) erasure codes as a commonly used implementation.
As a block-based error-correcting code, RS benefits from a larger block size on the one hand, while on the other hand a large block leads to long delay that is intolerable for real-time video applications. This thesis proposes two sub-GOP (Group of Pictures, formed by an I-frame and all the following P/B-frames) based FEC schemes to improve the performance of Reed-Solomon codes for real-time interactive video. The first, named DSGF (Dynamic Sub-GOP FEC Coding), is designed for the ideal case, in which network transmission delay is ignored. The second, named RVS-LE (Real-time Video Streaming scheme exploiting the Late- and Early-arrival packets), is more practical: it accounts for network delay and fully exploits late- and early-arrival packets. In both approaches, the sub-GOP, which contains more than one video frame, is dynamically tuned and used as the RS coding block to obtain the best performance. Although the overall error-resilient performance of DSGF is higher than that of conventional FEC schemes, which protect the streamed video frame by frame, its video quality fluctuates within the sub-GOP. To mitigate this problem, the thesis proposes another real-time video streaming scheme using a randomized expanding Reed-Solomon code. In this scheme, the Reed-Solomon coding block includes not only the video packets of the current frame but also all the video packets of previous frames in the current group of pictures (GOP). At the decoder, the parity-check equations of the current frame are solved jointly with those of the previous frames. Since video packets of subsequent frames are not included in the RS coding block, no delay is incurred waiting for video or parity packets of following frames at either the encoding or decoding side.
The main contribution of this thesis is an investigation of the trade-off between the video transmission delay caused by FEC encoding/decoding dependency, FEC error-resilient performance, and computational complexity. Using the methods proposed in this thesis, appropriate error-resilient tools and system parameters can be selected based on the characteristics of the video sequence, the application requirements, and the available channel bandwidth and computational resources. For example, for applications that can tolerate relatively long delay, the sub-GOP-based approach is a suitable solution; for applications with stringent end-to-end delay and sufficient computational resources (e.g., a fast CPU), the randomized expanding Reed-Solomon code is a wise choice.
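The block-size/delay trade-off above can be illustrated with a toy block code: the sketch below groups the packets of a sub-GOP into one coding block and appends a single XOR parity packet as a stand-in for the Reed-Solomon code actually used in the thesis (the function names and the single-parity simplification are illustrative assumptions, not the thesis implementation).

```python
# Toy sub-GOP block FEC: one XOR parity packet over the whole block stands
# in for the Reed-Solomon code (illustrative simplification).

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode_block(packets):
    """Append one XOR parity packet computed over all packets in the sub-GOP."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return packets + [parity]

def recover(survivors):
    """With a single erasure, the XOR of all surviving packets (data and
    parity) reconstructs the lost packet."""
    out = survivors[0]
    for p in survivors[1:]:
        out = xor_bytes(out, p)
    return out
```

A real RS(n, k) code would add n - k parity packets and recover up to n - k erasures; the delay cost is the same in both cases, since decoding can only begin once the whole block has arrived, which is exactly why block size must be tuned for real-time use.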

    eCMT-SCTP: Improving Performance of Multipath SCTP with Erasure Coding Over Lossy Links

    The performance of transport protocols on lossy links is a well-researched topic; however, only a few proposals exploit erasure coding within the multipath transport protocol context. In this paper, we investigate performance improvements of multipath CMT-SCTP through a novel integration of an on-the-fly erasure code with the congestion control and reliability mechanisms. Our contributions include: integration of the transport protocol and erasure codes with regard to congestion control; a proposal for a variable retransmission delay parameter (aRTX) adjustment; and a simulation-based performance evaluation of CMT-SCTP with erasure coding. We have implemented the explicit congestion notification (ECN) and erasure coding schemes in NS-2, and demonstrated improvements in both application goodput and the reduction of spurious retransmissions. Our results show goodput improvements of 10% to 80% under lossy network conditions without a significant penalty, with only minimal overhead from the encoding-decoding process.
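The aRTX parameter above is a variable retransmission delay. One plausible adjustment rule is sketched below (the rule, constants, and function names are assumptions for illustration, not the authors' algorithm): back off when ECN signals congestion, and wait longer before retransmitting when the erasure code is likely to repair the gap anyway, which is how spurious retransmissions decline.

```python
# Illustrative aRTX-style adjustment of the retransmission delay (all
# constants and names are assumptions, not the eCMT-SCTP algorithm).

def adjust_artx(artx, ecn_marked, fec_recoverable,
                min_artx=0.05, max_artx=1.0):
    if ecn_marked:
        artx *= 2.0    # congestion signalled: back off sharply
    elif fec_recoverable:
        artx *= 1.25   # erasure code will likely repair the gap:
                       # wait longer, avoiding a spurious retransmit
    else:
        artx *= 0.9    # unrecoverable loss: retransmit more eagerly
    return max(min_artx, min(max_artx, artx))
```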

    Joint On-the-Fly Network Coding/Video Quality Adaptation for Real-Time Delivery

    This paper introduces a redundancy adaptation algorithm for an on-the-fly erasure network coding scheme called Tetrys in the context of real-time video transmission. The algorithm exploits the relationship between the redundancy ratio used by Tetrys and the gain or loss in encoding bit rate from changing a video quality parameter, the quantization parameter (QP). Our evaluations show that with equal or lower bandwidth occupation, video protected by Tetrys with the redundancy adaptation algorithm obtains a PSNR gain of up to 4 dB or more compared to video without Tetrys protection. We demonstrate that the Tetrys redundancy adaptation algorithm copes well with variations in both the loss pattern and the delay induced by the network. We also show that Tetrys with the redundancy adaptation algorithm outperforms FEC both with and without redundancy adaptation.
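The interplay the paper exploits, between the redundancy ratio and the bit-rate change from adjusting QP, can be sketched as a toy budget rule (the 1.5x margin, the "rate halves per +6 QP" rule of thumb, and the function names are illustrative assumptions, not the published algorithm):

```python
# Toy coupling of redundancy ratio and QP under a bandwidth budget
# (constants are assumptions, not the paper's model).

def choose_redundancy(loss_rate, margin=1.5):
    """Pick a redundancy ratio a little above the measured loss rate."""
    return min(0.5, loss_rate * margin)

def adapt_qp(qp, video_rate, redundancy, budget):
    """Raise QP (coarser quantization, lower bit rate) until the video plus
    its repair traffic fits the bandwidth budget."""
    while video_rate * (1 + redundancy) > budget and qp < 51:
        qp += 1
        video_rate *= 2 ** (-1 / 6)  # assumed: rate roughly halves per +6 QP
    return qp, video_rate
```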

    Characterization of Band Codes for Pollution-Resilient Peer-to-Peer Video Streaming

    We provide a comprehensive characterization of band codes (BC) as a resilient-by-design solution to pollution attacks in network coding (NC)-based peer-to-peer live video streaming. Consider a single malicious node injecting bogus coded packets into the network: the recombinations at the nodes generate an avalanche of new bogus coded packets, so the malicious node can cripple the communication by injecting only a handful of polluted packets. Pollution attacks are typically addressed by identifying the malicious nodes and isolating them from the network. Pollution detection is, however, not straightforward in NC because the nodes exchange coded packets; similarly, identifying malicious nodes is complicated by the ambiguity between malicious nodes and nodes that have involuntarily relayed polluted packets. This paper addresses pollution attacks through a radically different approach based on BCs, a family of rateless codes originally designed to control NC decoding complexity in mobile applications. Here, we exploit BCs for an entirely different purpose: recombining the packets at the nodes so as to keep pollution from propagating, by adaptively adjusting the coding parameters. Our streaming experiments show that BCs curb the propagation of pollution and restore the quality of the distributed video stream.
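The key mechanism, recombining only within a limited "band" of packets so that a polluted packet cannot contaminate the whole stream, can be sketched as follows (GF(2) coefficients, all names, and the band-halving rule are simplifying assumptions; actual band codes use a richer construction and adaptation):

```python
import random

# Toy band-limited recombination over GF(2) (illustrative assumptions
# throughout; not the band-code construction from the paper).

def band_combine(packets, band_start, band_width, rng):
    """Emit a coded packet combining only symbols inside the current band,
    so a polluted packet can contaminate at most one band."""
    band = packets[band_start:band_start + band_width]
    coeffs = [rng.randint(0, 1) for _ in band]
    if not any(coeffs):
        coeffs[0] = 1  # avoid the useless all-zero combination
    out = bytes(len(band[0]))
    for c, p in zip(coeffs, band):
        if c:
            out = bytes(a ^ b for a, b in zip(out, p))
    return coeffs, out

def on_pollution(band_width, min_width=1):
    """Shrink the band when pollution is detected, limiting propagation."""
    return max(min_width, band_width // 2)
```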

    Efficient and Effective Schemes for Streaming Media Delivery

    The rapid expansion of the Internet and the increasingly wide deployment of wireless networks make it possible to deliver streaming media content to users anywhere, anytime. To ensure a good user experience, it is important to combat adverse network effects such as delay, loss, and jitter. In this thesis, we first study efficient loss recovery schemes that require only XOR operations. In particular, we propose a novel scheme capable of recovering up to three packet losses with the lowest complexity among all known schemes. We also propose an efficient decoding algorithm for array codes, which achieves significant throughput gains and energy savings over conventional codes. We believe these schemes are applicable to streaming applications, especially in wireless environments. We then study quality adaptation schemes for client buffer management. Our control-theoretic approach yields an efficient online rate control algorithm with analytically tractable performance. Extensive experimental results show that three goals are achieved: fast startup, continuous playback in the face of severe congestion, and maximal quality and smoothness over the entire streaming session. The scheme is then extended to streaming with a limited number of quality levels, making it directly applicable to existing systems.
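A control-theoretic rate controller of the kind described can be sketched as a simple proportional loop that steers the client buffer toward a target level (the gain, bounds, and function names are illustrative assumptions, not the thesis algorithm):

```python
# Illustrative proportional controller for client-buffer-driven rate
# adaptation (gain and bounds are assumptions, not the thesis design).

def next_rate(buffer_s, target_s, bandwidth_kbps,
              k=0.05, min_kbps=100.0, max_kbps=4000.0):
    """Request a rate above the estimated bandwidth when the buffer is
    above target (safe to drain it), and below when the buffer is low
    (let it refill), clamped to the available quality range."""
    error = buffer_s - target_s
    rate = bandwidth_kbps * (1.0 + k * error)
    return max(min_kbps, min(max_kbps, rate))
```

A proportional rule like this trades off exactly the goals the thesis lists: the clamp bounds give fast startup, the refill branch preserves continuous playback under congestion, and the small gain keeps quality smooth.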

    Exploiting Flow Relationships to Improve the Performance of Distributed Applications

    Application performance continues to be an issue even as Internet bandwidth increases. There are many reasons for poor application performance, including unpredictable network conditions, long round-trip times, inadequate transmission mechanisms, and suboptimal application designs. In this work, we propose to exploit flow relationships as a general means to improve Internet application performance. We define a relationship to exist between two flows if the flows exhibit temporal proximity within the same scope, where a scope may be either between two hosts or between two clusters of hosts. Temporal proximity can be either parallel or near-term sequential. As part of this work, we first observe that flow relationships are plentiful and can be exploited to improve application performance. Second, we establish a framework of possible techniques for exploiting flow relationships. In this framework, we group the improvements these techniques can bring into several types and use a taxonomy that divides Internet applications into categories based on their traffic characteristics and performance concerns. This approach allows us to investigate how a technique helps a group of applications rather than a particular one. Finally, we investigate several specific techniques under this framework and use them to illustrate how flow relationships can be exploited to achieve a variety of improvements. We propose and evaluate a list of techniques, including piggybacking related domain names, data piggybacking, enhanced TCP ACKs, packet aggregation, and critical packet piggybacking. We use them as examples of how particular flow relationships can improve applications in different ways, such as reducing round trips, providing better-quality information, reducing the total number of packets, and avoiding timeouts.
Results show that piggybacking related domain names can significantly reduce local cache misses and correspondingly reduce the number of domain name messages. The data piggybacking technique can provide packet-efficient throughput in the reverse direction of a TCP connection without sacrificing forward throughput. The enhanced ACK approach provides more detailed and complete information about the state of the forward direction, which a TCP implementation could use to obtain better throughput under different network conditions. Results for packet aggregation show only a marginal gain in packet savings given current traffic patterns. Finally, results for critical packet piggybacking demonstrate significant potential in using related flows to send duplicate copies that protect performance-critical packets from loss.
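The piggybacking-related-domain-names idea can be sketched in a few lines: when answering a query, a resolver appends records for names that typically co-occur with the queried name, warming the client's cache before the related lookups happen (the mapping, names, and addresses below are illustrative, not from the thesis):

```python
# Sketch of piggybacking related domain names (the RELATED map and all
# names/addresses are illustrative).

RELATED = {
    "example.com": ["img.example.com", "cdn.example.com"],
}

def answer(query, records):
    """Answer a lookup and piggyback records for names that commonly
    co-occur with it, pre-populating the client's cache."""
    response = {query: records[query]}
    for name in RELATED.get(query, []):
        if name in records:
            response[name] = records[name]  # piggybacked answer
    return response
```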

    Adaptive delivery of real-time streaming video

    Thesis (M.Eng.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 87-92). While there is increasing demand for streaming video applications on the Internet, various network characteristics make deploying these applications more challenging than traditional Internet applications like email and the Web. Applications that transmit data over the Internet must cope with its time-varying bandwidth and delay characteristics and must be resilient to packet loss. This thesis examines these challenges and presents a system design and implementation that ameliorates some of the important problems of video streaming over the Internet. Video sequences are typically compressed in a format such as MPEG-4 for bandwidth efficiency. Video compression exploits redundancy between frames to achieve higher compression. However, packet loss can be detrimental to compressed video with interdependent frames, because errors potentially propagate across many frames. While the need for low latency prevents the retransmission of all lost data, we leverage the characteristics of MPEG-4 to selectively retransmit only the most important data, limiting the propagation of errors. We quantify the effects of packet loss on the quality of MPEG-4 video, develop an analytical model to explain these effects, and present an RTP-compatible protocol, which we call SR-RTP, to adaptively deliver higher-quality video in the face of packet loss. The Internet's variable bandwidth and delay make it difficult to achieve high utilization, TCP friendliness, and a high-quality constant playout rate; a video streaming system should adapt to these changing conditions and tailor the quality of the transmitted bitstream to the available bandwidth.
Traditional congestion avoidance schemes such as TCP's additive-increase/multiplicative-decrease (AIMD) cause large oscillations in transmission rate that degrade the perceptual quality of the video stream. To combat bandwidth variation, we design a scheme for quality adaptation of layered video for a general family of congestion control algorithms called binomial congestion control, and show that combining smooth congestion control with clever receiver-buffered quality adaptation can reduce oscillations, increase interactivity, and deliver higher-quality video for a given amount of buffering. We have integrated this selective reliability and quality adaptation into a publicly available software library. Using this system as a testbed, we show that selective reliability can greatly increase the quality of received video, and that binomial congestion control and receiver quality adaptation allow increased user interactivity and better video quality. By Nicholas G. Feamster. M.Eng.
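Binomial congestion control generalizes AIMD with two exponents: the window grows by alpha / w^k each RTT and shrinks by beta * w^l on loss. Setting (k=0, l=1) recovers classic AIMD, while (k=0.5, l=0.5) gives the smoother SQRT variant better suited to video. A minimal sketch of the update rules:

```python
# Binomial congestion control update rules:
# per-RTT increase w += alpha / w**k, on-loss decrease w -= beta * w**l.
# (k=0, l=1) is classic AIMD; (k=0.5, l=0.5) is the smoother SQRT variant.

def on_rtt(w, alpha=1.0, k=0.5):
    """Window growth applied once per round-trip time."""
    return w + alpha / (w ** k)

def on_loss(w, beta=0.5, l=0.5, w_min=1.0):
    """Window reduction applied on a loss event."""
    return max(w_min, w - beta * (w ** l))
```

Because the SQRT decrease shrinks the window sub-linearly in w, the sawtooth is much shallower than AIMD's halving, which is exactly the reduced-oscillation property the thesis exploits for smooth video delivery.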

    Error resilient stereoscopic video streaming using model-based fountain codes

    Ankara: The Department of Electrical and Electronics Engineering and the Institute of Engineering and Science of Bilkent University, 2009. Thesis (Ph.D.) -- Bilkent University, 2009. Includes bibliographical references (leaves 101-110). Error-resilient digital video streaming has been a challenging problem since the introduction and deployment of early packet-switched networks. One of the most recent advances in video coding is multi-view video coding, which offers methods for compressing correlated multiple image sequences. Existing multi-view compression techniques increase loss sensitivity and necessitate efficient loss recovery schemes. Forward error correction (FEC) is an efficient, powerful, and practical tool for recovering lost data. Fountain codes are a novel class of FEC codes suitable for use with recent video codecs such as H.264/AVC; LT and Raptor codes are practical examples of this class. Although there are many studies on monoscopic video, transmission of multi-view video over lossy channels with FEC had not yet been explored. Addressing this gap, an H.264-based multi-view video codec and a model-based fountain code are combined into an efficient error-resilient stereoscopic streaming system. Three layers of stereoscopic video with unequal importance are defined in order to exploit the benefits of unequal error protection (UEP) with FEC; these layers correspond to the intra frames of the left view, the predicted frames of the left view, and the predicted frames of the right view. The rate-distortion (RD) characteristics of these dependent layers are defined by extending the RD characteristics of monoscopic video. The model parameters are obtained by curve fitting on RD samples of the video, with satisfactory results: the average difference between the analytical models and the RD samples is between 1.00% and 9.19%.
A heuristic analytical model of Raptor code performance is used to obtain the residual number of lost packets for a given channel bit rate, loss rate, and protection rate. This residual number is multiplied by the estimated average distortion caused by the loss of a single Network Abstraction Layer (NAL) unit to obtain the total transmission distortion. All these models are combined to minimize the end-to-end distortion and obtain optimal encoder bit rates and UEP rates. Simulation results demonstrate that the proposed system yields up to a 2 dB quality increase over equal error protection and over protecting only the left view. Furthermore, fountain codes are analyzed in the finite-length regime, and iterative performance models are derived without any assumptions or asymptotic approximations. The performance model of the belief-propagation (BP) decoder approximates either the behavior of a single simulation run or the average of runs, depending on the parameters of the LT code. The performance model of the maximum-likelihood decoder approximates the average of simulation results more accurately than the BP model does. Raptor codes are modeled heuristically based on the exponential decay observed in the simulation results, with model parameters obtained by a line of best fit. The analytical models of systematic and non-systematic Raptor codes accurately approximate the experimental average performance. By A. Serdar Tan. Ph.D.
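The LT codes underlying the Raptor codes analyzed here can be illustrated with a toy encoder and a peeling (belief-propagation) decoder. A uniform degree distribution is used below for brevity; real LT codes draw degrees from the robust soliton distribution, and all names are illustrative:

```python
import random

# Toy LT encoder/decoder over integers with XOR (uniform degrees for
# brevity; real LT codes use the robust soliton degree distribution).

def lt_encode(source, n_coded, rng):
    """Each coded symbol is the XOR of a random subset of source symbols."""
    k = len(source)
    coded = []
    for _ in range(n_coded):
        idxs = set(rng.sample(range(k), rng.randint(1, k)))
        val = 0
        for i in idxs:
            val ^= source[i]
        coded.append((idxs, val))
    return coded

def lt_decode(coded, k):
    """Peeling (belief-propagation) decoding: repeatedly find a coded
    symbol with exactly one unknown source, recover it, substitute."""
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for idxs, val in coded:
            unknown = idxs - known.keys()
            if len(unknown) == 1:
                i = unknown.pop()
                for j in idxs - {i}:
                    val ^= known[j]
                known[i] = val
                progress = True
    return known
```

The peeling decoder stalls when no degree-1 symbol remains even though unknowns survive, which is why finite-length analysis (as in this thesis) differs from the asymptotic guarantees.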

    Split-Domain TCP-Friendly Protocol For MPEG-4 Adaptive Rate Video Streaming Over 3G Networks

    The imminent inception of third-generation (3G) mobile communication networks offers an unprecedented opportunity for developing video streaming applications over wireless Internet access. Different design challenges arise in implementing video streaming connections that span both wired and wireless domains. A split-domain, TCP-friendly streaming video transmission protocol is presented, based on adaptive-rate encoding in the MPEG-4 video format. Network simulations demonstrate the benefits and viability of this video streaming scheme over existing options. Further feature enhancements and refinements are necessary for the proposed protocol to achieve its full potential.

    Protocols and Algorithms for Adaptive Multimedia Systems

    The deployment of WebRTC and telepresence systems is set to bring wide-scale adoption of high-quality real-time communication. Delivering high-quality video usually requires more network capacity and an assurance of network stability. A real-time multimedia application that uses the Real-time Transport Protocol (RTP) over UDP needs to implement congestion control, since UDP provides no such mechanism. This thesis is about enabling congestion control for real-time communication and deploying it on the public Internet, which contains a mixture of wired and wireless links. A congestion control algorithm relies on congestion cues such as RTT and loss. Hence, in this thesis, we first propose a framework that classifies congestion cues by a combination of where they are measured or observed and how the sending endpoint is notified. For each there are two options: the cues are observed and reported either by an in-path or by an off-path source, and the cue is reported either in-band or out-of-band, which results in four combinations. The framework thus provides options for congestion cues beyond those reported by the receiver. We propose a sender-driven, a receiver-driven, and a hybrid congestion control algorithm; the hybrid algorithm relies on the sender and receiver cooperating to perform congestion control. We then compare the performance of these algorithms. We also explore the idea of using capacity notifications from middleboxes (e.g., 3G/LTE base stations) along the path as cues for a congestion control algorithm. Further, we look at the interaction between error-resilience mechanisms and congestion control, and show that FEC can be used in a congestion control algorithm to probe for additional capacity. We propose Multipath RTP (MPRTP), an extension to RTP that uses multiple paths for either aggregating capacity or increasing error resilience.
We show that our proposed scheduling algorithm works in diverse scenarios (e.g., 3G and WLAN, 3G and 3G) with paths of varying latencies. Lastly, we propose a network coverage map service (NCMS), which aggregates throughput measurements from mobile users consuming multimedia services. The NCMS sends its subscribers notifications about upcoming network conditions, which they take into account when performing congestion control. To test and refine the ideas presented in this thesis, we have implemented most of them in proof-of-concept prototypes and conducted experiments and simulations to validate our assumptions and gain new insights.
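A multipath scheduler in the spirit of MPRTP can be sketched as a proportional split of packets across paths according to their estimated available rates (the path names and the remainder rule below are illustrative assumptions, not the MPRTP specification):

```python
# Sketch of a proportional multipath packet split (illustrative
# assumptions; not the MPRTP scheduling algorithm itself).

def schedule(n_packets, path_rates):
    """Allocate packets to paths in proportion to estimated available rate."""
    total = sum(path_rates.values())
    alloc = {p: int(n_packets * r / total) for p, r in path_rates.items()}
    # give any rounding remainder to the fastest path
    fastest = max(path_rates, key=path_rates.get)
    alloc[fastest] += n_packets - sum(alloc.values())
    return alloc
```

For example, with a slow cellular path and a faster WLAN path, `schedule(10, {"3g": 1.0, "wlan": 4.0})` sends two packets over the cellular link and eight over WLAN; a real scheduler would additionally account for per-path latency so packets still arrive in playout order.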