801 research outputs found

    Modeling and Evaluating Feedback-Based Error Control for Video Transfer

    Get PDF
    Packet loss can be detrimental to real-time interactive video over lossy networks because one lost video packet can propagate errors to many subsequent video frames due to the encoding dependency between frames. Feedback-based error control techniques use feedback information from the decoder to adjust coding parameters at the encoder or to retransmit lost packets, thereby reducing the error propagation due to data loss. Feedback-based error control techniques have been shown to be more effective than trying to conceal the error at the encoder or decoder alone, since they allow the encoder and decoder to cooperate in the error control process. However, there has been no systematic exploration of the impact of video content and network conditions on the performance of feedback-based error control techniques. In particular, the impact of packet loss, round-trip delay, network capacity constraint, video motion and reference distance on the quality of videos using feedback-based error control techniques has not been systematically studied. This thesis presents analytical models for the major feedback-based error control techniques: Retransmission, Reference Picture Selection (RPS, in both NACK and ACK modes) and Intra Update. These feedback-based error control techniques have been included in H.263/H.264 and MPEG-4, the state-of-the-art video compression standards. Given a round-trip time, packet loss rate and network capacity constraint, our models can predict the quality of a video streamed over a lossy network with Retransmission, Intra Update or RPS. To exploit our analytical models, a series of studies has been conducted to explore the effects of reference distance, capacity constraint and Intra coding on video quality. The accuracy of our analytical models in predicting video quality under different network conditions is validated through simulations. These models are used to examine the behavior of feedback-based error control schemes under a variety of network conditions and video content through a series of analytic experiments. Analysis shows that the performance of feedback-based error control techniques is affected by a variety of factors including round-trip time, loss rate, video content and the Group of Pictures (GOP) length. In particular: 1) RPS NACK achieves the best performance when the loss rate is low, while RPS ACK outperforms the other repair techniques when the loss rate is high; however, RPS ACK performs the worst when the loss rate is low, and Retransmission performs the worst when the loss rate is high; 2) for a given round-trip time, the loss rate at which RPS NACK starts to perform worse than RPS ACK is higher for low-motion videos than for high-motion videos; 3) videos with RPS NACK always perform the same as or better than videos without repair, whereas with small GOP sizes, videos without repair perform better than videos with RPS ACK; 4) RPS NACK outperforms Intra Update for low-motion videos, but the performance gap between RPS NACK and Intra Update narrows as the round-trip time or the intensity of video motion increases; 5) although the above trends hold for both VQM and PSNR, the performance results are much more sensitive to network loss when VQM is the video quality metric; 6) Retransmission is effective only when the round-trip time is low, and when the round-trip time is high, Partial Retransmission achieves almost the same performance as Full Retransmission. These insights derived from our models can help determine appropriate choices of feedback-based error control techniques under various network conditions and video content.
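
    To make the qualitative trade-off concrete, the following is a minimal Python sketch, not the thesis's analytical models: it assumes one packet per frame, that a NACK-style repair (Retransmission or RPS NACK) stops propagation roughly one round-trip time after the loss, and that Intra Update waits on average half a GOP for the next refresh; all function names and parameter values are hypothetical.

```python
# Toy comparison of expected error propagation per lost packet (illustrative
# assumptions only): one packet per frame, NACK-style repair takes effect one
# round-trip time after the loss, Intra refresh arrives half a GOP later on average.

def frames_until_feedback_repair(rtt_s: float, frame_rate: float = 30.0) -> float:
    """Frames displayed with error before the encoder can react to a NACK."""
    return rtt_s * frame_rate


def frames_until_intra_refresh(gop_length: int) -> float:
    """Average frames affected when repair waits for the next I-frame."""
    return gop_length / 2.0


def expected_damaged_frames(loss_rate: float, frames_per_loss: float,
                            num_frames: int) -> float:
    """Expected damaged frames over a clip, ignoring overlapping losses."""
    return loss_rate * num_frames * frames_per_loss


if __name__ == "__main__":
    for rtt in (0.05, 0.2, 0.5):
        nack = expected_damaged_frames(0.02, frames_until_feedback_repair(rtt), 3000)
        intra = expected_damaged_frames(0.02, frames_until_intra_refresh(30), 3000)
        print(f"RTT={rtt:.2f}s  feedback repair: {nack:.0f} damaged frames, "
              f"Intra Update (GOP=30): {intra:.0f} damaged frames")
```

    Consistent with the findings above, the feedback-based repair in this toy model only beats the GOP-based refresh while the round-trip time stays small relative to the GOP length.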

    Error resilience and concealment techniques for high-efficiency video coding

    Get PDF
    This thesis investigates the problem of robust coding and error concealment in High Efficiency Video Coding (HEVC). After a review of the current state of the art, a simulation study on error robustness revealed that HEVC has weak protection against network losses, with a significant impact on video quality degradation. Based on this evidence, the first contribution of this work is a new method that reduces the temporal dependencies between motion vectors, improving the decoded video quality without compromising the compression efficiency. The second contribution of this thesis is a two-stage approach for reducing the mismatch of temporal predictions when video streams are received with errors or lost data. At the encoding stage, the reference pictures are dynamically distributed based on a constrained Lagrangian rate-distortion optimisation to reduce the number of predictions from a single reference. At the streaming stage, a prioritisation algorithm, based on spatial dependencies, selects a reduced set of motion vectors to be transmitted as side information to reduce mismatched motion predictions at the decoder. The problem of error-concealment-aware video coding is also investigated to enhance the overall error robustness. A new approach based on scalable coding and optimal error concealment selection is proposed, where the optimal error concealment modes are found by simulating transmission losses, followed by a saliency-weighted optimisation. Moreover, recovery residual information is encoded using a rate-controlled enhancement layer. Both are transmitted to the decoder to be used in case of data loss. Finally, an adaptive error resilience scheme is proposed to dynamically predict the video stream that achieves the highest decoded quality for a particular loss case. A neural network selects among the various video streams, encoded with different levels of compression efficiency and error protection, based on information from the video signal, the coded stream and the transmission network. Overall, the new robust video coding methods investigated in this thesis yield consistent quality gains over other existing methods, including those implemented in the HEVC reference software. Furthermore, the trade-off between coding efficiency and error robustness is also better in the proposed methods.
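
    As an illustration of the encoding-stage idea of distributing predictions across reference pictures, the Python sketch below applies a Lagrangian rate-distortion cost with an added usage penalty; the cost model, the penalty weight and all names are hypothetical and are not the thesis's actual formulation.

```python
# Sketch of reference-picture selection with a Lagrangian rate-distortion
# cost plus a usage penalty that discourages concentrating predictions on a
# single reference (hypothetical model, for illustration only).
from collections import Counter


def pick_reference(block_costs, usage, total_blocks, lam=0.85, mu=2.0):
    """block_costs: {ref_id: (distortion, rate)} for one block;
    usage: Counter of how often each reference was already chosen."""
    best_ref, best_cost = None, float("inf")
    for ref, (dist, rate) in block_costs.items():
        share = usage[ref] / max(total_blocks, 1)   # fraction of blocks on this ref
        cost = dist + lam * rate + mu * share       # RD cost + concentration penalty
        if cost < best_cost:
            best_ref, best_cost = ref, cost
    return best_ref


if __name__ == "__main__":
    usage = Counter()
    # Hypothetical (distortion, rate) estimates for two references over eight blocks;
    # reference 0 is always marginally better in pure rate-distortion terms.
    blocks = [{0: (10.0, 4.0), 1: (10.5, 4.0)} for _ in range(8)]
    for i, costs in enumerate(blocks):
        ref = pick_reference(costs, usage, total_blocks=i + 1)
        usage[ref] += 1
    print(dict(usage))  # predictions end up spread over both references
```

    With the penalty disabled (mu = 0), every block would pick reference 0, so the loss of that single reference would invalidate all predictions; the penalty spreads the risk at a small rate-distortion cost.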

    GRACE: Loss-Resilient Real-Time Video through Neural Codecs

    Full text link
    In real-time video communication, retransmitting lost packets over high-latency networks is not viable due to strict latency requirements. To counter packet losses without retransmission, two primary strategies are employed: encoder-based forward error correction (FEC) and decoder-based error concealment. The former encodes data with redundancy before transmission, yet determining the optimal redundancy level in advance proves challenging. The latter reconstructs video from partially received frames, but dividing a frame into independently coded partitions inherently compromises compression efficiency, and the lost information cannot be effectively recovered by the decoder without adapting the encoder. We present a loss-resilient real-time video system called GRACE, which preserves the user's quality of experience (QoE) across a wide range of packet losses through a new neural video codec. Central to GRACE's enhanced loss resilience is its joint training of the neural encoder and decoder under a spectrum of simulated packet losses. In lossless scenarios, GRACE achieves video quality on par with conventional codecs (e.g., H.265). As the loss rate escalates, GRACE exhibits a more graceful, less pronounced decline in quality, consistently outperforming other loss-resilient schemes. Through extensive evaluation on various videos and real network traces, we demonstrate that GRACE reduces undecodable frames by 95% and stall duration by 90% compared with FEC, while markedly boosting video quality over error concealment methods. In a user study with 240 crowdsourced participants and 960 subjective ratings, GRACE registers a 38% higher mean opinion score (MOS) than the other baselines.
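
    The central idea of training the encoder and decoder jointly under simulated losses can be sketched as follows in PyTorch; this toy autoencoder, the grouping of latent channels into "packets" and all dimensions are assumptions for illustration and do not reflect GRACE's actual architecture.

```python
# Toy sketch of jointly training a neural encoder/decoder under simulated
# packet loss (illustrative only; not GRACE's actual codec). Latent channels
# are grouped into hypothetical "packets"; a random subset is zeroed each
# training step so the decoder learns to reconstruct from partial data.
import torch
import torch.nn as nn


class TinyCodec(nn.Module):
    def __init__(self, packets=8, chans_per_packet=4):
        super().__init__()
        latent = packets * chans_per_packet
        self.packets, self.cpp = packets, chans_per_packet
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent, 3, stride=2, padding=1))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(latent, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1))

    def forward(self, frames, loss_rate):
        z = self.enc(frames)
        n, _, h, w = z.shape
        # Simulate packet loss: drop whole channel groups with probability loss_rate.
        keep = (torch.rand(n, self.packets, 1, 1, 1, device=z.device) > loss_rate).float()
        z = (z.view(n, self.packets, self.cpp, h, w) * keep).view(n, -1, h, w)
        return self.dec(z)


if __name__ == "__main__":
    model = TinyCodec()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(100):
        frames = torch.rand(4, 3, 64, 64)        # stand-in for a batch of video frames
        loss_rate = torch.rand(1).item() * 0.5   # sample a loss rate for this step
        recon = model(frames, loss_rate)
        loss = nn.functional.mse_loss(recon, frames)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

    Because the sampled loss rate varies from zero to a high value at every step, the same encoder/decoder pair is pushed to stay close to its lossless quality while degrading gracefully as more packets are dropped.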

    No reference quality assessment for MPEG video delivery over IP

    Get PDF

    A New H.264/AVC Error Resilience Model Based on Regions of Interest

    Get PDF
    Video transmission over the Internet can sometimes be subject to packet loss, which reduces the end-user's Quality of Experience (QoE). Solutions aiming at improving the robustness of a video bitstream can be used to mitigate this problem. In this paper, we propose a new Region of Interest-based error resilience model to protect the most important part of the picture from distortions. We conduct eye-tracking tests in order to collect the Region of Interest (RoI) data. Then, in the encoder, we apply an intra-prediction restriction algorithm to the macroblocks belonging to the RoI. Results show that, while no significant overhead is noted, the perceived quality of the video's RoI, measured by means of a perceptual video quality metric, increases in the presence of packet loss compared to the traditional encoding approach.
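
    One way to realise such an intra-prediction restriction is sketched below in Python; the neighbour set, the grid layout and the function name are hypothetical and only illustrate the idea of preventing RoI macroblocks from predicting from non-RoI neighbours, not the paper's exact algorithm.

```python
# Sketch of an RoI-based intra-prediction restriction (illustrative only):
# macroblocks inside the region of interest may only use intra predictors
# from neighbours that are also inside the RoI, so corruption outside the
# RoI cannot propagate into it.

def allowed_intra_neighbours(roi, row, col):
    """roi: 2-D list of booleans, one per macroblock; returns the usable
    neighbours (left, top, top-right) for the macroblock at (row, col)."""
    candidates = {"left": (row, col - 1),
                  "top": (row - 1, col),
                  "top_right": (row - 1, col + 1)}
    allowed = []
    for name, (r, c) in candidates.items():
        if r < 0 or c < 0 or r >= len(roi) or c >= len(roi[0]):
            continue                 # neighbour lies outside the picture
        if roi[row][col] and not roi[r][c]:
            continue                 # RoI block must not predict from a non-RoI block
        allowed.append(name)
    return allowed


if __name__ == "__main__":
    # Hypothetical 3x4 macroblock grid with an RoI in the upper right corner.
    roi = [[False, False, True, True],
           [False, False, True, True],
           [False, False, False, False]]
    print(allowed_intra_neighbours(roi, 1, 2))   # -> ['top', 'top_right']
```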

    Cross-layer Optimized Wireless Video Surveillance

    Get PDF
    A wireless video surveillance system contains three major components: video capture and preprocessing, video compression and transmission over wireless sensor networks (WSNs), and video analysis at the receiving end. The coordination of the different components is important for improving the end-to-end video quality, especially under communication resource constraints. Cross-layer control proves to be an efficient measure for optimal system configuration. In this dissertation, we address the problem of implementing cross-layer optimization in the wireless video surveillance system. The thesis work is based on three research projects. In the first project, a single PTU (pan-tilt-unit) camera is used for video object tracking. The problem studied is how to improve the quality of the received video by jointly considering the coding and transmission processes. The cross-layer controller determines the optimal coding and transmission parameters according to the dynamic channel condition and the transmission delay. Multiple error concealment strategies are developed that exploit the special properties of the PTU camera motion. In the second project, a binocular PTU camera is adopted for video object tracking. This work studies fast disparity estimation and 3D video transcoding over the WSN for real-time applications. The disparity/depth information is estimated in a coarse-to-fine manner using both local and global methods. The transcoding is coordinated by the cross-layer controller based on the channel condition and the data-rate constraint, in order to achieve the best view synthesis quality. The third project addresses multi-camera motion capture in remote healthcare monitoring. The challenge is resource allocation across multiple video sequences. The presented cross-layer design incorporates delay-sensitive, content-aware, and adaptive video coding and transmission to ensure optimal and balanced quality across the multi-view videos. In these projects, interdisciplinary studies are conducted to integrate the components of the surveillance system under the cross-layer optimization framework. Experimental results demonstrate the efficiency of the proposed schemes. The challenges of cross-layer design in existing wireless video surveillance systems are also analyzed to inform future work. Adviser: Song C
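
    A minimal sketch of the kind of joint decision a cross-layer controller makes is given below in Python; the rate, delay and quality models, the parameter grids and the function names are simple placeholders chosen for illustration and are not the dissertation's method.

```python
# Toy cross-layer controller sketch (illustrative placeholders only): pick an
# encoder quantisation parameter (QP) and an FEC redundancy level by scoring
# a small grid of candidates under a channel capacity and delay budget.

def expected_quality(qp, fec_ratio, capacity_kbps, loss_rate,
                     delay_budget_ms, fps=30):
    rate_kbps = 4000.0 / qp * (1 + fec_ratio)        # placeholder source + FEC rate
    if rate_kbps > capacity_kbps:
        return None                                  # violates the capacity constraint
    frame_delay_ms = rate_kbps / fps / capacity_kbps * 1000.0
    if frame_delay_ms > delay_budget_ms:
        return None                                  # violates the delay budget
    residual_loss = max(loss_rate - fec_ratio, 0.0)  # FEC absorbs part of the losses
    base_quality = 50.0 - qp                         # placeholder quality model
    return base_quality * (1.0 - residual_loss)


def choose_config(capacity_kbps, loss_rate, delay_budget_ms=150):
    best = None
    for qp in (22, 27, 32, 37):
        for fec_ratio in (0.0, 0.1, 0.2, 0.3):
            q = expected_quality(qp, fec_ratio, capacity_kbps,
                                 loss_rate, delay_budget_ms)
            if q is not None and (best is None or q > best[0]):
                best = (q, qp, fec_ratio)
    return best   # (expected quality, QP, FEC ratio) or None if infeasible


if __name__ == "__main__":
    print(choose_config(capacity_kbps=300, loss_rate=0.05))   # good channel
    print(choose_config(capacity_kbps=120, loss_rate=0.15))   # constrained, lossy channel
```

    In the constrained case, the controller trades a coarser quantiser for just enough redundancy to cover most of the losses, which is the kind of joint coding-and-transmission decision that per-layer configuration cannot make.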