
    Quality Adaptive Least Squares Trained Filters for Video Compression Artifacts Removal Using a No-reference Block Visibility Metric

    Compression artifacts removal is a challenging problem because videos can be compressed at different qualities. In this paper, a least squares approach that is self-adaptive to the visual quality of the input sequence is proposed. For compression artifacts, the visual quality of an image is measured by a no-reference block visibility metric. According to the blockiness visibility of an input image, an appropriate set of filter coefficients, trained beforehand, is selected for optimally removing coding artifacts and reconstructing object details. The performance of the proposed algorithm is evaluated on a variety of sequences compressed at different qualities, in comparison with several other deblocking techniques. The proposed method outperforms the others significantly, both objectively and subjectively.
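    As a hedged illustration of the idea described above, the sketch below shows how a blockiness score from a no-reference metric might select one of several pre-trained coefficient sets and apply it as a simple spatial filter. The filter bank, thresholds, and function names are assumptions for illustration, not the paper's actual trained filters.

```python
import numpy as np

# Hypothetical pre-trained coefficient sets indexed by blockiness band (illustrative only;
# the paper trains its filters by least squares on example sequences).
FILTER_BANK = {
    "high_blockiness":   np.full((3, 3), 1.0 / 9.0),                          # strong smoothing
    "medium_blockiness": np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 8.0,
    "low_blockiness":    np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float),  # near pass-through
}

def select_filter(blockiness: float) -> np.ndarray:
    """Pick a coefficient set from the blockiness score (thresholds are assumed)."""
    if blockiness > 0.6:
        return FILTER_BANK["high_blockiness"]
    if blockiness > 0.3:
        return FILTER_BANK["medium_blockiness"]
    return FILTER_BANK["low_blockiness"]

def deblock(frame: np.ndarray, blockiness: float) -> np.ndarray:
    """Apply the selected 3x3 filter to a grayscale frame (brute-force convolution for clarity)."""
    kernel = select_filter(blockiness)
    padded = np.pad(frame.astype(float), 1, mode="edge")
    out = np.zeros_like(frame, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out
```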

    An efficient error resilience scheme based on Wyner-Ziv coding for region-of-interest protection of wavelet-based video transmission

    In this paper, we propose a bandwidth-efficient error resilience scheme for wavelet-based video transmission over wireless channels, in which an additional Wyner-Ziv (WZ) stream protects a region of interest (ROI) in each frame. In the proposed architecture, the main video stream is compressed by a generic wavelet-domain coding structure and passed through the error-prone channel without any protection. Meanwhile, the wavelet coefficients related to the predefined ROI, obtained after an integer wavelet transform, are specially protected by a WZ codec in an additional channel during transmission. At the decoder side, the error-prone ROI-related wavelet coefficients are used as side information to help decode the WZ stream. WZ bit streams of different sizes can be applied in order to meet different bandwidth conditions and different requirements of end users. The simulation results clearly reveal that the proposed scheme has distinct advantages in bandwidth saving compared with applying an FEC algorithm to the whole video stream, while offering robust transmission over error-prone channels for certain video applications.
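    A minimal sketch of the decoder-side use of error-prone ROI coefficients as side information, under the simplifying assumption that the WZ stream is a coarse quantisation of the ROI wavelet coefficients rather than the channel-code syndromes a real Wyner-Ziv codec would send; all names and parameters are hypothetical.

```python
import numpy as np

def encode_wz_stream(roi_coeffs: np.ndarray, step: int = 8) -> np.ndarray:
    """Toy 'WZ stream': coarsely quantised ROI wavelet coefficients sent over a protected channel.
    (A real WZ codec would transmit LDPC/turbo syndromes instead; assumption for illustration.)"""
    return np.round(roi_coeffs / step).astype(np.int32)

def decode_with_side_info(received_roi: np.ndarray, wz_stream: np.ndarray, step: int = 8) -> np.ndarray:
    """Use the error-prone received coefficients as side information: keep a coefficient when it
    falls inside the quantisation bin signalled by the WZ stream, otherwise snap it to the bin centre."""
    bin_centre = wz_stream.astype(float) * step
    consistent = np.abs(received_roi - bin_centre) <= step / 2.0
    return np.where(consistent, received_roi, bin_centre)
```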

    Loss-resilient Coding of Texture and Depth for Free-viewpoint Video Conferencing

    Free-viewpoint video conferencing allows a participant to observe the remote 3D scene from any freely chosen viewpoint. An intermediate virtual viewpoint image is commonly synthesized using two pairs of transmitted texture and depth maps from two neighboring captured viewpoints via depth-image-based rendering (DIBR). To maintain high quality of synthesized images, it is imperative to contain the adverse effects of network packet losses that may arise during texture and depth video transmission. Towards this end, we develop an integrated approach that exploits the representation redundancy inherent in the multiple streamed videos: a voxel in the 3D scene visible to two captured views is sampled and coded twice, once in each view. In particular, at the receiver we first develop an error concealment strategy that adaptively blends corresponding pixels in the two captured views during DIBR, so that pixels from the more reliably transmitted view are weighted more heavily. We then couple it with a sender-side optimization of reference picture selection (RPS) during real-time video coding, so that blocks containing samples of voxels visible in both views are coded more error-resiliently in one view only, given that adaptive blending will erase errors in the other view. Further, the sensitivities of synthesized view distortion to texture versus depth errors are analyzed, so that the relative importance of texture and depth code blocks can be computed for system-wide RPS optimization. Experimental results show that the proposed scheme can outperform the use of a traditional feedback channel by up to 0.82 dB on average at an 8% packet loss rate, and by as much as 3 dB for particular frames.
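    The receiver-side adaptive blending can be pictured with a short sketch: corresponding pixels mapped into the virtual view from the two captured views are combined with weights that favour the more reliably transmitted view. The reliability model (one minus an estimated loss probability) and the function name are assumptions for illustration, not the paper's exact weighting.

```python
import numpy as np

def blend_views(pixels_left: np.ndarray, pixels_right: np.ndarray,
                loss_left: float, loss_right: float) -> np.ndarray:
    """Blend corresponding DIBR-projected pixels from two captured views, weighting
    the view with the lower estimated packet-loss probability more heavily."""
    w_left = 1.0 - loss_left
    w_right = 1.0 - loss_right
    total = max(w_left + w_right, 1e-9)  # guard against both views being fully unreliable
    return (w_left * pixels_left + w_right * pixels_right) / total
```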

    Optimal packetisation of MPEG-4 using RTP over mobile networks

    Get PDF
    The introduction of third-generation wireless networks should result in real-time mobile video communications becoming a reality. Delivery of such video is likely to be facilitated by the real-time transport protocol (RTP). Careful packetisation of the video data is necessary to ensure the optimal trade-off between channel utilisation and error robustness. Theoretical analyses for two basic schemes of MPEG-4 data encapsulation within RTP packets are presented. Simulations over a GPRS (general packet radio service) network are used to validate the analysis of the more efficient scheme. Finally, a motion-adaptive system for deriving MPEG-4 video packet sizes is presented. Further simulations demonstrate the benefits of the adaptive system.
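    The packetisation trade-off itself can be sketched with a toy model: RTP/UDP/IP header overhead favours large packets, while random bit errors favour small ones, so an intermediate payload size maximises expected efficiency. The channel model and constants below are assumptions for illustration and do not reproduce the paper's analysis.

```python
RTP_UDP_IP_HEADER_BYTES = 40  # typical RTP (12) + UDP (8) + IPv4 (20) overhead

def expected_efficiency(payload_bytes: int, bit_error_rate: float) -> float:
    """Fraction of channel capacity carrying useful payload, assuming a packet is lost
    if any of its bits is in error (independent bit errors; simplified model)."""
    packet_bits = (payload_bytes + RTP_UDP_IP_HEADER_BYTES) * 8
    p_packet_ok = (1.0 - bit_error_rate) ** packet_bits
    return p_packet_ok * payload_bytes / (payload_bytes + RTP_UDP_IP_HEADER_BYTES)

def best_payload_size(bit_error_rate: float, candidates=range(50, 1500, 25)) -> int:
    """Payload size (bytes) that maximises expected efficiency for the given bit error rate."""
    return max(candidates, key=lambda size: expected_efficiency(size, bit_error_rate))
```

Under this toy model, best_payload_size(1e-4) lands on a few hundred bytes, while best_payload_size(1e-6) pushes towards the MTU, matching the intuition that noisier channels reward smaller video packets.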