
    Loss-resilient Coding of Texture and Depth for Free-viewpoint Video Conferencing

    Free-viewpoint video conferencing allows a participant to observe the remote 3D scene from any freely chosen viewpoint. An intermediate virtual viewpoint image is commonly synthesized from two pairs of transmitted texture and depth maps, captured at two neighboring viewpoints, via depth-image-based rendering (DIBR). To maintain high quality in the synthesized images, it is imperative to contain the adverse effects of network packet losses that may arise during texture and depth video transmission. Towards this end, we develop an integrated approach that exploits the representation redundancy inherent in the multiple streamed videos: a voxel in the 3D scene that is visible to two captured views is sampled and coded twice, once in each view. In particular, at the receiver we first develop an error concealment strategy that adaptively blends corresponding pixels in the two captured views during DIBR, so that pixels from the more reliably transmitted view are weighted more heavily. We then couple it with a sender-side optimization of reference picture selection (RPS) during real-time video coding, so that blocks containing samples of voxels visible in both views are error-resiliently coded in one view only, given that adaptive blending will mask errors in the other view. Further, the sensitivity of synthesized-view distortion to texture versus depth errors is analyzed, so that the relative importance of texture and depth code blocks can be computed for system-wide RPS optimization. Experimental results show that the proposed scheme can outperform the use of a traditional feedback channel by up to 0.82 dB on average at an 8% packet loss rate, and by as much as 3 dB for particular frames.
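
    The abstract does not give the exact blending rule, so the following is only a minimal sketch of reliability-weighted blending of corresponding DIBR-warped pixels; the function name, the per-view error estimates, and the inverse-probability weighting are illustrative assumptions rather than the paper's method.

        def blend_dibr_pixels(pix_left, pix_right, err_left, err_right, eps=1e-6):
            # pix_left / pix_right: corresponding pixel values warped from the two
            # captured views; err_left / err_right: estimated error probabilities of
            # the two transmitted views (e.g. inferred from observed packet losses).
            # Each view is weighted inversely to its estimated error probability, so
            # the more reliably received view dominates the synthesized pixel.
            w_left = 1.0 / (err_left + eps)
            w_right = 1.0 / (err_right + eps)
            return (w_left * pix_left + w_right * pix_right) / (w_left + w_right)

        # Example: left view likely corrupted (30% estimated error), right view clean.
        print(blend_dibr_pixels(120.0, 100.0, err_left=0.30, err_right=0.01))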

    Subjective quality assessment of error concealment strategies for 3DTV in the presence of asymmetric transmission errors

    The transmission of 3DTV sequences over packet-based networks may result in degradation of video quality due to packet loss. In the conventional 2D case, several strategies are known for extrapolating the missing information and thus concealing the error. In 3D, however, the residual error after concealment in one view may lead to binocular rivalry with the correctly received second view. In this paper, three simple alternatives are presented: frame freezing, reduced playback speed, and displaying a single view to both eyes, thus effectively switching to 2D presentation. In a subjective experiment, the performance of the three methods in terms of quality of experience is evaluated for different packet loss scenarios. Error-free videos encoded at different bit rates are included as anchor conditions. The subjective test method contains special precautions for measuring the Quality of Experience (QoE) of 3D content and also includes an indicator of visual discomfort. The results indicate that switching to 2D is currently the best choice, although difficulties with visual discomfort should be expected even with this method.
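
    As a rough illustration of the three alternatives compared in this study, the sketch below dispatches over them when one view still shows residual errors after conventional 2D concealment; the frame handling, the 0.5x playback factor, and the decision logic are assumptions made for illustration, not the paper's implementation.

        from enum import Enum

        class Strategy(Enum):
            FREEZE = 1      # hold the last error-free stereo pair
            SLOW = 2        # keep showing the last good pair at reduced speed
            SWITCH_2D = 3   # show the intact view to both eyes (2D presentation)

        def conceal(left, right, left_has_residual_error, last_good_pair, strategy):
            # Returns (left_view, right_view, playback_rate) for display.
            if not left_has_residual_error:
                return left, right, 1.0
            if strategy is Strategy.FREEZE:
                return (*last_good_pair, 1.0)
            if strategy is Strategy.SLOW:
                return (*last_good_pair, 0.5)   # assumed slowdown factor
            return right, right, 1.0            # SWITCH_2D avoids binocular rivalry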

    Comparing objective visual quality impairment detection in 2D and 3D video sequences

    The skill level of the teleoperator plays a key role in telerobotic operation. However, a large number of experiments is required to evaluate skill level in a conventional assessment. In this paper, a novel brain-based method of skill assessment is introduced, and the relationship between the teleoperator's brain states and skill level is investigated for the first time using a kernel canonical correlation analysis (KCCA) method. The skill of the teleoperator (SoT) is defined by a statistical method using the cumulative distribution function (CDF). Five indicators are extracted from the electroencephalograph (EEG) of the teleoperator to represent the brain states during telerobotic operation. By using the KCCA algorithm to model the relationship between the SoT and the brain states, the correlation between them is demonstrated. During telerobotic operation, the skill level of the teleoperator can thus be predicted from the brain states. © 2013 IEEE.
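
    The abstract does not detail the EEG indicators or the exact KCCA formulation, so the following is a minimal sketch of a commonly used regularized RBF-kernel KCCA that estimates the first canonical correlation between per-trial EEG features and skill scores; the kernel choice, gamma, regularization value, and toy data are all illustrative assumptions.

        import numpy as np
        from scipy.linalg import eigh

        def rbf_kernel(X, gamma=1.0):
            sq = np.sum(X ** 2, axis=1)
            d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
            return np.exp(-gamma * d2)

        def center(K):
            n = K.shape[0]
            H = np.eye(n) - np.ones((n, n)) / n
            return H @ K @ H

        def kcca_first_correlation(X, Y, gamma=1.0, reg=1e-2):
            # X: n x d matrix of EEG indicators (one row per trial);
            # Y: n x 1 column of skill scores (e.g. CDF-based SoT values).
            Kx, Ky = center(rbf_kernel(X, gamma)), center(rbf_kernel(Y, gamma))
            Z, I = np.zeros_like(Kx), np.eye(Kx.shape[0])
            # Regularized generalized eigenproblem; the largest eigenvalue
            # approximates the first canonical correlation.
            A = np.block([[Z, Kx @ Ky], [Ky @ Kx, Z]])
            B = np.block([[Kx @ Kx + reg * I, Z], [Z, Ky @ Ky + reg * I]])
            return float(eigh(A, B, eigvals_only=True)[-1])

        # Toy example: 40 trials, 5 synthetic EEG indicators, one skill score each.
        rng = np.random.default_rng(0)
        eeg = rng.standard_normal((40, 5))
        skill = eeg[:, :2].mean(axis=1, keepdims=True) + 0.1 * rng.standard_normal((40, 1))
        print(kcca_first_correlation(eeg, skill))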

    Error-resilient performance of Dirac video codec over packet-erasure channel

    Video transmission over wireless or wired networks requires an error-resilient mechanism, since compressed video bitstreams are sensitive to transmission errors owing to the use of predictive coding and variable-length coding. This paper investigates the performance of a simple, low-complexity error-resilient coding scheme that combines source and channel coding to protect the compressed bitstream of the wavelet-based Dirac video codec over a packet-erasure channel. By partitioning the wavelet transform coefficients of the motion-compensated residual frame into groups and independently processing each group with arithmetic and Forward Error Correction (FEC) coding, Dirac achieves robustness to transmission errors, with video quality degrading gracefully over a range of packet loss rates up to 30% when compared with conventional FEC-only methods. Simulation results also show that the proposed scheme using multiple partitions can achieve up to 10 dB PSNR gain over the existing un-partitioned format. The paper also compares the error-resilient performance of the proposed scheme with that of H.264 over a packet-erasure channel.
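
    As a conceptual sketch of the partitioning idea (not Dirac's actual bitstream syntax), the snippet below interleaves residual wavelet coefficients into independently processed groups, so that a lost packet corrupts only one partition; the entropy-coding/FEC step and the loss simulation are placeholders for illustration.

        import numpy as np

        def partition_coefficients(coeffs, num_groups):
            # Interleave wavelet coefficients of the motion-compensated residual
            # frame into independent groups, so one lost packet damages only a
            # fraction of the coefficients rather than a whole subband.
            return [coeffs[g::num_groups] for g in range(num_groups)]

        def encode_group(group):
            # Placeholder for per-group arithmetic coding followed by FEC
            # (e.g. Reed-Solomon) applied independently to each partition.
            return group.tobytes()

        def simulate_erasures(packets, loss_rate, rng):
            # Drop whole packets at the given rate; only the corresponding
            # partitions are lost, and the remaining groups still decode.
            return [p for p in packets if rng.random() > loss_rate]

        rng = np.random.default_rng(1)
        coeffs = rng.standard_normal(1024).astype(np.float32)
        packets = [encode_group(g) for g in partition_coefficients(coeffs, 8)]
        received = simulate_erasures(packets, loss_rate=0.3, rng=rng)
        print(f"{len(received)}/{len(packets)} partitions survive the erasure channel")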

    Video streaming
