
    Evaluation of further reduced resolution depth coding for stereoscopic 3D video

    This paper presents the results and analysis of the objective and subjective quality evaluations of the Further Reduced Resolution Depth Coding (FRRDC) method for stereoscopic 3D video. FRRDC is developed on top of the Scalable Video Coding (SVC) reference software; the results are objectively evaluated using rate-distortion curves and subjectively evaluated using LCD and auto-stereoscopic video displays. FRRDC applies the Down-Sampling and Up-Sampling (DSUS) method to the depth data of the stereoscopic 3D video. The emergence of numerous auto-stereoscopic displays on the market confirms the growth of 3DTV services. It is essential that the coding method for stereoscopic 3D video produces high-quality 3D video on both stereoscopic displays and emerging auto-stereoscopic 3D video displays, to ensure interoperability and compatibility among all the different display devices. In this paper, the stereoscopic 3D videos are compressed using the H.264/SVC codec with Reduced Resolution Depth Coding (RRDC) and compared with H.264/SVC-FRRDC. The experimental results indicate good 3D depth perception for FRRDC on both stereoscopic and auto-stereoscopic display devices at lower bit rates than H.264/SVC-RRDC.
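
    The core of FRRDC is resampling the depth component before and after SVC coding. The Python/OpenCV sketch below illustrates that down-sample, encode/decode, up-sample flow; the reduction factor and the interpolation filters are illustrative assumptions, since the abstract does not specify them.

        import cv2

        def dsus_depth(depth, factor=0.25):
            """Down-sample a depth map before SVC encoding and up-sample it back
            to full resolution after decoding (DSUS). The factor 0.25 and the
            interpolation filters are assumptions, not values from the paper."""
            h, w = depth.shape[:2]
            reduced = cv2.resize(depth, None, fx=factor, fy=factor,
                                 interpolation=cv2.INTER_AREA)    # pre-encoding step
            # ... the reduced-resolution depth would be encoded/decoded by H.264/SVC here ...
            restored = cv2.resize(reduced, (w, h),
                                  interpolation=cv2.INTER_CUBIC)  # post-decoding step
            return reduced, restored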

    Reduced resolution depth coding for stereoscopic 3D video

    In this paper, Reduced Resolution Depth Compression (RRDC) is proposed for Scalable Video Coding (SVC) to improve 3D video rate-distortion performance. RRDC is applied through Down-Sampling and Up-Sampling (DSUS) of the depth data of the stereoscopic 3D video: the depth data is down-sampled before SVC encoding and up-sampled after the SVC decoding operation. The proposed DSUS method reduces the overall bit rate and consequently: 1) improves SVC rate distortion for 3D video, particularly at lower bit rates in error-free channels; and 2) improves 3D SVC performance for 3D transmission in error-prone channels. The objective quality evaluation of the stereoscopic 3D video yields higher PSNR values at low bit rates for SVC-DSUS compared to the original SVC (SVC-Org), which makes it advantageous in terms of reduced storage and bandwidth requirements. Moreover, the subjective quality evaluation further confirmed that the perceived stereoscopic 3D video quality of SVC-DSUS matches that of SVC-Org by up to 98.2%.
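
    The objective comparison described here boils down to (bit rate, PSNR) points per codec configuration, with PSNR computed between original and decoded frames. A minimal sketch, assuming 8-bit frames stored as NumPy arrays; the variable names are illustrative.

        import numpy as np

        def psnr(ref, dec, peak=255.0):
            """Peak signal-to-noise ratio between an original and a decoded frame,
            the distortion measure behind the rate-distortion curves."""
            mse = np.mean((ref.astype(np.float64) - dec.astype(np.float64)) ** 2)
            return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

        # One rate-distortion point per bitstream (hypothetical variables):
        # rd_point = (bitrate_kbps,
        #             np.mean([psnr(r, d) for r, d in zip(ref_frames, dec_frames)]))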

    Redundancy of stereoscopic images: Experimental Evaluation

    With the recent advances in visualization devices, we are seeing a growing market for stereoscopic content. In order to convey 3D content by means of stereoscopic displays, one needs to transmit and display at least two viewpoints of the video content. This has profound implications for the resources required to transmit the content, as well as for the complexity of the visualization system. It is known that stereoscopic images are redundant, which may prove useful for compression and may have a positive effect on the construction of the visualization device. In this paper we describe an experimental evaluation of data redundancy in color stereoscopic images. In experiments with computer-generated, real-life, and test stereo images, several observers visually tested the stereopsis threshold and the accuracy of parallax measurement in anaglyphs and stereograms as functions of the degree of blur applied to one of the two stereo images, as well as the color-saturation threshold in one of the two stereo images at which full-color 3D perception with no visible color degradation is still maintained. The experiments support a theoretical estimate that, to the data required to reproduce one of the two stereoscopic images, one has to add only several percent of that amount of data in order to achieve stereoscopic perception.
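
    The experimental manipulation described above, blurring one of the two stereo images and lowering its color saturation, can be sketched as follows. The blur strength and saturation scale are arbitrary illustrative values, and JPEG size is used only as a rough stand-in for the extra data a degraded second view would require; neither is taken from the paper.

        import cv2

        def degrade_second_view(view, blur_sigma=2.0, saturation_scale=0.3):
            """Blur one stereo view and shrink its color saturation, mimicking the
            degradations probed in the stereopsis experiments (illustrative values)."""
            blurred = cv2.GaussianBlur(view, (0, 0), blur_sigma)
            hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV).astype('float32')
            hsv[..., 1] *= saturation_scale              # reduce the saturation channel
            hsv = hsv.clip(0, 255).astype('uint8')
            return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

        def jpeg_bytes(img, quality=90):
            """Compressed size as a crude proxy for the data needed to send a view."""
            ok, buf = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
            return len(buf)

        # extra_fraction = jpeg_bytes(degrade_second_view(right_view)) / jpeg_bytes(left_view)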

    Apparent sharpness of 3D video when one eye's view is more blurry.

    When the images presented to each eye differ in sharpness, the fused percept remains relatively sharp. Here, we measure this effect by showing stereoscopic videos that have been blurred for one eye, or both eyes, and psychophysically determining when they appear equally sharp. For a range of blur magnitudes, the fused percept always appeared significantly sharper than the blurrier view. From these data, we investigate to what extent discarding high spatial frequencies from just one eye's view reduces the bandwidth necessary to transmit perceptually sharp 3D content. We conclude that relatively high-resolution video transmission stands to gain the most from this method.
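
    A sketch of the manipulation studied here, discarding high spatial frequencies from one eye's view only. It uses an ideal circular low-pass mask in the Fourier domain; the cutoff fraction is an illustrative parameter, not a value from the paper.

        import numpy as np

        def lowpass_one_view(view, cutoff_fraction=0.5):
            """Discard spatial frequencies above a fraction of the Nyquist limit in
            one eye's view (ideal circular low-pass in the Fourier domain)."""
            f = np.fft.fftshift(np.fft.fft2(view, axes=(0, 1)), axes=(0, 1))
            h, w = view.shape[:2]
            yy, xx = np.ogrid[:h, :w]
            r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
            mask = r <= cutoff_fraction
            if view.ndim == 3:                    # broadcast the mask over color channels
                mask = mask[..., None]
            out = np.fft.ifft2(np.fft.ifftshift(f * mask, axes=(0, 1)), axes=(0, 1)).real
            return np.clip(out, 0, 255).astype(view.dtype)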

    Wavelet based stereo images reconstruction using depth images

    It is believed by many that three-dimensional (3D) television will be the next logical development toward a more natural and vivid home entertainment experience. While the classical 3D approach requires the transmission of two video streams, one for each view, 3D TV systems based on depth-image-based rendering (DIBR) require a single stream of monoscopic images and a second stream of associated images, usually termed depth images or depth maps, that contain per-pixel depth information. A depth map is a two-dimensional function that contains information about the distance from the camera to a certain point of the object as a function of the image coordinates. By using this depth information and the original image it is possible to reconstruct a virtual image of a nearby viewpoint by projecting the pixels of the available image to their locations in 3D space and finding their positions in the desired view plane. One of the most significant advantages of DIBR is that depth maps can be coded more efficiently than two streams corresponding to the left and right views of the scene, thereby reducing the bandwidth required for transmission, which makes it possible to reuse existing transmission channels for 3D TV. This technique can also be applied to other 3D technologies such as multimedia systems. In this paper we propose an advanced wavelet-domain scheme for the reconstruction of stereoscopic images, which solves some of the shortcomings of the existing methods discussed above. We perform the wavelet transform of both the luminance and depth images in order to obtain significant geometric features, which enable a more sensible reconstruction of the virtual view. The motion estimation employed in our approach uses a Markov random field smoothness prior for regularization of the estimated motion field. The evaluation of the proposed reconstruction method is performed on two video sequences that are typically used for the comparison of stereo reconstruction algorithms. The results demonstrate the advantages of the proposed approach with respect to state-of-the-art methods, in terms of both objective and subjective performance measures.
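
    For rectified cameras, the projection step described above, mapping pixels of the available view to a nearby virtual viewpoint using per-pixel depth, reduces to a horizontal shift by the disparity implied by each depth value. The sketch below assumes an 8-bit depth map and illustrative camera parameters; occlusion ordering and the hole filling that the wavelet-domain method addresses are omitted.

        import numpy as np

        def dibr_shift(image, depth, baseline=0.05, focal=1000.0, z_near=0.5, z_far=10.0):
            """Forward-warp one view to a nearby virtual viewpoint by shifting each
            pixel horizontally by the disparity implied by its depth value. The
            camera parameters and the 8-bit depth-to-metric mapping are assumptions."""
            h, w = depth.shape
            # Map 8-bit depth values (255 = nearest) to metric depth.
            z = 1.0 / (depth.astype(np.float64) / 255.0
                       * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
            disparity = np.round(baseline * focal / z).astype(int)
            virtual = np.zeros_like(image)
            xs = np.arange(w)
            for y in range(h):
                new_x = xs + disparity[y]
                valid = (new_x >= 0) & (new_x < w)
                virtual[y, new_x[valid]] = image[y, xs[valid]]  # last write wins; no z-ordering
            return virtual  # disocclusion holes are left unfilled in this sketch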

    NAMA3DS1-COSPAD1: Subjective video quality assessment database on coding conditions introducing freely available high quality 3D stereoscopic sequences

    Research in stereoscopic 3D coding, transmission and subjective assessment methodology depends largely on the availability of source content that can be used in cross-lab evaluations. While several studies have already been presented using proprietary content, comparisons between the studies are difficult because different content is used. Therefore, in this paper, a freely available dataset of high-quality Full-HD stereoscopic sequences shot with a semi-professional 3D camera is introduced in detail. The content was designed to be suited for use in a wide variety of applications, including high-quality studies. A set of depth maps was calculated from the stereoscopic pair. As an application example, a subjective assessment has been performed using coding and spatial degradations. The Absolute Category Rating with Hidden Reference method was used. The observers were instructed to vote on video quality only. Results of this experiment are also freely available and are presented in this paper as a first step towards objective video quality measurement for 3DTV.
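
    In an Absolute Category Rating with Hidden Reference (ACR-HR) test, the hidden reference's score is removed from each processed sequence's score. A minimal sketch of that step, assuming 5-point votes keyed by (content, condition) with a 'REF' label for the hidden reference; both the data layout and the label are assumptions, not details from the paper.

        import numpy as np

        def acr_hr_dmos(votes):
            """Differential mean opinion scores for an ACR-HR test: subtract the
            hidden reference's mean rating per content and re-center on the 1..5
            scale. `votes` maps (content, condition) -> list of ACR ratings."""
            dmos = {}
            for (content, condition), ratings in votes.items():
                if condition == 'REF':
                    continue
                ref_mean = np.mean(votes[(content, 'REF')])
                dmos[(content, condition)] = float(np.mean(ratings) - ref_mean + 5.0)
            return dmos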