15,816 research outputs found

    Improved 2D-to-3D video conversion by fusing optical flow analysis and scene depth learning

    Abstract: Automatic 2D-to-3D conversion aims to reduce the gap between scarce 3D content and the increasing number of displays capable of reproducing it. Here, we present an automatic 2D-to-3D conversion algorithm that extends most existing machine-learning-based conversion approaches to handle moving objects in the scene, rather than only static backgrounds. Under the assumption that images with similar colors are likely to have a similar 3D structure, the depth of a query video sequence is inferred from a color + depth training database. First, a depth estimate for the background of each image in the query video is computed adaptively by combining the depths of the database images most similar to it. Then, optical flow is used to refine the depth estimates of the moving objects in the foreground. Promising results have been obtained on a public and widely used database.
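    The adaptive background-depth step described above can be sketched as a similarity-weighted fusion of the depth maps of the most color-similar database images. This is a minimal illustrative sketch, not the paper's implementation: the mean-absolute-color distance, the inverse-distance weights, and the function name are all assumptions.

    ```python
    import numpy as np

    def fuse_background_depth(query, db_images, db_depths, k=3):
        """Hypothetical sketch: estimate a background depth map for `query`
        by fusing the depths of the k most color-similar database images.
        Similarity here is a simple inverse mean color distance; the paper's
        actual descriptor and fusion weights are not given in the abstract."""
        # Color distance between the query and each database image
        dists = np.array([np.mean(np.abs(query - img)) for img in db_images])
        nearest = np.argsort(dists)[:k]
        # Similarity-weighted average of the k nearest depth maps
        weights = 1.0 / (dists[nearest] + 1e-6)
        weights /= weights.sum()
        return np.tensordot(weights, np.stack([db_depths[i] for i in nearest]), axes=1)
    ```

    A motion-aware refinement stage (the optical-flow step in the abstract) would then overwrite the fused background depth at foreground-object pixels.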

    Object-based 2D-to-3D video conversion for effective stereoscopic content generation in 3D-TV applications

    Three-dimensional television (3D-TV) has gained increasing popularity in the broadcasting domain, as it enables enhanced viewing experiences compared to conventional two-dimensional (2D) TV. However, its adoption has been constrained by a lack of essential content, i.e., stereoscopic videos. To alleviate this content shortage, an economical and practical solution is to reuse the vast media resources available in monoscopic 2D and convert them to stereoscopic 3D. Although stereoscopic video can be generated from monoscopic sequences using depth measurements extracted from cues such as focus blur, motion, and size, the quality of the resulting video may be poor, as such measurements are often arbitrarily defined and inconsistent with the real scene. To address this problem, a novel method for object-based stereoscopic video generation is proposed, featuring i) optical-flow-based occlusion reasoning to determine depth ordering, ii) object segmentation using improved region growing from masks of the determined depth layers, and iii) a hybrid depth estimation scheme combining content-based matching (against a small library of true stereo image pairs) with depth-ordering-based regularization. Comprehensive experiments validate the effectiveness of the proposed 2D-to-3D conversion method in generating stereoscopic videos with consistent depth measurements for 3D-TV applications.

    3D Capturing with Monoscopic Camera

    This article presents a new concept: using the auto-focus function of a monoscopic camera sensor to estimate depth-map information, which avoids not only auxiliary equipment and human interaction but also the computational complexity introduced by structure-from-motion (SfM) or depth analysis. A system architecture supporting stereo image and video capture, processing, and display is discussed. A novel stereo image pair generation algorithm using Z-buffer-based 3D surface recovery is proposed. From the depth map, we calculate the disparity map (the distance in pixels between corresponding image points in the two views) for the image. The presented algorithm takes a single image with depth information (e.g. a z-buffer) as input and produces two images, one for the left eye and one for the right.
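    The depth-to-disparity conversion and view synthesis described above follow the standard pinhole stereo relation d = f * B / Z. The sketch below is an illustrative simplification under that assumption: the focal length, baseline, and the naive pixel-shifting renderer are not taken from the article, and hole filling plus the Z-buffer-based occlusion handling it discusses are omitted.

    ```python
    import numpy as np

    def depth_to_disparity(depth, focal_px, baseline):
        """Convert a depth map (z-buffer) to a disparity map via d = f * B / Z.
        focal_px and baseline are illustrative parameters, not from the article."""
        return focal_px * baseline / np.maximum(depth, 1e-6)

    def render_stereo_pair(image, disparity):
        """Minimal DIBR-style sketch: shift each pixel horizontally by half the
        disparity in opposite directions to synthesize left/right views.
        Occlusion resolution and hole filling are deliberately left out."""
        h, w = disparity.shape
        left = np.zeros_like(image)
        right = np.zeros_like(image)
        for y in range(h):
            for x in range(w):
                s = int(round(disparity[y, x] / 2))
                if 0 <= x - s < w:
                    left[y, x - s] = image[y, x]
                if 0 <= x + s < w:
                    right[y, x + s] = image[y, x]
        return left, right
    ```

    In a full system, pixels would be rendered in depth order (far to near) so nearer surfaces correctly overwrite farther ones, which is the role of the Z-buffer-based surface recovery the article proposes.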

    A Global Nearest-Neighbour Depth Estimation-based Automatic 2D-to-3D Image and Video Conversion

    The proposed work presents a new method based on the radically different approach of learning 2D-to-3D conversion from examples. It globally estimates the entire depth map of a query image directly from a repository of 3D images (image + depth pairs or stereo pairs) using a nearest-neighbour regression type idea.
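    The nearest-neighbour regression idea above can be sketched as: describe each image by a coarse global feature, retrieve the k closest image + depth pairs from the repository, and fuse their depths per pixel. This is a hedged sketch only; the downsampled-grayscale descriptor, the Euclidean distance, and the median fusion are assumptions standing in for whatever the paper actually uses.

    ```python
    import numpy as np

    def global_nn_depth(query, db_images, db_depths, k=5):
        """Hypothetical sketch of global nearest-neighbour depth estimation:
        retrieve the k database images closest to the query in a coarse global
        feature space, then take the per-pixel median of their depth maps."""
        def descriptor(img):
            g = img.mean(axis=-1)      # grayscale (assumed descriptor)
            return g[::4, ::4].ravel() # coarse thumbnail as a global feature
        q = descriptor(query)
        dists = [np.linalg.norm(descriptor(im) - q) for im in db_images]
        nearest = np.argsort(dists)[:k]
        return np.median(np.stack([db_depths[i] for i in nearest]), axis=0)
    ```

    The median makes the fused estimate robust to a single badly matched neighbour, which is one reason k > 1 retrieval is typically preferred over using only the single best match.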

    Learning Single-Image Depth from Videos using Quality Assessment Networks

    Depth estimation from a single image in the wild remains a challenging problem. One main obstacle is the lack of high-quality training data for images in the wild. In this paper, we propose a method to generate such data automatically through Structure-from-Motion (SfM) on Internet videos. The core of this method is a Quality Assessment Network that identifies high-quality reconstructions obtained from SfM. Using this method, we collect single-view depth training data from a large number of YouTube videos and construct a new dataset called YouTube3D. Experiments show that YouTube3D is useful for training depth estimation networks and advances the state of the art in single-view depth estimation in the wild.