
    Optical Flow in Mostly Rigid Scenes

    The optical flow of natural scenes is a combination of the motion of the observer and the independent motion of objects. Existing algorithms typically focus either on recovering motion and structure under the assumption of a purely static world, or on optical flow for general unconstrained scenes. We combine these approaches in an optical flow algorithm that estimates an explicit segmentation of moving objects from appearance and physical constraints. In static regions we take advantage of strong constraints to jointly estimate the camera motion and the 3D structure of the scene over multiple frames. This also allows us to regularize the structure instead of the motion. Our formulation uses a Plane+Parallax framework, which works even under small baselines and reduces the motion estimation to a one-dimensional search problem, resulting in more accurate estimation. In moving regions the flow is treated as unconstrained and computed with an existing optical flow method. The resulting Mostly-Rigid Flow (MR-Flow) method achieves state-of-the-art results on both the MPI-Sintel and KITTI-2015 benchmarks. Comment: 15 pages, 10 figures; accepted for publication at CVPR 2017.
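
    The key computational step is the Plane+Parallax decomposition: after warping by a reference-plane homography, the residual motion of any static point lies on the line through the warped point and the epipole, leaving only a scalar to estimate. Below is a minimal Python sketch of that one-dimensional search, assuming a known homography H and epipole e (quantities the method estimates, given here as inputs) and a hypothetical photometric cost function standing in for the paper's multi-frame data term:

    import numpy as np

    def plane_plus_parallax_flow(x, H, e, gamma):
        # Flow at pixel x (2-vector) given reference-plane homography H (3x3),
        # epipole e (2-vector), and scalar parallax magnitude gamma.
        xw = H @ np.array([x[0], x[1], 1.0])
        xw = xw[:2] / xw[2]                  # planar (homography) part
        d = e - xw
        d = d / np.linalg.norm(d)            # parallax direction: toward the epipole
        return (xw + gamma * d) - x          # total flow; only gamma is unknown

    def search_gamma(x, H, e, cost, gammas):
        # Brute-force 1-D search; `cost(x, flow)` is a hypothetical stand-in
        # for the paper's photometric data term.
        flows = [plane_plus_parallax_flow(x, H, e, g) for g in gammas]
        return flows[int(np.argmin([cost(x, f) for f in flows]))]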

    Motion Segmentation by New Three-View Constraint from a Moving Camera

    We propose a new method for motion segmentation using a moving camera. The proposed method classifies each pixel in the image sequence as background or motion region by applying a novel three-view constraint called the "parallax-based multiplanar constraint." This new three-view constraint, the main contribution of this paper, is derived from the relative projective structure of two points in three different views and is implemented within the "Plane + Parallax" framework. The parallax-based multiplanar constraint overcomes a limitation of previous geometric constraints: it does not require the reference plane to be constant across multiple views. Unlike the epipolar constraint, it also reduces the degenerate surface to a degenerate line, making it possible to detect moving objects followed by a camera moving in the same direction. We evaluate the proposed method on several video sequences to demonstrate the effectiveness and robustness of the parallax-based multiplanar constraint.
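
    To make the three-view idea concrete, here is a rough Python sketch of a parallax-based rigidity test in the spirit of the paper. It uses the relative-structure ratio from the Plane+Parallax literature (Irani and Anandan's pairwise form); the paper's exact parallax-based multiplanar constraint differs in detail, and the variable names and threshold are illustrative only:

    import numpy as np

    def relative_structure(p1, p2, u1, u2):
        # Relative projective structure (gamma2/gamma1) of two points from
        # their residual parallax vectors u1, u2; p1, p2 are the
        # plane-registered (homography-warped) image positions. perp()
        # rotates a 2-vector by 90 degrees.
        perp = lambda v: np.array([-v[1], v[0]])
        dp = p2 - p1                          # vector joining the warped points
        return float(u2 @ perp(dp)) / float(u1 @ perp(dp))

    def is_moving(p1, p2, u12_1, u12_2, u13_1, u13_2, tol=0.1):
        # For a static pair, the relative structure is the same whether it is
        # measured between views (1,2) or views (1,3); a mismatch flags
        # independent motion.
        k12 = relative_structure(p1, p2, u12_1, u12_2)
        k13 = relative_structure(p1, p2, u13_1, u13_2)
        return abs(k12 - k13) > tol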

    INTERMEDIATE VIEW RECONSTRUCTION FOR MULTISCOPIC 3D DISPLAY

    This thesis focuses on Intermediate View Reconstruction (IVR), which generates additional images from the available stereo images. The main application of IVR is to generate the content of multiscopic 3D displays, and it can also be applied to generate different viewpoints for Free-viewpoint TV (FTV). Although IVR is considered a good approach to generating additional images, the reconstruction process poses several problems, such as detecting and handling occlusion areas, preserving discontinuity at edges, and reducing image artifacts when forming the texture of the intermediate image. An occlusion area is a region that is visible in one image but hidden in the other. Solving these IVR problems is a significant challenge for researchers. In this thesis, several novel algorithms have been specifically designed to address these challenges and are combined into a highly robust intermediate view reconstruction algorithm. Computer simulation and experimental results confirm the importance of occluded areas in IVR. We therefore propose a novel occlusion detection algorithm and another novel algorithm to inpaint those areas. These proposed algorithms are then employed in a novel occlusion-aware intermediate view reconstruction that finds an intermediate image at a given disparity between two input images. The novelty lies in adding occlusion awareness to the reconstruction algorithm and in three quality improvement techniques that reduce image artifacts: filling the re-sampling holes, removing ghost contours, and handling the disocclusion area. We compared the proposed algorithms qualitatively and quantitatively to well-known algorithms in each field, and the results show that our algorithms are superior. The performance of the proposed reconstruction algorithm was tested on 13 real images and 13 synthetic images. Moreover, analysis of a human-trial experiment conducted with 21 participants confirmed that images reconstructed by our proposed algorithm have very high quality compared with those produced by the other existing algorithms.
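
    As an illustration of the two building blocks, the sketch below pairs a standard left-right consistency check (a common occlusion detector, standing in for the thesis's own) with a simple forward warp that synthesizes an intermediate view at a fractional disparity; the pixels left unfilled are exactly the re-sampling holes and disocclusions the thesis inpaints. The disparity maps are assumed inputs:

    import numpy as np

    def occlusion_mask(d_left, d_right, thresh=1.0):
        # Left-right consistency check: a pixel is flagged occluded when its
        # disparity disagrees with the disparity of the pixel it matches.
        h, w = d_left.shape
        ys = np.arange(h)[:, None].repeat(w, axis=1)
        xs = np.arange(w)[None, :].repeat(h, axis=0)
        xr = np.clip(np.rint(xs - d_left).astype(int), 0, w - 1)
        return np.abs(d_left - d_right[ys, xr]) > thresh

    def intermediate_view(img_left, d_left, alpha=0.5):
        # Forward-warp the left image by alpha * disparity; pixels that stay
        # unfilled are the re-sampling holes / disocclusions to be inpainted.
        h, w = d_left.shape
        out = np.zeros_like(img_left)
        filled = np.zeros((h, w), bool)
        for y in range(h):
            for x in range(w):
                xt = int(round(x - alpha * d_left[y, x]))
                if 0 <= xt < w:
                    out[y, xt] = img_left[y, x]
                    filled[y, xt] = True
        return out, ~filled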

    Motion parallax for 360° RGBD video

    We present a method for adding parallax and real-time playback of 360° videos in Virtual Reality headsets. In current video players, the playback does not respond to translational head movement, which reduces the feeling of immersion and causes motion sickness for some viewers. Given a 360° video and its corresponding depth (provided by current stereo 360° stitching algorithms), a naive image-based rendering approach would use the depth to generate a 3D mesh around the viewer, then translate it appropriately as the viewer moves their head. However, this approach breaks at depth discontinuities, showing visible distortions, whereas cutting the mesh at such discontinuities leads to ragged silhouettes and holes at disocclusions. We address these issues by improving the given initial depth map to yield cleaner, more natural silhouettes. We rely on a three-layer scene representation, made up of a foreground layer and two static background layers, to handle disocclusions by propagating information from multiple frames for the first background layer and then inpainting for the second one. Our system works with input from many of today's most popular 360° stereo capture devices (e.g., Yi Halo or GoPro Odyssey), and works well even if the original video does not provide depth information. Our user studies confirm that our method provides a more compelling viewing experience than without parallax, increasing immersion while reducing discomfort and nausea.
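
    For intuition, here is a minimal Python sketch of the naive image-based-rendering baseline the paper improves on: lift the equirectangular depth map to a 3D point set around the viewer (triangulating neighboring pixels would give the mesh), then re-project after a translational head move. The projection model and array layout are assumptions for illustration, not the paper's code:

    import numpy as np

    def equirect_to_points(depth):
        # Lift an equirectangular depth map (H x W, meters) to a 3-D point set
        # around the viewer; triangulating neighboring pixels gives the mesh.
        h, w = depth.shape
        lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi   # azimuth
        lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi   # elevation
        lon, lat = np.meshgrid(lon, lat)
        dirs = np.stack([np.cos(lat) * np.sin(lon),
                         np.sin(lat),
                         np.cos(lat) * np.cos(lon)], axis=-1)
        return dirs * depth[..., None]

    def reproject(points, head_translation):
        # View directions after a translational head move; depth
        # discontinuities are where this naive warp visibly breaks.
        p = points - np.asarray(head_translation)
        return p / np.linalg.norm(p, axis=-1, keepdims=True)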

    Efficient rendering for three-dimensional displays

    This thesis explores more efficient methods for visualizing point data sets on three-dimensional (3D) displays. Point data sets are used in many scientific applications, e.g. cosmological simulations. Visualizing these data sets in 3D is desirable because it can more readily reveal structure and unknown phenomena. However, cutting-edge scientific point data sets are very large, and producing/rendering even a single image is expensive. Furthermore, current literature suggests that the ideal number of views for 3D (multiview) displays can be in the hundreds, which compounds the costs. The accepted notion that many views are required for 3D displays is challenged by carrying out a novel human-factors trial study. The results suggest that humans are actually surprisingly insensitive to the number of viewpoints with regard to their task performance, when occlusion in the scene is not a dominant factor. Existing stereoscopic rendering algorithms can have high set-up costs, which limits their use, and none are tuned for uncorrelated 3D point rendering. This thesis shows that it is possible to improve rendering speeds for a low number of views by perspective reprojection. The novelty of the approach lies in delaying the reprojection and generation of the viewpoints until the fragment stage of the pipeline and in streamlining the rendering pipeline for points only. Theoretical analysis suggests a fragment reprojection scheme will render at least 2.8 times faster than naïvely re-rendering the scene from multiple viewpoints. Building upon the fragment reprojection technique, further rendering performance is shown to be possible (at the cost of some rendering accuracy) by restricting the amount of reprojection required according to the stereoscopic resolution of the display. A significant benefit is that the scene depth can be mapped arbitrarily to the perceived depth range of the display at no extra cost compared to a single region mapping approach. Using an average case study (rendering 500k points for a 9-view High Definition 3D display), theoretical analysis suggests that this new approach is capable of twice the performance gain of simply reprojecting every single fragment, and quantitative measures show the algorithm to be 5 times faster than a naïve rendering approach. Further detailed quantitative results, under varying scenarios, are provided and discussed.
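
    The core of the fragment-stage idea can be sketched in a few lines: instead of re-rendering the scene once per view, each fragment's depth yields a per-view horizontal shift in one vectorized step. The stereo geometry below (shift proportional to the view's baseline and to deviation from a convergence depth) is a standard model used for illustration; the parameter names are not the thesis's API:

    import numpy as np

    def fragment_view_offsets(depth, baselines, focal, convergence):
        # Per-fragment horizontal shift for each of N views, computed once per
        # fragment instead of re-rendering the scene N times. `convergence` is
        # the scene depth mapped onto the display plane.
        b = np.asarray(baselines)[None, :]       # (1, N) per-view baselines
        z = np.asarray(depth).reshape(-1, 1)     # (F, 1) fragment depths
        return focal * b * (1.0 / convergence - 1.0 / z)   # (F, N) shifts

    # e.g. a 9-view display with hypothetical camera parameters:
    # shifts = fragment_view_offsets(z, np.linspace(-0.04, 0.04, 9), 1400.0, 2.0)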

    Analysis of MVD and color edge detection for depth maps enhancement

    Final degree project carried out in collaboration with the Fraunhofer Heinrich Hertz Institute. MVD (Multiview Video plus Depth) data consists of two components: color video and depth map sequences. Depth maps represent the spatial arrangement (or three-dimensional geometry) of the scene. The MVD representation is used for rendering virtual views in FVV (Free Viewpoint Video) and for 3DTV (3-dimensional TeleVision) applications. Distortions of the silhouettes of objects in the depth maps are a problem when rendering a stereo video pair. This Master thesis presents a system to improve the depth component of MVD. For this purpose, it introduces a new method called correlation histograms for analyzing the two components of depth-enhanced 3D video representations, with special emphasis on the improved depth component. This document describes this new method and presents an analysis of six different MVD data sets with different features. Moreover, a modular and flexible system for improving depth maps is introduced. The idea is to use the color video component to extract edges of the scene and to re-shape the depth component according to the edge information. The system essentially describes a framework; hence, it can accommodate changes to specific tasks, provided the overall target is respected. After the improvement process, the MVD data is analyzed again via correlation histograms in order to characterize the depth improvement. The achieved results show that correlation histograms are a good method for analyzing the impact of processing MVD data. It is also confirmed that the presented system is modular and flexible, as it works with three different degrees of change, introducing modifications in the depth maps according to the input characteristics. Hence, this system can be used as a framework for depth map improvement. The results show that contours with 1-pixel-width jittering in the depth maps are correctly re-shaped. Additionally, constant background and foreground areas of the depth maps are improved according to the degree of change, attaining better results in terms of temporal consistency. Future work can focus on unresolved problems, such as jittering wider than one pixel, or on making the system more dynamic.
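
    A per-scanline Python sketch of the edge-guided re-shaping idea, assuming a binary color-edge map is already available (the thesis's edge detector, full 2D handling, and degrees of change are not reproduced here): each depth discontinuity is snapped to the nearest color edge within a small window, which is enough to correct 1-pixel silhouette jitter:

    import numpy as np

    def reshape_depth_row(depth_row, color_edges_row, radius=2, jump=5):
        # Snap each depth discontinuity (a jump larger than `jump`) to the
        # nearest color edge within `radius` pixels of it.
        d = depth_row.astype(int)                    # copy; avoids uint8 wrap
        for x in np.flatnonzero(np.abs(np.diff(d)) > jump):
            lo, hi = max(0, x - radius), min(len(d) - 1, x + radius)
            cand = [c for c in range(lo, hi + 1) if color_edges_row[c]]
            if not cand:
                continue
            e = min(cand, key=lambda c: abs(c - x))  # nearest color edge
            if e > x:                                # push the contour right
                d[x + 1:e + 1] = d[x]
            elif e < x:                              # pull the contour left
                d[e + 1:x + 1] = d[x + 1]
        return d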