    Robust temporal depth enhancement method for dynamic virtual view synthesis

    Depth-image-based rendering (DIBR) is a view synthesis technique that generates virtual views by warping reference images according to depth maps. The quality of the synthesized views depends heavily on the accuracy of those depth maps. In dynamic scenarios, however, depth sequences obtained by frame-by-frame stereo matching can be temporally inconsistent, especially in static regions, which causes distracting flickering artifacts in the synthesized videos. This inconsistency can be suppressed by depth enhancement methods that perform temporal filtering, yet those methods may also spread depth errors: although they increase the temporal consistency of synthesized videos, they risk degrading rendering quality. Since conventional methods may not achieve both properties, in this paper we present a robust temporal depth enhancement (RTDE) method for static regions, which propagates only reliable depth values into succeeding frames, improving both the accuracy and the temporal consistency of depth estimates and thereby benefiting the quality of the synthesized videos. In addition, we propose a novel evaluation metric to quantitatively compare the temporal consistency of our method with the state of the art. Experimental results demonstrate the robustness of our method for dynamic virtual view synthesis: both the temporal consistency and the quality of synthesized videos in static regions are improved.
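    The core idea of the abstract above, propagating reliable depth into succeeding frames for static regions, can be sketched as follows. This is an illustrative simplification, not the paper's exact RTDE algorithm; the function name, the color-difference test for "static", and the threshold `tau` are all assumptions.

```python
import numpy as np

def propagate_static_depth(prev_rgb, cur_rgb, prev_depth, cur_depth, tau=5.0):
    """Hedged sketch: where the color stays nearly constant between frames,
    reuse the previous frame's depth to suppress temporal flicker; elsewhere
    keep the newly estimated depth. `tau` is an assumed per-pixel
    color-difference threshold, not a value from the paper."""
    diff = np.abs(prev_rgb.astype(np.float32) - cur_rgb.astype(np.float32)).mean(axis=-1)
    static = diff < tau                # pixels considered static
    out = cur_depth.copy()
    out[static] = prev_depth[static]  # propagate the previous (reliable) depth
    return out, static
```

    In a real system the "reliable" test would be stricter than a color threshold (the paper propagates only depth values judged reliable), but the propagation step itself has this shape.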

    Segmentation-based Method of Increasing The Depth Maps Temporal Consistency

    In this paper, a modification of graph-based depth estimation is presented. The purpose of the proposed modification is to increase the quality of the estimated depth maps, reduce the estimation time, and increase the temporal consistency of the depth maps. The modification is based on image segmentation using superpixels: in the first step, the segmentation of the previous frame is reused for the currently processed frame in order to reduce the overall depth estimation time. In the next step, the depth map of the previous frame is used to initialize the depth map optimization for the current frame. This results in a better representation of object silhouettes in the depth maps and reduces the computational complexity of the depth estimation process. To evaluate the performance of the proposed modification, the authors performed experiments on a set of multiview test sequences that varied in content and camera arrangement. The results confirm the increase in depth map quality: depth maps calculated with the proposed modification are of higher quality than those from the unmodified depth estimation method, regardless of the number of optimization cycles performed. The proposed modification therefore allows higher-quality depth to be estimated with an almost 40% reduction in estimation time. Moreover, the temporal consistency, measured through the reduction in bitrate of encoded virtual views, was also considerably increased.
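    The initialization step described above, seeding the current frame's optimization with depth from the previous frame per segment, might look roughly like this. The function name and the use of a per-segment median are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def init_depth_from_previous(labels, prev_depth):
    """Hedged sketch: for each superpixel (segmentation reused from the
    previous frame), seed the current frame's depth optimization with the
    previous frame's median depth inside that segment."""
    init = np.empty_like(prev_depth, dtype=np.float32)
    for seg_id in np.unique(labels):
        mask = labels == seg_id
        init[mask] = np.median(prev_depth[mask])  # one depth seed per segment
    return init
```

    Starting the optimizer from such a seed, rather than from scratch, is what allows the method to reach comparable or better quality in fewer cycles.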

    Cuboid-maps for indoor illumination modeling and augmented reality rendering

    Get PDF
    This thesis proposes a novel approach for indoor scene illumination modeling and augmented reality rendering. Our key observation is that an indoor scene is well represented by a set of rectangular spaces, where important illuminants reside on their boundary faces, such as a window on a wall or a ceiling light. Given a perspective image or a panorama and detected rectangular spaces as inputs, we estimate their cuboid shapes, and infer illumination components for each face of the cuboids by a simple convolutional neural architecture. The process turns an image into a set of cuboid environment maps, each of which is a simple extension of a traditional cube-map. For augmented reality rendering, we simply take a linear combination of inferred environment maps and an input image, producing surprisingly realistic illumination effects. This approach is simple and efficient, avoids flickering, and achieves quantitatively more accurate and qualitatively more realistic effects than competing, substantially more complicated systems.
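    The rendering step described above, a linear combination of inferred environment maps with the input image, reduces to a weighted blend. The sketch below is a minimal illustration under assumed names and weights; the actual per-cuboid weighting in the thesis may differ.

```python
import numpy as np

def blend_illumination(env_maps, weights, image, alpha=0.5):
    """Hedged sketch: linearly combine per-cuboid environment-map renderings
    (`env_maps`, weighted by `weights`), then mix the result with the input
    image via `alpha`. All parameter names here are illustrative assumptions."""
    env = sum(w * m for w, m in zip(weights, env_maps))
    return alpha * env + (1.0 - alpha) * image
```

    Because the combination is linear, the result varies smoothly as the weights change, which is consistent with the abstract's claim that the approach avoids flickering.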

    Computer Vision Based 3D Reconstruction: A Review

    3D reconstruction is used in many fields, beginning with object reconstruction, such as sites and cultural artifacts both on the ground and under the sea. Scientists benefit from these tasks, as they allow environments to be studied and preserved as 3D data before they are lost. This paper explains the vision setups that are commonly used, such as a single camera, a stereo camera, Kinect / structured-light / time-of-flight cameras, and fusion approaches. Prior work is also reviewed to show how 3D reconstruction is performed in many fields and with various algorithms.

    On Robust Cross-View Consistency in Self-Supervised Monocular Depth Estimation

    Remarkable progress has been made in self-supervised monocular depth estimation (SS-MDE) by exploring cross-view consistency, e.g., photometric consistency and 3D point cloud consistency. However, these consistencies are very vulnerable to illumination variance, occlusions, texture-less regions, and moving objects, making them not robust enough to deal with various scenes. To address this challenge, we study two kinds of robust cross-view consistency in this paper. Firstly, the spatial offset field between adjacent frames is obtained by reconstructing the reference frame from its neighbors via deformable alignment, and is used to align the temporal depth features via a Depth Feature Alignment (DFA) loss. Secondly, the 3D point clouds of each reference frame and its nearby frames are calculated and transformed into voxel space, where the point density in each voxel is computed and aligned via a Voxel Density Alignment (VDA) loss. In this way, we exploit the temporal coherence in both depth feature space and 3D voxel space for SS-MDE, shifting the "point-to-point" alignment paradigm to a "region-to-region" one. Compared with the photometric consistency loss and the rigid point cloud alignment loss, the proposed DFA and VDA losses are more robust owing to the strong representation power of deep features and the high tolerance of voxel density to the aforementioned challenges. Experimental results on several outdoor benchmarks show that our method outperforms current state-of-the-art techniques. An extensive ablation study and analysis validate the effectiveness of the proposed losses, especially in challenging scenes. The code and models are available at https://github.com/sunnyHelen/RCVC-depth
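    The voxel-density idea behind the VDA loss can be sketched as follows: voxelize two point clouds over a common grid, normalize the per-voxel counts, and compare the densities. This is a simplified stand-in, not the paper's implementation; the grid extent, voxel size, and the L1 comparison are assumptions.

```python
import numpy as np

def voxel_density(points, voxel_size=0.5, grid=(8, 8, 8)):
    """Count points per voxel over a fixed grid, normalized to a density.
    The grid extent and voxel size are illustrative assumptions."""
    idx = np.floor(points / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(grid) - 1)       # clamp to grid bounds
    density = np.zeros(grid, dtype=np.float32)
    np.add.at(density, tuple(idx.T), 1.0)           # accumulate per-voxel counts
    return density / max(len(points), 1)

def vda_loss(points_a, points_b):
    """Hedged sketch of a Voxel Density Alignment-style loss: L1 distance
    between the normalized voxel densities of two point clouds."""
    return float(np.abs(voxel_density(points_a) - voxel_density(points_b)).sum())
```

    Comparing densities rather than individual points is what makes the alignment "region-to-region": small per-point errors that leave the occupancy distribution unchanged do not change the loss.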