22 research outputs found

    Explicit modeling on depth-color inconsistency for color-guided depth up-sampling

    Full text link
    © 2016 IEEE. Color-guided depth up-sampling enhances the resolution of a depth map under the assumption that depth discontinuities are consistent with the color-image edges at the corresponding locations. Among the reported methods, the Markov random field (MRF) and its variants form one of the major approaches and have dominated this area for several years. However, the assumption above does not always hold. The usual remedy is to adjust the weighting inside the smoothness term of the MRF model, but no method has explicitly considered the inconsistency that occurs between a depth discontinuity and the corresponding color edge. In this paper, we propose a quantitative measurement of such inconsistency and explicitly embed it into the weighting value of the smoothness term; such a solution has not been reported in the literature. The improved depth up-sampling based on the proposed method is evaluated on the Middlebury and ToFMark datasets and demonstrates promising results.
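
    For reference, a generic MRF energy for color-guided depth up-sampling has the following shape (a sketch only; the weighting kernel, the modulation φ, and the inconsistency measure c_ij are placeholders, since the abstract does not give the paper's exact formulation):

```latex
% Generic MRF energy for color-guided depth up-sampling (sketch).
% D: up-sampled depth, d: observed low-resolution depth,
% I: guiding color image, N(i): neighborhood of pixel i.
E(D) = \sum_i (D_i - d_i)^2
     + \lambda \sum_i \sum_{j \in N(i)} w_{ij}\,(D_i - D_j)^2,
\qquad
% Hypothetical weighting: a color-similarity kernel modulated by an
% inconsistency measure c_{ij} between depth discontinuities and
% the corresponding color edges.
w_{ij} = \exp\!\left(-\frac{\lVert I_i - I_j \rVert^2}{2\sigma^2}\right)\cdot \phi(c_{ij})
```

    Adjusting w_ij is where prior MRF methods act implicitly; the contribution described in the abstract is to make the modulation φ(c_ij) an explicit, quantitative function of the measured depth-color inconsistency.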

    Robust temporal depth enhancement method for dynamic virtual view synthesis

    Get PDF
    Depth-image-based rendering (DIBR) is a view synthesis technique that generates virtual views by warping the reference images according to depth maps. The quality of the synthesized views depends heavily on the accuracy of the depth maps. For dynamic scenarios, however, depth sequences obtained frame by frame through stereo matching can be temporally inconsistent, especially in static regions, which leads to uncomfortable flickering artifacts in the synthesized videos. This problem can be alleviated by depth enhancement methods that perform temporal filtering to suppress depth inconsistency, yet such methods may also spread depth errors: although they increase the temporal consistency of the synthesized videos, they risk reducing the quality of the rendered videos. Since conventional methods may not achieve both properties, we present a robust temporal depth enhancement (RTDE) method for static regions, which propagates precisely the reliable depth values into succeeding frames to improve not only the accuracy but also the temporal consistency of the depth estimates. This in turn benefits the quality of the synthesized videos. In addition, we propose a novel evaluation metric to quantitatively compare the temporal consistency of our method against the state of the art. Experimental results demonstrate the robustness of our method for dynamic virtual view synthesis: both the temporal consistency and the quality of the synthesized videos in static regions are improved.
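
    A minimal sketch of the general propagation idea (illustrative only, not the paper's RTDE algorithm; the static/reliable tests and thresholds are hypothetical):

```python
import numpy as np

def propagate_static_depth(prev_depth, cur_depth, prev_gray, cur_gray,
                           color_thresh=5.0, depth_thresh=2.0):
    """Propagate depth from the previous frame into the current one in
    static regions (sketch of the idea; thresholds are hypothetical)."""
    # A pixel is treated as static when its intensity barely changes
    # between consecutive frames.
    static = np.abs(cur_gray.astype(np.float32)
                    - prev_gray.astype(np.float32)) < color_thresh
    # The previous depth is treated as reliable when it roughly agrees
    # with the current estimate (i.e., it is not an outlier).
    reliable = np.abs(cur_depth.astype(np.float32)
                      - prev_depth.astype(np.float32)) < depth_thresh
    out = cur_depth.copy()
    mask = static & reliable
    # Reuse the previous frame's depth in static, reliable regions to
    # suppress frame-to-frame flickering.
    out[mask] = prev_depth[mask]
    return out
```

    Keeping the propagation restricted to pixels that pass both tests is what limits the spreading of depth errors that plain temporal filtering suffers from.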

    Integrated cosparse analysis model with explicit edge inconsistency measurement for guided depth map upsampling

    Full text link
    © 2018 SPIE and IS&T. A low-resolution depth map can be upsampled under the guidance of the registered high-resolution color image; this type of method is called guided depth map upsampling. Among the existing methods based on the Markov random field (MRF), either a data-driven or a model-based prior is adopted to construct the regularization term. The data-driven prior can implicitly reveal the relation between a color-depth image pair by training on external data, while the model-based prior provides an anisotropic smoothness constraint guided by the high-resolution color image. These two types of priors can complement each other in resolving the ambiguity of guided depth map upsampling. We propose an MRF-based approach that takes both into account to regularize the depth map. Based on analysis sparse coding, the data-driven prior is defined by the joint cosparsity of the vectors transformed from color-depth patches by a pair of learned operators, under the assumption that the cosupports of such bimodal image structures computed by the operators are aligned. The edge inconsistency measurement is calculated explicitly and embedded into the model-based prior, which significantly mitigates texture-copying artifacts. Experimental results on the Middlebury datasets demonstrate the validity of the proposed method, which outperforms seven state-of-the-art approaches.
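
    Schematically, combining the two priors in one MRF objective could look as follows (a sketch; the operators, weights, and m_ij are placeholders, not the paper's exact model):

```latex
% D: depth map to recover, d: observed low-resolution depth,
% A: down-sampling operator, I: guiding color image,
% P_p: operator extracting patch p.
\min_D \; \lVert A D - d \rVert_2^2
  \;+\; \lambda_1 \sum_p \lVert \Omega_d \, P_p D \rVert_1          % data-driven cosparse prior
  \;+\; \lambda_2 \sum_i \sum_{j \in N(i)} w_{ij}(I, m_{ij})\,(D_i - D_j)^2   % model-based prior
```

    Here the learned analysis operators Ω_d and Ω_c are assumed to produce aligned cosupports on corresponding depth and color patches (Ω_c acting on P_p I), and m_ij stands for the explicit edge-inconsistency measurement embedded in the smoothness weights.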

    Stereo matching algorithm based on illumination control to improve the accuracy

    Full text link

    Guided Filters for Depth Image Enhancement

    Get PDF
    This thesis proposes an approach that utilizes guided techniques to refine depth images. Given a depth image and a color image of the same resolution, we can use the color image as a guide to improve the accuracy of the depth image, smoothing edges and removing holes as much as possible. This is done with a guided filter, which solves an optimization problem relating the depth and color images in order to smooth and refine the depth image. Guided filters run in linear time and are much faster than other state-of-the-art methods while producing comparable results. We also integrate an existing guided inpainting model, further removing holes and improving the depth map. In this thesis, we show the application of guided filters to the depth refinement problem, use a guided inpainting model to fill in any holes that may arise in the depth image, and extend the filter to the temporal domain to handle temporal flickering, via an extension of existing optical-flow methods that computes a weighted average of the previous and next neighboring frames. We also present experimental results on real-time video to show that this method is viable for consumer depth applications, and we demonstrate results on both datasets and real video to show the accuracy of our method.
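
    The guided filter itself has a well-known closed form (He et al.), so a minimal single-channel version is easy to sketch; the function names and the use of SciPy's box filter are our choices here, not the thesis's implementation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Classic guided filter: refine `src` (e.g., a depth image) using
    `guide` (e.g., a grayscale color image) of the same shape.
    Minimal single-channel sketch; inputs are floats in [0, 1]."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size)  # box (mean) filter

    mean_I = mean(guide)
    mean_p = mean(src)
    corr_Ip = mean(guide * src)
    corr_II = mean(guide * guide)

    var_I = corr_II - mean_I * mean_I     # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p    # local guide/source covariance

    a = cov_Ip / (var_I + eps)            # local linear coefficients
    b = mean_p - a * mean_I

    # Average the coefficients over all windows covering each pixel,
    # then apply the local linear model q = a * I + b.
    return mean(a) * guide + mean(b)
```

    Because every step is a box filter or a pointwise operation, the cost is linear in the number of pixels and independent of the window radius, which is what makes the filter attractive for real-time depth refinement.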

    A novel switching bilateral filtering algorithm for depth map

    Get PDF
    In this paper, we propose a novel switching bilateral filter for depth maps from an RGB-D sensor. The switching method works as follows: the bilateral filter is applied not at all pixels of the depth map, but only at those where noise and holes are likely, that is, at boundaries and sharp depth changes. Through computer simulation we show that the proposed algorithm can process a depth map effectively and quickly. The presented results show an improvement in the accuracy of 3D object reconstruction when the proposed depth filtering is used. The performance of the proposed algorithm is compared, in terms of 3D object reconstruction accuracy and speed, with that of common successful depth filtering algorithms. The Russian Science Foundation (project #17-76-20045) financially supported this work.
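
    A minimal sketch of such a switching scheme (illustrative of the general idea only, not the paper's exact algorithm; the gradient test and thresholds are hypothetical):

```python
import numpy as np
import cv2

def switching_bilateral(depth, d=9, sigma_color=25.0, sigma_space=7.0,
                        grad_thresh=8.0):
    """Bilateral-filter only the pixels near holes or sharp depth
    changes; leave the rest untouched. `depth` is a single-channel
    map with holes encoded as 0 (an assumed convention)."""
    depth_f = depth.astype(np.float32)
    # Candidate pixels: holes (zero depth) or strong local gradients.
    gx = cv2.Sobel(depth_f, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(depth_f, cv2.CV_32F, 0, 1, ksize=3)
    mask = (depth_f == 0) | (np.hypot(gx, gy) > grad_thresh)
    # Filter once, then switch per pixel between filtered and original.
    filtered = cv2.bilateralFilter(depth_f, d, sigma_color, sigma_space)
    out = depth_f.copy()
    out[mask] = filtered[mask]
    return out
```

    Restricting the (comparatively expensive) bilateral filter to the masked pixels is what gives the switching approach its speed advantage over filtering the full map.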

    GSWO: A Programming Model for GPU-enabled Parallelization of Sliding Window Operations in Image Processing

    Get PDF
    Sliding Window Operations (SWOs) are widely used in image processing applications. They often have to be performed repeatedly across the target image, which can demand significant computing resources when processing large images with large windows. In applications where real-time performance is essential, running these filters on a CPU often fails to deliver results within an acceptable timeframe. The emergence of sophisticated graphics processing units (GPUs) presents an opportunity to address this challenge. However, GPU programming has a steep learning curve and is error-prone for novices, so a tool that can automatically produce a GPU implementation from the original CPU source code offers an attractive means of harnessing GPU power effectively. This paper presents a GPU-enabled programming model, called GSWO, which assists GPU novices by converting their SWO-based image processing applications from the original C/C++ source code to CUDA code in a highly automated manner. The model includes a new set of simple SWO pragmas for generating GPU kernels and supporting effective GPU memory management. We have implemented this programming model based on a CPU-to-GPU translator (C2GPU). Evaluations have been performed on a number of typical SWO image filters and applications. The experimental results show that the GSWO model efficiently accelerates these applications, with improved applicability and performance speed-ups compared to several leading CPU-to-GPU source-to-source translators.
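
    To make the SWO pattern concrete, here is a plain sketch of the kind of loop nest such a translator targets (illustration of the computation only; GSWO's pragma syntax and generated CUDA are not shown in the abstract):

```python
import numpy as np

def sliding_window_mean(img, radius=1):
    """Naive sliding-window mean filter. Each (y, x) iteration is
    independent of the others, which is what makes SWOs amenable to
    mapping one GPU thread to each output pixel."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float32)
    padded = np.pad(img.astype(np.float32), radius, mode="edge")
    for y in range(h):          # independent iterations: each one
        for x in range(w):      # would become a GPU thread
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            out[y, x] = window.mean()
    return out
```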