513 research outputs found

    Integrated cosparse analysis model with explicit edge inconsistency measurement for guided depth map upsampling

    © 2018 SPIE and IS&T. A low-resolution depth map can be upsampled with guidance from a registered high-resolution color image; this type of method is known as guided depth map upsampling. Among existing methods based on the Markov random field (MRF), either a data-driven or a model-based prior is adopted to construct the regularization term. The data-driven prior can implicitly reveal the relation between a color-depth image pair by training on external data. The model-based prior provides an anisotropic smoothness constraint guided by the high-resolution color image. These two types of priors can complement each other to resolve the ambiguity in guided depth map upsampling. An MRF-based approach is proposed that takes both into account to regularize the depth map. Based on analysis sparse coding, the data-driven prior is defined by joint cosparsity on the vectors transformed from color-depth patches using a pair of learned operators, under the assumption that the cosupports of such bimodal image structures computed by the operators are aligned. An edge inconsistency measurement is explicitly calculated and embedded into the model-based prior, which significantly mitigates texture-copying artifacts. Experimental results on the Middlebury datasets demonstrate the validity of the proposed method, which outperforms seven state-of-the-art approaches.
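
    As a minimal sketch of the joint-cosparsity idea described above (not the authors' code), the snippet below applies a pair of analysis operators to vectorized color and depth patches and penalizes depth coefficients that are active where the color patch's cosupport is zero; the operator matrices, patch size, and threshold are illustrative assumptions.

```python
# Minimal sketch of the joint-cosparsity prior: a pair of learned analysis
# operators is applied to vectorized color and depth patches, and the prior
# favors depth patches whose cosupport (near-zero analysis coefficients)
# aligns with that of the color patch. Omega_c / Omega_d and eps are assumed.
import numpy as np

def cosupport(coeffs, eps=1e-3):
    """Indices where the analysis coefficients are (near-)zero."""
    return np.abs(coeffs) < eps

def joint_cosparsity_penalty(omega_c, omega_d, color_patch, depth_patch, eps=1e-3):
    """Penalize depth analysis coefficients that are active where the
    color patch's cosupport says the signal should be smooth."""
    a_c = omega_c @ color_patch.ravel()   # analysis coefficients of the color patch
    a_d = omega_d @ depth_patch.ravel()   # analysis coefficients of the depth patch
    shared = cosupport(a_c, eps)          # locations expected to be zero for depth too
    return np.sum(a_d[shared] ** 2)       # small value => aligned cosupports

# toy usage with random stand-ins for the learned operators on 8x8 patches
rng = np.random.default_rng(0)
omega_c = rng.standard_normal((128, 64))
omega_d = rng.standard_normal((128, 64))
penalty = joint_cosparsity_penalty(omega_c, omega_d,
                                   rng.standard_normal((8, 8)),
                                   rng.standard_normal((8, 8)))
print(f"joint cosparsity penalty: {penalty:.3f}")
```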

    Explicit Edge Inconsistency Evaluation Model for Color-Guided Depth Map Enhancement

    © 2016 IEEE. Color-guided depth enhancement refines depth maps under the assumption that depth edges and color edges at corresponding locations are consistent. Among methods for such low-level vision tasks, the Markov random field (MRF) and its variants have dominated this area for several years. However, the assumption above is not always true. To tackle the problem, state-of-the-art solutions adjust the weighting coefficient inside the smoothness term of the MRF model. These methods lack an explicit evaluation model that quantitatively measures the inconsistency between the depth edge map and the color edge map, so they cannot adaptively control the strength of the guidance from the color image for depth enhancement, leading to defects such as texture-copy artifacts and blurred depth edges. In this paper, we propose a quantitative measurement of such inconsistency and explicitly embed it into the smoothness term. The proposed method demonstrates promising experimental results compared with benchmark and state-of-the-art methods on the Middlebury, ToF-Mark, and NYU datasets.
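
    A rough Python illustration of the mechanism (not the paper's exact formulation): an explicit inconsistency measure between the color and depth edge maps modulates the guidance weight used in the smoothness term, so color edges without matching depth edges contribute less and texture copying is suppressed. The gradient-based edge detector and the Gaussian weighting are assumptions made for this sketch.

```python
# Sketch of an explicit edge-inconsistency measure modulating the MRF
# smoothness weight: where the color image shows an edge but the depth map
# does not (or vice versa), the color guidance is down-weighted.
import numpy as np

def edge_magnitude(img):
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def guidance_weight(color, depth, sigma=0.1):
    e_c = edge_magnitude(color)
    e_d = edge_magnitude(depth)
    e_c /= e_c.max() + 1e-8                     # normalize both edge maps to [0, 1]
    e_d /= e_d.max() + 1e-8
    inconsistency = np.abs(e_c - e_d)           # explicit inconsistency measure
    return np.exp(-(inconsistency ** 2) / (2 * sigma ** 2))  # smoothness-term weight

# toy usage: a color step edge with no matching depth edge gets a low weight there
color = np.zeros((32, 32))
color[:, 16:] = 1.0          # vertical color edge
depth = np.ones((32, 32))    # flat depth, no edge
w = guidance_weight(color, depth)
print(w.min(), w.max())      # low weight at the inconsistent edge, ~1 elsewhere
```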

    Patch based synthesis for single depth image super-resolution

    We present an algorithm to synthetically increase the resolution of a solitary depth image using only a generic database of local patches. Modern range sensors measure depths with non-Gaussian noise and at lower starting resolutions than typical visible-light cameras. While patch-based approaches for upsampling intensity images continue to improve, this is the first exploration of patching for depth images. We match against the height field of each low-resolution input depth patch and search our database for a list of appropriate high-resolution candidate patches. Selecting the right candidate at each location in the depth image is then posed as a Markov random field labeling problem. Our experiments also show how further depth-specific processing, such as noise removal and correct patch normalization, dramatically improves our results. Perhaps surprisingly, even better results are achieved on a variety of real test scenes by providing our algorithm with only synthetic training depth data.
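
    The candidate-retrieval step can be sketched as below (with a hypothetical database and patch sizes; the MRF labeling that selects one candidate per location is not shown): each mean-normalized low-resolution depth patch is matched against a database of low-/high-resolution patch pairs to produce a short list of high-resolution candidates.

```python
# Simplified sketch of patch candidate retrieval for depth super-resolution:
# normalized low-res query patches are matched against a generic database of
# low-/high-res patch pairs, yielding k high-res candidates per location.
import numpy as np

def normalize_patch(p):
    """Remove the patch mean so matching is invariant to absolute depth."""
    return p - p.mean()

def candidate_lists(lr_patches, db_lr, db_hr, k=5):
    """Return, for each input patch, the k best high-res candidates from the database."""
    q = np.stack([normalize_patch(p).ravel() for p in lr_patches])   # (N, d)
    d = np.stack([normalize_patch(p).ravel() for p in db_lr])        # (M, d)
    dists = ((q[:, None, :] - d[None, :, :]) ** 2).sum(-1)           # (N, M) squared L2
    idx = np.argsort(dists, axis=1)[:, :k]
    return [[db_hr[j] for j in row] for row in idx]

# toy usage: random 4x4 low-res patches paired with 8x8 high-res patches
rng = np.random.default_rng(1)
db_lr = rng.standard_normal((100, 4, 4))
db_hr = rng.standard_normal((100, 8, 8))
queries = rng.standard_normal((10, 4, 4))
cands = candidate_lists(queries, db_lr, db_hr)
print(len(cands), len(cands[0]))  # 10 locations, 5 candidates each
```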

    Deformable Kernel Networks for Joint Image Filtering

    Joint image filters are used to transfer structural details from a guidance image, used as a prior, to a target image, in tasks such as enhancing spatial resolution and suppressing noise. Previous methods based on convolutional neural networks (CNNs) combine nonlinear activations of spatially invariant kernels to estimate structural details and regress the filtering result. In this paper, we instead learn explicitly sparse and spatially variant kernels. We propose a CNN architecture and its efficient implementation, called the deformable kernel network (DKN), that outputs sets of neighbors and the corresponding weights adaptively for each pixel. The filtering result is then computed as a weighted average. We also propose a fast version of DKN that runs about seventeen times faster for an image of size 640 × 480. We demonstrate the effectiveness and flexibility of our models on the tasks of depth map upsampling, saliency map upsampling, cross-modality image restoration, texture removal, and semantic segmentation. In particular, we show that the weighted averaging process with sparsely sampled 3 × 3 kernels outperforms the state of the art by a significant margin in all cases. (Comment: arXiv admin note: substantial text overlap with arXiv:1903.11286; accepted to IJCV.)
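
    The filtering step can be sketched as follows (the CNN that predicts per-pixel neighbor offsets and weights is assumed and replaced by random values here): every output pixel is a weighted average of a small set of sparsely sampled neighbors of the target image, with the sampling positions and weights varying per pixel.

```python
# Sketch of DKN-style filtering only: the per-pixel offsets and weights that a
# trained network would predict are replaced with random stand-ins, and each
# output pixel is a weighted average of k sparsely sampled target-image neighbors.
import numpy as np

def filter_with_deformable_kernels(target, offsets, weights):
    """target: (H, W); offsets: (H, W, k, 2) integer dy/dx; weights: (H, W, k)."""
    h, w = target.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros_like(target, dtype=np.float64)
    k = weights.shape[-1]
    for i in range(k):
        ny = np.clip(ys + offsets[..., i, 0], 0, h - 1)   # sampled neighbor rows
        nx = np.clip(xs + offsets[..., i, 1], 0, w - 1)   # sampled neighbor cols
        out += weights[..., i] * target[ny, nx]           # accumulate weighted samples
    return out

# toy usage with k = 9 (a sparsely sampled 3x3 neighborhood) and random predictions
rng = np.random.default_rng(2)
H, W, K = 32, 32, 9
target = rng.standard_normal((H, W))
offsets = rng.integers(-4, 5, size=(H, W, K, 2))
weights = rng.random((H, W, K))
weights /= weights.sum(-1, keepdims=True)   # weights normalized per pixel
print(filter_with_deformable_kernels(target, offsets, weights).shape)
```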

    Deep Color Guided Coarse-to-Fine Convolutional Network Cascade for Depth Image Super-Resolution
