    Depth Superresolution using Motion Adaptive Regularization

    The spatial resolution of depth sensors is often significantly lower than that of conventional optical cameras. Recent work has explored improving depth resolution using higher-resolution intensity images as side information. In this paper, we demonstrate that further incorporating the temporal information in videos can significantly improve the results. In particular, we propose a novel approach that improves depth resolution by exploiting the space-time redundancy in the depth and intensity channels using motion-adaptive low-rank regularization. Experiments confirm that the proposed approach substantially improves the quality of the estimated high-resolution depth. Our approach can serve as a first component in vision systems that rely on high-resolution depth information.
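
    Low-rank regularizers of this kind are commonly implemented by stacking patches gathered along a motion trajectory as columns of a matrix and soft-thresholding its singular values. The Python sketch below shows only that step; the patch grouping, motion estimation, and threshold value are illustrative assumptions rather than the authors' implementation.

    import numpy as np

    def svt(patch_matrix, tau):
        """Low-rank estimate of a patch group via singular value
        thresholding (the proximal operator of the nuclear norm).

        patch_matrix: (d, n) array whose columns are vectorized patches
        collected along one motion trajectory; when the motion estimate
        is accurate the columns are near-duplicates, so the matrix is
        approximately low rank.
        """
        u, s, vt = np.linalg.svd(patch_matrix, full_matrices=False)
        s = np.maximum(s - tau, 0.0)   # shrink the singular values
        return (u * s) @ vt

    # Toy usage: ten noisy observations of one vectorized 8x8 patch.
    rng = np.random.default_rng(0)
    patch = rng.random((64, 1))
    group = patch @ np.ones((1, 10)) + 0.05 * rng.standard_normal((64, 10))
    denoised = svt(group, tau=0.5)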

    Integrated cosparse analysis model with explicit edge inconsistency measurement for guided depth map upsampling

    A low-resolution depth map can be upsampled under the guidance of a registered high-resolution color image; this type of method is known as guided depth map upsampling. Among the existing methods based on Markov random fields (MRFs), either a data-driven or a model-based prior is adopted to construct the regularization term. A data-driven prior can implicitly reveal the relation between a color-depth image pair by training on external data. A model-based prior provides an anisotropic smoothness constraint guided by the high-resolution color image. These two types of priors can complement each other to resolve the ambiguity in guided depth map upsampling. An MRF-based approach is proposed that takes both into account to regularize the depth map. Based on analysis sparse coding, the data-driven prior is defined by joint cosparsity on the vectors transformed from color-depth patches using a pair of learned operators. It is based on the assumption that the cosupports of such bimodal image structures, as computed by the operators, are aligned. An edge inconsistency measurement is explicitly calculated and embedded into the model-based prior; it significantly mitigates texture-copying artifacts. Experimental results on the Middlebury datasets demonstrate the validity of the proposed method, which outperforms seven state-of-the-art approaches.
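
    One way to picture the proposed objective is as a data term plus the two priors. The sketch below is a minimal illustration under stated assumptions: the analysis operators are random stand-ins for the learned pair, an l2,1 group norm serves as a simple surrogate for the aligned co-support constraint, and the inconsistency weighting is a guess at the general idea, not the paper's formulation.

    import numpy as np

    def extract_patches(img, p=8, stride=8):
        """Vectorize non-overlapping p x p patches as matrix columns."""
        H, W = img.shape
        cols = [img[i:i + p, j:j + p].ravel()
                for i in range(0, H - p + 1, stride)
                for j in range(0, W - p + 1, stride)]
        return np.stack(cols, axis=1)

    def mrf_energy(D, D_lr, I, Omega_d, Omega_c, incons, lam_s, lam_a):
        """Evaluate the upsampling objective for a candidate depth map D."""
        # Data term: stay close to the upsampled low-resolution input.
        data = np.sum((D - D_lr) ** 2)

        # Model-based prior: smoothness weighted by the color gradients.
        # The edge inconsistency map `incons` (values in [0, 1]) restores
        # smoothing across color edges that have no depth counterpart,
        # which is what suppresses texture-copying artifacts.
        gy_d, gx_d = np.gradient(D)
        gy_i, gx_i = np.gradient(I)
        w = np.exp(-(np.abs(gx_i) + np.abs(gy_i)) * (1.0 - incons))
        smooth = np.sum(w * (gx_d ** 2 + gy_d ** 2))

        # Data-driven prior: joint cosparsity of the analysis operators
        # on color-depth patch pairs; an l2,1 group norm stands in for
        # the aligned co-support assumption.
        Pd = Omega_d @ extract_patches(D)
        Pc = Omega_c @ extract_patches(I)
        analysis = np.sum(np.sqrt(Pd ** 2 + Pc ** 2))

        return data + lam_s * smooth + lam_a * analysis

    # Toy check with random stand-ins for the learned operators.
    rng = np.random.default_rng(0)
    D = rng.random((32, 32))
    I = rng.random((32, 32))
    Omega = rng.standard_normal((128, 64))
    energy = mrf_energy(D, D, I, Omega, Omega, np.zeros((32, 32)), 0.1, 0.01)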

    Low Power Depth Estimation of Rigid Objects for Time-of-Flight Imaging

    Depth sensing is useful in a variety of applications that range from augmented reality to robotics. Time-of-flight (TOF) cameras are appealing because they obtain dense depth measurements with minimal latency. However, for many battery-powered devices, the illumination source of a TOF camera is power-hungry and can limit the battery life of the device. To address this issue, we present an algorithm that lowers the power for depth sensing by reducing the usage of the TOF camera and estimating depth maps from concurrently collected images. Our technique also adaptively controls the TOF camera and enables it when an accurate depth map cannot be estimated. To ensure that the overall system power for depth sensing is reduced, we design our algorithm to run on a low-power embedded platform, where it outputs 640x480 depth maps at 30 frames per second. We evaluate our approach on several RGB-D datasets, where it produces depth maps with an overall mean relative error of 0.96% and reduces the usage of the TOF camera by 85%. When used with commercial TOF cameras, we estimate that our algorithm can lower the total power for depth sensing by up to 73%.
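
    The duty-cycling idea can be summarized as: propagate the previous depth map under the estimated motion, and power the TOF illumination only when a confidence test fails. The sketch below uses assumed interfaces; the flow-based warp, the photometric-error confidence signal, and the threshold are placeholders rather than the paper's algorithm.

    import numpy as np

    def warp_depth(depth, flow):
        """Forward-warp a depth map with a per-pixel flow field; a crude
        stand-in for reprojecting depth under an estimated rigid motion."""
        H, W = depth.shape
        ys, xs = np.mgrid[0:H, 0:W]
        xw = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, W - 1)
        yw = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, H - 1)
        out = np.zeros_like(depth)
        out[yw, xw] = depth      # holes and collisions ignored in this toy
        return out

    def next_depth(prev_depth, flow, photo_error, tof_read, thresh=0.05):
        """Return (depth, tof_used) for the current frame, powering the
        TOF illumination only when the propagated estimate looks bad.

        photo_error: mean photometric residual of the motion estimate,
                     used here as an assumed confidence signal.
        tof_read:    callable that enables the sensor and returns depth.
        """
        if photo_error > thresh:   # motion estimate unreliable: fire TOF
            return tof_read(), True
        return warp_depth(prev_depth, flow), False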