Semantically Guided Depth Upsampling
We present a novel method for accurate and efficient upsampling of sparse
depth data, guided by high-resolution imagery. Our approach goes beyond the use
of intensity cues only and additionally exploits object boundary cues through
structured edge detection and semantic scene labeling for guidance. Both cues
are combined within a geodesic distance measure that allows for
boundary-preserving depth interpolation while utilizing local context. We
model the observed scene structure by locally planar elements and formulate the
upsampling task as a global energy minimization problem. Our method determines
globally consistent solutions and preserves fine details and sharp depth
boundaries. In our experiments on several public datasets at different levels
of application, we demonstrate superior performance of our approach over the
state-of-the-art, even for very sparse measurements.
Comment: German Conference on Pattern Recognition 2016 (Oral)
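The geodesic guidance idea can be sketched in a much-simplified form: instead of the paper's global energy minimization over locally planar elements, the toy below propagates each sparse seed's depth along intensity-aware shortest paths (Dijkstra), so interpolation stops at image boundaries. The `alpha` weight and the nearest-seed assignment are illustrative assumptions, not the authors' formulation.

```python
import heapq
import numpy as np

def geodesic_depth_fill(intensity, sparse_depth, mask, alpha=10.0):
    """Assign each pixel the depth of its geodesically nearest seed.
    The edge cost mixes a spatial step with the intensity difference, so
    propagation is cheap within smooth regions and expensive across image
    edges (a crude stand-in for the paper's geodesic distance measure)."""
    h, w = intensity.shape
    dist = np.full((h, w), np.inf)
    depth = np.zeros((h, w))
    pq = []
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):            # seeds: pixels with known depth
        dist[y, x] = 0.0
        depth[y, x] = sparse_depth[y, x]
        heapq.heappush(pq, (0.0, int(y), int(x)))
    while pq:                           # Dijkstra over the 4-connected grid
        d, y, x = heapq.heappop(pq)
        if d > dist[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                cost = d + 1.0 + alpha * abs(intensity[ny, nx] - intensity[y, x])
                if cost < dist[ny, nx]:
                    dist[ny, nx] = cost
                    depth[ny, nx] = depth[y, x]   # inherit the seed's depth
                    heapq.heappush(pq, (cost, ny, nx))
    return depth
```

With a strong intensity edge between two regions, each region is filled from its own seed rather than blending across the boundary.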
Sparse-to-Continuous: Enhancing Monocular Depth Estimation using Occupancy Maps
This paper addresses the problem of single image depth estimation (SIDE),
focusing on improving the quality of deep neural network predictions. In a
supervised learning scenario, the quality of predictions is intrinsically
related to the training labels, which guide the optimization process. For
indoor scenes, structured-light-based depth sensors (e.g. Kinect) are able to
provide dense, albeit short-range, depth maps. On the other hand, for outdoor
scenes, LiDARs are considered the standard sensor, but they provide
comparatively much sparser measurements, especially in areas farther away. Rather than
modifying the neural network architecture to deal with sparse depth maps, this
article introduces a novel densification method for depth maps, using the
Hilbert Maps framework. A continuous occupancy map is produced based on 3D
points from LiDAR scans, and the resulting reconstructed surface is projected
into a 2D depth map with arbitrary resolution. Experiments conducted with
various subsets of the KITTI dataset show a significant improvement produced by
the proposed Sparse-to-Continuous technique, without the introduction of extra
information into the training stage.
Comment: Accepted. (c) 2019 IEEE.
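The last step of the pipeline described above, projecting 3D points of a reconstructed surface into a 2D depth map, can be illustrated with a plain pinhole-camera sketch. The Hilbert Maps occupancy fitting itself is omitted, and the intrinsics matrix `K` is a hypothetical example, not a KITTI calibration.

```python
import numpy as np

def project_to_depth(points, K, h, w):
    """Project 3D points (camera frame, z forward) into an h-by-w depth map.
    When several points land on the same pixel, the nearest depth wins.
    This sketches only the projection step, not the continuous occupancy
    surface the paper reconstructs beforehand."""
    depth = np.zeros((h, w))
    valid = points[:, 2] > 0            # keep points in front of the camera
    pts = points[valid]
    uv = (K @ pts.T).T                  # homogeneous pixel coordinates
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    inb = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[inb], v[inb], pts[inb][:, 2]):
        if depth[vi, ui] == 0 or zi < depth[vi, ui]:
            depth[vi, ui] = zi          # z-buffer: keep the closest point
    return depth
```

Because the target resolution `(h, w)` is a free parameter, the same point set can be rendered into depth maps of arbitrary resolution, as the abstract notes.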
Unsupervised Sparse Dirichlet-Net for Hyperspectral Image Super-Resolution
In many computer vision applications, obtaining images of high resolution in
both the spatial and spectral domains is equally important. However, due to
hardware limitations, one can only expect to acquire images of high resolution
in either the spatial or spectral domains. This paper focuses on hyperspectral
image super-resolution (HSI-SR), where a hyperspectral image (HSI) with low
spatial resolution (LR) but high spectral resolution is fused with a
multispectral image (MSI) with high spatial resolution (HR) but low spectral
resolution to obtain an HR HSI. Existing deep learning-based solutions are all
supervised and would need a large training set and the availability of HR HSI,
which is unrealistic. Here, we make the first attempt at solving the HSI-SR
problem using an unsupervised encoder-decoder architecture with the following
unique properties. First, it is composed of two encoder-decoder networks,
coupled through a shared decoder, in order to preserve the rich spectral
information from the HSI network. Second, the network encourages the
representations from both modalities to follow a sparse Dirichlet distribution
which naturally incorporates the two physical constraints of HSI and MSI.
Third, the angular difference between the representations is minimized in order to
reduce the spectral distortion. We refer to the proposed architecture as
unsupervised Sparse Dirichlet-Net, or uSDN. Extensive experimental results
demonstrate the superior performance of uSDN as compared to the
state-of-the-art.Comment: Accepted by The IEEE Conference on Computer Vision and Pattern
Recognition (CVPR 2018, Spotlight
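The two constraints named in the abstract, representations confined to a Dirichlet-like simplex (nonnegative, sum-to-one) and a minimized angular difference, can be sketched numerically. This is a minimal NumPy illustration of those constraints, assuming a softmax parameterization; it is not the uSDN network itself.

```python
import numpy as np

def simplex_representation(logits):
    """Softmax keeps a representation nonnegative and sum-to-one -- the two
    physical (abundance-like) constraints the Dirichlet prior encodes."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def angular_loss(a, b, eps=1e-8):
    """Spectral-angle-style loss between two batches of representations:
    zero when they are parallel, large when they point apart, so minimizing
    it reduces spectral distortion between the two modalities."""
    cos = (a * b).sum(-1) / (
        np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + eps
    )
    return np.arccos(np.clip(cos, -1.0, 1.0)).mean()
```

In training, such an angular term would be applied between the HSI and MSI encodings; here it simply measures how far two vectors are from being collinear.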
Estimating Depth from RGB and Sparse Sensing
We present a deep model that can accurately produce dense depth maps given an
RGB image with known depth at a very sparse set of pixels. The model works
simultaneously for both indoor/outdoor scenes and produces state-of-the-art
dense depth maps at nearly real-time speeds on both the NYUv2 and KITTI
datasets. We surpass the state-of-the-art for monocular depth estimation even
with depth values for only 1 out of every ~10000 image pixels, and we
outperform other sparse-to-dense depth methods at all sparsity levels. With
depth values for 1/256 of the image pixels, we achieve a mean absolute error of
less than 1% of actual depth on indoor scenes, comparable to the performance of
consumer-grade depth sensor hardware. Our experiments demonstrate that it would
indeed be possible to efficiently transform sparse depth measurements obtained
using e.g. lower-power depth sensors or SLAM systems into high-quality dense
depth maps.
Comment: European Conference on Computer Vision (ECCV) 2018. Updated to
camera-ready version with additional experiments.
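Training and evaluating such a sparse-to-dense model requires simulating sparse sensing from dense ground-truth depth. One simple recipe, assumed here purely for illustration (the paper's exact sampling patterns may differ), is random masking at a target density such as the 1/256 mentioned above:

```python
import numpy as np

def sample_sparse_depth(dense, density=1 / 256, rng=None):
    """Simulate a sparse depth sensor by keeping a random subset of pixels
    from a dense depth map; zeros mark missing measurements."""
    rng = np.random.default_rng(rng)
    mask = rng.random(dense.shape) < density   # ~density fraction kept
    return np.where(mask, dense, 0.0), mask
```

The returned mask lets a model (or loss) distinguish true measurements from holes, which is how sparsity levels like 1/256 or 1/10000 are swept in such experiments.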
Deep Depth Completion of a Single RGB-D Image
The goal of our work is to complete the depth channel of an RGB-D image.
Commodity-grade depth cameras often fail to sense depth for shiny, bright,
transparent, and distant surfaces. To address this problem, we train a deep
network that takes an RGB image as input and predicts dense surface normals and
occlusion boundaries. Those predictions are then combined with raw depth
observations provided by the RGB-D camera to solve for depths for all pixels,
including those missing in the original observation. This method was chosen
over others (e.g., inpainting depths directly) as the result of extensive
experiments with a new depth completion benchmark dataset, where holes are
filled in training data through the rendering of surface reconstructions
created from multiview RGB-D scans. Experiments with different network inputs,
depth representations, loss functions, optimization methods, inpainting
methods, and deep depth estimation networks show that our proposed approach
provides better depth completions than these alternatives.
Comment: Accepted by CVPR 2018 (Spotlight). Project webpage:
http://deepcompletion.cs.princeton.edu/ This version includes supplementary
materials which provide more implementation details, quantitative evaluation,
and qualitative results. Due to the file size limit, please check the project
website for the high-res paper.
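The final global solve described above, combining network predictions with raw depth observations to recover depths for all pixels, is a linear least-squares problem. Below is a toy 1D version in which given gradients stand in for the normal-derived constraints; it is a deliberate simplification of the paper's 2D formulation, with hypothetical weights `w_data` and `w_grad`.

```python
import numpy as np

def solve_depth_1d(observed, grad, w_data=1.0, w_grad=1.0):
    """Recover a 1D depth signal d minimizing
        w_data^2 * sum_i (d[i] - observed[i])^2   over observed pixels
      + w_grad^2 * sum_i (d[i+1] - d[i] - grad[i])^2,
    where `grad` plays the role of the normal-derived surface constraints
    and `observed` is a dict {index: raw depth measurement}."""
    n = len(grad) + 1
    rows, rhs = [], []
    for i, o in observed.items():          # data term: pin observed pixels
        r = np.zeros(n)
        r[i] = w_data
        rows.append(r)
        rhs.append(w_data * o)
    for i, g in enumerate(grad):           # smoothness/gradient term
        r = np.zeros(n)
        r[i], r[i + 1] = -w_grad, w_grad
        rows.append(r)
        rhs.append(w_grad * g)
    d, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return d
```

A single observation anchors the absolute scale, and the gradient constraints extrapolate depth into unobserved pixels, which is exactly why this formulation can fill holes where the sensor returned nothing.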