Indoor Depth Completion with Boundary Consistency and Self-Attention
Depth estimation features are helpful for 3D recognition. Commodity-grade
depth cameras are able to capture depth and color images in real time. However,
glossy, transparent, or distant surfaces cannot be scanned properly by the
sensor. As a result, enhancing and restoring sensed depth is an
important task. Depth completion aims at filling the holes that sensors fail to
detect, which is still a complex task for machines to learn. Traditional
hand-tuned methods have reached their limits, while neural-network-based
methods tend to copy and interpolate the output from surrounding depth values.
This leads to blurred boundaries, and the structure of the depth map is lost.
Consequently, our main work is to design an end-to-end network that improves
completed depth maps while maintaining edge clarity. We utilize a self-attention
mechanism, previously used in image inpainting, to extract more useful
information in each convolutional layer so that the completed depth map is
enhanced. In addition, we propose a boundary consistency concept to enhance the
depth map quality and structure. Experimental results validate the
effectiveness of our self-attention and boundary consistency scheme, which
outperforms previous state-of-the-art depth completion work on the Matterport3D
dataset. Our code is publicly available at
https://github.com/patrickwu2/Depth-Completion

Comment: Accepted by ICCVW (RLQ) 2019
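The self-attention idea above can be illustrated with a minimal non-local attention layer: every spatial position re-weights itself using its similarity to all other positions, letting the network pull information from distant, reliable depth regions rather than only interpolating from immediate neighbors. This is a generic sketch of such a layer, not the paper's exact architecture.

```python
import numpy as np

def self_attention(features):
    """Minimal non-local self-attention over a flattened feature map.

    features: (N, C) array, N spatial positions, C channels.
    Returns attended features of the same shape. Generic sketch only;
    the actual layer in the paper may use learned projections and gating.
    """
    # pairwise similarity between all positions, scaled by sqrt(channels)
    scores = features @ features.T / np.sqrt(features.shape[1])
    # numerically stable softmax over positions
    scores -= scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    # each position becomes a similarity-weighted mix of all positions
    return weights @ features
```

In practice the query/key/value tensors would come from 1x1 convolutions over the feature map, and the attended output is added back to the input as a residual.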
Real-time Model-based Image Color Correction for Underwater Robots
Recently, a new underwater image formation model showed that the
coefficients related to the direct and backscatter transmission signals are
dependent on the type of water, camera specifications, water depth, and imaging
range. This paper proposes an underwater color correction method that
integrates this new model on an underwater robot, using information from a
pressure depth sensor for water depth and a visual odometry system for
estimating scene distance. Experiments were performed with and without a color
chart over coral reefs and a shipwreck in the Caribbean. We demonstrate the
performance of our proposed method by comparing it with other statistics-,
physics-, and learning-based color correction methods. Applications for our
proposed method include improved 3D reconstruction and more robust underwater
robot navigation.

Comment: Accepted at the 2019 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS 2019)
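The model-based correction described above can be sketched by inverting a simplified underwater image formation model: the observed color is the true color attenuated over the imaging range plus range-dependent backscatter. The coefficient values and exact parameterization below are illustrative placeholders, not those estimated in the paper.

```python
import numpy as np

def correct_color(image, rng, beta_d, beta_b, b_inf):
    """Invert a simplified underwater image formation model.

    image:  observed intensity per channel, shape (..., 3), in [0, 1]
    rng:    scene range in meters (e.g., from visual odometry)
    beta_d: direct-signal attenuation coefficient per channel, (3,)
    beta_b: backscatter coefficient per channel, (3,)
    b_inf:  veiling light (water color at infinity) per channel, (3,)
    Parameters are hypothetical; the paper derives them from water type,
    camera specifications, and water depth.
    """
    rng = np.asarray(rng)[..., None]
    # subtract range-dependent backscatter, then undo attenuation
    backscatter = b_inf * (1.0 - np.exp(-beta_b * rng))
    restored = (image - backscatter) * np.exp(beta_d * rng)
    return np.clip(restored, 0.0, 1.0)
```

The key design point from the abstract is that the range comes from onboard sensing (pressure depth plus visual odometry), so the correction can run per-pixel in real time on the robot.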
Depth Estimation via Affinity Learned with Convolutional Spatial Propagation Network
Depth estimation from a single image is a fundamental problem in computer
vision. In this paper, we propose a simple yet effective convolutional spatial
propagation network (CSPN) to learn the affinity matrix for depth prediction.
Specifically, we adopt an efficient linear propagation model, where the
propagation is performed in the manner of a recurrent convolutional operation,
and the affinity among neighboring pixels is learned through a deep
convolutional neural network (CNN). We apply the designed CSPN to two depth
estimation tasks given a single image: (1) to refine the depth output from
existing state-of-the-art (SOTA) methods; and (2) to convert sparse depth
samples to a dense depth map by embedding the depth samples within the
propagation procedure. The second task is inspired by the availability of
LIDARs, which provide sparse but accurate depth measurements. We evaluate the
proposed CSPN on two popular benchmarks for depth estimation, i.e., NYU v2
and KITTI, where we show that our proposed approach improves over prior SOTA
methods in both quality (e.g., 30% more reduction in depth error) and speed
(e.g., 2 to 5 times faster).

Comment: 14 pages, 8 figures, ECCV 2018
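One step of the recurrent propagation described above can be sketched as follows: each pixel's depth is updated as a learned-affinity-weighted average of its eight neighbors plus a residual weight on itself, and known sparse samples (e.g., from LIDAR) are re-injected after every step. This is a minimal sketch assuming pre-normalized affinities; the paper's exact normalization and boundary handling may differ.

```python
import numpy as np

def cspn_step(depth, affinity, sparse=None, mask=None):
    """One recurrent propagation step of a CSPN-style update.

    depth:    (H, W) current depth estimate
    affinity: (8, H, W) learned weights for the 8 neighbors, assumed
              normalized so they sum to less than 1 per pixel
    sparse/mask: optional known depth samples re-injected each step
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    # the pixel keeps whatever weight the neighbors do not claim
    center = 1.0 - affinity.sum(axis=0)
    out = center * depth
    for k, (dy, dx) in enumerate(offsets):
        # shift the depth map so neighbor k lines up with each pixel
        shifted = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
        out += affinity[k] * shifted
    if sparse is not None and mask is not None:
        # keep trusted sparse measurements fixed during propagation
        out = np.where(mask, sparse, out)
    return out
```

Running this step for a fixed number of iterations diffuses depth along learned affinities; in the paper the affinities come from a CNN conditioned on the RGB image, and the whole recurrence is differentiable end to end.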