153 research outputs found
Sparse-to-Continuous: Enhancing Monocular Depth Estimation using Occupancy Maps
This paper addresses the problem of single image depth estimation (SIDE),
focusing on improving the quality of deep neural network predictions. In a
supervised learning scenario, the quality of predictions is intrinsically
related to the training labels, which guide the optimization process. For
indoor scenes, structured-light-based depth sensors (e.g. Kinect) are able to
provide dense, albeit short-range, depth maps. For outdoor scenes, on the other
hand, LiDAR is considered the standard sensor, but it provides comparatively
much sparser measurements, especially in areas further away. Rather than
modifying the neural network architecture to deal with sparse depth maps, this
article introduces a novel densification method for depth maps, using the
Hilbert Maps framework. A continuous occupancy map is produced based on 3D
points from LiDAR scans, and the resulting reconstructed surface is projected
into a 2D depth map with arbitrary resolution. Experiments conducted with
various subsets of the KITTI dataset show a significant improvement produced by
the proposed Sparse-to-Continuous technique, without the introduction of extra
information into the training stage.
Comment: Accepted. (c) 2019 IEEE.
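The densification idea can be illustrated with a toy sketch: in place of the actual Hilbert Maps machinery, a Gaussian-kernel weighted average over projected LiDAR samples yields a continuous surface that can be sampled at arbitrary resolution. Function names, grid size, and the kernel width are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def densify_depth(sparse_uv, sparse_depth, out_h, out_w, sigma=4.0):
    """Densify a sparse depth map by Gaussian-kernel interpolation.

    A stand-in for continuous-occupancy reconstruction: each output
    pixel's depth is a kernel-weighted average of nearby sparse samples,
    so the result can be rendered at any target resolution.
    """
    vv, uu = np.mgrid[0:out_h, 0:out_w]
    dense = np.zeros((out_h, out_w))
    weights = np.full((out_h, out_w), 1e-12)  # avoid division by zero
    for (u, v), d in zip(sparse_uv, sparse_depth):
        w = np.exp(-((uu - u) ** 2 + (vv - v) ** 2) / (2 * sigma ** 2))
        dense += w * d
        weights += w
    return dense / weights

# Four hypothetical LiDAR returns projected into a 16x16 image grid
uv = [(2, 2), (12, 2), (2, 12), (12, 12)]
depth = [5.0, 5.0, 10.0, 10.0]
dense = densify_depth(uv, depth, 16, 16)
```

Every output pixel receives a depth value, interpolated smoothly between the sparse samples.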
Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots
Safety is paramount for mobile robotic platforms such as self-driving cars
and unmanned aerial vehicles. This work is devoted to a task that is
indispensable for safety yet was largely overlooked in the past -- detecting
obstacles that are of very thin structures, such as wires, cables and tree
branches. This is a challenging problem, as thin objects can be problematic for
active sensors such as lidar and sonar and even for stereo cameras. In this
work, we propose to use video sequences for thin obstacle detection. We
represent obstacles with edges in the video frames, and reconstruct them in 3D
using efficient edge-based visual odometry techniques. We provide both a
monocular camera solution and a stereo camera solution. The former incorporates
Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter
enjoys a novel, purely vision-based solution. Experiments demonstrated that the
proposed methods are fast and able to detect thin obstacles robustly and
accurately under various conditions.
Comment: Appeared at IEEE CVPR 2017 Workshop on Embedded Vision.
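The geometry underlying the stereo solution can be sketched with the standard disparity-to-depth relation Z = f * B / d applied to matched edge pixels; the values and names below are hypothetical, not taken from the paper:

```python
def stereo_edge_depth(f_px, baseline_m, disparities_px):
    """Depth of matched edge pixels from stereo disparity: Z = f * B / d.

    Illustrates how thin obstacles (wires, branches), represented as
    edges, can be placed in 3D from stereo matches; f_px is the focal
    length in pixels, baseline_m the stereo baseline in meters.
    Non-positive disparities (unmatched edges) are skipped.
    """
    return [f_px * baseline_m / d for d in disparities_px if d > 0]

# 700 px focal length, 12 cm baseline; edge points of a hypothetical
# wire observed at 20, 10, and 5 px of disparity
depths = stereo_edge_depth(700.0, 0.12, [20.0, 10.0, 5.0])
```

Smaller disparities map to larger depths, which is why distant thin structures are the hard case.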
Robust Monocular Localization of Drones by Adapting Domain Maps to Depth Prediction Inaccuracies
We present a novel monocular localization framework by jointly training deep
learning-based depth prediction and Bayesian filtering-based pose reasoning.
The proposed cross-modal framework significantly outperforms deep learning-only
predictions with respect to model scalability and tolerance to environmental
variations. Specifically, we show little-to-no degradation of pose accuracy
even with extremely poor depth estimates from a lightweight depth predictor.
Our framework also maintains high pose accuracy in extreme lighting variations
compared to standard deep learning, even without explicit domain adaptation. By
openly representing the map and intermediate feature maps (such as depth
estimates), our framework also allows for faster updates and reusing
intermediate predictions for other tasks, such as obstacle avoidance, resulting
in much higher resource efficiency.
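As a generic illustration of Bayesian filtering over depth predictions, a minimal particle-reweighting step might look as follows. This is a sketch under assumed names and a toy 1-D map, not the paper's actual framework:

```python
import math

def reweight_particles(particles, predicted_depth, map_depth_fn, sigma=1.0):
    """One illustrative Bayesian-filtering step: reweight candidate poses
    by how well the predicted depth agrees with the depth the map expects
    at each pose, using a Gaussian likelihood."""
    weights = []
    for pose in particles:
        err = predicted_depth - map_depth_fn(pose)
        weights.append(math.exp(-err * err / (2 * sigma ** 2)))
    total = sum(weights)
    return [w / total for w in weights]  # normalized posterior weights

# Hypothetical 1-D example: candidate x-positions, and a map in which
# the expected depth is the distance to a wall at x = 10
particles = [2.0, 5.0, 8.0]
weights = reweight_particles(particles, predicted_depth=5.0,
                             map_depth_fn=lambda x: 10.0 - x)
```

Even a noisy depth estimate sharply favors the consistent pose hypothesis, which is the intuition behind the framework's tolerance to poor depth predictors.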
Pose Constraints for Consistent Self-supervised Monocular Depth and Ego-motion
Self-supervised monocular depth estimation approaches not only suffer from
scale ambiguity but also infer temporally scale-inconsistent depth maps.
While disambiguating scale during training is not possible without some kind
of ground-truth supervision, scale-consistent depth predictions would make it
possible to calculate scale once during inference, as a post-processing step,
and use it over time. With this as a goal, a set of temporal consistency losses
that minimize pose inconsistencies over time are introduced. Evaluations show
that introducing these constraints not only reduces depth inconsistencies but
also improves the baseline performance of depth and ego-motion prediction.
Comment: Scandinavian Conference on Image Analysis (SCIA) 202
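One way such a temporal consistency constraint can be expressed: composing the forward relative pose with the backward one should yield the identity transform. The matrix forms and names below are an illustrative sketch, not the paper's loss:

```python
import numpy as np

def pose_consistency_loss(T_fwd, T_bwd):
    """Penalize inconsistency between forward and backward relative poses:
    composing T(t -> t+1) with T(t+1 -> t) should give the 4x4 identity.
    Returns the Frobenius norm of the deviation."""
    err = T_fwd @ T_bwd - np.eye(4)
    return float(np.linalg.norm(err))

# A hypothetical forward translation and its exact inverse: zero loss
T = np.eye(4); T[0, 3] = 1.0            # move 1 unit along x
T_inv = np.eye(4); T_inv[0, 3] = -1.0
loss0 = pose_consistency_loss(T, T_inv)

# A scale-inconsistent backward pose: nonzero loss
T_bad = np.eye(4); T_bad[0, 3] = -0.5
loss1 = pose_consistency_loss(T, T_bad)
```

Minimizing such a term over time discourages the network from drifting in scale between frame pairs.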
Structure from Motion with Higher-level Environment Representations
Computer vision is an important area focused on understanding,
extracting, and using the information from vision-based sensors.
It has many applications, such as vision-based 3D reconstruction,
simultaneous localization and mapping (SLAM), and data-driven
understanding of the real world. Vision is a fundamental sensing
modality in many different fields of application.
While traditional structure from motion mostly uses sparse
point-based features, this thesis explores the possibility of
using higher-order feature representations. It starts with joint
work that uses straight lines for feature representation and
performs bundle adjustment with a straight-line parameterization.
We then move to an even higher-order representation, using
Bézier splines for parameterization. We start with a simple case
in which all contours lie on a plane, use Bézier splines to
parametrize the curves in the background, and optimize over both
the camera poses and the Bézier splines. As an application, we
present a complete end-to-end pipeline that produces meaningful
dense 3D models from natural data of a 3D object: the target
object is placed on a structured but unknown planar background
that is modeled with splines. The data is captured using only a
hand-held monocular camera. Since this application is limited to
a planar scenario, we then push the parameterization into real
3D. Following the potential of this idea, we introduce a more
flexible higher-order extension of points that provides a
general model for structural edges in the environment, whether
straight or curved. Our model relies on linked Bézier curves,
whose geometric intuition proves greatly beneficial during
parameter initialization and regularization. We present the
first fully automatic pipeline able to generate spline-based
representations without any human supervision. Besides a full
graphical formulation of the problem, we introduce both
geometric and photometric cues as well as higher-level concepts
such as overall curve visibility and viewing-angle restrictions
to automatically manage the correspondences in the graph.
Results show that curve-based structure from motion with splines
is able to outperform state-of-the-art sparse feature-based
methods, as well as to model curved edges in the environment.
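The Bézier-curve primitive at the heart of this representation is simple to sketch: a cubic curve in Bernstein form interpolates its two endpoints and is shaped by two interior control points. The routine below is a generic evaluation, not the thesis code:

```python
def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1] via the
    Bernstein basis. Control points are (x, y) tuples; chains of such
    curves serve as higher-order edge primitives."""
    s = 1.0 - t
    b = (s**3, 3*s**2*t, 3*s*t**2, t**3)  # Bernstein weights, sum to 1
    return tuple(sum(w * p[i] for w, p in zip(b, (p0, p1, p2, p3)))
                 for i in range(2))

# Endpoints are interpolated exactly; the midpoint of this symmetric
# curve lies between the interior control points
p = bezier_point((0, 0), (1, 2), (3, 2), (4, 0), 0.5)
```

Because the curve depends linearly on its control points, those points can be exposed directly as optimization variables in bundle adjustment.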
Multimodal Scale Consistency and Awareness for Monocular Self-Supervised Depth Estimation
Dense depth estimation is essential to scene-understanding for autonomous
driving. However, recent self-supervised approaches on monocular videos suffer
from scale-inconsistency across long sequences. Utilizing data from the
ubiquitously copresent global positioning systems (GPS), we tackle this
challenge by proposing a dynamically-weighted GPS-to-Scale (g2s) loss to
complement the appearance-based losses. We emphasize that the GPS is needed
only during the multimodal training, and not at inference. The relative
distance between frames captured through the GPS provides a scale signal that
is independent of the camera setup and scene distribution, resulting in richer
learned feature representations. Through extensive evaluation on multiple
datasets, we demonstrate scale-consistent and -aware depth estimation during
inference, improving the performance even when training with low-frequency GPS
data.
Comment: Accepted at the 2021 IEEE International Conference on Robotics and
Automation (ICRA).
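The core of a GPS-to-Scale-style loss can be sketched as penalizing the gap between predicted inter-frame translation magnitudes and GPS-measured inter-frame distances. The simple per-frame weighting below is an illustrative stand-in for the paper's dynamic weighting, with hypothetical names and values:

```python
def g2s_loss(pred_translations, gps_distances):
    """Illustrative GPS-to-Scale-style loss: for each frame pair, compare
    the norm of the predicted translation against the GPS-measured
    distance, weighting pairs with larger GPS baselines more heavily
    (a simple stand-in for dynamic weighting)."""
    total_w = sum(gps_distances)
    return sum(g / total_w * abs(t - g)
               for t, g in zip(pred_translations, gps_distances))

# Predicted translation magnitudes vs. GPS inter-frame distances (meters)
loss = g2s_loss([0.9, 1.8, 3.1], [1.0, 2.0, 3.0])
```

Since the GPS term only supervises the magnitude of motion, it injects metric scale without constraining the appearance-based losses, and it is dropped entirely at inference time.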
- …