A General Framework for Flexible Multi-Cue Photometric Point Cloud Registration
The ability to build maps is a key functionality for the majority of mobile
robots. A central ingredient to most mapping systems is the registration or
alignment of the recorded sensor data. In this paper, we present a general
methodology for photometric registration that can deal with multiple different
cues. We provide examples for registering RGBD as well as 3D LIDAR data. In
contrast to popular point cloud registration approaches such as ICP, our method
does not rely on explicit data association and exploits multiple modalities
such as raw range and image data streams. Color, depth, and normal information
are handled in a uniform manner, and the registration is obtained by minimizing
the pixel-wise difference between two multi-channel images. We developed a
flexible and general framework and implemented our approach inside that
framework. We also released our implementation as open source C++ code. The
experiments show that our approach allows for an accurate registration of the
sensor data without requiring an explicit data association or model-specific
adaptations to datasets or sensors. Our approach exploits the different cues in
a natural and consistent way, and the registration can be done at frame rate for
a typical range or imaging sensor.
Comment: 8 pages
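The core idea of the abstract, registering two multi-channel images by minimizing their pixel-wise difference rather than by explicit data association, can be illustrated with a toy sketch. This is not the released C++ framework: the real method optimizes a full rigid-body transform over color, depth, and normal cues, whereas the sketch below searches integer image translations only, and all function names are hypothetical.

```python
import numpy as np

def photometric_error(ref, cur, shift):
    """Mean squared pixel-wise difference between two multi-channel
    images (e.g. color + depth + normal channels stacked along the
    last axis) after shifting `cur` by `shift` = (dy, dx)."""
    dy, dx = shift
    H, W, _ = ref.shape
    # Compare only the region where the shifted images overlap.
    ref_part = ref[max(0, dy):min(H, H + dy), max(0, dx):min(W, W + dx)]
    cur_part = cur[max(0, -dy):min(H, H - dy), max(0, -dx):min(W, W - dx)]
    return float(np.mean((ref_part - cur_part) ** 2))

def register_translation(ref, cur, max_shift=3):
    """Exhaustive search for the integer translation that minimizes
    the photometric error; no explicit point-to-point data
    association is ever computed."""
    best_err, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = photometric_error(ref, cur, (dy, dx))
            if err < best_err:
                best_err, best_shift = err, (dy, dx)
    return best_shift
```

Because every channel enters the same squared-difference objective, adding a new cue amounts to stacking another channel, which mirrors how the framework treats modalities uniformly.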
Online Object Tracking with Proposal Selection
Tracking-by-detection approaches are among the most successful object
trackers of recent years. Their success is largely determined by the detector
model they learn initially and then update over time. However, under
challenging conditions where an object can undergo transformations, e.g.,
severe rotation, these methods are found to be lacking. In this paper, we
address this problem by formulating it as a proposal selection task and making
two contributions. The first one is introducing novel proposals estimated from
the geometric transformations undergone by the object, and building a rich
candidate set for predicting the object location. The second one is devising a
novel selection strategy using multiple cues, i.e., detection score and
edgeness score computed from state-of-the-art object edges and motion
boundaries. We extensively evaluate our approach on the visual object tracking
2014 challenge and online tracking benchmark datasets, and show the best
performance.
Comment: ICCV 201
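The selection step described in the abstract, scoring each candidate with multiple cues and keeping the best one, can be sketched as follows. The weighted-sum combination and the `alpha` parameter are illustrative assumptions, not the paper's actual selection strategy.

```python
def select_proposal(proposals, det_score, edge_score, alpha=0.5):
    """Pick the candidate whose combined cue score is highest.
    `det_score` and `edge_score` map a proposal to its detector
    confidence and its edgeness; `alpha` weights the two cues."""
    best_score, best_prop = float("-inf"), None
    for prop in proposals:
        score = alpha * det_score(prop) + (1.0 - alpha) * edge_score(prop)
        if score > best_score:
            best_score, best_prop = score, prop
    return best_prop
```

In the paper, the candidate set would also contain proposals generated from the geometric transformations the object is estimated to have undergone, so a proposal that scores well on edgeness can win even when the detector is confused by, say, severe rotation.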
Visual-Inertial Mapping with Non-Linear Factor Recovery
Cameras and inertial measurement units are complementary sensors for
ego-motion estimation and environment mapping. Their combination makes
visual-inertial odometry (VIO) systems more accurate and robust. For globally
consistent mapping, however, combining visual and inertial information is not
straightforward. To estimate motion and geometry from a set of images, large
baselines are required. Because of that, most systems operate on keyframes with
large time intervals between them. Inertial data, on the other hand, quickly
degrades with the duration of these intervals; after several seconds of
integration it typically contains little useful information.
In this paper, we propose to extract relevant information for visual-inertial
mapping from visual-inertial odometry using non-linear factor recovery. We
reconstruct a set of non-linear factors that optimally approximate the
information about the trajectory accumulated by VIO. To obtain a globally
consistent map we combine these factors with loop-closing constraints using
bundle adjustment. The VIO factors make the roll and pitch angles of the global
map observable, and improve the robustness and the accuracy of the mapping. In
experiments on a public benchmark, we demonstrate superior performance of our
method over the state-of-the-art approaches.
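The final step, jointly optimizing the recovered VIO factors with loop-closing constraints, can be illustrated with a minimal 1-D pose-graph least squares. Real systems optimize SE(3) poses via bundle adjustment; the scalar state, the anchoring trick, and every weight below are simplifying assumptions for illustration only.

```python
import numpy as np

def optimize_pose_graph(n_poses, factors):
    """Tiny 1-D 'pose graph': each factor (i, j, z, w) constrains
    x[j] - x[i] ~ z with weight w. In 1-D the problem is linear, so a
    single weighted least-squares solve stands in for the iterative
    bundle adjustment needed with real SE(3) poses."""
    rows, rhs = [], []
    for i, j, z, w in factors:
        row = np.zeros(n_poses)
        row[j], row[i] = 1.0, -1.0
        rows.append(np.sqrt(w) * row)
        rhs.append(np.sqrt(w) * z)
    # Fix the gauge freedom by anchoring the first pose near zero.
    anchor = np.zeros(n_poses)
    anchor[0] = 1.0
    rows.append(1e6 * anchor)
    rhs.append(0.0)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x
```

Feeding in drifting odometry-style factors together with a strongly weighted loop-closure factor shows the intended effect: the loop closure pulls the accumulated drift of the chain back toward a globally consistent estimate.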