Integration of Absolute Orientation Measurements in the KinectFusion Reconstruction pipeline
In this paper, we show how absolute orientation measurements provided by
low-cost but high-fidelity IMU sensors can be integrated into the KinectFusion
pipeline. We show that this integration improves the runtime, robustness, and
quality of the 3D reconstruction. In particular, we use this orientation data
to seed and regularize the ICP registration technique. We also present a
technique to filter the pairs of 3D matched points based on the distribution of
their distances. This filter is implemented efficiently on the GPU. Estimating
the distribution of the distances helps control the number of iterations
necessary for the convergence of the ICP algorithm. Finally, we show
experimental results that highlight improvements in robustness, a speed-up of
almost 12%, and a gain in tracking quality of 53% for the ATE metric on the
Freiburg benchmark.
Comment: CVPR Workshop on Visual Odometry and Computer Vision Applications
Based on Location Clues 201
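A minimal sketch of the distance-distribution match filter described above, assuming a Gaussian model of the match distances; the function name, the n_sigma cutoff, and the CPU/NumPy formulation are illustrative assumptions (the paper implements the filter on the GPU):

```python
import numpy as np

def filter_matches(src_pts, dst_pts, n_sigma=2.0):
    """Reject matched 3D point pairs whose distance is an outlier under
    the empirical distribution of match distances (modelled here as
    Gaussian; the paper's actual model and GPU kernel may differ).

    src_pts, dst_pts: (N, 3) arrays of corresponding points.
    n_sigma: hypothetical rejection threshold in standard deviations.
    """
    d = np.linalg.norm(src_pts - dst_pts, axis=1)
    mu, sigma = d.mean(), d.std()
    keep = d <= mu + n_sigma * sigma
    # The mean residual doubles as a cheap convergence signal: ICP
    # iteration can stop once mu falls below a tolerance.
    return src_pts[keep], dst_pts[keep], mu
```

Returning the mean residual hints at how estimating the distribution can bound the number of ICP iterations, in the spirit of the abstract.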
Active Collaborative Localization in Heterogeneous Robot Teams
Accurate and robust state estimation is critical for autonomous navigation of
robot teams. This task is especially challenging for large groups of size,
weight, and power (SWAP)-constrained aerial robots operating in
perceptually-degraded GPS-denied environments. We can, however, actively
increase the amount of perceptual information available to such robots by
augmenting them with a small number of more expensive, but less
resource-constrained, agents. Specifically, the latter can serve as sources of
perceptual information themselves. In this paper, we study the problem of
optimally positioning (and potentially navigating) a small number of more
capable agents to enhance the perceptual environment for their
lightweight, inexpensive teammates that only need to rely on cameras and IMUs.
We propose a numerically robust, computationally efficient approach to solve
this problem via nonlinear optimization. Our method outperforms the standard
approach based on the greedy algorithm, while matching the accuracy of a
heuristic evolutionary scheme for global optimization at a fraction of its
running time. Ultimately, we validate our solution in both photorealistic
simulations and real-world experiments. In these experiments, we use
lidar-based autonomous ground vehicles as the more capable agents, and
vision-based aerial robots as their SWAP-constrained teammates. Our method is
able to reduce drift in visual-inertial odometry by as much as 90%, and it
outperforms random positioning of lidar-equipped agents by a significant
margin. Furthermore, our method can be generalized to different types of robot
teams with heterogeneous perception capabilities. It has a wide range of
applications, such as surveying and mapping challenging dynamic environments,
and enabling resilience to large-scale perturbations that can be caused by
earthquakes or storms.
Comment: To appear in Robotics: Science and Systems (RSS) 202
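The abstract does not state the optimization objective, so the following is only a toy stand-in that shows the nonlinear-optimization framing: place a few capable agents so that no SWAP-constrained teammate is far from one, with log-sum-exp smoothing to keep the cost differentiable. All names, the objective, and the 2-D setting are assumptions, not the paper's method:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def position_helpers(team_xy, n_helpers=2, seed=0):
    """Toy surrogate for positioning capable agents: minimise a smooth
    max over each teammate's distance to its nearest helper.

    team_xy: (M, 2) positions of SWAP-constrained robots.
    n_helpers: hypothetical number of capable (e.g. lidar) agents.
    """
    rng = np.random.default_rng(seed)
    x0 = rng.uniform(team_xy.min(), team_xy.max(), size=2 * n_helpers)

    def cost(x):
        helpers = x.reshape(n_helpers, 2)
        # Distance from every teammate to every helper, then the nearest.
        d = np.linalg.norm(team_xy[:, None, :] - helpers[None, :, :], axis=2)
        nearest = d.min(axis=1)
        # Smooth worst-case distance over teammates.
        return logsumexp(nearest)

    res = minimize(cost, x0, method="L-BFGS-B")
    return res.x.reshape(n_helpers, 2)

# Toy usage: five teammates scattered in a 10 m square.
team = np.random.default_rng(1).uniform(0.0, 10.0, size=(5, 2))
print(position_helpers(team))
```

The paper's actual objective is a perception/uncertainty measure; this sketch only illustrates solving the placement as a nonlinear program rather than by greedy selection.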
Dense RGB-D-Inertial SLAM with Map Deformations
While dense visual SLAM methods are capable of estimating dense
reconstructions of the environment, they suffer from a lack of robustness in
their tracking step, especially when the optimisation is poorly initialised.
Sparse visual SLAM systems have attained high levels of accuracy and robustness
through the inclusion of inertial measurements in a tightly-coupled fusion.
Inspired by this performance, we propose the first tightly-coupled dense
RGB-D-inertial SLAM system.
Our system has real-time capability while running on a GPU. It jointly
optimises for the camera pose, velocity, IMU biases and gravity direction while
building up a globally consistent, fully dense surfel-based 3D reconstruction
of the environment. Through a series of experiments on both synthetic and real
world datasets, we show that our dense visual-inertial SLAM system is more
robust to fast motions and periods of low texture and low geometric variation
than a related RGB-D-only SLAM system.
Comment: Accepted at IROS 2017; supplementary video available at
https://youtu.be/-gUdQ0cxDh
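As an illustration of the jointly optimised state listed above (camera pose, velocity, IMU biases, gravity direction), here is a schematic stacked cost with placeholder residual functions; the state layout, weights, and all names are our assumptions, not the system's actual formulation:

```python
import numpy as np

def joint_cost(state, dense_residuals, imu_residuals,
               w_dense=1.0, w_imu=1.0):
    """Schematic tightly-coupled objective. `state` is assumed to pack
    [pose (6), velocity (3), IMU biases (6), gravity direction (2)].
    The weights and residual callables are illustrative placeholders.
    """
    r_d = dense_residuals(state)  # dense geometric/photometric alignment term
    r_i = imu_residuals(state)    # preintegrated inertial consistency term
    return w_dense * (r_d @ r_d) + w_imu * (r_i @ r_i)

# Toy usage with stand-in residual functions:
state = np.zeros(17)
cost = joint_cost(state,
                  dense_residuals=lambda s: s[:6] - 0.1,  # stand-in
                  imu_residuals=lambda s: s[6:15] * 0.0)  # stand-in
print(cost)
```

In a real tightly-coupled system both residual terms depend on the same state and are minimised jointly (typically with Gauss-Newton on the GPU), which is what distinguishes this formulation from loosely-coupled fusion.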