Dense Piecewise Planar RGB-D SLAM for Indoor Environments
The paper exploits weak Manhattan constraints to parse the structure of
indoor environments from RGB-D video sequences in an online setting. We extend
the previous approach for single-view parsing of indoor scenes to video
sequences and formulate the recovery of the floor plan of the environment
as an optimal labeling problem solved using dynamic programming.
Temporal continuity is enforced in a recursive setting, where the labeling from
previous frames is used as a prior term in the objective function. In addition
to recovering the piecewise planar weak Manhattan structure of the extended
environment, the orthogonality constraints are also exploited by visual
odometry and pose graph optimization. This yields reliable estimates in the
presence of large motions and in the absence of distinctive features to track. We
evaluate our method on several challenging indoor sequences, demonstrating
accurate SLAM and dense mapping of low-texture environments. On the existing TUM
benchmark we achieve results competitive with alternative approaches, which
fail in our environments.
Comment: International Conference on Intelligent Robots and Systems (IROS) 201
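The optimal labeling with a temporal prior described above can be sketched as a Viterbi-style dynamic program. This is a minimal illustration, not the paper's implementation: the unary costs, the Potts smoothness term, and the way the previous frame's labeling enters as a prior (a simple disagreement penalty here) are all assumptions.

```python
import numpy as np

def label_floorplan(unary, prev_labels=None, smooth=1.0, prior_w=0.5):
    """Viterbi-style DP over a 1D sequence of floor-plan segments.

    unary: (N, L) cost of assigning label l to segment i.
    prev_labels: optional (N,) labeling from the previous frame, added
        as a prior term (hypothetical disagreement penalty, weight prior_w).
    Returns the minimum-cost labeling as an (N,) integer array.
    """
    N, L = unary.shape
    cost = unary.copy()
    if prev_labels is not None:
        # Prior term: penalize deviating from the previous frame's labeling.
        cost += prior_w * (np.arange(L)[None, :] != prev_labels[:, None])
    # Potts smoothness between neighboring segments.
    pair = smooth * (1 - np.eye(L))
    acc = cost[0].copy()
    back = np.zeros((N, L), dtype=int)
    for i in range(1, N):
        trans = acc[:, None] + pair          # (L_prev, L_cur)
        back[i] = trans.argmin(axis=0)
        acc = trans.min(axis=0) + cost[i]
    labels = np.empty(N, dtype=int)
    labels[-1] = acc.argmin()
    for i in range(N - 1, 0, -1):
        labels[i - 1] = back[i, labels[i]]
    return labels
```

With no prior, the labeling follows the unary evidence subject to smoothness; passing `prev_labels` pulls ambiguous segments toward the previous frame's solution, which is the recursive temporal-continuity idea in the abstract.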
Plane extraction for indoor place recognition
In this paper, we present an image-based plane extraction
method well suited for real-time operation. Our approach exploits the
assumption that the surrounding scene is mainly composed of planes
disposed in known directions. Planes are detected from a single image
using a voting scheme that takes the vanishing lines into account.
Candidate planes are then validated and merged using a region-growing
based approach to detect, in real time, planes inside an unknown indoor
environment. Using the related plane homographies, it is possible to
remove the perspective distortion, enabling standard place recognition
algorithms to work in a viewpoint-invariant setup. Quantitative
experiments performed with real-world images show the effectiveness of our
approach compared with a very popular method.
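The rectification step, warping a detected plane to remove perspective distortion, relies on a plane-induced homography. A minimal sketch of the underlying machinery is the standard 4-point DLT estimate followed by point mapping; the sample quadrilateral and square are made-up illustrative data, not from the paper.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: find H with dst ~ H @ src (>= 4 pairs)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.array(A, float))
    return Vt[-1].reshape(3, 3)

def apply_homography(H, pts):
    """Map 2D points through H using homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

Mapping a perspective-skewed quadrilateral (the imaged plane) onto a square yields the fronto-parallel view in which viewpoint-dependent distortion is gone, so standard place-recognition descriptors can be compared directly.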
Semi-Global Stereo Matching with Surface Orientation Priors
Semi-Global Matching (SGM) is a widely-used efficient stereo matching
technique. It works well for textured scenes, but fails on untextured slanted
surfaces due to its fronto-parallel smoothness assumption. To remedy this
problem, we propose a simple extension, termed SGM-P, to utilize precomputed
surface orientation priors. Such priors favor different surface slants in
different 2D image regions or 3D scene regions and can be derived in various
ways. In this paper we evaluate plane orientation priors derived from stereo
matching at a coarser resolution and show that such priors can yield
significant performance gains for difficult weakly-textured scenes. We also
explore surface normal priors derived from Manhattan-world assumptions, and we
analyze the potential performance gains using oracle priors derived from
ground-truth data. SGM-P only adds a minor computational overhead to SGM and is
an attractive alternative to more complex methods employing higher-order
smoothness terms.
Comment: extended draft of 3DV 2017 (spotlight) paper
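The core idea, biasing SGM's transition penalties by an expected disparity slant instead of favoring constant disparity, can be sketched on a single scanline. This is a toy simplification, not the SGM-P implementation: the penalty shape (zero for the predicted step, P1 for a one-pixel deviation, P2 otherwise) and the integer-rounded slant are assumptions.

```python
import numpy as np

def aggregate_scanline(cost, slant, P1=1.0, P2=4.0):
    """SGM-style cost aggregation along one scanline with a slant prior.

    cost:  (W, D) matching-cost volume for one image row.
    slant: (W,) expected disparity change per pixel from a surface
           orientation prior; all zeros recovers plain fronto-parallel SGM.
    """
    W, D = cost.shape
    L = cost[0].copy()
    agg = np.empty_like(cost)
    agg[0] = L
    d = np.arange(D)
    for x in range(1, W):
        shift = int(round(slant[x]))
        # penalty[i, j]: cost of going from disparity i at x-1 to j at x,
        # measured relative to the prior-predicted step `shift`.
        delta = np.abs(d[None, :] - (d[:, None] + shift))
        penalty = np.where(delta == 0, 0.0, np.where(delta == 1, P1, P2))
        L = cost[x] + (L[:, None] + penalty).min(axis=0) - L.min()
        agg[x] = L
    return agg
```

On an untextured slanted strip (uniform matching cost between two textured endpoints), plain aggregation flat-lines at the left endpoint's disparity, while the slant prior propagates the correct slope through the ambiguous region, which is exactly the failure mode SGM-P targets.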
Pop-up SLAM: Semantic Monocular Plane SLAM for Low-texture Environments
Existing simultaneous localization and mapping (SLAM) algorithms are not
robust in challenging low-texture environments because there are only a few
salient features. The resulting sparse or semi-dense map also conveys little
information for motion planning. Though some works utilize planes or scene
layout for dense map regularization, they require decent state estimation from
other sources. In this paper, we propose a real-time monocular plane SLAM
system to demonstrate that scene understanding can improve both state
estimation and dense mapping, especially in low-texture environments. The
plane measurements come from a pop-up 3D plane model applied to each single
image. We also combine planes with point-based SLAM to improve robustness. On
a public TUM dataset, our algorithm generates a dense semantic 3D model with a
pixel depth error of 6.2 cm while existing SLAM algorithms fail. On a 60 m
long dataset with loops, our method creates a much better 3D model with a
state estimation error of 0.67%.
Comment: International Conference on Intelligent Robots and Systems (IROS) 201
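The geometric core of a pop-up plane model, turning a per-image plane hypothesis into dense depth, follows from intersecting each pixel's viewing ray with the plane. A minimal sketch under assumed intrinsics and plane parameters (the specific K, normal, and offset below are illustrative, not from the paper):

```python
import numpy as np

def plane_depth(u, v, K, n, d):
    """Depth of pixel (u, v) on the plane n . X + d = 0 in the camera frame.

    K: 3x3 camera intrinsics; n: plane normal; d: plane offset.
    The back-projected point X = depth * K^{-1}[u, v, 1] must satisfy
    the plane equation, so depth = -d / (n . K^{-1}[u, v, 1]).
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return -d / float(n @ ray)
```

Applying this to every pixel of a detected plane region yields the dense depth that point-based SLAM alone cannot provide in low-texture scenes, which is how plane measurements densify the map in the abstract's pipeline.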