Multi-Model 3D Registration: Finding Multiple Moving Objects in Cluttered Point Clouds
We investigate a variation of the 3D registration problem, named multi-model
3D registration. In the multi-model registration problem, we are given two
point clouds picturing a set of objects at different poses (and possibly
including points belonging to the background) and we want to simultaneously
reconstruct how all objects moved between the two point clouds. This setup
generalizes standard 3D registration where one wants to reconstruct a single
pose, e.g., the motion of the sensor picturing a static scene. Moreover, it
provides a mathematically grounded formulation for relevant robotics
applications, e.g., where a depth sensor onboard a robot perceives a dynamic
scene and has the goal of estimating its own motion (from the static portion of
the scene) while simultaneously recovering the motion of all dynamic objects.
We assume a correspondence-based setup where we have putative matches between
the two point clouds and consider the practical case where these
correspondences are plagued with outliers. We then propose a simple approach
based on Expectation-Maximization (EM) and establish theoretical conditions
under which the EM approach converges to the ground truth. We evaluate the
approach in simulated and real datasets ranging from table-top scenes to
self-driving scenarios and demonstrate its effectiveness when combined with
state-of-the-art scene flow methods to establish dense correspondences.
Comment: 8 pages, Accepted by ICRA 202
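The EM scheme described in the abstract can be illustrated with a toy sketch: soft-assign each putative correspondence to one of K rigid motions (E-step), then refit each motion from its weighted correspondences via a weighted Kabsch solve (M-step). This is a minimal illustration under assumed details, not the authors' implementation; the function names, the farthest-point initialization, and the fixed Gaussian noise scale `sigma` are all hypothetical choices.

```python
# Illustrative EM sketch for multi-model rigid registration.
# Hypothetical helper names and initialization; NOT the paper's implementation.
import numpy as np

def best_rigid_transform(P, Q, w):
    """Weighted Kabsch: rigid (R, t) minimizing sum_i w_i ||R p_i + t - q_i||^2."""
    w = w / (w.sum() + 1e-12)
    mu_p, mu_q = w @ P, w @ Q
    H = (P - mu_p).T @ ((Q - mu_q) * w[:, None])   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_q - R @ mu_p

def em_multi_model(P, Q, K, iters=20, sigma=0.1):
    """Soft-assign correspondences p_i -> q_i to K rigid motions (toy sketch)."""
    N = P.shape[0]
    flows = Q - P
    # Deterministic init: farthest-point sample K anchor flow vectors,
    # then softly assign each correspondence to its nearest anchor.
    anchors = [0]
    for _ in range(K - 1):
        d = np.min(np.linalg.norm(flows[:, None] - flows[anchors][None], axis=2), axis=1)
        anchors.append(int(d.argmax()))
    d = np.linalg.norm(flows[:, None] - flows[anchors][None], axis=2)   # (N, K)
    resp = np.full((N, K), 0.1 / max(K - 1, 1))
    resp[np.arange(N), d.argmin(axis=1)] = 0.9
    for _ in range(iters):
        # M-step: refit each rigid motion from its weighted correspondences.
        models = [best_rigid_transform(P, Q, resp[:, k]) for k in range(K)]
        # E-step: responsibilities from Gaussian residual likelihoods.
        res = np.stack([np.linalg.norm(P @ R.T + t - Q, axis=1)
                        for R, t in models], axis=1)
        logp = -0.5 * (res / sigma) ** 2
        logp -= logp.max(axis=1, keepdims=True)
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)
    return models, resp
```

On clean correspondences from two objects undergoing different translations, the loop hardens the responsibilities and recovers both motions; the paper's contribution is precisely the conditions under which such an EM iteration provably reaches the ground truth despite outliers.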
Semantic mapping for service robots: building and using maps for mobile manipulators in semi-structured environments
Although much progress has been made in the field of robotic mapping, many challenges remain, including: efficient semantic segmentation using RGB-D sensors, map representations that include complex features (structures and objects), and interfaces for interactive annotation of maps. This thesis addresses how prior knowledge of semi-structured human environments can be leveraged to improve segmentation, mapping, and semantic annotation of maps. We present an organized connected component approach for segmenting RGB-D data into planes and clusters. These segments serve as input to our mapping approach, which uses them as planar landmarks and object landmarks for Simultaneous Localization and Mapping (SLAM), providing information needed for service robot tasks and improving data association and loop closure. These features are meaningful to humans, enabling annotation of mapped features to establish common ground and simplifying tasking. A modular, open-source software framework, the OmniMapper, is also presented; it allows a number of different sensors and features to be combined into a unified map representation and enables easy addition of new feature types.
Ph.D.
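The organized connected-component idea exploits the image-like grid layout of RGB-D data: neighboring pixels are candidate neighbors in 3D, so planar segments can be grown by flood fill over pixels whose normals agree and that lie on a common plane. The toy sketch below illustrates this under assumed thresholds; it is not the thesis' or OmniMapper's implementation, and the neighbor-difference normal estimate and parameter values are simplifying assumptions.

```python
# Toy sketch of organized connected-component plane segmentation.
# Illustrative assumptions throughout; NOT the OmniMapper implementation.
import numpy as np

def segment_planes(points, normal_thresh=0.97, dist_thresh=0.02, min_size=50):
    """Label an organized HxWx3 point image into planar segments.

    Returns an HxW integer label image; negative labels are unsegmented
    (boundary pixels or regions smaller than min_size).
    """
    H, W, _ = points.shape
    # Crude normals from organized neighbors (cross product of grid tangents).
    # np.roll wraps at the borders, so border normals are unreliable; those
    # pixels simply fail the similarity tests and stay unsegmented.
    dx = np.roll(points, -1, axis=1) - points
    dy = np.roll(points, -1, axis=0) - points
    n = np.cross(dx, dy)
    n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-12
    labels = np.full((H, W), -1, dtype=int)
    next_label = 0
    for sy in range(H):
        for sx in range(W):
            if labels[sy, sx] != -1:
                continue
            # Flood fill: a 4-neighbor joins if its normal agrees and it
            # lies near the plane through the current pixel.
            stack, region = [(sy, sx)], []
            labels[sy, sx] = next_label
            while stack:
                y, x = stack.pop()
                region.append((y, x))
                for ny, nx_ in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < H and 0 <= nx_ < W
                            and labels[ny, nx_] == -1
                            and n[y, x] @ n[ny, nx_] > normal_thresh
                            and abs(n[y, x] @ (points[ny, nx_] - points[y, x])) < dist_thresh):
                        labels[ny, nx_] = next_label
                        stack.append((ny, nx_))
            if len(region) < min_size:
                for y, x in region:
                    labels[y, x] = -2   # too small for a plane: cluster/noise
            else:
                next_label += 1
    return labels
```

In the thesis' pipeline the resulting planes and clusters become landmarks for SLAM; here they are simply label regions, which already shows why organized traversal is cheap compared to unordered nearest-neighbor search over the raw cloud.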