Dense Object Reconstruction from RGBD Images with Embedded Deep Shape Representations
Most problems involving simultaneous localization and mapping can nowadays be
solved using one of two fundamentally different approaches. The traditional
approach is given by a least-squares objective, which minimizes many local
photometric or geometric residuals over explicitly parametrized structure and
camera parameters. Unmodeled effects that violate the Lambertian surface
assumption or the geometric invariances of individual residuals are countered
through statistical averaging or the addition of robust kernels and smoothness
terms. Aiming at more accurate measurement models and the inclusion of
higher-order shape priors, the community more recently shifted its attention to
deep end-to-end models for solving geometric localization and mapping problems.
However, at test time, these feed-forward models ignore the more traditional
geometric or photometric consistency terms, which limits their ability to
recover fine details and can lead to complete failure in corner-case scenarios.
With an application to dense object modeling from RGBD images, our work aims at
taking the best of both worlds by embedding modern higher-order object shape
priors into classical iterative residual minimization objectives. We
demonstrate a general ability to improve mapping accuracy with respect to each
modality alone, and present a successful application to real data.
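The classical side of this hybrid, an iterative robust residual minimization over a shape parametrization, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear "decoder" `D` stands in for a learned deep shape prior, the residuals are simple point differences rather than SDF or photometric terms, and all names are hypothetical.

```python
import numpy as np

# Minimal sketch: iteratively reweighted least squares (IRLS) with a Huber
# robust kernel, optimizing a latent shape code z. The decoder D (latent ->
# surface samples) is a stand-in linear map; a real system would use a deep
# decoder and geometric/photometric residuals against RGBD measurements.

rng = np.random.default_rng(0)

D = rng.standard_normal((50, 4))              # stand-in "decoder"
z_true = np.array([1.0, -0.5, 0.3, 2.0])      # ground-truth latent code
observations = D @ z_true + 0.05 * rng.standard_normal(50)
observations[:5] += 3.0                       # a few gross outliers

def huber_weight(r, delta=0.5):
    """Robust-kernel weight: 1 inside delta, delta/|r| outside (Huber)."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

def solve(z0, iters=20):
    """Minimize the robustified residuals by IRLS over the latent code."""
    z = z0.astype(float).copy()
    for _ in range(iters):
        r = D @ z - observations              # per-point geometric residual
        w = huber_weight(r)                   # down-weight outliers
        # Weighted normal equations: (D^T W D) dz = -(D^T W r)
        A = D.T @ (w[:, None] * D)
        b = D.T @ (w * r)
        z -= np.linalg.solve(A, b)
    return z

z_est = solve(np.zeros(4))
print(np.linalg.norm(z_est - z_true))
```

The key design point mirrored here is that measurement residuals are still minimized explicitly at test time, while the shape prior constrains the solution to a low-dimensional code; swapping the linear map for a deep decoder only changes how the Jacobian is obtained.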