Recent work in the construction of 3D scene graphs has enabled mobile robots
to build large-scale hybrid metric-semantic hierarchical representations of the
world. These detailed models contain information that is useful for planning;
however, how to derive a planning domain from a 3D scene graph that enables
efficient computation of executable plans is an open question. In this work, we
present a novel approach for defining and solving Task and Motion Planning
problems in large-scale environments using hierarchical 3D scene graphs. We
identify a method for building sparse problem domains that enable scaling to
large scenes, and propose a technique for incrementally adding objects to
that domain at planning time to avoid wasting computation on irrelevant
elements of the scene graph. We test our approach in two hand-crafted domains as well as
two scene graphs built from perception, including one constructed from the
KITTI dataset. A video supplement is available at https://youtu.be/63xuCCaN0I4.
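
To make the incremental domain-expansion idea concrete, the following is a
minimal Python sketch under stated assumptions: the SceneGraph and planner
interfaces, their method names, and the failure-reporting mechanism are all
illustrative inventions, not the paper's actual API. The domain starts sparse
(places only), and objects are pulled in from the scene graph only when a
failed planning attempt reveals they are needed.

```python
# Sketch of lazy, incremental domain expansion over a 3D scene graph.
# All class and method names (SceneGraph accessors, Planner, the
# result's missing_objects field) are hypothetical placeholders.
from dataclasses import dataclass, field


@dataclass
class Domain:
    """Sparse planning domain: begins with places only, no objects."""
    places: set = field(default_factory=set)
    objects: set = field(default_factory=set)


def plan_incrementally(scene_graph, goal, make_planner, max_rounds=10):
    """Grow the domain only with objects the planner actually needs.

    Start from a sparse domain; whenever planning fails because a
    required object is absent, pull just that object (and the places
    connecting it) from the scene graph into the domain and retry.
    """
    domain = Domain(places=set(scene_graph.places()))
    for _ in range(max_rounds):
        planner = make_planner(domain)
        result = planner.solve(goal)
        if result.success:
            return result.plan
        # Expand the domain with only the goal-relevant objects the
        # failed attempt reported as missing.
        for obj in result.missing_objects:
            domain.objects.add(scene_graph.lookup(obj))
            domain.places.update(scene_graph.places_near(obj))
    return None  # no executable plan within the expansion budget
```

The design choice this sketch illustrates is that the planner's own failure
report drives expansion, so computation is spent only on the scene-graph
elements relevant to the goal rather than on the full large-scale model.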