Neural Semantic Surface Maps
We present an automated technique for computing a map between two genus-zero
shapes, which matches semantically corresponding regions to one another. Lack
of annotated data prohibits direct inference of 3D semantic priors; instead,
current state-of-the-art methods predominantly optimize geometric properties or
require varying amounts of manual annotation. To overcome the lack of annotated
training data, we distill semantic matches from pretrained vision models: our method renders the pair of 3D shapes from multiple viewpoints; the resulting renders are then fed into an off-the-shelf image-matching method that leverages a pretrained visual model to produce feature points. This yields semantic correspondences, which can be projected back to the 3D shapes, producing a raw matching that is inaccurate and inconsistent across viewpoints. These correspondences are refined and distilled into an
inter-surface map by a dedicated optimization scheme, which promotes
bijectivity and continuity of the output map. We illustrate that our approach
can generate semantic surface-to-surface maps without requiring manual annotation or any 3D training data. Furthermore, it proves effective in
scenarios with high semantic complexity, where objects are non-isometrically
related, as well as in situations where they are nearly isometric
- …
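
To make the distillation step concrete, the following is a minimal Python sketch of one plausible realization: 2D matches from each viewpoint are lifted to vertex pairs via per-pixel visibility maps and aggregated by voting, then greedily filtered toward a one-to-one correspondence. The names `render_fn`, `match_fn`, and the greedy filter are hypothetical stand-ins, not the paper's implementation; the actual optimization scheme additionally promotes continuity of the inter-surface map.

```python
def distill_vertex_matches(render_fn, match_fn, mesh_a, mesh_b, viewpoints):
    """Lift per-view 2D feature matches to raw 3D vertex correspondences.

    render_fn(mesh, view) -> (image, vertex_id_map): vertex_id_map is an
    image-sized array (e.g. NumPy) giving, per pixel, the index of the
    visible mesh vertex, or -1 for background.
    match_fn(image_a, image_b) -> (pixels_a, pixels_b): two equal-length
    sequences of matched (row, col) pixel coordinates, as produced by an
    off-the-shelf image matcher.
    Both callables are hypothetical stand-ins for a mesh renderer and a
    pretrained image-matching model.
    """
    votes = {}  # (vertex_in_a, vertex_in_b) -> number of supporting views
    for view in viewpoints:
        image_a, vid_a = render_fn(mesh_a, view)
        image_b, vid_b = render_fn(mesh_b, view)
        pixels_a, pixels_b = match_fn(image_a, image_b)
        for (ra, ca), (rb, cb) in zip(pixels_a, pixels_b):
            va, vb = vid_a[ra, ca], vid_b[rb, cb]
            if va >= 0 and vb >= 0:  # both pixels actually hit a surface
                votes[(va, vb)] = votes.get((va, vb), 0) + 1
    return votes


def greedy_one_to_one(votes):
    """Keep the highest-voted pairs, using each vertex at most once.

    This promotes bijectivity only; it is a crude stand-in for the paper's
    dedicated optimization scheme, which also enforces continuity of the
    resulting inter-surface map.
    """
    used_a, used_b, kept = set(), set(), []
    for (va, vb), n_views in sorted(votes.items(), key=lambda kv: -kv[1]):
        if va not in used_a and vb not in used_b:
            kept.append((va, vb, n_views))
            used_a.add(va)
            used_b.add(vb)
    return kept
```

In this reading, voting across many viewpoints is what allows view-inconsistent matches to be outnumbered by consistent ones before any refinement is applied.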