Current state-of-the-art methods for panoptic segmentation require an immense
amount of annotated training data that is both arduous and expensive to obtain,
posing a significant challenge for their widespread adoption. Concurrently,
recent breakthroughs in visual representation learning have sparked a paradigm
shift, leading to the advent of large foundation models that can be trained with
completely unlabeled images. In this work, we propose to leverage such
task-agnostic image features to enable few-shot panoptic segmentation by
presenting Segmenting Panoptic Information with Nearly 0 labels (SPINO).
Specifically, our method combines a DINOv2 backbone with lightweight network heads
for semantic segmentation and boundary estimation. We show that our approach,
although trained with only ten annotated images, predicts high-quality
pseudo-labels that can be used with any existing panoptic segmentation method.
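
To make the architecture concrete, the following is a minimal sketch of this design, not the authors' released implementation: a frozen DINOv2 ViT-S/14 backbone (a real torch.hub model) whose patch features feed two small convolutional heads. The head architecture, class names, and upsampling scheme are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightweightHead(nn.Module):
    """Hypothetical small convolutional head on top of DINOv2 patch features."""
    def __init__(self, in_dim: int, out_channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_dim, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, out_channels, kernel_size=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.conv(feats)

class SpinoLikeModel(nn.Module):
    """Sketch of a DINOv2 backbone with semantic and boundary heads."""
    def __init__(self, num_classes: int):
        super().__init__()
        # Frozen, task-agnostic DINOv2 ViT-S/14 backbone (embed dim 384).
        self.backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.semantic_head = LightweightHead(384, num_classes)
        self.boundary_head = LightweightHead(384, 1)  # per-pixel boundary logit

    def forward(self, images: torch.Tensor):
        # Assumes image height and width are divisible by the 14x14 patch size.
        b, _, h, w = images.shape
        tokens = self.backbone.forward_features(images)["x_norm_patchtokens"]
        fh, fw = h // 14, w // 14
        # Reshape the (B, N, C) token sequence into a 2D feature map.
        feats = tokens.permute(0, 2, 1).reshape(b, 384, fh, fw)
        semantics = F.interpolate(self.semantic_head(feats), (h, w),
                                  mode="bilinear", align_corners=False)
        boundaries = F.interpolate(self.boundary_head(feats), (h, w),
                                   mode="bilinear", align_corners=False)
        return semantics, boundaries
```

In such a setup, only the two lightweight heads would be trained on the handful of annotated images, keeping the number of learnable parameters, and hence the label requirement, small.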
Notably, we demonstrate that SPINO achieves competitive results compared to
fully supervised baselines while using less than 0.3% of the ground truth
labels, paving the way for learning complex visual recognition tasks by
leveraging foundation models. To illustrate its general applicability, we further deploy
SPINO on real-world robotic vision systems for both outdoor and indoor
environments. To foster future research, we make the code and trained models
publicly available at http://spino.cs.uni-freiburg.de.