Instance detection (InsDet) is a long-standing problem in robotics and
computer vision, aiming to detect object instances (predefined by some visual
examples) in a cluttered scene. Despite its practical significance, its
advancement is overshadowed by Object Detection, which aims to detect objects
belonging to some predefined classes. One major reason is that current InsDet
datasets are too small in scale by today's standards. For example, the popular
InsDet dataset GMU (published in 2016) covers only 23 instances, far fewer
than the 80 classes of COCO, a well-known object detection dataset published in 2014. We
are motivated to introduce a new InsDet dataset and protocol. First, we define
a realistic setup for InsDet: training data consists of multi-view instance
captures and diverse scene images, so that training images can be synthesized
by pasting instance images onto scene images, with box annotations obtained for free. Second, we
release a real-world database, which contains multi-view captures of 100 object
instances, and high-resolution (6k x 8k) testing images. Third, we extensively
study baseline methods for InsDet on our dataset, analyze their performance and
suggest future work. Somewhat surprisingly, using the off-the-shelf
class-agnostic segmentation model (Segment Anything Model, SAM) and the
self-supervised feature representation DINOv2 performs best, achieving >10
AP higher than end-to-end trained InsDet models that repurpose object detectors
(e.g., Faster R-CNN and RetinaNet).

Comment: Accepted by NeurIPS 2023, Datasets and Benchmarks Track
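The cut-paste synthesis the setup relies on can be sketched as follows. This is a minimal illustration, not the paper's pipeline: `paste_instance` is a hypothetical helper, and it assumes each instance capture comes with a foreground mask so the pasted region yields a bounding box for free.

```python
import random
import numpy as np

def paste_instance(scene, instance, mask, x, y):
    """Paste a masked instance crop onto a scene image at (x, y).

    Returns the composited scene and the bounding-box annotation,
    which comes "for free" from the paste location and crop size.
    """
    h, w = instance.shape[:2]
    region = scene[y:y + h, x:x + w]
    # Keep instance pixels only where the foreground mask is set.
    region[mask > 0] = instance[mask > 0]
    return scene, (x, y, x + w, y + h)

# Toy example: a white 4x4 instance pasted onto a black 100x100 scene.
scene = np.zeros((100, 100, 3), dtype=np.uint8)
inst = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.ones((4, 4), dtype=np.uint8)
x, y = random.randint(0, 96), random.randint(0, 96)
scene, box = paste_instance(scene, inst, mask, x, y)
```

A real pipeline would additionally randomize instance scale, rotation, and blending to reduce pasting artifacts; the point here is only that box labels require no manual annotation.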
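The non-learned baseline can be sketched as: SAM proposes class-agnostic regions, DINOv2 embeds each proposal crop and each multi-view instance capture, and proposals are labeled by nearest-neighbor cosine similarity. The sketch below stubs out both models with random feature vectors standing in for DINOv2 embeddings; `label_proposals` and the `thresh` parameter are illustrative assumptions, not the paper's exact matching rule.

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between rows of a (P, D) and b (V, D)."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def label_proposals(proposal_feats, instance_feats, instance_ids, thresh=0.5):
    """Assign each proposal the id of its most similar instance view,
    or -1 (background) when the best similarity falls below `thresh`."""
    sims = cosine_sim(proposal_feats, instance_feats)        # (P, V)
    best = sims.argmax(axis=1)
    scores = sims[np.arange(len(best)), best]
    labels = np.where(scores >= thresh, instance_ids[best], -1)
    return labels, scores

# Toy features standing in for DINOv2 embeddings: 3 instances x 2 views each.
rng = np.random.default_rng(0)
instance_feats = rng.normal(size=(6, 8))
instance_ids = np.array([0, 0, 1, 1, 2, 2])
# Two "SAM proposals": slightly perturbed copies of views 1 and 4.
proposal_feats = instance_feats[[1, 4]] + 0.01 * rng.normal(size=(2, 8))
labels, scores = label_proposals(proposal_feats, instance_feats, instance_ids)
# labels → [0, 2]
```

The appeal of this design is that nothing is trained on the target instances: new instances are enrolled simply by embedding their multi-view captures.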