In real-world applications, an object detector often encounters object
instances from new classes and needs to accommodate them effectively. Previous
work formulated this critical problem as incremental object detection (IOD),
which assumes that the object instances of new classes are fully annotated in
the incremental data. However, since supervisory signals are usually scarce and
expensive, supervised IOD may often be impractical. In this
work, we consider a more realistic setting named semi-supervised IOD (SSIOD),
where the object detector needs to learn new classes incrementally from a few
labelled data and massive unlabelled data without catastrophic forgetting of
old classes. A commonly used strategy for supervised IOD is to encourage the
current model (as a student) to mimic the behavior of the old model (as a
teacher), but this strategy generally fails in SSIOD because most object
instances from old and new classes coexist unlabelled, and the teacher
recognizes only a fraction of them. Observing that learning only the classes of
interest tends to preclude the detection of other classes, we propose to handle
the coexistence of unlabelled classes by constructing two teacher models, one
for old classes and one for new classes, and using the concatenation of their
predictions to instruct the student. This approach is referred to as
DualTeacher, which can serve as a strong baseline for SSIOD with limited
resource overhead and no extra hyperparameters. We build various benchmarks for
SSIOD and perform extensive experiments to demonstrate the superiority of our
approach (e.g., the performance lead is up to 18.28 AP on MS-COCO). Our code is
available at \url{https://github.com/chuxiuhong/DualTeacher}.
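
To make the dual-teacher idea concrete, below is a minimal sketch (not the authors' released implementation) of the pseudo-labelling step: two frozen teachers, one covering old classes and one covering new classes, each predict on an unlabelled image, and their confident detections are concatenated into a single target set for the student. It assumes torchvision-style detector outputs (dicts with "boxes", "scores", "labels"); the names make_pseudo_targets, SCORE_THRESH, and the toy fake_teacher are illustrative assumptions.

```python
# Minimal sketch of dual-teacher pseudo-labelling; not the authors' code.
# Assumes each teacher is a callable: list of images -> list of dicts with
# "boxes" (N, 4), "scores" (N,), and "labels" (N,) tensors.
import torch

SCORE_THRESH = 0.7  # assumed confidence cutoff for keeping pseudo-labels

@torch.no_grad()
def make_pseudo_targets(teacher_old, teacher_new, images, num_old_classes):
    """Concatenate the two frozen teachers' confident detections per image.

    teacher_old recognizes only old classes (labels in [0, num_old_classes));
    teacher_new recognizes only new classes, so its labels are offset into the
    joint label space before concatenation.
    """
    targets = []
    for det_old, det_new in zip(teacher_old(images), teacher_new(images)):
        keep_o = det_old["scores"] >= SCORE_THRESH
        keep_n = det_new["scores"] >= SCORE_THRESH
        targets.append({
            "boxes": torch.cat([det_old["boxes"][keep_o],
                                det_new["boxes"][keep_n]]),
            "labels": torch.cat([det_old["labels"][keep_o],
                                 det_new["labels"][keep_n] + num_old_classes]),
        })
    return targets

def fake_teacher(num_classes, num_dets=5):
    """Toy stand-in for a frozen detector, emitting random detections."""
    def predict(images):
        return [{"boxes": torch.rand(num_dets, 4),
                 "scores": torch.rand(num_dets),
                 "labels": torch.randint(0, num_classes, (num_dets,))}
                for _ in images]
    return predict

if __name__ == "__main__":
    t_old, t_new = fake_teacher(num_classes=40), fake_teacher(num_classes=40)
    images = [torch.rand(3, 224, 224) for _ in range(2)]
    pseudo = make_pseudo_targets(t_old, t_new, images, num_old_classes=40)
    # The student detector would now be trained on (images, pseudo) as if the
    # unlabelled images were fully annotated over old + new classes, which is
    # what lets it learn new classes without forgetting the old ones.
    print(pseudo[0]["labels"])
```

Because the student only ever sees the union of both teachers' predictions, no extra hyperparameters beyond the usual pseudo-label confidence threshold are introduced, consistent with the claim above.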