Recent advances in Video Instance Segmentation (VIS) have largely been
driven by the use of deeper and increasingly data-hungry transformer-based
models. However, video masks are tedious and expensive to annotate, limiting
the scale and diversity of existing VIS datasets. In this work, we aim to
remove the mask-annotation requirement. We propose MaskFreeVIS, achieving
highly competitive VIS performance while using only bounding box annotations
of objects during training. We leverage the rich temporal mask consistency
constraints in videos by introducing the Temporal KNN-patch Loss (TK-Loss),
providing strong mask supervision without any mask annotations. Our TK-Loss finds
one-to-many matches across frames, through an efficient patch-matching step
followed by a K-nearest neighbor selection. A consistency loss is then enforced
on the found matches. Our mask-free objective is simple to implement, has no
trainable parameters, is computationally efficient, yet outperforms baselines
employing, e.g., state-of-the-art optical flow to enforce temporal mask
consistency. We validate MaskFreeVIS on the YouTube-VIS 2019/2021, OVIS and
BDD100K MOTS benchmarks. The results clearly demonstrate the efficacy of our
method by drastically narrowing the gap between fully and weakly-supervised VIS
performance. Our code and trained models are available at
https://github.com/SysCV/MaskFreeVis.