The discrimination of instance embeddings plays a vital role in associating
instances across time for online video instance segmentation (VIS). Instance
embedding learning is directly supervised by the contrastive loss computed upon
the contrastive items (CIs), which are sets of anchor/positive/negative
embeddings. Recent online VIS methods leverage CIs sourced from one reference
frame only, which we argue is insufficient for learning highly discriminative
embeddings. Intuitively, a possible strategy to enhance CIs is to replicate the
inference phase during training. To this end, we propose a simple yet effective
training strategy, called Consistent Training for Online VIS (CTVIS), which
aims to align the training and inference pipelines in terms of building
CIs. Specifically, CTVIS constructs CIs by adopting the momentum-averaged
embeddings and the memory-bank storage mechanism used at inference, and by
adding noise to the relevant embeddings. Such an extension allows a reliable
comparison between embeddings of current instances and the stable
representations of historical instances, thereby conferring an advantage in
modeling VIS challenges such as occlusion, re-identification, and deformation.
Empirically, CTVIS outperforms state-of-the-art VIS models by up to +5.0 AP on three
VIS benchmarks, including YTVIS19 (55.1% AP), YTVIS21 (50.1% AP) and OVIS
(35.5% AP). Furthermore, we find that pseudo-videos transformed from images can
train robust models surpassing fully-supervised ones.

Accepted by ICCV 2023. The code is available at
https://github.com/KainingYing/CTVI
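
As a rough, hypothetical illustration of the mechanism summarized above (not the authors' released implementation), the sketch below momentum-averages per-instance embeddings into a memory bank, perturbs the stored embeddings with noise, and scores a current-frame anchor against them with an InfoNCE-style contrastive loss; all function names and hyper-parameters (momentum, noise_std, tau) are assumptions.

```python
import torch
import torch.nn.functional as F


def update_memory(memory, inst_ids, frame_embs, momentum=0.9):
    """Momentum-average per-instance embeddings into a memory bank.

    memory maps an instance id to its running embedding; momentum=0.9
    is an assumed value, not necessarily the paper's setting.
    """
    for inst_id, emb in zip(inst_ids, frame_embs):
        if inst_id in memory:
            memory[inst_id] = momentum * memory[inst_id] + (1.0 - momentum) * emb
        else:
            memory[inst_id] = emb.detach().clone()
    return memory


def ci_contrastive_loss(anchor, positive, negatives, noise_std=0.05, tau=0.07):
    """InfoNCE-style loss over one contrastive item (anchor / positive / negatives).

    Noise is injected into the stored historical embeddings, roughly mimicking
    the perturbation described in the abstract; noise_std and tau are assumptions.
    """
    positive = positive + noise_std * torch.randn_like(positive)
    negatives = negatives + noise_std * torch.randn_like(negatives)
    anchor = F.normalize(anchor, dim=-1)
    keys = F.normalize(torch.cat([positive[None], negatives], dim=0), dim=-1)
    logits = (keys @ anchor) / tau              # similarity of anchor to all keys
    target = torch.zeros(1, dtype=torch.long)   # the positive key sits at index 0
    return F.cross_entropy(logits[None], target)
```

In this sketch, the anchor comes from the current frame while the positive and negatives are drawn from the momentum-averaged memory bank built over earlier frames, mirroring the inference-time association step rather than a single reference frame.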