Referring video object segmentation (RVOS) aims to segment the target object
in a video sequence described by a language expression. Typical multimodal
Transformer-based RVOS approaches process the video sequence in a
frame-independent manner to reduce the high computational cost, which, however,
restricts performance due to the lack of inter-frame interaction for temporal
coherence modeling and spatio-temporal representation learning of the referred object.
In addition, the absence of sufficient cross-modal interaction results in weak
correlation between the visual and linguistic features, which increases the
difficulty of decoding the target information and limits the performance of the
model. In this paper, we propose a bidirectional correlation-driven inter-frame
interaction Transformer, dubbed BIFIT, to address these issues in RVOS.
Specifically, we design a lightweight and plug-and-play inter-frame interaction
module in the Transformer decoder to efficiently learn the spatio-temporal
features of the referred object, so as to decode the object information in the
video sequence more precisely and generate more accurate segmentation results.
Moreover, a bidirectional vision-language interaction module is implemented
before the multimodal Transformer to enhance the correlation between the visual
and linguistic features, thus facilitating the language queries to decode more
precise object information from visual features and ultimately improving the
segmentation performance. Extensive experimental results on four benchmarks
validate the superiority of our BIFIT over state-of-the-art methods and the
effectiveness of our proposed modules.