
    CTVIS: Consistent Training for Online Video Instance Segmentation

    The discrimination of instance embeddings plays a vital role in associating instances across time for online video instance segmentation (VIS). Instance embedding learning is directly supervised by a contrastive loss computed over contrastive items (CIs), which are sets of anchor/positive/negative embeddings. Recent online VIS methods build CIs from one reference frame only, which we argue is insufficient for learning highly discriminative embeddings. Intuitively, a natural strategy for enhancing CIs is to replicate the inference phase during training. To this end, we propose a simple yet effective training strategy, Consistent Training for Online VIS (CTVIS), which aligns the training and inference pipelines in how CIs are built. Specifically, CTVIS constructs CIs by adopting the momentum-averaged embeddings and the memory-bank storage mechanism used at inference, and by adding noise to the relevant embeddings. This extension allows a reliable comparison between the embeddings of current instances and the stable representations of historical instances, conferring an advantage in modeling VIS challenges such as occlusion, re-identification, and deformation. Empirically, CTVIS outstrips state-of-the-art VIS models by up to +5.0 points on three VIS benchmarks: YTVIS19 (55.1% AP), YTVIS21 (50.1% AP), and OVIS (35.5% AP). Furthermore, we find that pseudo-videos transformed from still images can train robust models that surpass fully-supervised ones. Comment: Accepted by ICCV 2023. The code is available at https://github.com/KainingYing/CTVI
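    The abstract's key ingredients, momentum-averaged embeddings held in a memory bank, noise injection, and a contrastive loss over anchor/positive/negative items, can be sketched in a few lines. The sketch below only illustrates that general idea and is not the authors' implementation; every class, function, and hyperparameter name (MemoryBank, contrastive_loss, momentum, noise_std, tau) is hypothetical.

    import torch
    import torch.nn.functional as F

    class MemoryBank:
        # Stores one momentum-averaged embedding per tracked instance identity.
        def __init__(self, momentum=0.9, noise_std=0.05):
            self.momentum = momentum
            self.noise_std = noise_std
            self.bank = {}  # instance id -> embedding of shape (d,)

        def update(self, instance_id, embedding):
            # Momentum-average the new embedding into the stored one, yielding
            # a stable representation of the historical instance.
            emb = F.normalize(embedding.detach(), dim=-1)
            if instance_id in self.bank:
                emb = F.normalize(
                    self.momentum * self.bank[instance_id] + (1 - self.momentum) * emb,
                    dim=-1)
            self.bank[instance_id] = emb

        def get(self, instance_id):
            # Noise injection roughly mimics the embedding drift seen at inference.
            emb = self.bank[instance_id]
            return F.normalize(emb + self.noise_std * torch.randn_like(emb), dim=-1)

    def contrastive_loss(anchor, positive, negatives, tau=0.1):
        # InfoNCE-style loss over one contrastive item (CI): the anchor should
        # score higher against its positive than against any negative.
        # anchor, positive: (d,); negatives: (N, d).
        anchor = F.normalize(anchor, dim=-1)
        logits = torch.cat([positive.unsqueeze(0), negatives]) @ anchor / tau
        target = torch.zeros(1, dtype=torch.long)  # the positive sits at index 0
        return F.cross_entropy(logits.unsqueeze(0), target)

    In a training loop of this shape, one would update the bank with each tracked instance's embedding per frame and draw negatives from the stored embeddings of other identities, mirroring the inference-time association step the abstract describes.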

    Label-Efficient Segmentation for Diverse Scenarios

    Segmentation, a fundamental task in computer vision, aims to partition an image into multiple distinct and meaningful regions or segments. In this thesis, we analyze the importance of label-efficient segmentation techniques and present a series of methods that address segmentation tasks in different scenarios. First, we propose a Deep Reasoning Network for few-shot semantic segmentation, termed DRNet, a novel approach that relies on dynamic convolutions to segment objects of new categories. Unlike previous works that directly apply convolutional layers to integrated features to predict segmentation masks, our DRNet generates learnable parameters for the prediction layers based on query features, allowing for greater flexibility and adaptability. Second, we conduct further experiments and propose mining both dynamic and regional context for few-shot semantic segmentation, termed DRCNet. Specifically, we introduce a Dynamic Context Module to capture spatial details in the query images, and a Regional Context Module to model the prototypes for ambiguous regions while excluding background and ambiguous objects in query images. The superior performance of our method is demonstrated on various benchmarks. Third, we address the unsupervised video object segmentation task by learning both motion and temporal cues, in a method termed MTNet. The proposed MTNet integrates appearance and motion information through a Bi-modal Feature Fusion Module and models the relations between adjacent frames using a Mixed Temporal Transformer, achieving state-of-the-art results on multiple datasets while maintaining a much faster inference speed. Finally, we propose a semi-supervised learning method for bird's-eye-view (BEV) semantic segmentation, representing the first attempt at label-efficient learning in this field. Without any bells and whistles, our proposed BEV-S4 achieves results on par with fully-supervised methods while requiring significantly fewer labels. We hope that our approach can serve as a strong baseline and attract more attention to learning BEV perception with fewer labels. Thesis (Ph.D.) -- University of Adelaide, School of Computer and Mathematical Sciences, 202
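    The dynamic-convolution idea behind DRNet, generating the weights of the prediction layer from query features rather than learning them statically, can be illustrated with a short sketch. This is a generic interpretation rather than the thesis code; DynamicSegHead, param_gen, and all tensor shapes are hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DynamicSegHead(nn.Module):
        # Prediction head whose conv kernel is generated from the query feature.
        def __init__(self, feat_dim=256, kernel_size=1):
            super().__init__()
            self.kernel_size = kernel_size
            # Predicts a 1-channel conv kernel plus a bias from a pooled query feature.
            n_params = feat_dim * kernel_size * kernel_size + 1
            self.param_gen = nn.Linear(feat_dim, n_params)

        def forward(self, query_feat, fused_feat):
            # query_feat: (B, C, Hq, Wq) query embedding; fused_feat: (B, C, H, W).
            B, C, H, W = fused_feat.shape
            pooled = F.adaptive_avg_pool2d(query_feat, 1).flatten(1)  # (B, C)
            params = self.param_gen(pooled)                           # (B, C*k*k + 1)
            k = self.kernel_size
            weight = params[:, :-1].view(B, 1, C, k, k)
            bias = params[:, -1]
            masks = []
            for b in range(B):  # per-sample kernels, so convolve each item separately
                masks.append(F.conv2d(fused_feat[b:b + 1], weight[b],
                                      bias[b:b + 1], padding=k // 2))
            return torch.cat(masks)  # (B, 1, H, W) segmentation logits

    Because the kernel is produced per sample, the head can adapt its prediction layer to each new category's query without retraining the backbone, which is the flexibility the abstract attributes to DRNet.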

    Boundary-Guided Feature Aggregation Network for Salient Object Detection
