
    DMT: Dynamic Mutual Training for Semi-Supervised Learning

    Recent semi-supervised learning methods use pseudo supervision as their core idea, especially self-training methods that generate pseudo labels. However, pseudo labels are unreliable. Self-training methods usually rely on a single model's prediction confidence to filter out low-confidence pseudo labels, thus retaining high-confidence errors and wasting many correct low-confidence labels. In this paper, we point out that it is difficult for a model to counter its own errors; instead, leveraging the disagreement between different models is key to locating pseudo-label errors. With this new viewpoint, we propose mutual training between two different models with a dynamically re-weighted loss function, called Dynamic Mutual Training (DMT). We quantify inter-model disagreement by comparing the predictions of the two models and use it to dynamically re-weight the training loss, where larger disagreement indicates a possible error and corresponds to a lower loss value. Extensive experiments show that DMT achieves state-of-the-art performance in both image classification and semantic segmentation. Our code is released at https://github.com/voldemortX/DST-CBC. Comment: Reformatted.
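
    A minimal PyTorch sketch of the core re-weighting idea; the function name, the single exponent gamma, and the exact weighting rule are illustrative assumptions rather than the released DMT code:

    ```python
    import torch
    import torch.nn.functional as F

    def disagreement_weighted_loss(student_logits, teacher_logits, gamma=5.0):
        """Sketch of a DMT-style dynamically re-weighted cross-entropy.

        Pseudo labels come from a second model; pixels where the student
        disagrees with those labels get a low weight, since large
        inter-model disagreement signals a likely pseudo-label error.
        """
        pseudo_labels = teacher_logits.argmax(dim=1)          # (B, H, W)
        student_probs = F.softmax(student_logits, dim=1)      # (B, C, H, W)
        # Student's probability for the teacher's label, used as an agreement score.
        agreement = student_probs.gather(1, pseudo_labels.unsqueeze(1)).squeeze(1)
        weights = agreement.detach() ** gamma                 # low weight on disagreement
        ce = F.cross_entropy(student_logits, pseudo_labels, reduction="none")
        return (weights * ce).mean()
    ```

    Detaching the weights keeps the disagreement signal out of the gradient, so it acts purely as a per-pixel confidence on the pseudo labels.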

    Refign: Align and Refine for Adaptation of Semantic Segmentation to Adverse Conditions

    Due to the scarcity of dense pixel-level semantic annotations for images recorded in adverse visual conditions, there has been keen interest in unsupervised domain adaptation (UDA) for the semantic segmentation of such images. UDA adapts models trained on normal conditions to the target adverse-condition domains. Meanwhile, multiple driving-scene datasets provide corresponding images of the same scenes across multiple conditions, which can serve as a form of weak supervision for domain adaptation. We propose Refign, a generic extension to self-training-based UDA methods that leverages these cross-domain correspondences. Refign consists of two steps: (1) aligning the normal-condition image to the corresponding adverse-condition image using an uncertainty-aware dense matching network, and (2) refining the adverse prediction with the normal prediction using an adaptive label correction mechanism. We design custom modules to streamline both steps and set a new state of the art for domain-adaptive semantic segmentation on several adverse-condition benchmarks, including ACDC and Dark Zurich. The approach introduces no extra training parameters and only minimal computational overhead (during training only), and can be used as a drop-in extension to improve any given self-training-based UDA method. Code is available at https://github.com/brdav/refign. Comment: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023.
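
    A hedged sketch of the second (refinement) step only; the blending rule and tensor names below are assumptions for illustration, not the paper's adaptive label correction mechanism:

    ```python
    import torch

    def refine_pseudo_label(adverse_probs, warped_normal_probs, match_confidence, alpha=0.5):
        """Illustrative refinement: blend the adverse-condition prediction with
        the aligned (warped) normal-condition prediction, trusting the normal
        view only where the dense matching is confident.

        adverse_probs, warped_normal_probs: (B, C, H, W) softmax outputs
        match_confidence: (B, 1, H, W) matching confidence in [0, 1]
        """
        blend = alpha * match_confidence
        refined_probs = (1 - blend) * adverse_probs + blend * warped_normal_probs
        return refined_probs.argmax(dim=1)  # refined pseudo labels, (B, H, W)
    ```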

    Semi-Supervised Semantic Segmentation with Pixel-Level Contrastive Learning from a Class-wise Memory Bank

    This work presents a novel approach to semi-supervised semantic segmentation. Its key element is a contrastive learning module that forces the segmentation network to yield similar pixel-level feature representations for same-class samples across the whole dataset. To achieve this, we maintain a memory bank that is continuously updated with relevant, high-quality feature vectors from labeled data. In end-to-end training, the features from both labeled and unlabeled data are optimized to be similar to same-class samples from the memory bank. Our approach outperforms the current state of the art for semi-supervised semantic segmentation and semi-supervised domain adaptation on well-known public benchmarks, with the largest improvements in the most challenging scenarios, i.e., when little labeled data is available.
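
    The pixel-to-bank attraction could look roughly like the following InfoNCE-style loss; shapes, names, and the exact positive/negative construction are assumptions, not the authors' implementation:

    ```python
    import torch
    import torch.nn.functional as F

    def class_memory_contrastive_loss(features, labels, memory_bank, temperature=0.1):
        """Illustrative sketch: pull each pixel feature toward the same-class
        entries of a class-wise memory bank, against all other entries.

        features:    (N, D) pixel feature vectors
        labels:      (N,)   class index per pixel
        memory_bank: (C, K, D) K stored feature vectors per class
        """
        features = F.normalize(features, dim=1)
        bank = F.normalize(memory_bank, dim=2)
        C, K, D = bank.shape
        # Similarity of every pixel feature to every bank entry.
        sims = features @ bank.reshape(C * K, D).t() / temperature   # (N, C*K)
        log_probs = F.log_softmax(sims, dim=1).reshape(-1, C, K)     # (N, C, K)
        # Positives: all K bank entries of the pixel's own class.
        pos = log_probs.gather(1, labels.view(-1, 1, 1).expand(-1, 1, K)).squeeze(1)
        return -pos.mean()
    ```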

    Co-Training for Unsupervised Domain Adaptation of Semantic Segmentation Models

    Semantic image segmentation is a central and challenging task in autonomous driving, addressed by training deep models. Since this training requires a prohibitive amount of human-based image labeling, using synthetic images with automatically generated labels together with unlabeled real-world images is a promising alternative. This implies addressing an unsupervised domain adaptation (UDA) problem. In this paper, we propose a new co-training procedure for synth-to-real UDA of semantic segmentation models. It consists of a self-training stage, which provides two domain-adapted models, and a model collaboration loop for the mutual improvement of these two models. These models are then used to provide the final semantic segmentation labels (pseudo-labels) for the real-world images. The overall procedure treats the deep models as black boxes and drives their collaboration at the level of pseudo-labeled target images; i.e., neither modified loss functions nor explicit feature alignment are required. We test our proposal on standard synthetic and real-world datasets for on-board semantic segmentation. Our procedure shows improvements ranging from ~13 to ~26 mIoU points over baselines, establishing new state-of-the-art results.
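
    Because the procedure treats the models as black boxes, the collaboration loop can be summarized in a few lines; the callables below are placeholders for whatever training and inference routines are used, not the authors' API:

    ```python
    from typing import Callable, List

    def cotrain(model_a, model_b, target_images: List,
                predict: Callable,    # (model, images) -> pseudo labels
                finetune: Callable,   # (model, images, labels) -> updated model
                rounds: int = 5):
        """Illustrative black-box co-training loop: each model is fine-tuned on
        pseudo-labels produced by its peer, so the two models can correct each
        other's errors without touching losses or features."""
        for _ in range(rounds):
            pseudo_a = predict(model_a, target_images)
            pseudo_b = predict(model_b, target_images)
            model_a = finetune(model_a, target_images, pseudo_b)
            model_b = finetune(model_b, target_images, pseudo_a)
        return model_a, model_b
    ```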

    FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning

    Pseudo labeling and consistency regularization approaches based on confidence thresholding have made great progress in semi-supervised learning (SSL). However, we argue that existing methods might fail to adopt suitable thresholds, since they either use a pre-defined / fixed threshold or an ad-hoc threshold adjusting scheme, resulting in inferior performance and slow convergence. We first analyze a motivating example to gain intuitions on the relationship between the desirable threshold and the model's learning status. Based on this analysis, we propose FreeMatch to define and adjust the confidence threshold in a self-adaptive manner according to the model's learning status. We further introduce a self-adaptive class fairness regularization penalty that encourages the model to produce diverse predictions during the early stages of training. Extensive experimental results indicate the superiority of FreeMatch, especially when labeled data are extremely rare. FreeMatch achieves 5.78%, 13.59%, and 1.28% error rate reduction over the latest state-of-the-art method FlexMatch on CIFAR-10 with 1 label per class, STL-10 with 4 labels per class, and ImageNet with 100 labels per class, respectively.
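
    A minimal sketch of the self-adaptive threshold, assuming the published formulation (an EMA-tracked global confidence, modulated per class by an EMA of the average class probabilities); constants and names are illustrative:

    ```python
    import torch

    class SelfAdaptiveThreshold:
        """FreeMatch-style self-adaptive thresholding (sketch)."""

        def __init__(self, num_classes: int, momentum: float = 0.999):
            self.m = momentum
            # Global threshold starts at uniform confidence 1/C.
            self.global_t = torch.tensor(1.0 / num_classes)
            self.class_p = torch.full((num_classes,), 1.0 / num_classes)

        @torch.no_grad()
        def update(self, probs: torch.Tensor):               # probs: (B, C)
            # Track the model's mean confidence and mean class probabilities.
            self.global_t = self.m * self.global_t + (1 - self.m) * probs.max(dim=1).values.mean()
            self.class_p = self.m * self.class_p + (1 - self.m) * probs.mean(dim=0)

        def thresholds(self) -> torch.Tensor:
            # Per-class threshold: normalized class tendency times global threshold.
            return self.class_p / self.class_p.max() * self.global_t
    ```

    A batch's pseudo-label mask would then be something like probs.max(1).values >= sat.thresholds()[probs.argmax(1)], so rarer classes face a lower bar than dominant ones.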

    Uncertainty-Aware Consistency Regularization for Cross-Domain Semantic Segmentation

    Unsupervised domain adaptation (UDA) aims to adapt existing models from a source domain to a new target domain with only unlabeled data. Many adversarial-based UDA methods involve highly unstable training and require careful tuning of the optimization procedure. Some non-adversarial UDA methods employ a consistency regularization on the target predictions of a student model and a teacher model under different perturbations, where the teacher shares the same architecture as the student and is updated by an exponential moving average of the student's weights. However, these methods suffer from noticeable negative transfer resulting from either the error-prone discriminator network or the unreasonable teacher model. In this paper, we propose an uncertainty-aware consistency regularization method for cross-domain semantic segmentation. By exploiting the latent uncertainty information of the target samples, more meaningful and reliable knowledge from the teacher model can be transferred to the student model. In addition, we reveal why the current consistency regularization is often unstable in minimizing the distribution discrepancy, and show that our method can effectively ease this issue by mining the most reliable and meaningful samples with a dynamic weighting scheme for the consistency loss. Experiments demonstrate that the proposed method outperforms state-of-the-art methods on two domain adaptation benchmarks, i.e., GTAV → Cityscapes and SYNTHIA → Cityscapes.
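
    One common way to realize such uncertainty-aware weighting is to scale the per-pixel consistency loss by the teacher's predictive certainty; the entropy-based estimate below is an assumption for illustration, not necessarily the paper's uncertainty measure:

    ```python
    import torch
    import torch.nn.functional as F

    def uncertainty_weighted_consistency(student_logits, teacher_logits):
        """Down-weight the student-teacher consistency loss on pixels where
        the teacher is uncertain (high predictive entropy)."""
        teacher_probs = F.softmax(teacher_logits, dim=1).detach()
        entropy = -(teacher_probs * teacher_probs.clamp_min(1e-8).log()).sum(dim=1)
        num_classes = teacher_logits.shape[1]
        # Normalize entropy to [0, 1] and convert to a certainty weight.
        certainty = 1.0 - entropy / torch.log(torch.tensor(float(num_classes)))
        per_pixel = F.mse_loss(
            F.softmax(student_logits, dim=1), teacher_probs, reduction="none"
        ).sum(dim=1)                                          # (B, H, W)
        return (certainty * per_pixel).mean()
    ```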

    ACTION++: Improving Semi-supervised Medical Image Segmentation with Adaptive Anatomical Contrast

    Medical data often exhibit long-tail distributions with heavy class imbalance, which naturally leads to difficulty in classifying the minority classes (i.e., boundary regions or rare objects). Recent work has significantly improved semi-supervised medical image segmentation in long-tailed scenarios by equipping models with unsupervised contrastive criteria. However, it remains unclear how well these models perform on the labeled portion of the data, where the class distribution is also highly imbalanced. In this work, we present ACTION++, an improved contrastive learning framework with adaptive anatomical contrast for semi-supervised medical segmentation. Specifically, we propose an adaptive supervised contrastive loss, where we first compute the optimal locations of class centers uniformly distributed on the embedding space (i.e., offline), and then perform online contrastive matching training by encouraging different class features to adaptively match these distinct and uniformly distributed class centers. Moreover, we argue that blindly adopting a constant temperature τ in the contrastive loss on long-tailed medical data is not optimal, and propose to use a dynamic τ via a simple cosine schedule to yield better separation between majority and minority classes. Empirically, we evaluate ACTION++ on the ACDC and LA benchmarks and show that it achieves state-of-the-art performance in two semi-supervised settings. Theoretically, we analyze the performance of adaptive anatomical contrast and confirm its superiority in label efficiency. Comment: Accepted by the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2023).
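
    One simple realization of a cosine-scheduled temperature fits in a few lines; the bounds tau_min and tau_max are assumed values, not the paper's:

    ```python
    import math

    def cosine_temperature(step: int, total_steps: int,
                           tau_min: float = 0.07, tau_max: float = 1.0) -> float:
        """Anneal the contrastive temperature from tau_max to tau_min over
        training with a half-cosine schedule."""
        cos = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
        return tau_min + (tau_max - tau_min) * cos
    ```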

    DACS: Domain Adaptation via Cross-domain Mixed Sampling

    Semantic segmentation models based on convolutional neural networks have recently displayed remarkable performance for a multitude of applications. However, these models typically do not generalize well when applied to new domains, especially when going from synthetic to real data. In this paper we address the problem of unsupervised domain adaptation (UDA), which attempts to train on labelled data from one domain (the source domain) while simultaneously learning from unlabelled data in the domain of interest (the target domain). Existing methods have seen success by training on pseudo-labels for these unlabelled images, and multiple techniques have been proposed to mitigate the low-quality pseudo-labels arising from the domain shift, with varying degrees of success. We propose DACS: Domain Adaptation via Cross-domain mixed Sampling, which mixes images from the two domains along with the corresponding labels and pseudo-labels. The model is then trained on these mixed samples in addition to the labelled data itself. We demonstrate the effectiveness of our solution by achieving state-of-the-art results on GTA5 to Cityscapes, a common synthetic-to-real semantic segmentation benchmark for UDA. Comment: This paper has been accepted to WACV 2021.
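
    A minimal sketch of the cross-domain mixing step in the spirit of ClassMix-style mixing; tensor shapes and the half-the-classes selection are illustrative assumptions:

    ```python
    import torch

    def cross_domain_mix(src_img, src_label, tgt_img, tgt_pseudo):
        """Paste the pixels of half the classes present in the source image
        onto the target image, mixing labels with pseudo-labels the same way.

        src_img, tgt_img: (3, H, W) images
        src_label, tgt_pseudo: (H, W) long tensors of class indices
        """
        classes = torch.unique(src_label)
        # Randomly select half of the classes present in the source image.
        chosen = classes[torch.randperm(len(classes))[: len(classes) // 2]]
        mask = torch.isin(src_label, chosen)                  # source pixels to paste
        mixed_img = torch.where(mask.unsqueeze(0), src_img, tgt_img)
        mixed_label = torch.where(mask, src_label, tgt_pseudo)
        return mixed_img, mixed_label
    ```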