In unsupervised domain adaptation (UDA), a model trained on source data (e.g.
synthetic) is adapted to target data (e.g. real-world) without access to target
annotation. Most previous UDA methods struggle with classes that have a similar
visual appearance in the target domain, as no ground truth is available to learn
the subtle appearance differences. To address this problem, we propose a Masked
Image Consistency (MIC) module to enhance UDA by learning spatial context
relations of the target domain as additional clues for robust visual
recognition. MIC enforces consistency between predictions of masked target
images, where random patches are withheld, and pseudo-labels that are generated
based on the complete image by an exponential moving average teacher. To
minimize the consistency loss, the network has to learn to infer the
predictions of the masked regions from their context. Due to its simple and
universal concept, MIC can be integrated into various UDA methods across
different visual recognition tasks such as image classification, semantic
segmentation, and object detection. MIC significantly improves the
state-of-the-art performance across the different recognition tasks for
synthetic-to-real, day-to-nighttime, and clear-to-adverse-weather UDA. For
instance, MIC achieves an unprecedented UDA performance of 75.9 mIoU and 92.8%
on GTA-to-Cityscapes and VisDA-2017, respectively, which corresponds to an
improvement of +2.1 and +3.0 percentage points over the previous state of the art.
The implementation is available at https://github.com/lhoyer/MIC.
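
For reference, the following is a minimal PyTorch-style sketch of the masked
consistency training step described above. The patch size, mask ratio, EMA
momentum, confidence weighting, and model interfaces are illustrative
assumptions, not the released implementation (see the repository above for that).

# Minimal sketch of a MIC training step. All hyperparameters (patch_size,
# mask_ratio, alpha, lambda_mic) and the model interfaces are illustrative
# assumptions, not values taken from the released code.
import torch
import torch.nn.functional as F


def patch_mask(images, patch_size=64, mask_ratio=0.7):
    """Withhold random square patches of the target images (set to zero)."""
    b, _, h, w = images.shape
    keep = (torch.rand(b, 1, h // patch_size, w // patch_size,
                       device=images.device) > mask_ratio).float()
    return images * F.interpolate(keep, size=(h, w), mode="nearest")


@torch.no_grad()
def ema_update(teacher, student, alpha=0.999):
    """Exponential moving average update of the teacher weights."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s.detach(), alpha=1 - alpha)


def mic_loss(student, teacher, target_images, lambda_mic=1.0):
    """Consistency between masked predictions and full-image pseudo-labels."""
    with torch.no_grad():
        # The teacher sees the complete image and produces pseudo-labels,
        # here weighted by prediction confidence (one common quality measure).
        logits = teacher(target_images)
        pseudo_labels = logits.argmax(dim=1)
        conf = logits.softmax(dim=1).max(dim=1).values

    # The student only sees the masked image, so minimizing this loss forces
    # it to infer the withheld regions from their spatial context.
    masked_logits = student(patch_mask(target_images))
    loss = F.cross_entropy(masked_logits, pseudo_labels, reduction="none")
    return lambda_mic * (conf * loss).mean()


# Per iteration: add mic_loss to the UDA method's existing losses, step the
# student optimizer, then refresh the teacher via ema_update(teacher, student).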