Label Calibration for Semantic Segmentation Under Domain Shift

Abstract

The performance of a pre-trained semantic segmentation model is likely to decrease substantially on data from a new domain. We show that a pre-trained model can be adapted to unlabelled target-domain data by computing soft-label prototypes under the domain shift and making predictions according to the prototype closest to the vector of predicted class probabilities. The proposed adaptation procedure is fast, comes almost for free in terms of computational resources, and leads to considerable performance improvements. We demonstrate the benefits of such label calibration on the highly practical synthetic-to-real semantic segmentation problem.

Comment: ICLR 2023 Workshop on Pitfalls of Limited Data and Computation for Trustworthy ML
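
The abstract's procedure can be summarised as two steps: estimate one soft-label prototype per class from the model's own predictions on unlabelled target images, then re-label each pixel by its nearest prototype. The sketch below is an illustrative reading of that idea, not the authors' exact implementation; names such as `model`, `target_loader`, and the Euclidean distance choice are assumptions.

```python
# Minimal sketch of prototype-based label calibration (assumed details marked).
import torch
import torch.nn.functional as F

@torch.no_grad()
def compute_prototypes(model, target_loader, num_classes, device="cuda"):
    """Average the predicted probability vectors of pixels pseudo-labelled as
    each class, giving one soft-label prototype per class (shape C x C)."""
    proto_sum = torch.zeros(num_classes, num_classes, device=device)
    proto_cnt = torch.zeros(num_classes, device=device)
    for images in target_loader:                               # unlabelled target images
        probs = F.softmax(model(images.to(device)), dim=1)     # (B, C, H, W)
        probs = probs.permute(0, 2, 3, 1).reshape(-1, num_classes)
        pseudo = probs.argmax(dim=1)                           # hard pseudo-labels
        for c in range(num_classes):
            mask = pseudo == c
            if mask.any():
                proto_sum[c] += probs[mask].sum(dim=0)
                proto_cnt[c] += mask.sum()
    return proto_sum / proto_cnt.clamp(min=1).unsqueeze(1)

@torch.no_grad()
def calibrated_predict(model, image, prototypes, device="cuda"):
    """Assign each pixel the class whose prototype is closest (Euclidean
    distance, an assumed choice) to the pixel's predicted probability vector."""
    probs = F.softmax(model(image.to(device)), dim=1)          # (1, C, H, W)
    b, c, h, w = probs.shape
    flat = probs.permute(0, 2, 3, 1).reshape(-1, c)            # (N, C)
    dists = torch.cdist(flat, prototypes)                      # (N, C)
    return dists.argmin(dim=1).reshape(b, h, w)                # calibrated labels
```

Because only the prototype matrix (num_classes x num_classes) is estimated, the adaptation requires no gradient updates to the pre-trained network, which is consistent with the abstract's claim that the procedure is nearly free in terms of computation.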
