Panoptic Scene Graph Generation (PSG) parses objects and predicts their
relationships (predicates) to connect human language and visual scenes. However,
annotators' differing language preferences and semantic overlaps between
predicates lead to biased predicate annotations in the dataset, i.e., different
predicates annotated for the same object pairs. Biased predicate annotations make
it hard for PSG models to construct a clear decision plane among predicates, which
greatly hinders the real-world application of PSG models. To address the intrinsic bias
above, we propose a novel framework named ADTrans to adaptively transfer biased
predicate annotations to informative and unified ones. To ensure consistency
and accuracy during the transfer process, we propose to measure the invariance
of representations in each predicate class, and learn unbiased prototypes of
predicates with different intensities. Meanwhile, we continuously measure the
distribution changes between each representation and its prototype, and
continually screen out potentially biased data. Finally, with the unbiased
predicate-prototype representation embedding space, biased annotations are
easily identified. Experiments show that ADTrans significantly improves the
performance of benchmark models, achieves new state-of-the-art results, and
demonstrates strong generalization and effectiveness across multiple datasets.
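The core screening idea can be illustrated with a minimal sketch: compute a prototype embedding per predicate class, then flag samples whose representation lies markedly closer to another class's prototype than to their own, suggesting the argmax class as a candidate transferred label. This is an assumption-laden simplification (mean prototypes, cosine similarity, a fixed margin, and the names `class_prototypes` / `screen_biased` are all illustrative, not the paper's actual formulation).

```python
import numpy as np

def class_prototypes(embeddings, labels, num_classes):
    # Mean embedding per predicate class (a hypothetical stand-in for
    # the paper's learned unbiased prototypes).
    protos = np.zeros((num_classes, embeddings.shape[1]))
    for c in range(num_classes):
        protos[c] = embeddings[labels == c].mean(axis=0)
    # L2-normalize so cosine similarity reduces to a dot product.
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

def screen_biased(embeddings, labels, protos, margin=0.1):
    # Flag samples whose embedding is more similar to another class's
    # prototype than to their own class's prototype by more than `margin`.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = emb @ protos.T                      # (N, num_classes) cosine sims
    own = sims[np.arange(len(labels)), labels]
    # Mask out each sample's own class before taking the best "other" class.
    own_mask = np.eye(protos.shape[0], dtype=bool)[labels]
    best_other = np.where(own_mask, -np.inf, sims).max(axis=1)
    biased = best_other - own > margin
    suggested = sims.argmax(axis=1)            # candidate transferred label
    return biased, suggested
```

In practice one would iterate: re-estimate prototypes after relabeling or down-weighting flagged samples, so that prototypes themselves become less contaminated by biased annotations.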