Exploring Object Relation in Mean Teacher for Cross-Domain Detection
Rendering synthetic data (e.g., 3D CAD-rendered images) to generate
annotations for learning deep models in vision tasks has attracted increasing
attention in recent years. However, simply applying the models learnt on
synthetic images may lead to high generalization error on real images due to
domain shift. To address this issue, recent progress in cross-domain
recognition has featured the Mean Teacher, which casts unsupervised domain
adaptation as semi-supervised learning. The domain gap is
thus naturally bridged with consistency regularization in a teacher-student
scheme. In this work, we advance this Mean Teacher paradigm to be applicable
for cross-domain detection. Specifically, we present Mean Teacher with Object
Relations (MTOR), which remolds Mean Teacher on the backbone of Faster R-CNN by
integrating object relations into the consistency cost between the teacher and
student modules. Technically, MTOR first learns
relational graphs that capture similarities between pairs of regions for
teacher and student respectively. The whole architecture is then optimized with
three consistency regularizations: 1) region-level consistency to align the
region-level predictions between teacher and student, 2) inter-graph
consistency for matching the graph structures between teacher and student, and
3) intra-graph consistency to enhance the similarity between regions of the
same class within the student's graph. Extensive experiments are conducted on
transfers across Cityscapes, Foggy Cityscapes, and SIM10k, and superior results
are reported compared to state-of-the-art approaches. More remarkably, we
obtain a new single-model record of 22.8% mAP on the Syn2Real detection dataset.
Comment: CVPR 2019; the code and models of our MTOR are publicly available at:
https://github.com/caiqi/mean-teacher-cross-domain-detectio
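The three consistency terms can be sketched in a minimal NumPy form. This is an illustrative assumption, not the paper's exact formulation: the function names, the cosine-similarity graph construction, and the squared-error / edge-weight penalties are all placeholders for the losses MTOR actually uses.

```python
import numpy as np

def relational_graph(feats):
    # Relational graph: cosine similarity between every pair of
    # region features (rows of `feats`).
    norm = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return norm @ norm.T

def region_consistency(p_teacher, p_student):
    # Region-level consistency: align the teacher's and student's
    # region-level predictions (mean squared difference here).
    return float(np.mean((p_teacher - p_student) ** 2))

def inter_graph_consistency(g_teacher, g_student):
    # Inter-graph consistency: match the two relational graph
    # structures entry by entry.
    return float(np.mean((g_teacher - g_student) ** 2))

def intra_graph_consistency(g_student, teacher_labels):
    # Intra-graph consistency: pull together student regions that the
    # teacher assigns to the same class, by penalizing weak edges
    # (similarity below 1) on same-class pairs.
    same = (teacher_labels[:, None] == teacher_labels[None, :]).astype(float)
    n_pairs = same.sum()
    return float(((1.0 - g_student) * same).sum() / max(n_pairs, 1.0))
```

In training, the three terms would be weighted and added to the supervised detection loss of the Faster R-CNN student; the weights are hyperparameters not specified here.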
Bi-Directional Generation for Unsupervised Domain Adaptation
Unsupervised domain adaptation transfers knowledge from a well-established
source domain to an unlabeled target domain. Conventional methods that
forcefully reduce the domain discrepancy in the latent space risk destroying
the intrinsic data structure. To balance mitigating the domain gap against
preserving the inherent structure, we propose a Bi-Directional Generation
domain adaptation model with consistent classifiers that interpolates two
intermediate domains to bridge the source and target domains.
Specifically, two cross-domain generators are employed to synthesize one domain
conditioned on the other. We also design two classifiers that are jointly
optimized to maximize the consistency of target sample predictions; these
consistent classifiers and the cross-domain alignment constraints further
enhance performance. Extensive experiments verify that our model outperforms
the state of the art on standard cross-domain visual benchmarks.
Comment: 9 pages, 4 figures
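The classifier-consistency idea can be sketched in NumPy as follows. This is a hedged sketch under strong simplifications: the generators are reduced to linear maps and the prediction discrepancy to a mean absolute difference, whereas the paper's generators and consistency objective are learned networks and losses not reproduced here.

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with a numerical-stability shift.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def generate(x, W):
    # A cross-domain generator, reduced to a linear map for
    # illustration: it synthesizes features of one domain
    # conditioned on samples of the other.
    return x @ W

def classifier_consistency(x_target, W1, b1, W2, b2):
    # Discrepancy between the two classifiers' predictions on target
    # samples; joint training minimizes this quantity, i.e. it
    # maximizes consistency on target sample prediction.
    p1 = softmax(x_target @ W1 + b1)
    p2 = softmax(x_target @ W2 + b2)
    return float(np.abs(p1 - p2).mean())
```

Two such generators, one per direction, would produce the two intermediate domains; the consistency term above is then minimized alongside the cross-domain alignment constraints.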