Exploring Object Relation in Mean Teacher for Cross-Domain Detection
Rendering synthetic data (e.g., 3D CAD-rendered images) to generate
annotations for learning deep models in vision tasks has attracted increasing
attention in recent years. However, simply applying the models learnt on
synthetic images may lead to high generalization error on real images due to
domain shift. To address this issue, recent progress in cross-domain
recognition has featured the Mean Teacher, which casts unsupervised domain
adaptation as semi-supervised learning. The domain gap is
thus naturally bridged with consistency regularization in a teacher-student
scheme. In this work, we advance this Mean Teacher paradigm to be applicable
for cross-domain detection. Specifically, we present Mean Teacher with Object
Relations (MTOR), which remolds Mean Teacher on the Faster R-CNN backbone by
integrating object relations into the consistency cost between the teacher and
student modules. Technically, MTOR first learns
relational graphs that capture similarities between pairs of regions for
teacher and student respectively. The whole architecture is then optimized with
three consistency regularizations: 1) region-level consistency to align the
region-level predictions between teacher and student, 2) inter-graph
consistency for matching the graph structures between teacher and student, and
3) intra-graph consistency to enhance the similarity between regions of the
same class within the student's graph. Extensive experiments are conducted on
the transfers across Cityscapes, Foggy Cityscapes, and SIM10k, and superior
results are reported when compared to state-of-the-art approaches. More
remarkably, we obtain a new single-model record of 22.8% mAP on the Syn2Real detection
dataset.
Comment: CVPR 2019; the code and models of our MTOR are publicly available at:
https://github.com/caiqi/mean-teacher-cross-domain-detection
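The three consistency terms can be pictured with a minimal sketch. This is not the authors' released code: it assumes per-region class probabilities and pooled region features are already available for the same proposals from the teacher and student Faster R-CNN heads, and it uses cosine-similarity graphs with simple MSE-style consistency costs purely as illustrative choices.

```python
# Minimal sketch of MTOR-style consistency terms (illustrative, not the official code).
import torch
import torch.nn.functional as F

def relational_graph(region_feats):
    # Pairwise cosine similarity between region features:
    # [num_regions, feat_dim] -> [num_regions, num_regions].
    feats = F.normalize(region_feats, dim=1)
    return feats @ feats.t()

def mtor_consistency(t_probs, s_probs, t_feats, s_feats):
    # 1) Region-level consistency: align per-region class predictions
    #    of student and teacher (teacher is not back-propagated through).
    region_loss = F.mse_loss(s_probs, t_probs.detach())

    # 2) Inter-graph consistency: match the student's relational graph
    #    to the teacher's graph structure.
    g_t = relational_graph(t_feats).detach()
    g_s = relational_graph(s_feats)
    inter_loss = F.mse_loss(g_s, g_t)

    # 3) Intra-graph consistency: pull together student regions that the
    #    teacher assigns to the same class (hard-label grouping is an assumption).
    t_labels = t_probs.argmax(dim=1)
    same_class = (t_labels[:, None] == t_labels[None, :]).float()
    intra_loss = ((1.0 - g_s) * same_class).sum() / same_class.sum().clamp(min=1)

    return region_loss + inter_loss + intra_loss
```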
SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning
A steady momentum of innovations and breakthroughs has convincingly pushed
the limits of unsupervised image representation learning. Compared to static 2D
images, video has one more dimension (time). The inherent supervision existing
in such sequential structure offers a fertile ground for building unsupervised
learning models. In this paper, we compose a trilogy of exploring the basic and
generic supervision in the sequence from spatial, spatiotemporal and sequential
perspectives. We materialize the supervisory signals by determining whether a
pair of samples comes from the same frame or the same video, and whether a
triplet of samples is in the correct temporal order. We uniquely regard the
signals as the foundation in contrastive learning and derive a particular form
named Sequence Contrastive Learning (SeCo). SeCo shows superior results under
the linear protocol on action recognition (Kinetics), untrimmed activity
recognition (ActivityNet) and object tracking (OTB-100). More remarkably, SeCo
demonstrates considerable improvements over recent unsupervised pre-training
techniques, and surpasses fully-supervised ImageNet pre-training on the action
recognition task by 2.96% and 6.47% in accuracy on UCF101 and HMDB51,
respectively. Source code is available at
\url{https://github.com/YihengZhang-CV/SeCo-Sequence-Contrastive-Learning}.
Comment: AAAI 2021; code is publicly available at:
https://github.com/YihengZhang-CV/SeCo-Sequence-Contrastive-Learning
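A minimal sketch of how the three supervisory signals could be combined, assuming standard InfoNCE-style contrastive terms for the spatial and spatiotemporal signals plus a classification loss for temporal order; the names and inputs (anchor, same_frame, same_video, order_logits) are hypothetical placeholders, not the released SeCo interface.

```python
# Illustrative combination of SeCo's three supervisory signals (not the official code).
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    # Standard InfoNCE: the anchor should be closer to its positive than to negatives.
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    negatives = F.normalize(negatives, dim=1)
    pos = (anchor * positive).sum(dim=1, keepdim=True)  # [B, 1]
    neg = anchor @ negatives.t()                        # [B, K]
    logits = torch.cat([pos, neg], dim=1) / temperature
    labels = torch.zeros(anchor.size(0), dtype=torch.long)  # positive is index 0
    return F.cross_entropy(logits, labels)

def seco_losses(anchor, same_frame, same_video, negatives, order_logits, order_labels):
    # Spatial signal: another augmentation of the same frame is a positive.
    intra_frame = info_nce(anchor, same_frame, negatives)
    # Spatiotemporal signal: a different frame from the same video is a positive.
    intra_video = info_nce(anchor, same_video, negatives)
    # Sequential signal: classify whether a frame triplet is in the correct temporal order.
    temporal_order = F.cross_entropy(order_logits, order_labels)
    return intra_frame + intra_video + temporal_order
```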