
    Learning generalizable and transferable representations across domains and modalities

    While deep neural networks attain state-of-the-art performance on computer vision tasks with the help of massive supervised datasets, it is usually assumed that all training and test examples are drawn independently from the same distribution. In real-world applications, however, dataset bias and domain shift violate this assumption: test data can come from different domains represented by different distributions, which can seriously degrade model performance. Learning generalizable and transferable representations is therefore important to make a model robust to many types of distributional shift. Domain transfer methods such as Domain Adaptation (DA) and Domain Generalization (DG) have been proposed to learn generalizable and transferable features across domains. Domain transfer consists of two steps: 1) pre-training, where a model is first pre-trained on an upstream task with a massive supervised dataset, e.g., ImageNet, and 2) transfer (adaptation), where the model is fine-tuned on downstream multi-domain data. In this thesis, we highlight the limitations of current domain transfer approaches and relax them to produce more practical and diverse domain transfer methods. Specifically, we study:

    1) Cross-Domain Self-supervised Learning for Domain Adaptation. Prior DA methods use ImageNet pre-trained models for weight initialization (i.e., the pre-training stage), yet the downstream data can be very different from ImageNet. Previous domain adaptation approaches also assume there are many labeled data in the source domain, whereas some applications (e.g., medical imaging) may not have enough source labels. We explore the problem of few-shot domain adaptation, where only a few source labels are available, and propose cross-domain self-supervised pre-training, which uses only unlabeled multi-domain data (a minimal sketch of this idea appears below). We show that our method significantly boosts the performance of diverse domain transfer tasks.

    2) Pre-training for Domain Adaptation. While many DA and DG methods have been proposed and studied extensively in prior work, little attention has been paid to pre-training for domain transfer. We provide comprehensive experiments and an in-depth analysis of pre-training in terms of network architectures, datasets, and loss functions. We observe significant improvements from modern pre-training and propose to modernize the current evaluation protocols.

    3) Multimodal Representation Learning for Domain Adaptation. We devise self-supervised formulations for multimodal domain adaptation that promote better knowledge transfer by aligning multimodal features. We first explore a language-vision task where we align the features of multiple languages and images. We then explore video domain adaptation with RGB and Flow modalities and propose a joint contrastive regularization that interplays among cross-modal and cross-domain features.

    4) Domain Adaptive Keypoint Detection. Lastly, we explore domain adaptive keypoint detection tasks (e.g., human and animal pose estimation), which are not well explored in prior work. We propose a unified framework for diverse keypoint detection scenarios in which different types of domain shift can occur. To handle these shifts, we propose multi-level feature alignment using input-level and output-level cues and show that our method generalizes well to diverse domain adaptive keypoint detection tasks.
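    The abstract does not give implementation details, so the following is only a minimal sketch of what a cross-domain contrastive objective over unlabeled source and target features could look like, assuming a PyTorch encoder that produces per-image embeddings. The function name, the pseudo-positive pairing rule, and the temperature value are illustrative assumptions, not the thesis' actual formulation.

```python
# Hedged sketch: cross-domain InfoNCE-style loss on unlabeled features.
# Assumes feats_src / feats_tgt come from the same encoder applied to
# source- and target-domain images; names and defaults are hypothetical.
import torch
import torch.nn.functional as F

def cross_domain_contrastive_loss(feats_src, feats_tgt, temperature=0.1):
    """Pull each source feature toward its most similar target feature
    (treated as a pseudo-positive) and push it away from the rest."""
    feats_src = F.normalize(feats_src, dim=1)        # (N, D)
    feats_tgt = F.normalize(feats_tgt, dim=1)        # (M, D)
    sim = feats_src @ feats_tgt.t() / temperature    # (N, M) cosine logits

    # Nearest cross-domain neighbor serves as the pseudo-positive index.
    loss_s2t = F.cross_entropy(sim, sim.argmax(dim=1))
    loss_t2s = F.cross_entropy(sim.t(), sim.t().argmax(dim=1))
    return 0.5 * (loss_s2t + loss_t2s)

# Usage sketch:
#   feats_src = encoder(x_src); feats_tgt = encoder(x_tgt)
#   loss = cross_domain_contrastive_loss(feats_src, feats_tgt)
```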

    DA-RDD: toward domain adaptive road damage detection across different countries

    Recent advances in road damage detection rely on a large amount of labeled data, while collecting pavement images is labor-intensive and time-consuming. Unsupervised Domain Adaptation (UDA) provides a promising way to adapt a model from a source domain to a target domain; however, cross-domain crack detection is still an open problem. In this paper, we propose domain adaptive road damage detection, termed DA-RDD, which incorporates image-level and instance-level feature alignment for domain-invariant representation learning in an adversarial manner. Specifically, importance weighting is introduced to evaluate the intermediate samples for image-level alignment between domains, and we aggregate RoI-wise features with multi-scale contextual information to recover crack details for progressive domain alignment at the instance level. Additionally, a large-scale road damage dataset named RDD2021, built on the Road Damage Dataset 2020 (RDD2020), is constructed with 100k synthetic labeled distress images. Extensive experiments on damage detection across different countries demonstrate the universality and superiority of DA-RDD, and empirical studies on RDD2021 further confirm its effectiveness. To the best of our knowledge, this is the first work to investigate domain adaptive pavement crack detection, and we expect the contributions of this work will facilitate the development of generalized road damage detection in the future.
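    The abstract describes adversarial image-level feature alignment but not its code, so the following is a hedged sketch of the standard mechanism behind such alignment: a gradient reversal layer feeding a domain classifier. The class names, channel sizes, and training-step comments are illustrative assumptions, not the DA-RDD implementation.

```python
# Hedged sketch: adversarial image-level alignment via gradient reversal.
# Assumes detector backbone features of shape (B, C, H, W); all names here
# (GradReverse, DomainClassifier) are hypothetical, not from the paper.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # flip gradient sign

class DomainClassifier(nn.Module):
    """Predicts source (0) vs. target (1) from backbone feature maps."""
    def __init__(self, in_channels=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, 1))

    def forward(self, feat, lambd=1.0):
        feat = GradReverse.apply(feat, lambd)
        return self.net(feat)

# Training-step sketch: the domain classifier learns to tell domains apart,
# while the reversed gradient pushes the backbone toward domain-invariant
# features; detection loss is added on labeled source images.
#   logits_s = domain_clf(backbone(img_source))
#   logits_t = domain_clf(backbone(img_target))
#   d_loss = bce(logits_s, zeros) + bce(logits_t, ones)
#   total_loss = detection_loss + d_loss
```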