
    Zero-Shot Deep Domain Adaptation

    Domain adaptation is an important tool for transferring knowledge about a task (e.g. classification) learned in a source domain to a second, target domain. Current approaches assume that task-relevant target-domain data is available during training. We demonstrate how to perform domain adaptation when no such data is available. To tackle this issue, we propose zero-shot deep domain adaptation (ZDDA), which uses privileged information from task-irrelevant dual-domain pairs. ZDDA learns a source-domain representation that is not only tailored for the task of interest but also close to the target-domain representation. The solution to the task of interest in the source domain (e.g. a classifier for classification tasks), which is jointly trained with the source-domain representation, is therefore applicable to both the source and target representations. Using the MNIST, Fashion-MNIST, NIST, EMNIST, and SUN RGB-D datasets, we show that ZDDA can perform domain adaptation in classification tasks without access to task-relevant target-domain training data. We also extend ZDDA to perform sensor fusion in the SUN RGB-D scene classification task by simulating task-relevant target-domain representations with task-relevant source-domain data. To the best of our knowledge, ZDDA is the first domain adaptation and sensor fusion method that requires no task-relevant target-domain data. The underlying principle is not particular to computer vision data and should be extensible to other domains.
    Comment: This paper is accepted to the European Conference on Computer Vision (ECCV), 201
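    The two-part objective described above can be sketched in a few lines. This is a minimal numpy illustration, not the paper's implementation: the linear "encoders", the data shapes, and the direction of alignment (target encoder mimicking a source encoder on task-irrelevant pairs) are all toy assumptions standing in for deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task-irrelevant dual-domain pairs (e.g. RGB/depth views of
# the same scenes) and task-relevant source-domain data with labels.
pairs_src = rng.normal(size=(32, 16))   # source modality of the pairs
pairs_tgt = rng.normal(size=(32, 16))   # target modality, paired with the above
task_src = rng.normal(size=(32, 16))    # task-relevant source-domain inputs
labels = rng.integers(0, 3, size=32)

# Linear encoders and classifier: stand-ins for the deep networks in ZDDA.
W_src = rng.normal(scale=0.1, size=(16, 8))
W_tgt = rng.normal(scale=0.1, size=(16, 8))
W_cls = rng.normal(scale=0.1, size=(8, 3))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# 1) Alignment loss: the target encoder is pulled toward the source
#    encoder on the task-irrelevant pairs, so the two representations
#    end up close to each other.
align_loss = np.mean((pairs_src @ W_src - pairs_tgt @ W_tgt) ** 2)

# 2) Task loss: a classifier trained jointly with the source representation.
probs = softmax(task_src @ W_src @ W_cls)
task_loss = -np.mean(np.log(probs[np.arange(32), labels] + 1e-12))

# Minimizing both jointly makes the classifier usable on either representation.
total_loss = task_loss + align_loss
print(total_loss)
```

    Because the classifier only ever sees the shared representation space, it transfers to target-domain features once the alignment loss has pulled the two encoders together.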

    Learning to Generate Novel Domains for Domain Generalization

    This paper focuses on domain generalization (DG), the task of learning from multiple source domains a model that generalizes well to unseen domains. A main challenge for DG is that the available source domains often exhibit limited diversity, hampering the model's ability to learn to generalize. We therefore employ a data generator to synthesize data from pseudo-novel domains to augment the source domains. This explicitly increases the diversity of available training domains and leads to a more generalizable model. To train the generator, we model the distribution divergence between source and synthesized pseudo-novel domains using optimal transport, and maximize the divergence. To ensure that semantics are preserved in the synthesized data, we further impose cycle-consistency and classification losses on the generator. Our method, L2A-OT (Learning to Augment by Optimal Transport), outperforms current state-of-the-art DG methods on four benchmark datasets.
    Comment: To appear in ECCV'2
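    The training signal for the generator, maximizing an optimal-transport divergence from the source distribution, can be illustrated with 1-D features, where the OT (Wasserstein-1) distance between equal-sized samples has a closed form via sorting. The fixed "shift" generator below is a toy assumption; the actual method uses a learned conditional generator on image features, plus the cycle-consistency and classification terms omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D stand-ins for source-domain features.
src_feats = rng.normal(loc=0.0, scale=1.0, size=500)

def generator(x, shift):
    # Toy "generator": a fixed domain shift parameterized by `shift`.
    return x + shift

def wasserstein_1d(a, b):
    # Closed-form 1-D optimal transport distance between two empirical
    # distributions of equal size: match sorted samples.
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

# The generator is trained to *maximize* this divergence, producing a
# pseudo-novel domain; cycle-consistency and classification losses keep
# the semantics of the generated data intact.
d_small = wasserstein_1d(src_feats, generator(src_feats, 0.5))
d_large = wasserstein_1d(src_feats, generator(src_feats, 3.0))
print(d_small, d_large)  # a larger shift yields a larger OT divergence
```

    Gradient ascent on such a divergence pushes the generator away from the source distribution, which is exactly what creates the extra domain diversity the abstract argues for.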

    Scribble-based Domain Adaptation via Co-segmentation

    Although deep convolutional networks have reached state-of-the-art performance in many medical image segmentation tasks, they have typically demonstrated poor generalisation capability. To generalise from one domain (e.g. one imaging modality) to another, domain adaptation has to be performed. While supervised methods may lead to good performance, they require additional data to be fully annotated, which may not be an option in practice. In contrast, unsupervised methods need no additional annotations but are usually unstable and hard to train. In this work, we propose a novel weakly-supervised method. Instead of requiring detailed but time-consuming annotations, scribbles on the target domain are used to perform domain adaptation. This paper introduces a new formulation of domain adaptation based on structured learning and co-segmentation. Our method is easy to train, thanks to the introduction of a regularised loss. The framework is validated on Vestibular Schwannoma segmentation (T1 to T2 scans). Our proposed method outperforms unsupervised approaches and achieves comparable performance to a fully-supervised approach.
    Comment: Accepted at MICCAI 202
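    A common way to learn from scribbles is a partial cross-entropy that supervises only the annotated pixels, combined with a regulariser over the rest of the image. The sketch below is a simplified stand-in for the paper's regularised loss: the smoothness term, the class layout, and the weighting are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, C = 8, 8, 2
logits = rng.normal(size=(H, W, C))  # network output on one target image
scribble = np.full((H, W), -1)       # -1 marks unlabeled pixels
scribble[2, 2:6] = 1                 # a foreground scribble
scribble[6, 1:5] = 0                 # a background scribble

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

probs = softmax(logits)

# Partial cross-entropy: supervise only the scribbled pixels.
ys, xs = np.nonzero(scribble >= 0)
ce = -np.log(probs[ys, xs, scribble[ys, xs]] + 1e-12).mean()

# Simple smoothness regulariser over the whole image: penalise
# probability differences between neighbouring pixels.
reg = np.abs(np.diff(probs, axis=0)).mean() + np.abs(np.diff(probs, axis=1)).mean()

loss = ce + 0.1 * reg  # sparse supervision + dense regularisation
print(loss)
```

    The regulariser is what spreads the sparse scribble supervision across unlabeled pixels, which is why such losses stay easy to train compared with fully unsupervised adversarial alternatives.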

    Room temperature antiferromagnetic order in superconducting X_yFe_{2-x}Se_2 (X = Rb, K): a powder neutron diffraction study

    The magnetic and crystal structures of superconducting X_yFe_{2-x}Se_2 (X = Rb and K, with Tc = 31.5 K and 29.5 K respectively) have been studied by neutron powder diffraction at room temperature. Both crystals show an ordered iron-vacancy pattern, and the crystal structure is well described in the I4/m space group with lattice constants a = 8.799 A, c = 14.576 A and a = 8.730 A, c = 14.115 A, and refined stoichiometry x = 0.30(1), y = 0.83(2) and x = 0.34(1), y = 0.83(1) for the Rb and K crystals respectively. The structure contains one fully occupied iron position and one almost empty vacancy position. Assuming that the iron moment is ordered only on the fully occupied site, we have sorted out all eight irreducible representations (irreps) for the propagation vector k = 0 and found that the irreps tau_2 and tau_7 fit the experimental data well, with the moments along the c-axis. The moment amplitudes amount to 2.15(3) mu_B and 2.55(3) mu_B for tau_2, and 2.08(6) mu_B and 2.57(3) mu_B for tau_7, for the Rb and K crystals respectively. Irrep tau_2 corresponds to the Shubnikov group I4/m' and gives a constant-moment antiferromagnetic configuration, whereas tau_7 has no Shubnikov counterpart and allows two different magnetic moments in the structure.
    Comment: 5 pages, 1 table, 4 figure

    Unsupervised Domain Adaptation with Noise Resistible Mutual-Training for Person Re-identification

    © 2020, Springer Nature Switzerland AG. Unsupervised domain adaptation (UDA) for person re-identification (re-ID) is highly challenging due to large domain divergence and the absence of class overlap between domains. Pseudo-label-based self-training is one of the representative techniques for addressing UDA. However, label noise caused by unsupervised clustering remains a persistent problem for self-training methods. To suppress noise in the pseudo-labels, this paper proposes a Noise Resistible Mutual-Training (NRMT) method, which maintains two networks during training to perform collaborative clustering and mutual instance selection. On the one hand, collaborative clustering eases overfitting to noisy instances by allowing the two networks to use each other's pseudo-labels as additional supervision. On the other hand, mutual instance selection further selects reliable and informative instances for training according to the peer confidence and relationship disagreement of the networks. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art UDA methods for person re-ID.
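    The mutual instance selection step can be sketched as a filtering rule between the two peer networks. This is a heavily simplified illustration: the confidence threshold, the use of label agreement as a proxy for "relationship disagreement", and the random scores are all assumptions, not the NRMT criteria themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100
# Peer networks' confidence scores for each pseudo-labeled instance.
conf_a = rng.uniform(size=n)
conf_b = rng.uniform(size=n)
# Pseudo-labels assigned by each network's unsupervised clustering.
labels_a = rng.integers(0, 5, size=n)
labels_b = rng.integers(0, 5, size=n)

tau = 0.6  # hypothetical confidence threshold

# Mutual selection sketch: each network trains on instances its *peer*
# is confident about, and instances where the two clusterings disagree
# are treated as unreliable and dropped.
agree = labels_a == labels_b
keep_for_a = (conf_b > tau) & agree  # instances network A trains on
keep_for_b = (conf_a > tau) & agree  # instances network B trains on
print(keep_for_a.sum(), keep_for_b.sum())
```

    Cross-checking against the peer rather than against a network's own confidence is what keeps each model from reinforcing its own clustering mistakes.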