
    Unsupervised Domain Adaptation for 3D Keypoint Estimation via View Consistency

    In this paper, we introduce a novel unsupervised domain adaptation technique for the task of 3D keypoint prediction from a single depth scan or image. Our key idea is to exploit the fact that predictions from different views of the same or similar objects should be consistent with each other. Such view consistency provides effective regularization for keypoint prediction on unlabeled instances. In addition, we introduce a geometric alignment term to regularize predictions in the target domain. The resulting loss function can be effectively optimized via alternating minimization. We demonstrate the effectiveness of our approach on real datasets, with experimental results showing that it is superior to state-of-the-art general-purpose domain adaptation techniques. Comment: ECCV 2018
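
    For illustration only, the view-consistency idea described above can be sketched as a simple regularizer: keypoints predicted from two views of the same object are mapped into a common frame and penalized for disagreeing. The network `model`, its output format, and the known relative pose between views are assumptions of this sketch, not details taken from the paper.

```python
# Minimal sketch of a view-consistency regularizer for 3D keypoint prediction
# (illustrative only, not the authors' implementation). `model` is assumed to
# map a depth view to K 3D keypoints in that view's camera frame, and the
# relative rigid transform (R_ab, t_ab) between the two views is assumed known.
import torch

def view_consistency_loss(model, view_a, view_b, R_ab, t_ab):
    """view_a, view_b: two depth views of the same (unlabeled) object.
    R_ab: (3, 3) rotation, t_ab: (3,) translation taking view-a coordinates
    into the view-b camera frame. Returns a scalar consistency loss."""
    kp_a = model(view_a)                 # (K, 3) keypoints predicted from view a
    kp_b = model(view_b)                 # (K, 3) keypoints predicted from view b
    kp_a_in_b = kp_a @ R_ab.T + t_ab     # express view-a keypoints in the view-b frame
    # penalize disagreement between the two predictions of the same keypoints
    return torch.mean(torch.sum((kp_a_in_b - kp_b) ** 2, dim=-1))
```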

    Statistically Motivated Second Order Pooling

    Second-order pooling, a.k.a. bilinear pooling, has proven effective for deep learning based visual recognition. However, the resulting second-order networks yield a final representation that is orders of magnitude larger than that of standard, first-order ones, making them memory-intensive and cumbersome to deploy. Here, we introduce a general, parametric compression strategy that can produce more compact representations than existing compression techniques, yet outperform both compressed and uncompressed second-order models. Our approach is motivated by a statistical analysis of the network's activations, relying on operations that lead to a Gaussian-distributed final representation, as inherently used by first-order deep networks. As evidenced by our experiments, this lets us outperform the state-of-the-art first-order and second-order models on several benchmark recognition datasets. Comment: Accepted to ECCV 2018. Camera-ready version. 14 pages, 5 figures, 3 tables
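
    For context, plain (uncompressed) second-order pooling can be sketched as below; it makes explicit the quadratic blow-up in the channel dimension that the proposed parametric compression addresses. Function names and shapes are illustrative, not the paper's code.

```python
# Minimal sketch of plain (uncompressed) second-order / bilinear pooling over a
# convolutional feature map, showing the C*C representation size that the
# paper's compression strategy targets. Names and shapes are illustrative.
import torch

def second_order_pool(feat):
    """feat: (B, C, H, W) convolutional activations.
    Returns a (B, C*C) flattened second-order descriptor."""
    B, C, H, W = feat.shape
    x = feat.reshape(B, C, H * W)                      # (B, C, N), N = H*W spatial locations
    gram = torch.bmm(x, x.transpose(1, 2)) / (H * W)   # (B, C, C) second-order statistics
    return gram.reshape(B, C * C)                      # quadratic in C, e.g. C=512 -> 262,144 dims
```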

    Zero-Shot Deep Domain Adaptation

    Domain adaptation is an important tool to transfer knowledge about a task (e.g. classification) learned in a source domain to a second, or target, domain. Current approaches assume that task-relevant target-domain data is available during training. We demonstrate how to perform domain adaptation when no such task-relevant target-domain data is available. To tackle this issue, we propose zero-shot deep domain adaptation (ZDDA), which uses privileged information from task-irrelevant dual-domain pairs. ZDDA learns a source-domain representation that is not only tailored for the task of interest but also close to the target-domain representation. Therefore, the solution to the source-domain task of interest (e.g. a classifier for classification tasks), which is jointly trained with the source-domain representation, is applicable to both the source and target representations. Using the MNIST, Fashion-MNIST, NIST, EMNIST, and SUN RGB-D datasets, we show that ZDDA can perform domain adaptation in classification tasks without access to task-relevant target-domain training data. We also extend ZDDA to perform sensor fusion in the SUN RGB-D scene classification task by simulating task-relevant target-domain representations with task-relevant source-domain data. To the best of our knowledge, ZDDA is the first domain adaptation and sensor fusion method that requires no task-relevant target-domain data. The underlying principle is not particular to computer vision data and should extend to other domains. Comment: Accepted to the European Conference on Computer Vision (ECCV), 2018
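
    The following is one possible reading of the ZDDA training objective as described in the abstract, not the authors' implementation: a source encoder is trained jointly on the task of interest and on matching a target-domain encoder's features over task-irrelevant dual-domain pairs. Which encoder is frozen and the choice of alignment loss are assumptions of this sketch.

```python
# One possible reading of the ZDDA objective, sketched from the abstract
# (not the authors' code): the source encoder is trained (i) on the labeled,
# task-relevant source data and (ii) to match a target-domain encoder's
# features on task-irrelevant dual-domain pairs. Freezing the target encoder
# and using an L2 alignment loss are assumptions of this sketch.
import torch
import torch.nn.functional as F

def zdda_step(src_enc, tgt_enc, clf, src_x, src_y, pair_src_x, pair_tgt_x):
    """src_x, src_y: labeled, task-relevant source-domain batch.
    pair_src_x, pair_tgt_x: task-irrelevant pairs showing the same content
    in the source and target modalities."""
    task_loss = F.cross_entropy(clf(src_enc(src_x)), src_y)   # supervised source task
    with torch.no_grad():
        tgt_feat = tgt_enc(pair_tgt_x)                        # target-domain reference features
    align_loss = F.mse_loss(src_enc(pair_src_x), tgt_feat)    # pull representations together
    return task_loss + align_loss
```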

    A Domain-Adaptive Two-Stream U-Net for Electron Microscopy Image Segmentation

    Deep networks such as the U-Net are outstanding at segmenting biomedical images, but only when enough training data is available. Here we introduce a Domain Adaptation approach that leverages plentiful training data from one domain to adapt the network weights to another domain where annotations are scarce. It relies on two coupled U-Net streams that either regularize or share corresponding weights, together with a differentiable loss function that approximates the Jaccard index. We showcase our approach by segmenting mitochondria and synapses in electron microscopy image stacks of mouse brain, where we have enough training data for one brain region but only very little for another. In such cases, we outperform state-of-the-art Domain Adaptation methods.
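
    A differentiable approximation of the Jaccard index of the kind mentioned above can be sketched as a soft IoU loss; the exact formulation used in the paper may differ.

```python
# Minimal sketch of a differentiable (soft) Jaccard/IoU loss of the kind the
# abstract refers to; the exact formulation used in the paper may differ.
import torch

def soft_jaccard_loss(pred, target, eps=1e-6):
    """pred: (B, 1, H, W) foreground probabilities in [0, 1];
    target: binary ground-truth mask of the same shape."""
    pred = pred.reshape(pred.shape[0], -1)
    target = target.reshape(target.shape[0], -1)
    inter = (pred * target).sum(dim=1)                      # soft intersection
    union = pred.sum(dim=1) + target.sum(dim=1) - inter     # soft union
    return (1.0 - (inter + eps) / (union + eps)).mean()     # 1 - IoU, averaged over the batch
```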