4 research outputs found

    Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation

    Full text link
    Existing techniques for adapting semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) treat all samples from the two domains in a global or category-aware manner. They do not consider inter-class variation within the target domain itself or within an estimated category, which limits their ability to encode domains with a multi-modal data distribution. To overcome this limitation, we introduce a learnable clustering module and a novel domain adaptation framework called cross-domain grouping and alignment. To cluster samples across domains with the aim of maximizing domain alignment without forgetting precise segmentation ability on the source domain, we present two loss functions that encourage semantic consistency and orthogonality among the clusters. We also present a loss that addresses the class imbalance problem, another limitation of previous methods. Our experiments show that our method consistently boosts adaptation performance in semantic segmentation, outperforming the state of the art on various domain adaptation settings. Comment: AAAI 202
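The two cluster-level objectives mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's definitive formulation: the function names, the Frobenius-norm form of the orthogonality penalty, and the symmetric-KL form of the consistency term are all assumptions made for the example.

```python
import numpy as np

def orthogonality_loss(prototypes):
    """Push cluster prototypes toward mutual orthogonality.

    Illustrative penalty: ||P P^T - I||_F^2 on L2-normalized rows,
    so the loss is 0 exactly when prototypes are orthonormal.
    """
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    gram = P @ P.T
    return np.sum((gram - np.eye(P.shape[0])) ** 2)

def semantic_consistency_loss(p_source, p_target):
    """Encourage source/target samples in the same cluster to agree.

    Illustrative term: symmetric KL divergence between the per-cluster
    class distributions estimated on each domain (rows sum to 1).
    """
    eps = 1e-8
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)), axis=-1)
    return np.mean(kl(p_source, p_target) + kl(p_target, p_source))
```

Both terms vanish in the ideal case (orthonormal prototypes, identical per-cluster class distributions), which is what makes them usable as regularizers alongside the segmentation loss.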

    Unpaired Cross-Spectral Pedestrian Detection Via Adversarial Feature Learning

    No full text
    Despite significant advances in recent studies, existing pedestrian detection methods still show limited performance under challenging illumination conditions, especially at nighttime. To address this, cross-spectral pedestrian detection methods using color and thermal images have been presented and have shown substantial performance gains under such challenging circumstances. However, their paired cross-spectral settings have limited applicability in real-world scenarios. To overcome this, we propose a novel learning framework for cross-spectral pedestrian detection in an unpaired setting. Based on the assumption that features from color and thermal images share characteristics in a common feature space and thus benefit from their complementary information, we design separate feature embedding networks for color and thermal images followed by a shared detection network. To further improve the cross-spectral feature representation, we apply an adversarial learning scheme to the intermediate features of the color and thermal images. Experiments demonstrate the outstanding performance of the proposed method on the KAIST multi-spectral benchmark in comparison to state-of-the-art methods.
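The adversarial scheme described above can be sketched as a standard two-player objective: a domain discriminator tries to tell which modality an intermediate feature came from, while the embedding networks are trained against flipped labels so the two feature distributions become indistinguishable. The helper names and the binary cross-entropy form below are illustrative assumptions, not the paper's exact losses.

```python
import numpy as np

def bce(pred, label, eps=1e-8):
    """Binary cross-entropy for sigmoid outputs in (0, 1)."""
    return -np.mean(label * np.log(pred + eps)
                    + (1 - label) * np.log(1 - pred + eps))

def discriminator_loss(p_color, p_thermal):
    """Discriminator step: learn to separate the modalities
    (color features labeled 1, thermal features labeled 0)."""
    return bce(p_color, 1.0) + bce(p_thermal, 0.0)

def adversarial_loss(p_color, p_thermal):
    """Embedding-network step: flipped labels, so the embedders are
    rewarded when the discriminator cannot tell the modalities apart."""
    return bce(p_color, 0.0) + bce(p_thermal, 1.0)
```

At the equilibrium of this game the discriminator outputs 0.5 everywhere, i.e. color and thermal features are mapped into a shared space where the downstream detection network cannot distinguish their origin.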