Transfer Adversarial Hashing for Hamming Space Retrieval
Hashing is widely applied to large-scale image retrieval due to the storage
and retrieval efficiency. Existing work on deep hashing assumes that the
database in the target domain is identically distributed with the training set
in the source domain. This paper relaxes this assumption to a transfer
retrieval setting, which allows the database and the training set to come from
different but relevant domains. However, the transfer retrieval setting will
introduce two technical difficulties: first, the hash model trained on the
source domain cannot work well on the target domain due to the large
distribution gap; second, the domain gap makes it difficult to concentrate the
database points to be within a small Hamming ball. As a consequence, transfer
retrieval performance within Hamming Radius 2 degrades significantly in
existing hashing methods. This paper presents Transfer Adversarial Hashing
(TAH), a new hybrid deep architecture that incorporates a pairwise
t-distribution cross-entropy loss to learn concentrated hash codes and an
adversarial network to align the data distributions between the source and
target domains. TAH can generate compact transfer hash codes for efficient
image retrieval on both source and target domains. Comprehensive experiments
validate that TAH yields state-of-the-art Hamming space retrieval performance
on standard datasets.
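The pairwise t-distribution cross-entropy loss described above can be sketched as follows. This is an illustrative NumPy toy under assumed details (the Student-t-style kernel `alpha / (alpha + d)` over Hamming distance and all helper names are assumptions based on the abstract, not the paper's implementation):

```python
import numpy as np

def hamming_distance(b1, b2):
    # b1, b2: binary codes in {-1, +1}, shape (K,)
    K = b1.shape[0]
    return 0.5 * (K - np.dot(b1, b2))

def t_similarity(d, alpha=1.0):
    # Student-t style kernel: heavy tails keep similar pairs
    # pulled toward small Hamming distances
    return alpha / (alpha + d)

def pairwise_t_cross_entropy(codes, sim_labels, alpha=1.0):
    # codes: (N, K) hash codes; sim_labels: (N, N), 1 for similar pairs
    n = codes.shape[0]
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = hamming_distance(codes[i], codes[j])
            p = t_similarity(d, alpha)  # probability the pair is similar
            s = sim_labels[i, j]
            loss += -(s * np.log(p + 1e-8) + (1 - s) * np.log(1 - p + 1e-8))
    return loss
```

With this kernel, similar pairs are driven toward Hamming distance 0 and dissimilar pairs are pushed apart, which is one plausible way to concentrate relevant database points within a small Hamming ball.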
Multi-Adversarial Domain Adaptation
Recent advances in deep domain adaptation reveal that adversarial learning
can be embedded into deep networks to learn transferable features that reduce
distribution discrepancy between the source and target domains. Existing domain
adversarial adaptation methods based on single domain discriminator only align
the source and target data distributions without exploiting the complex
multimode structures. In this paper, we present a multi-adversarial domain
adaptation (MADA) approach, which captures multimode structures to enable
fine-grained alignment of different data distributions based on multiple domain
discriminators. The adaptation can be achieved by stochastic gradient descent
with the gradients computed by back-propagation in linear time. Empirical
evidence demonstrates that the proposed model outperforms state-of-the-art
methods on standard domain adaptation datasets.
Comment: AAAI 2018 Oral. arXiv admin note: substantial text overlap with
arXiv:1705.10667, arXiv:1707.0790
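The multi-discriminator idea can be sketched as one domain discriminator per class, with each sample's contribution to the k-th discriminator weighted by its predicted probability of class k. This is a minimal NumPy sketch using toy linear discriminators; the function names and shapes are assumptions, not MADA's actual architecture:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mada_discriminator_loss(features, class_logits, domain_labels, disc_weights):
    # features: (N, D) shared features; class_logits: (N, C) classifier outputs
    # domain_labels: (N,) with 0 = source, 1 = target
    # disc_weights: (C, D) -- one toy linear domain discriminator per class
    probs = softmax(class_logits)  # soft class assignments
    N, C = probs.shape
    total = 0.0
    for k in range(C):
        logits_k = features @ disc_weights[k]          # domain logits, k-th head
        p_dom = 1.0 / (1.0 + np.exp(-logits_k))        # P(domain = target)
        bce = -(domain_labels * np.log(p_dom + 1e-8)
                + (1 - domain_labels) * np.log(1 - p_dom + 1e-8))
        # weight each point by its class-k probability: fine-grained alignment
        total += np.mean(probs[:, k] * bce)
    return total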
Partial Transfer Learning with Selective Adversarial Networks
Adversarial learning has been successfully embedded into deep networks to
learn transferable features, which reduce distribution discrepancy between the
source and target domains. Existing domain adversarial networks assume fully
shared label space across domains. In the presence of big data, there is strong
motivation to transfer both classification and representation models from
existing large domains to unknown small domains. This paper introduces partial
transfer learning, which relaxes the shared-label-space assumption by requiring
only that the target label space be a subspace of the source label space. Previous
methods typically match the whole source domain to the target domain, which are
prone to negative transfer for the partial transfer problem. We present
Selective Adversarial Network (SAN), which simultaneously circumvents negative
transfer by selecting out the outlier source classes and promotes positive
transfer by maximally matching the data distributions in the shared label
space. Experiments demonstrate that our models exceed state-of-the-art results
for partial transfer learning tasks on several benchmark datasets.
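The class-selection mechanism can be sketched as follows: average the classifier's predictions on unlabeled target data to estimate how relevant each source class is, then down-weight source samples from likely-outlier classes. This is an illustrative sketch of the weighting idea only; the exact formula and function names are assumptions, not SAN's implementation:

```python
import numpy as np

def san_class_weights(target_class_probs):
    # target_class_probs: (N_t, C) softmax outputs of the source classifier
    # on unlabeled target data. Source classes the target model rarely
    # predicts are treated as outlier classes and get weights near zero.
    w = target_class_probs.mean(axis=0)  # average target probability per class
    return w / w.max()                   # shared classes stay near 1

def weighted_source_loss(per_sample_losses, source_labels, class_weights):
    # Down-weight source samples from outlier classes in any source-side
    # loss (classification or adversarial), curbing negative transfer.
    return np.mean(class_weights[source_labels] * per_sample_losses)
```

Applying these weights inside the adversarial alignment restricts distribution matching to the shared label space, which is the abstract's stated route to positive transfer.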