Dual Adversarial Alignment for Realistic Support-Query Shift Few-shot Learning
Support-query shift few-shot learning aims to classify unseen examples (the query
set) against labeled data (the support set) using an embedding learned in a
low-dimensional space under a distribution shift between the support set and
the query set. In real-world scenarios, however, the shifts are usually unknown
and varied, making them difficult to estimate in advance. We therefore propose
a novel and more difficult challenge, RSQS, focusing on Realistic Support-Query
Shift few-shot learning. The key feature of RSQS is that the individual samples
in a meta-task are subjected to multiple distribution shifts. In addition, we
propose a unified adversarial feature alignment method called the DUal
adversarial ALignment framework (DuaL) to relieve RSQS from two aspects:
inter-domain bias and intra-domain variance. On the one hand, to address
inter-domain bias, we corrupt the original data in advance and use the
synthesized perturbed inputs to train a repairer network by minimizing distance
at the feature level. On the other hand, to address intra-domain variance, we
propose a generator network that synthesizes hard (i.e., less similar) examples
from the support set in a self-supervised manner, and introduce regularized
optimal transport to derive a smooth optimal transportation plan. Lastly, we
build an RSQS benchmark with several state-of-the-art baselines on three
datasets (CIFAR100, mini-ImageNet, and Tiered-ImageNet). Experimental results
show that DuaL significantly outperforms the state-of-the-art methods on our
benchmark.
Comment: Best student paper in PAKDD 202
Semi-Supervised Learning by Augmented Distribution Alignment
In this work, we propose a simple yet effective semi-supervised learning
approach called Augmented Distribution Alignment. We reveal that an essential
sampling bias exists in semi-supervised learning due to the limited number of
labeled samples, which often leads to a considerable empirical distribution
mismatch between labeled and unlabeled data. To this end, we propose to align
the empirical distributions of labeled and unlabeled data to alleviate this
bias. On one hand, inspired by domain adaptation work, we adopt an adversarial
training strategy to minimize the distribution distance between labeled and
unlabeled data. On the other hand, to deal with the small sample size of the
labeled data, we propose a simple interpolation strategy that generates pseudo
training samples. Both strategies can be easily incorporated into existing deep
neural networks. We demonstrate the effectiveness of our approach on the
benchmark SVHN and CIFAR10 datasets. Our code is available at
\url{https://github.com/qinenergy/adanet}.
Comment: To appear in ICCV 201
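Interpolation strategies of this kind typically take a convex combination of a labeled and an unlabeled sample, in the spirit of mixup. The following is a hedged sketch of that general idea, not the paper's exact procedure; the function name and the Beta-sampling details are our assumptions.

```python
import random

def interpolate(x_labeled, x_unlabeled, alpha=0.75):
    """Generate a pseudo training sample as a convex combination of a
    labeled and an unlabeled example (mixup-style interpolation).

    x_labeled, x_unlabeled: feature vectors of equal length (lists)
    alpha: Beta(alpha, alpha) parameter controlling the mixing weight
    Returns (mixed_sample, lam), where lam is the labeled sample's weight.
    """
    lam = random.betavariate(alpha, alpha)
    lam = max(lam, 1.0 - lam)  # bias the mix toward the labeled sample
    mixed = [lam * a + (1.0 - lam) * b
             for a, b in zip(x_labeled, x_unlabeled)]
    return mixed, lam
```

Because the combination is convex, the pseudo sample lies on the segment between the two inputs, which densifies the empirical distribution around the scarce labeled points.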
Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Transferability captures the ability of an attack against a machine-learning
model to be effective against a different, potentially unknown, model.
Empirical evidence for transferability has been shown in previous work, but the
underlying reasons why an attack transfers or not are not yet well understood.
In this paper, we present a comprehensive analysis aimed at investigating the
transferability of both test-time evasion and training-time poisoning attacks.
We provide a unifying optimization framework for evasion and poisoning attacks,
and a formal definition of transferability of such attacks. We highlight two
main factors contributing to attack transferability: the intrinsic adversarial
vulnerability of the target model, and the complexity of the surrogate model
used to optimize the attack. Based on these insights, we define three metrics
that impact an attack's transferability. Interestingly, the results of our
theoretical analysis hold for both evasion and poisoning attacks, and are
confirmed experimentally using a wide range of linear and non-linear
classifiers and datasets.
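A common ingredient in transferability metrics of this kind is the alignment between the loss gradients of the surrogate and the target model at the same input. The sketch below is an illustrative proxy in that spirit, not the paper's exact metric definitions; the function name is ours.

```python
import math

def gradient_alignment(grad_surrogate, grad_target):
    """Cosine similarity between the loss gradients of a surrogate and a
    target model at the same input. High alignment suggests an attack
    direction optimized on the surrogate also increases the target's loss,
    i.e., the attack is more likely to transfer.
    """
    dot = sum(a * b for a, b in zip(grad_surrogate, grad_target))
    norm_s = math.sqrt(sum(a * a for a in grad_surrogate))
    norm_t = math.sqrt(sum(b * b for b in grad_target))
    return dot / (norm_s * norm_t)
```

Perfectly aligned gradients give 1.0 (the surrogate's attack direction is ideal for the target), orthogonal gradients give 0.0 (no expected transfer).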
Crossing Generative Adversarial Networks for Cross-View Person Re-identification
Person re-identification (\textit{re-id}) refers to matching pedestrians
across disjoint yet non-overlapping camera views. The most effective way to
match these pedestrians undertaking significant visual variations is to seek
reliably invariant features that can describe the person of interest
faithfully. Most existing methods are supervised, producing discriminative
features from labeled image pairs in correspondence. However, annotating
pair-wise images is prohibitively labor-intensive and thus impractical for
large-scale camera networks. Moreover, seeking comparable representations
across camera views demands a flexible model to address the complex
distributions of images. In this work, we study the co-occurrence statistical
patterns between pairs of images and propose a crossing Generative Adversarial
Network (Cross-GAN) for learning a joint distribution of cross-image
representations in an unsupervised manner. Given a pair of person images, the
proposed model consists of a variational auto-encoder that encodes the pair
into respective latent variables, a cross-view alignment that reduces the view
disparity, and an adversarial layer that seeks the joint distribution of the
latent representations. The learned latent representations are well aligned to
reflect the co-occurrence patterns of paired images. We empirically evaluate
the proposed model on challenging datasets; our results show the importance of
joint invariant features in improving person re-id matching rates compared to
semi-supervised and unsupervised state-of-the-art methods.
Comment: 12 pages. arXiv admin note: text overlap with arXiv:1702.03431 by
other authors
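Two standard building blocks in such a model are the VAE reparameterization trick (sampling a latent code differentiably) and a cross-view alignment loss that pulls the latent codes of two views of the same person together. The sketch below illustrates these generic components only, under our own naming; it is not the Cross-GAN implementation.

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1).
    Sampling stays differentiable with respect to (mu, log_var)."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def alignment_loss(z_a, z_b):
    """Squared Euclidean distance between the latent codes of two camera
    views of the same person; minimizing it reduces the view disparity."""
    return sum((a - b) ** 2 for a, b in zip(z_a, z_b))
```

In a full model, this alignment term would be trained jointly with the VAE reconstruction objective and the adversarial loss on the latent distribution.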