Adversarial Discriminative Heterogeneous Face Recognition
The gap between sensing patterns of different face modalities remains a
challenging problem in heterogeneous face recognition (HFR). This paper
proposes an adversarial discriminative feature learning framework to close the
sensing gap via adversarial learning on both raw-pixel space and compact
feature space. This framework integrates cross-spectral face hallucination and
discriminative feature learning into an end-to-end adversarial network. In the
pixel space, we make use of generative adversarial networks to perform
cross-spectral face hallucination. An elaborate two-path model is introduced to
alleviate the lack of paired images, which gives consideration to both global
structures and local textures. In the feature space, an adversarial loss and a
high-order variance discrepancy loss are employed to measure the global and
local discrepancy between two heterogeneous distributions, respectively. These
two losses enhance domain-invariant feature learning and modality-independent
noise removal. Experimental results on three NIR-VIS databases show that our
proposed approach outperforms state-of-the-art HFR methods, without requiring
a complex network or a large-scale training dataset.
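As a rough illustration of the feature-space part of this objective, the PyTorch sketch below pairs a modality discriminator (the global adversarial term) with a second-order variance discrepancy (the local term). The MLP discriminator, the exact form of the discrepancy, and all names are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn as nn

class ModalityDiscriminator(nn.Module):
    """Tries to tell NIR features from VIS features (assumed MLP)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, f):
        return self.net(f)

def variance_discrepancy(f_nir, f_vis):
    # Second-order statistics as a stand-in for the paper's
    # "high-order variance discrepancy" between the two modalities.
    return (f_nir.var(dim=0) - f_vis.var(dim=0)).pow(2).mean()

def feature_space_losses(f_nir, f_vis, disc):
    bce = nn.BCEWithLogitsLoss()
    d_nir, d_vis = disc(f_nir), disc(f_vis)
    # Adversarial loss measures the global discrepancy: the encoder is
    # trained (via the usual min-max scheme) to make the two modalities
    # indistinguishable to the discriminator.
    adv = bce(torch.cat([d_nir, d_vis]),
              torch.cat([torch.ones_like(d_nir), torch.zeros_like(d_vis)]))
    # The variance discrepancy measures the local, per-dimension mismatch.
    return adv + variance_discrepancy(f_nir, f_vis)
```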
Learning from Adversarial Features for Few-Shot Classification
Many recent few-shot learning methods concentrate on designing novel model
architectures. In this paper, we instead show that even with a simple backbone
convolutional network we can surpass state-of-the-art classification
accuracy. The essential part that contributes to this superior performance is
an adversarial feature learning strategy that improves the generalization
capability of our model. In this work, adversarial features are those features
that make the classifier uncertain about its prediction. To
generate adversarial features, we first locate adversarial regions based on
the derivative of the entropy with respect to an averaging mask. Then we use
the adversarial region attention to aggregate the feature maps to obtain the
adversarial features. In this way, we can explore and exploit the entire
spatial area of the feature maps to mine more diverse discriminative knowledge.
We perform extensive model evaluations and analyses on the miniImageNet and
tieredImageNet datasets, demonstrating the effectiveness of the proposed method.
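The mechanism is concrete enough to sketch: differentiate the prediction entropy with respect to a spatial averaging mask and reuse that derivative as attention. Below is a minimal PyTorch reading; the mask and attention normalizations are assumptions.

```python
import torch
import torch.nn.functional as F

def adversarial_features(feat, classifier):
    """feat: (B, C, H, W) backbone feature maps; classifier maps a
    pooled (B, C) feature to class logits. Sketch of entropy-gradient
    attention; exact normalization choices are assumptions."""
    B, C, H, W = feat.shape
    # Uniform averaging mask over spatial positions, differentiable.
    mask = torch.full((B, 1, H, W), 1.0 / (H * W),
                      device=feat.device, requires_grad=True)
    pooled = (feat * mask).sum(dim=(2, 3))            # masked average pooling
    probs = F.softmax(classifier(pooled), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    # Locations where extra weight raises prediction entropy the most
    # are the "adversarial regions" the abstract refers to.
    grad = torch.autograd.grad(entropy, mask)[0]      # d entropy / d mask
    attn = F.softmax(grad.view(B, -1), dim=1).view(B, 1, H, W)
    return (feat * attn).sum(dim=(2, 3))              # adversarial features
```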
Controllable Invariance through Adversarial Feature Learning
Learning meaningful representations that maintain the content necessary for a
particular task while filtering away detrimental variations is a problem of
great interest in machine learning. In this paper, we tackle the problem of
learning representations invariant to a specific factor or trait of data. The
representation learning process is formulated as an adversarial minimax game.
We analyze the optimal equilibrium of such a game and find that it amounts to
maximizing the uncertainty of inferring the detrimental factor given the
representation while maximizing the certainty of making task-specific
predictions. On three benchmark tasks, namely fair and bias-free
classification, language-independent generation, and lighting-independent image
classification, we show that the proposed framework induces an invariant
representation and leads to better generalization, as evidenced by the improved
performance.
Comment: NIPS 2017
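One standard way to implement such a minimax game is a gradient reversal layer; the sketch below is that generic construction, not necessarily the paper's exact optimization scheme, and all module names are placeholders.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; sign-flipped gradient on the
    backward pass, so minimizing the discriminator loss simultaneously
    maximizes the encoder's uncertainty about the factor."""
    @staticmethod
    def forward(ctx, z, lam):
        ctx.lam = lam
        return z.view_as(z)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

def invariance_objective(encoder, task_head, factor_head,
                         x, y_task, y_factor, lam=1.0):
    ce = nn.CrossEntropyLoss()
    z = encoder(x)
    # Maximize certainty of the task-specific prediction ...
    task_loss = ce(task_head(z), y_task)
    # ... while maximizing uncertainty about the detrimental factor.
    adv_loss = ce(factor_head(GradReverse.apply(z, lam)), y_factor)
    return task_loss + adv_loss
```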
Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples
Feature squeezing is a recently-introduced framework for mitigating and
detecting adversarial examples. In previous work, we showed that it is
effective against several earlier methods for generating adversarial examples.
In this short note, we report on recent results showing that simple feature
squeezing techniques also make deep learning models significantly more robust
against the Carlini/Wagner attacks, which are the strongest adversarial
attack methods known to date.
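Feature squeezing itself is simple enough to show. The sketch below combines the two squeezers from this line of work, bit-depth reduction and local median smoothing, into a detector that flags inputs whose predictions shift too far under squeezing; the NHWC layout and the threshold value are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x, bits=5):
    """Squeeze images in [0, 1] down to `bits` bits per channel."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def detect_adversarial(predict_probs, x, threshold=1.0):
    """predict_probs: maps an NHWC float batch to softmax probabilities.
    An input is flagged when its prediction moves too far (L1 distance)
    under squeezing; the threshold is a placeholder to be tuned per
    dataset."""
    p = predict_probs(x)
    p_bits = predict_probs(reduce_bit_depth(x))
    p_blur = predict_probs(median_filter(x, size=(1, 2, 2, 1)))  # 2x2 median
    score = np.maximum(np.abs(p - p_bits).sum(axis=1),
                       np.abs(p - p_blur).sum(axis=1))
    return score > threshold
```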
Pose-aware Adversarial Domain Adaptation for Personalized Facial Expression Recognition
Current facial expression recognition methods fail to simultaneously cope
with pose and subject variations.
In this paper, we propose a novel unsupervised adversarial domain adaptation
method that can alleviate both variations at the same time. Specifically, our
method consists of three learning strategies: adversarial domain adaptation
learning, cross adversarial feature learning, and reconstruction learning. The
first aims to learn pose- and expression-related feature representations in the
source domain and adapt both feature distributions to that of the target domain
by imposing adversarial learning. By using personalized adversarial domain
adaptation, this learning strategy can alleviate subject variations and exploit
information from the source domain to help learning in the target domain.
The second serves to disentangle pose- and expression-related feature
representations by forcing the pose-related representations to be
indistinguishable with respect to expression and the expression-related
representations to be indistinguishable with respect to pose.
The last can further boost feature learning by applying face image
reconstructions so that the learned expression-related feature representations
are more pose- and identity-robust.
Experimental results on four benchmark datasets demonstrate the effectiveness
of the proposed method.
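A minimal sketch of the second and third strategies, assuming PyTorch: a confusion loss pushes each attribute classifier toward a uniform output on the other branch's features, and an L1 reconstruction ties both codes back to the face. Both loss choices are plausible readings rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def confusion_loss(logits):
    # Cross-entropy against the uniform distribution: the feature should
    # be uninformative about this attribute (cross adversarial learning).
    return -F.log_softmax(logits, dim=1).mean()

def disentangle_and_reconstruct(f_pose, f_expr, expr_cls, pose_cls,
                                decoder, image):
    # Pose features should not reveal expression, and vice versa.
    d_loss = confusion_loss(expr_cls(f_pose)) + confusion_loss(pose_cls(f_expr))
    # Reconstructing the face from both codes keeps them jointly faithful.
    r_loss = F.l1_loss(decoder(torch.cat([f_pose, f_expr], dim=1)), image)
    return d_loss, r_loss
```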
Towards Adversarial Configurations for Software Product Lines
Ensuring that all supposedly valid configurations of a software product line
(SPL) lead to well-formed and acceptable products is challenging since it is
most of the time impractical to enumerate and test all individual products of
an SPL. Machine learning classifiers have been recently used to predict the
acceptability of products associated with unseen configurations. For some
configurations, a tiny change in their feature values can make them pass from
acceptable to non-acceptable with regard to users' requirements, and vice versa. In
this paper, we introduce the idea of leveraging these specific configurations
and their positions in the feature space to improve the classifier and
therefore the engineering of an SPL. Starting from a variability model, we
propose to use Adversarial Machine Learning techniques to create new,
adversarial configurations out of already known configurations by modifying
their feature values. Using an industrial video generator we show how
adversarial configurations can improve not only the classifier, but also the
variability model, the variability implementation, and the testing oracle.
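A minimal sketch of the idea, assuming a scikit-learn-style classifier: flip one feature value at a time and keep the valid configurations whose predicted acceptability changes. The paper uses dedicated adversarial ML techniques; this brute-force variant only illustrates the boundary-crossing notion.

```python
import numpy as np

def adversarial_configurations(clf, configs, is_valid):
    """clf: trained acceptability classifier; configs: boolean
    configuration vectors; is_valid: validity check derived from the
    variability model. Single-feature flips stand in for the paper's
    adversarial ML techniques."""
    found = []
    base = clf.predict(configs)
    for x, y in zip(configs, base):
        for j in range(len(x)):
            x_adv = x.copy()
            x_adv[j] = not x_adv[j]        # tiny change: flip one feature value
            if is_valid(x_adv) and clf.predict([x_adv])[0] != y:
                found.append(x_adv)        # crossed the acceptability boundary
    return np.array(found)
```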
Multi-Level Generative Models for Partial Label Learning with Non-random Label Noise
Partial label (PL) learning tackles the problem where each training instance
is associated with a set of candidate labels that include both the true label
and irrelevant noise labels. In this paper, we propose a novel multi-level
generative model for partial label learning (MGPLL), which tackles the problem
by learning both a label level adversarial generator and a feature level
adversarial generator under a bi-directional mapping framework between the
label vectors and the data samples. Specifically, MGPLL uses a conditional
noise label generation network to model the non-random noise labels and perform
label denoising, and uses a multi-class predictor to map the training instances
to the denoised label vectors, while a conditional data feature generator is
used to form an inverse mapping from the denoised label vectors to data
samples. Both the noise label generator and the data feature generator are
learned in an adversarial manner to match the observed candidate labels and
data features respectively. Extensive experiments are conducted on synthesized
and real-world partial label datasets. The proposed approach demonstrates
state-of-the-art performance for partial label learning.
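One slice of this objective can be sketched, with heavy caveats: the predictor produces a denoised label, the conditional noise generator adds back noise labels, and their combination should explain the observed candidate set. The adversarial discriminators on labels and features are omitted, and all loss choices below are assumptions.

```python
import torch
import torch.nn.functional as F

def candidate_label_loss(predictor, noise_gen, x, candidates):
    """x: training instances; candidates: multi-hot observed candidate
    label sets. A sketch of the denoising/reconstruction path only."""
    y_hat = F.softmax(predictor(x), dim=1)           # denoised label estimate
    noise = torch.sigmoid(noise_gen(y_hat))          # conditional noise labels
    recon = torch.clamp(y_hat + noise, max=1.0)      # predicted candidate mask
    return F.binary_cross_entropy(recon, candidates) # match observed candidates
```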
FineFool: Fine Object Contour Attack via Attention
Machine learning models have been shown to be vulnerable to adversarial
attacks launched with adversarial examples that are carefully crafted by
attackers to defeat classifiers, and deep learning models are no exception.
Most adversarial attack methods focus on the success rate or the perturbation
size, while we are more interested in the relationship between the adversarial
perturbation and the image itself. In this paper, we put forward a novel
contour-based adversarial attack, named FineFool. FineFool not only achieves
better attack performance than other state-of-the-art white-box attacks, with
a higher attack success rate and smaller perturbations, but is also capable of
visualizing the optimal adversarial perturbation via attention on the object
contour. To the best of our knowledge, FineFool is the first method to combine
the critical features of the original clean image with the optimal
perturbations in a visible manner. Inspired by the correlation between
adversarial perturbations and object contours, slighter perturbations are
produced by focusing on object contour features; these are more imperceptible
and more difficult to defend against, especially for add-on network defense
methods that trade off perturbation filtering against contour feature loss.
Extensive experiments comparing FineFool with existing state-of-the-art
attacks show that it can efficiently attack defended deep models.
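FineFool's contour attention is learned; as a rough stand-in, the sketch below gates a one-step gradient-sign perturbation with a fixed Sobel edge map, which only conveys the perturbation-follows-contour idea.

```python
import torch
import torch.nn.functional as F

def contour_masked_fgsm(model, x, y, eps=4 / 255):
    """Concentrate an FGSM-style perturbation on object contours.
    Illustrative only: a Sobel edge map replaces FineFool's learned
    attention."""
    sobel = torch.tensor([[-1., 0., 1.],
                          [-2., 0., 2.],
                          [-1., 0., 1.]], device=x.device)
    gray = x.mean(dim=1, keepdim=True)                    # (B, 1, H, W)
    gx = F.conv2d(gray, sobel.reshape(1, 1, 3, 3), padding=1)
    gy = F.conv2d(gray, sobel.t().reshape(1, 1, 3, 3), padding=1)
    edges = (gx ** 2 + gy ** 2).sqrt()
    attn = edges / edges.amax(dim=(2, 3), keepdim=True).clamp_min(1e-8)
    x_req = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)[0]
    # Gradient-sign step, gated by the contour attention map.
    return (x + eps * attn * grad.sign()).clamp(0, 1).detach()
```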
Adversarial Feature Sampling Learning for Efficient Visual Tracking
The tracking-by-detection framework usually consists of two stages: drawing
samples around the target object in the first stage and classifying each sample
as the target object or background in the second stage. Current popular
trackers based on tracking-by-detection framework typically draw samples in the
raw image as inputs to deep convolutional networks in the first stage, which
usually results in a high computational burden and low running speed. In this
paper, we propose a new visual tracking method using sampling deep
convolutional features to address this problem. Only one cropped image around
the target object is fed into the designed deep convolutional network, and
samples are drawn from its feature maps by spatial bilinear
resampling. In addition, a generative adversarial network is integrated into
our network framework to augment positive samples and improve the tracking
performance. Extensive experiments on benchmark datasets demonstrate that the
proposed method achieves a comparable performance to state-of-the-art trackers
and effectively accelerates tracking-by-detection trackers that rely on
raw-image samples.
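The single-crop feature sampling can be sketched with torchvision's roi_align standing in for the paper's spatial bilinear resampling; the box coordinates and output size below are illustrative.

```python
import torch
from torchvision.ops import roi_align

def sample_candidate_features(feature_map, boxes):
    """feature_map: (1, C, H, W) from one forward pass on the search
    crop; boxes: (N, 4) candidate boxes in feature-map coordinates
    (x1, y1, x2, y2). Bilinearly resample one feature per candidate
    instead of re-running the network on N image patches."""
    batch_idx = torch.zeros(boxes.size(0), 1, dtype=boxes.dtype)
    rois = torch.cat([batch_idx, boxes], dim=1)        # (N, 5)
    return roi_align(feature_map, rois, output_size=(3, 3))
```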
One Bit Matters: Understanding Adversarial Examples as the Abuse of Redundancy
Despite the great success achieved in machine learning (ML), adversarial
examples have caused concerns regarding its trustworthiness: a small
perturbation of an input results in an arbitrary failure of an otherwise
seemingly well-trained ML model. While studies are being conducted to discover
the intrinsic properties of adversarial examples, such as their transferability
and universality, there is insufficient theoretical analysis to help understand
the phenomenon in a way that can influence the design process of ML
experiments. In this paper, we deduce an information-theoretic model which
explains adversarial attacks as the abuse of feature redundancies in ML
algorithms. We prove that feature redundancy is a necessary condition for the
existence of adversarial examples. Our model helps to explain some major
questions raised in many anecdotal studies on adversarial examples. Our theory
is backed up by empirical measurements of the information content of benign and
adversarial examples on both image and text datasets. Our measurements show
that typical adversarial examples introduce just enough redundancy to overflow
the decision making of an ML model trained on corresponding benign examples. We
conclude with actionable recommendations to improve the robustness of machine
learners against adversarial examples.
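As a crude proxy for the kind of information-content measurement reported here, one can compare the compressibility of benign and adversarial inputs; the zlib-based estimate below is an assumption-laden illustration, not the paper's estimator.

```python
import zlib
import numpy as np

def compression_redundancy(image_uint8: np.ndarray) -> float:
    """Estimate redundancy as 1 minus the zlib compression ratio of the
    raw pixel bytes (a rough stand-in for an information-content
    measurement)."""
    raw = image_uint8.tobytes()
    return 1.0 - len(zlib.compress(raw, 9)) / len(raw)

# Usage sketch: the theory predicts an adversarial example carries just
# enough extra redundancy relative to its benign counterpart.
# delta = compression_redundancy(x_adv) - compression_redundancy(x_benign)
```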