Energy Confused Adversarial Metric Learning for Zero-Shot Image Retrieval and Clustering
Deep metric learning has been widely applied in many computer vision tasks,
and it has recently attracted growing attention in \emph{zero-shot image
retrieval and clustering} (ZSRC), where a good embedding is required so that
unseen classes can be distinguished well. Most existing works deem this 'good'
embedding to be simply a discriminative one and thus race to devise powerful
metric objectives or hard-sample mining strategies for learning discriminative
embeddings. However, in this paper, we first emphasize that generalization
ability is, in fact, an equally core ingredient of this 'good' embedding and
largely affects metric performance in zero-shot settings. Then,
we propose the Energy Confused Adversarial Metric Learning (ECAML) framework
to explicitly optimize for a robust metric. This is mainly achieved by
introducing an Energy Confusion regularization term, which breaks away from
the traditional metric learning focus on devising discriminative objectives
and instead seeks to 'confuse' the learned model, encouraging generalization
by reducing overfitting on the seen classes. We train this confusion term
together with the conventional metric objective in an adversarial manner.
Although it may seem counterintuitive to 'confuse' the network, we show that
ECAML indeed serves as an effective regularization technique for metric
learning and is applicable to various conventional metric methods. This paper
empirically demonstrates the importance of learning embeddings that generalize
well, achieving state-of-the-art performance on the popular CUB, CARS,
Stanford Online Products and In-Shop datasets for ZSRC tasks.
Code available at http://www.bhchen.cn/. Comment: AAAI 2019, Spotlight
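As a hedged illustration of the training scheme this abstract outlines, the PyTorch sketch below combines a conventional metric objective with a confusion-style regularizer. The abstract does not give the Energy Confusion formulation, so `energy_confusion`, the weight `lam`, and the folding of the adversarial game into a single weighted loss are all simplifying assumptions, not the paper's actual method.

```python
# Minimal sketch: a conventional metric objective plus a hypothetical
# confusion-style regularizer, as described in the ECAML abstract.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Conventional metric objective: pull positives close, push negatives away.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

def energy_confusion(emb_a, emb_b):
    # Hypothetical stand-in for the Energy Confusion term: it pulls embeddings
    # of different seen classes toward each other, 'confusing' the model to
    # curb overfitting on the seen classes.
    return F.pairwise_distance(emb_a, emb_b).mean()

def ecaml_step(model, optimizer, anchor, positive, negative, lam=0.1):
    # One training step. The paper trains the two terms adversarially; for
    # brevity this sketch folds them into a single weighted objective, so the
    # confusion term simply opposes the metric loss on negative pairs.
    optimizer.zero_grad()
    za, zp, zn = model(anchor), model(positive), model(negative)
    loss = triplet_loss(za, zp, zn) + lam * energy_confusion(za, zn)
    loss.backward()
    optimizer.step()
    return loss.item()
```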
Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking
Deep Reinforcement Learning (RL) agents are susceptible to adversarial noise
in their observations that can mislead their policies and decrease their
performance. However, an adversary may be interested not only in decreasing the
reward, but also in modifying specific temporal logic properties of the policy.
This paper presents a metric that measures the exact impact of adversarial
attacks against such properties. We use this metric to craft optimal
adversarial attacks. Furthermore, we introduce a model checking method that
allows us to verify the robustness of RL policies against adversarial attacks.
Our empirical analysis confirms (1) the quality of our metric for crafting
adversarial attacks against temporal logic properties, and (2) that we are able
to concisely assess a system's robustness against such attacks. Comment: ICAART 2023 Paper (Technical Report)
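As a rough sketch of the kind of impact metric this abstract describes, the code below estimates, by Monte Carlo rollout, how much an adversarial observation perturbation shifts the probability that a policy's trajectories satisfy a temporal-logic property. `env`, `policy`, `check`, and `perturb` are hypothetical stand-ins; the paper's exact metric and its model-checking machinery are not reproduced here.

```python
# Sketch: estimate the drop in property-satisfaction probability under attack.
def satisfaction_rate(env, policy, check, perturb=None, episodes=100, horizon=200):
    """Monte Carlo estimate of P(trajectory satisfies the property)."""
    hits = 0
    for _ in range(episodes):
        obs, trajectory = env.reset(), []
        for _ in range(horizon):
            if perturb is not None:
                obs = perturb(obs)  # adversarial noise applied to the observation
            obs, _, done, _ = env.step(policy(obs))
            trajectory.append(obs)
            if done:
                break
        # `check` evaluates the trajectory against the property, e.g. an LTL formula.
        hits += bool(check(trajectory))
    return hits / episodes

def attack_impact(env, policy, check, perturb):
    # Impact metric: drop in satisfaction probability caused by the attack.
    clean = satisfaction_rate(env, policy, check)
    attacked = satisfaction_rate(env, policy, check, perturb)
    return clean - attacked
```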
Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer Learning
Transfer learning has become a common practice for training deep learning
models with limited labeled data in a target domain. On the other hand, deep
models are vulnerable to adversarial attacks. Though transfer learning has been
widely applied, its effect on model robustness remains unclear. To investigate
this question, we conduct extensive empirical evaluations and show that
fine-tuning effectively enhances model robustness under white-box FGSM attacks. We also
propose a black-box attack method for transfer learning models which attacks
the target model with the adversarial examples produced by its source model. To
systematically measure the effect of both white-box and black-box attacks, we
propose a new metric that evaluates how transferable the adversarial examples
produced by a source model are to a target model. Empirical results show that
the adversarial examples are more transferable when fine-tuning is used than
when the two networks are trained independently.
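As a hedged sketch of the transferability measurement this abstract describes, the code below crafts FGSM examples on the source model and counts how often they also fool the target. The abstract does not specify the metric's exact form; the fooling-rate ratio here is one plausible instantiation, and `fgsm`, `fooled`, and `transferability` are illustrative names.

```python
# Sketch: measure how well adversarial examples transfer from a source model
# to a (e.g. fine-tuned) target model.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    # White-box FGSM: one signed-gradient step on the source model's loss.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def fooled(model, x_adv, y):
    # True where the model's prediction no longer matches the label.
    return model(x_adv).argmax(dim=1).ne(y)

def transferability(source, target, x, y, eps=8 / 255):
    # Black-box attack on `target`: reuse examples crafted against `source`,
    # then report the fraction of source-fooling examples that also fool it.
    x_adv = fgsm(source, x, y, eps)
    on_source = fooled(source, x_adv, y)
    on_target = fooled(target, x_adv, y)
    denom = on_source.sum().clamp(min=1).float()
    return (on_source & on_target).sum().float() / denom
```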