Detection of Face Recognition Adversarial Attacks
Deep Learning methods have become state-of-the-art for solving tasks such as
Face Recognition (FR). Unfortunately, despite their success, these learning
models have been shown to be exposed to adversarial inputs - images to which an
amount of noise imperceptible to humans is added to maliciously fool a neural
network - thus limiting their adoption in real-world applications. While an
enormous effort has been spent on training models that are robust against this
type of threat, adversarial detection techniques have recently started to draw
attention within the scientific community. A detection approach has the
advantage that it does not require re-training any model, so it can be added on
top of any system. In this context, we present our work on adversarial sample
detection in forensics, mainly focused on detecting attacks against FR systems
in which the learning model is typically used only as a feature extractor; in
these cases, training a more robust classifier might not be enough to defend an
FR system. In this frame, the contribution of our work is four-fold: i) we
tested our recently proposed adversarial detection approach against classifier
attacks, i.e., adversarial samples crafted to fool an FR neural network acting
as a classifier; ii) using a k-Nearest Neighbor (kNN) algorithm as guidance, we
generated deep feature attacks against an FR system based on a DL model acting
as a feature extractor, followed by a kNN that returns the query identity based
on feature similarity; iii) we used the deep feature attacks to fool an FR
system on the 1:1 Face Verification task, showing that they are more effective
than classifier attacks at fooling this type of system; iv) we used the
detectors trained on classifier attacks to detect deep feature attacks, thus
showing that this approach generalizes to different types of attacks.
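As a concrete illustration of point ii), the following is a minimal sketch of a deep feature attack in PyTorch. It assumes a hypothetical frozen network `extractor` and a hypothetical target feature vector `target_feats` (e.g., the features of the identity the kNN should retrieve); the hyperparameters are illustrative, not the paper's.

```python
import torch

def deep_feature_attack(extractor, x, target_feats, eps=0.03, alpha=0.005, steps=40):
    """PGD-style sketch: push the extractor's features for x toward target_feats.

    extractor    -- hypothetical frozen feature-extraction network
    x            -- input image tensor, shape (1, C, H, W), values in [0, 1]
    target_feats -- hypothetical feature vector of the identity to impersonate
    eps          -- L_inf budget keeping the perturbation quasi-imperceptible
    """
    x_clean = x.clone().detach()
    x_adv = x_clean.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        feats = extractor(x_adv)
        # Minimize the L2 distance in feature space, so a downstream kNN
        # retrieves the target identity by feature similarity.
        loss = torch.norm(feats - target_feats, p=2)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into the eps-ball around the clean image.
        x_adv = x_clean + (x_adv - x_clean).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```

Because the perturbation is optimized directly in feature space, no classifier layer is attacked at all, which is why such samples can fool systems where the network serves only as a feature extractor.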
MOCCA: Multi-Layer One-Class ClassificAtion for Anomaly Detection
Anomalies are ubiquitous in all scientific fields and can express an
unexpected event due to incomplete knowledge about the data distribution or an
unknown process that suddenly comes into play and distorts observations. Due to
such events' rarity, to train deep learning models on the Anomaly Detection
(AD) task, scientists rely only on "normal" data, i.e., non-anomalous samples,
thus letting the neural network infer the distribution underlying the input
data. In such a context, we propose a novel framework, named Multi-layer
One-Class ClassificAtion (MOCCA), to train and test deep learning models on the
AD task.
Specifically, we applied it to autoencoders. A key novelty in our work stems
from the explicit optimization of intermediate representations for the AD task.
Indeed, unlike commonly used approaches that treat a neural
network as a single computational block, i.e., use only the output of the last
layer, MOCCA explicitly leverages the multi-layer structure of deep
architectures. Each layer's feature space is optimized for AD during training,
while in the test phase, the deep representations extracted from the trained
layers are combined to detect anomalies. With MOCCA, we split the training
process into two steps. First, the autoencoder is trained on the reconstruction
task only. Then, we retain only the encoder, which is tasked with minimizing,
at each considered layer, the L_2 distance between the output representation
and a reference point: the centroid of the anomaly-free training data.
Subsequently, we
combine the deep features extracted at the various trained layers of the
encoder model to detect anomalies at inference time. To assess the performance
of the models trained with MOCCA, we conduct extensive experiments on publicly
available datasets. We show that our proposed method reaches performance
comparable or superior to state-of-the-art approaches available in the
literature.
Comment: The paper has been accepted for publication in the IEEE Transactions
on Neural Networks and Learning Systems, Special Issue on Deep Learning for
Anomaly Detection.
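The following is a minimal sketch of MOCCA's second training step and inference-time scoring, assuming a PyTorch encoder that exposes the features of the considered layers and hypothetical precomputed per-layer centroids (means of the anomaly-free training data); summing the per-layer distances is one possible combination rule, not necessarily the paper's exact formulation.

```python
import torch

def mocca_one_class_loss(layer_feats, centroids):
    """Training objective for the retained encoder: mean squared L2 distance
    between each considered layer's representation and that layer's
    anomaly-free training-data centroid.

    layer_feats -- list of tensors, one per considered encoder layer,
                   each of shape (batch, feat_dim) after flattening
    centroids   -- list of tensors, the hypothetical precomputed centroids
    """
    loss = 0.0
    for feats, c in zip(layer_feats, centroids):
        loss = loss + ((feats - c) ** 2).sum(dim=1).mean()
    return loss

def anomaly_score(layer_feats, centroids):
    """Inference: combine the per-layer distances into a single score;
    a plain sum is used here as one possible combination rule."""
    with torch.no_grad():
        return sum(((f - c) ** 2).sum(dim=1)
                   for f, c in zip(layer_feats, centroids))
```

Samples far from the centroids at one or more layers receive a high score, so a simple threshold on `anomaly_score` separates anomalies from normal data.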
Adversarial Attacks against Face Recognition: A Comprehensive Study
Face recognition (FR) systems have demonstrated outstanding verification
performance, suggesting suitability for real-world applications ranging from
photo tagging in social media to automated border control (ABC). In an advanced
FR system with a deep learning-based architecture, however, improving
recognition performance alone is not sufficient; the system should also
withstand attacks designed to target its proficiency. Recent studies show that
(deep) FR systems exhibit an intriguing vulnerability to imperceptible, or
perceptible but natural-looking, adversarial input images that
drive the model to incorrect output predictions. In this article, we present a
comprehensive survey of adversarial attacks against FR systems and elaborate on
the effectiveness of new countermeasures against them. Further, we propose a
taxonomy of existing attack and defense methods based on different criteria. We
compare attack methods by their orientation and attributes, and defense
approaches by their category. Finally, we explore the remaining challenges and
potential research directions.