Exploiting Human Perception for Adversarial Attacks
There has been a significant amount of recent work on fooling deep-learning-based classifiers, particularly for images, via adversarial inputs that are perceptually similar to benign examples. However, researchers typically use minimization of the Lp-norm as a proxy for imperceptibility, an approach that oversimplifies the complexity of real-world images and human visual perception. We exploit the relationship between image features and human perception to propose a Perceptual Loss (PL) metric that better captures human imperceptibility during the generation of adversarial images. By focusing on humanly perceptible distortion of image features, the metric yields adversarial images with better visual quality, as our experiments validate. Our results also demonstrate the effectiveness and efficiency of our algorithm.
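The idea of weighting distortion by local image content can be sketched as follows. The variance-based texture proxy and all names here are illustrative assumptions for exposition, not the paper's actual PL definition:

```python
import numpy as np

def local_variance(img, k=3):
    """Per-pixel variance over a k x k window: a crude texture proxy
    (hypothetical choice, not the paper's feature extractor)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return windows.var(axis=(-1, -2))

def perceptual_loss(clean, adv, eps=1e-6):
    """Distortion weighted by inverse texture: changes on flat regions
    (low local variance) are more visible to a human, so they cost more."""
    weight = 1.0 / (local_variance(clean) + eps)
    weight /= weight.max()  # normalize so flat pixels carry weight ~1
    return float((weight * (adv - clean) ** 2).sum())
```

Under this weighting, a perturbation hidden in a textured region incurs a much lower loss than the same-energy perturbation placed on a flat region, which is the intuition the abstract describes.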
Towards Good Practices in Evaluating Transfer Adversarial Attacks
Transfer adversarial attacks raise critical security concerns in real-world,
black-box scenarios. However, the actual progress of this field is difficult to
assess due to two common limitations in existing evaluations. First, different
methods are often not systematically and fairly evaluated in a one-to-one
comparison. Second, only transferability is evaluated but another key attack
property, stealthiness, is largely overlooked. In this work, we design good
practices to address these limitations, and we present the first comprehensive
evaluation of transfer attacks, covering 23 representative attacks against 9
defenses on ImageNet. In particular, we propose to categorize existing attacks
into five categories, which enables our systematic category-wise analyses.
These analyses lead to new findings that even challenge existing knowledge and
also help determine the optimal attack hyperparameters for our attack-wise
comprehensive evaluation. We also pay particular attention to stealthiness, by
adopting diverse imperceptibility metrics and looking into new, finer-grained
characteristics. Overall, our new insights into transferability and
stealthiness lead to actionable good practices for future evaluations.
Comment: An extended version can be found at arXiv:2310.11850. Code and a list of categorized attacks are available at https://github.com/ZhengyuZhao/TransferAttackEva
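Reporting stealthiness with several imperceptibility measures, as the evaluation advocates, might look like the following sketch; the particular metric set here is an illustrative assumption, not the paper's exact list:

```python
import numpy as np

def stealth_metrics(clean, adv):
    """A few imperceptibility measures for a clean/adversarial image pair
    (2D grayscale arrays). Metric choice is illustrative."""
    d = adv - clean
    return {
        "linf": float(np.abs(d).max()),          # worst-case pixel change
        "l2": float(np.sqrt((d ** 2).sum())),    # total perturbation energy
        # mean absolute finite difference of the perturbation: a crude
        # proxy for high-frequency artefacts ("roughness")
        "roughness": float(np.abs(np.diff(d, axis=0)).mean()
                           + np.abs(np.diff(d, axis=1)).mean()),
    }
```

Reporting such finer-grained characteristics alongside Lp norms is one way to compare attacks whose perturbations have the same norm but very different visibility.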
Decoding Neural Signals with Computational Models: A Systematic Review of Invasive BMI
There have been significant milestones in modern human civilization at which
mankind stepped into a different level of life, with a new spectrum of
possibilities and comforts. From fire-lighting technology and wheeled wagons to
writing, electricity, and the Internet, each one changed our lives dramatically.
In this paper, we take a deep look into the invasive Brain Machine Interface
(BMI), an ambitious and cutting-edge technology which has the potential to be
another important milestone in human civilization. Not only beneficial for
patients with severe medical conditions, the invasive BMI technology can
significantly impact different technologies and almost every aspect of human
life. We review the biological and engineering concepts that underpin the
implementation of BMI applications. Various techniques are essential to making
invasive BMI applications a reality. We review these by providing an analysis of
(i) possible applications of invasive BMI technology, (ii) the methods and
devices for detecting and decoding brain signals, and (iii) possible options for
delivering stimulation signals to the human brain. Finally, we discuss the
challenges and opportunities of invasive BMI for further development in the area.
Comment: 51 pages, 14 figures, review article
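As a concrete flavor of the decoding methods such a review covers, a ridge-regression decoder mapping binned firing rates to kinematic variables (e.g. cursor velocity) is a classic invasive-BMI baseline. All names, shapes, and the ridge choice below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fit_linear_decoder(rates, kinematics, ridge=1e-2):
    """Ridge-regression decoder: maps binned firing rates (T x N neurons)
    to kinematic targets (T x D), solving the regularized normal equations."""
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])  # append bias column
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ kinematics)

def decode(rates, weights):
    """Predict kinematics from firing rates with a fitted decoder."""
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])
    return X @ weights
```

More sophisticated decoders (Kalman filters, recurrent networks) build on the same rates-to-kinematics mapping.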
An Analysis on Adversarial Machine Learning: Methods and Applications
Deep learning has witnessed astonishing advancement in the last decade and revolutionized many fields, ranging from computer vision to natural language processing. A prominent field of research that enabled such achievements is adversarial learning, which investigates the behavior and functionality of a learning model in the presence of an adversary. Adversarial learning consists of two major trends. The first trend analyzes the susceptibility of machine learning models to manipulation in the decision-making process and aims to improve the robustness to such manipulations. The second trend exploits adversarial games between components of the model to enhance the learning process. This dissertation aims to provide an analysis of these two sides of adversarial learning and harness their potential for improving the robustness and generalization of deep models.
In the first part of the dissertation, we study the adversarial susceptibility of deep learning models. We provide an empirical analysis of the extent of this vulnerability by proposing two adversarial attacks that explore the geometric and frequency-domain characteristics of inputs to manipulate deep decisions. Afterward, we formalize the susceptibility of deep networks using the first-order approximation of the predictions and extend the theory to the ensemble classification scheme. Inspired by these theoretical findings, we formalize a reliable and practical defense against adversarial examples to robustify ensembles. We extend this part by investigating the shortcomings of adversarial training (AT) and highlight that the popular momentum stochastic gradient descent, developed essentially for natural training, is ill-suited to optimization in adversarial training, since it is not designed to be robust against the chaotic behavior of gradients in this setup. Motivated by these observations, we develop an optimization method that is more suitable for adversarial training. In the second part of the dissertation, we harness adversarial learning to enhance the generalization and performance of deep networks in discriminative and generative tasks. We develop several models for biometric identification, including fingerprint distortion rectification and latent fingerprint reconstruction. In particular, we develop a ridge reconstruction model based on generative adversarial networks that estimates the missing ridge information in latent fingerprints. We introduce a novel modification that enables the generator network to preserve the ID information during the reconstruction process. To address the scarcity of data, e.g., in latent fingerprint analysis, we develop a supervised augmentation technique that combines input examples based on their salient regions. Our findings indicate that adversarial learning improves the performance and reliability of deep networks in a wide range of applications.
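The first-order approximation of a model's predictions mentioned above is the basis of one-step gradient-sign attacks. A minimal sketch on a logistic-regression "network" (an illustration of the general idea, not the dissertation's actual attacks) is:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step first-order attack: perturb input x along the sign of the
    cross-entropy loss gradient of a logistic model p = sigmoid(w.x + b)."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w          # d(cross-entropy)/dx for the logistic model
    return x + eps * np.sign(grad_x)
```

Because the perturbation follows the loss gradient to first order, even a small `eps` moves the prediction away from the true label.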
Smooth Adversarial Examples
This paper investigates the visual quality of adversarial examples. Recent papers propose to smooth the perturbations to remove high-frequency artefacts. In this work, smoothing has a different meaning: it perceptually shapes the perturbation according to the visual content of the image being attacked. The perturbation becomes locally smooth on the flat areas of the input image, but it may be noisy in its textured areas and sharp across its edges. This operation relies on Laplacian smoothing, well known in graph signal processing, which we integrate into the attack pipeline. We benchmark several attacks with and without smoothing in a white-box scenario and evaluate their transferability. Despite the additional constraint of smoothness, our attack achieves the same probability of success at lower distortion.
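Content-guided Laplacian smoothing of a perturbation can be sketched as follows, using a 4-neighbor pixel graph with Gaussian edge weights so that smoothing is strong on flat areas and weak across image edges. This is an illustrative reconstruction of the idea, not the paper's exact pipeline:

```python
import numpy as np

def guided_laplacian_smooth(delta, img, lam=1.0, sigma=0.1):
    """Smooth perturbation `delta` with a graph Laplacian whose edge weights
    follow the image content: neighboring pixels with similar intensity get
    weight ~1 (strong smoothing), pixels across an edge get weight ~0.
    Solves (I + lam * L) d = delta for the smoothed perturbation d."""
    h, w = img.shape
    n = h * w
    W = np.zeros((n, n))
    idx = lambda i, j: i * w + j
    for i in range(h):
        for j in range(w):
            for di, dj in ((0, 1), (1, 0)):   # right and down neighbors
                ni, nj = i + di, j + dj
                if ni < h and nj < w:
                    wt = np.exp(-((img[i, j] - img[ni, nj]) ** 2) / sigma ** 2)
                    W[idx(i, j), idx(ni, nj)] = wt
                    W[idx(ni, nj), idx(i, j)] = wt
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian
    d = np.linalg.solve(np.eye(n) + lam * L, delta.ravel())
    return d.reshape(h, w)
```

On a flat image all edge weights are 1, so the operator shrinks every non-constant component of the perturbation while preserving its mean, which is exactly the "locally smooth on flat areas" behavior the abstract describes.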