Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems
Artificial Intelligence (AI) systems such as autonomous vehicles, facial
recognition, and speech recognition systems are increasingly integrated into
our daily lives. However, despite their utility, these AI systems are
vulnerable to a wide range of attacks such as adversarial, backdoor, data
poisoning, membership inference, model inversion, and model stealing attacks.
In particular, many attacks are designed against a specific model or system,
yet their effects can spread to additional targets; such attacks are known as
transferable attacks. Although considerable effort has been directed toward
developing such attacks, a holistic understanding of the advancements in this
area remains elusive. In this paper, we comprehensively
explore learning-based attacks from the perspective of transferability,
particularly within the context of cyber-physical security. We delve into
different domains -- image, text, graph, audio, and video -- to highlight the
pervasive nature of transferable attacks. This
paper categorizes and reviews the architecture of existing attacks from various
viewpoints: data, process, model, and system. We further examine the
implications of transferable attacks in practical scenarios such as autonomous
driving, speech recognition, and large language models (LLMs). Additionally, we
outline potential research directions to encourage further exploration of the
landscape of transferable attacks. This survey offers a holistic understanding
of the prevailing transferable attacks and their impacts across different
domains.
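The transferability phenomenon at the heart of this survey is easy to demonstrate: a perturbation optimized against one model often fools another model it was never tuned on. The sketch below is a minimal, illustrative example, assuming PyTorch, two pretrained torchvision classifiers as surrogate and target, and a single-step FGSM attack; none of these choices come from the survey itself.

```python
# Minimal sketch of a transferable adversarial attack (illustrative
# assumptions: PyTorch, pretrained torchvision models, one-step FGSM).
import torch
import torch.nn.functional as F
import torchvision.models as models

surrogate = models.resnet18(weights="IMAGENET1K_V1").eval()  # attacker's stand-in
target = models.vgg16(weights="IMAGENET1K_V1").eval()        # unseen victim model

def fgsm(model, x, y, eps=0.03):
    """Craft an adversarial example against `model` with one FGSM step."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Dummy tensor standing in for a preprocessed image batch.
x = torch.rand(1, 3, 224, 224)
y = target(x).argmax(dim=1)      # target's clean prediction as the label

x_adv = fgsm(surrogate, x, y)    # gradients come only from the surrogate...
flipped = target(x_adv).argmax(dim=1) != y
print("attack transferred to target:", flipped.item())  # ...yet it may fool the target
```

In practice, how often such an attack transfers depends heavily on how similar the surrogate and target models are in architecture and training data.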
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Learning-based pattern classifiers, including deep networks, have shown
impressive performance in several application domains, ranging from computer
vision to cybersecurity. However, it has also been shown that adversarial input
perturbations carefully crafted either at training or at test time can easily
subvert their predictions. The vulnerability of machine learning to such wild
patterns (also referred to as adversarial examples), along with the design of
suitable countermeasures, have been investigated in the research field of
adversarial machine learning. In this work, we provide a thorough overview of
the evolution of this research area over the last ten years and beyond,
starting from pioneering earlier work on the security of non-deep learning
algorithms up to more recent work aimed at understanding the security properties
of deep learning algorithms, in the context of computer vision and
cybersecurity tasks. We report interesting connections between these
apparently different lines of work, highlighting common misconceptions related
to the security evaluation of machine-learning algorithms. We review the main
threat models and attacks defined to this end, and discuss the main limitations
of current work, along with the corresponding future challenges towards the
design of more secure learning algorithms.
Comment: Accepted for publication in Pattern Recognition, 2018
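The two attack surfaces discussed above, test-time evasion and training-time poisoning, are commonly formalized as the following optimization problems; this notation is the standard formulation from the adversarial machine learning literature rather than an equation reproduced from the paper.

```latex
% Test-time evasion: a perturbation \delta bounded in an \ell_p ball of
% radius \epsilon maximizes the loss L of classifier f on input x, label y.
\max_{\|\delta\|_p \le \epsilon} \; L\bigl(f(x + \delta),\, y\bigr)

% Training-time poisoning: poison points D_p are chosen so that the model
% \theta^\star trained on the tainted set degrades on clean validation data.
\max_{D_p} \; L_{\mathrm{val}}(\theta^\star)
\quad \text{s.t.} \quad
\theta^\star \in \arg\min_{\theta} L_{\mathrm{train}}(\theta;\, D \cup D_p)
```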
Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks
Malware still constitutes a major threat in the cybersecurity landscape, also
due to the widespread use of infection vectors such as documents. These
infection vectors hide embedded malicious code from the victim users,
facilitating the use of social engineering techniques to infect their machines.
Research has shown that machine-learning algorithms provide effective detection
mechanisms against such threats, but the ongoing arms race in adversarial
settings has recently challenged such systems. In this work, we
focus on malware embedded in PDF files as a representative case of such an arms
race. We start by providing a comprehensive taxonomy of the different
approaches used to generate PDF malware, and of the corresponding
learning-based detection systems. We then categorize threats specifically
targeted against learning-based PDF malware detectors, using a well-established
framework in the field of adversarial machine learning. This framework allows
us to categorize known vulnerabilities of learning-based PDF malware detectors
and to identify novel attacks that may threaten such systems, along with the
potential defense mechanisms that can mitigate the impact of such threats. We
conclude the paper by discussing how such findings highlight promising research
directions towards tackling the more general challenge of designing robust
malware detectors in adversarial settings.
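To make the arms race concrete, here is a deliberately toy sketch: a linear detector over PDF keyword counts, in the spirit of early learning-based PDF classifiers, and a mimicry-style evasion that only adds benign-looking objects, since removing malicious content would break the payload. The feature set, weights, and threshold are hypothetical simplifications invented for illustration, not the systems surveyed in the paper.

```python
# Toy illustration of the PDF-malware arms race (hypothetical features
# and weights; real detectors use far richer structural features).
from collections import Counter

FEATURES = ["/JS", "/JavaScript", "/OpenAction", "/Page", "/Font"]
WEIGHTS = {"/JS": 2.0, "/JavaScript": 2.0, "/OpenAction": 1.5,
           "/Page": -0.5, "/Font": -0.5}   # negative = benign-looking
THRESHOLD = 2.0

def score(pdf_keywords):
    """Linear score over keyword counts; above THRESHOLD => flagged malicious."""
    counts = Counter(pdf_keywords)
    return sum(WEIGHTS[f] * counts[f] for f in FEATURES)

def evade_by_addition(pdf_keywords, budget=10):
    """Mimicry-style evasion: only *add* benign-looking keywords, never
    remove malicious ones, so the embedded payload keeps working."""
    adv = list(pdf_keywords)
    while score(adv) > THRESHOLD and budget > 0:
        adv.append("/Page")   # inject an innocuous-looking object
        budget -= 1
    return adv

malicious = ["/JS", "/JS", "/OpenAction"]      # score 5.5 -> detected
evaded = evade_by_addition(malicious)
print(score(malicious), "->", score(evaded))   # drops to the threshold, evading detection
```

Monotonic feature-addition attacks of this kind are one reason robust PDF detectors favor features an attacker cannot cheaply inflate.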