Robust Backdoor Attacks on Object Detection in Real World
Deep learning models are widely deployed in many applications, such as object
detection in various security fields. However, these models are vulnerable to
backdoor attacks. Backdoor attacks have been studied intensively on
classification models, but little on object detection. Previous works mainly
focused on backdoor attacks in the digital world and neglected the real world,
where the effect of a backdoor attack is easily influenced by physical factors
such as distance and illumination. In this paper, we propose a variable-size
backdoor trigger that adapts to the different sizes of attacked objects,
overcoming the disturbance caused by the distance between the viewing point and
the attacked object. In addition, we propose a backdoor training method named
malicious adversarial training, which enables the backdoored object detector to
learn the features of the trigger under physical noise. Experimental results
show that this robust backdoor attack (RBA) enhances the attack success rate in
the real world. Comment: 22 pages, 13 figures
Is It Possible to Backdoor Face Forgery Detection with Natural Triggers?
Deep neural networks have significantly improved the performance of face
forgery detection models in discriminating Artificial Intelligence Generated
Content (AIGC). However, their security is significantly threatened by the
injection of triggers during model training (i.e., backdoor attacks). Although
existing backdoor defenses and manual data selection can mitigate those using
human-eye-sensitive triggers, such as patches or adversarial noises, the more
challenging natural backdoor triggers remain insufficiently researched. To
further investigate natural triggers, we propose a novel analysis-by-synthesis
backdoor attack against face forgery detection models, which embeds natural
triggers in the latent space. We thoroughly study such backdoor vulnerability
from two perspectives: (1) Model Discrimination (Optimization-Based Trigger):
we adopt a substitute detection model and find the trigger by minimizing the
cross-entropy loss; (2) Data Distribution (Custom Trigger): we manipulate the
uncommon facial attributes in the long-tailed distribution to generate poisoned
samples without the supervision from detection models. Furthermore, to
completely evaluate the detection models towards the latest AIGC, we utilize
both state-of-the-art StyleGAN and Stable Diffusion for trigger generation.
Finally, these backdoor triggers introduce specific semantic features to the
generated poisoned samples (e.g., skin textures and smile), which are more
natural and robust. Extensive experiments show that our method is superior on
three levels: (1) Attack Success Rate: ours achieves a high attack success rate
(over 99%) and incurs a small model accuracy drop (below 0.2%) with a low
poisoning rate (less than 3%); (2) Backdoor Defense: ours shows more robust
performance when faced with existing backdoor defense methods; (3) Human
Inspection: ours is less perceptible to the human eye, as shown by a
comprehensive user study.
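The optimization-based trigger in perspective (1) can be illustrated with a toy version: run gradient descent on a latent code so that a substitute detector's cross-entropy loss toward the attacker's target label is minimised. The sketch below substitutes a linear generator and a logistic-regression detector for the StyleGAN/Stable Diffusion generators and CNN detectors the paper actually uses; all names and the closed-form gradient are assumptions made for a self-contained example.

```python
import numpy as np

def optimize_latent_trigger(z0, G, w, b, target=0.0, lr=0.1, steps=200):
    """Search the latent space for a trigger that drives a substitute
    detector toward the target label (toy sketch: linear generator
    x = G @ z, logistic-regression substitute detector; the paper
    uses generative models and a CNN detector instead).
    """
    z = z0.copy()
    for _ in range(steps):
        x = G @ z                                 # "generate" a sample
        p = 1.0 / (1.0 + np.exp(-(w @ x + b)))    # substitute detector score
        # Gradient of the binary cross-entropy w.r.t. z; the update
        # pushes the detector's prediction toward the target label.
        grad = (p - target) * (G.T @ w)
        z -= lr * grad
    return z
```

Because the perturbation lives in the latent space rather than in pixel space, the resulting change surfaces as a semantic attribute (e.g. skin texture or a smile) instead of a visible patch, which is why the abstract calls such triggers natural.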
The Symbioses of Oblivious Random Access Memory and Trusted Execution Environments
In recent years, Oblivious Random Access Memory (ORAM) controllers in Trusted Execution Environments (TEEs) have become a popular area of investigation, as coresident trusted systems allow for significantly more efficient oblivious execution. Further, in the case of Intel architectures, oblivious execution effectively eliminates the majority of confidentiality leakage holes in SGX. Unfortunately, the state-of-the-art TEE-ORAM memory solutions for Intel SGX are still considered too slow for most applications, with memory block requests being handled with millisecond latency. PRORAM, our novel oblivious memory controller, can deliver a block in the order of microseconds, approximately 10x–40x faster than prior work. This analysis describes the design and implementation techniques that led to our significant performance gains.
Backdoor Attacks and Defences on Deep Neural Networks
Nowadays, due to the huge amount of resources required for network training, pre-trained models are commonly exploited in all kinds of deep learning tasks, like image classification, natural language processing, etc. These models are deployed directly in real environments, or only fine-tuned on a limited set of data collected, for instance, from the Internet. However, a natural question arises: can we trust pre-trained models or the data downloaded from the Internet? The answer is ‘No’. An attacker can easily perform a so-called backdoor attack to embed a backdoor in a pre-trained model by poisoning the dataset used for training, or indirectly by releasing some poisoned data on the Internet as bait. Such an attack is stealthy, since the hidden backdoor does not affect the behaviour of the network in normal operating conditions, while the malicious behaviour is activated only when a triggering signal is presented at the network input.
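The poisoning-based backdoor attack described above can be sketched in a few lines, in the style of the classic BadNets recipe: stamp a fixed trigger on a small fraction of training samples and relabel them with the attacker's target class. The function name, the boolean-mask trigger encoding, and the default poisoning rate are illustrative assumptions, not details from the thesis.

```python
import numpy as np

def poison_dataset(X, y, trigger_mask, trigger_value, target_label,
                   rate=0.05, rng=None):
    """Classic data-poisoning backdoor (BadNets-style sketch).

    X:            n x H x W array of training images
    y:            n integer labels
    trigger_mask: H x W boolean array marking the trigger pixels
    rate:         fraction of samples to poison
    """
    if rng is None:
        rng = np.random.default_rng(0)
    X_p, y_p = X.copy(), y.copy()
    n_poison = int(len(X) * rate)
    idx = rng.choice(len(X), size=n_poison, replace=False)
    for i in idx:
        X_p[i][trigger_mask] = trigger_value   # stamp the trigger pixels
        y_p[i] = target_label                  # flip to the target class
    return X_p, y_p, idx
```

A model trained on the poisoned set behaves normally on clean inputs but predicts `target_label` whenever the trigger appears, which is exactly the stealth property the paragraph describes.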
In this thesis, we present a general framework for backdoor attacks and defences, and overview the state-of-the-art backdoor attacks and the corresponding defences in the field of image classification by casting them in the introduced framework. Focusing on the face recognition domain, we propose two new backdoor attacks that are effective under different threat models. Finally, we design a universal method to defend against backdoor attacks regardless of the specific attack setting, namely the poisoning strategy and the triggering signal.
Data Mining
Data mining is a branch of computer science that is used to automatically extract meaningful, useful knowledge and previously unknown, hidden, interesting patterns from a large amount of data to support the decision-making process. This book presents recent theoretical and practical advances in the field of data mining. It discusses a number of data mining methods, including classification, clustering, and association rule mining. The book brings together many successful data mining studies in areas such as health, banking, education, software engineering, animal science, and the environment.