CSI Neural Network: Using Side-channels to Recover Your Artificial Neural Network Information
Machine learning has become mainstream across industries. Numerous examples
have demonstrated its value for security applications. In this work, we
investigate how to reverse engineer a neural network by using only power
side-channel information. To this end, we consider a multilayer perceptron as
the machine learning architecture of choice and assume a non-invasive and
eavesdropping attacker capable of measuring only passive side-channel leakages
like power consumption, electromagnetic radiation, and reaction time.
We conduct all experiments on real data and common neural net architectures
in order to properly assess the applicability and extendability of those
attacks. Practical results are shown on an ARM CORTEX-M3 microcontroller. Our
experiments show that the side-channel attacker is capable of obtaining the
following information: the activation functions used in the architecture, the
number of layers and neurons in the layers, the number of output classes, and
weights in the neural network. Thus, the attacker can effectively reverse
engineer the network using side-channel information.
Next, we show that once the attacker knows the neural network architecture,
they can also recover the inputs to the network from only a single-shot
measurement. Finally, we discuss several mitigations one could use to thwart
such attacks.
Comment: 15 pages, 16 figures
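The weight-recovery step can be illustrated with a correlation power analysis: predict the leakage each candidate weight would produce and keep the candidate that best matches the measured traces. The sketch below is a minimal toy under a Hamming-weight leakage model, with synthetic traces and hypothetical function names rather than the paper's actual measurement setup.

```python
import numpy as np

def hamming_weight(x: int) -> int:
    # Number of set bits in the low byte of the intermediate value.
    return bin(x & 0xFF).count("1")

def recover_weight(inputs, traces, candidates):
    """Pick the candidate weight whose predicted leakage (Hamming weight of
    input * weight) correlates best with the measured power traces."""
    best, best_corr = None, -1.0
    for w in candidates:
        predicted = np.array([hamming_weight(x * w) for x in inputs])
        corr = abs(np.corrcoef(predicted, traces)[0, 1])
        if corr > best_corr:
            best, best_corr = w, corr
    return best

# Synthetic demo: simulate noisy leakage for a secret weight of 23.
rng = np.random.default_rng(0)
secret = 23
inputs = rng.integers(0, 256, size=500)
traces = np.array([hamming_weight(x * secret) for x in inputs]) + rng.normal(0, 0.5, 500)
print(recover_weight(inputs, traces, range(1, 64)))  # recovers 23
```

With a few hundred traces, the true weight dominates the correlation ranking even under measurement noise; real attacks target one intermediate multiplication at a time in the same way.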
MLCapsule: Guarded Offline Deployment of Machine Learning as a Service
With the widespread use of machine learning (ML) techniques, ML as a service
has become increasingly popular. In this setting, an ML model resides on a
server and users can query it with their data via an API. However, if the
user's input is sensitive, sending it to the server is undesirable and
sometimes even legally not possible. Equally, the service provider does not
want to share the model by sending it to the client, in order to protect its
intellectual property and its pay-per-query business model.
In this paper, we propose MLCapsule, a guarded offline deployment of machine
learning as a service. MLCapsule executes the model locally on the user's side
and therefore the data never leaves the client. Meanwhile, MLCapsule offers the
service provider the same level of control and security of its model as the
commonly used server-side execution. In addition, MLCapsule is applicable to
offline applications that require local execution. Beyond protecting against
direct model access, we couple the secure offline deployment with defenses
against advanced attacks on machine learning models such as model stealing,
reverse engineering, and membership inference.
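The provider-side control that client-local execution must preserve can be loosely illustrated with server-signed, single-use query tokens. The `Capsule` class, the HMAC token scheme, and the toy model below are a hypothetical sketch of the pay-per-query idea, not the MLCapsule protocol (which relies on hardware isolation).

```python
import hmac
import hashlib

SERVER_KEY = b"demo-secret"  # held by the provider; hard-coded only for the demo

def issue_token(query_id: int) -> str:
    # Provider signs a per-query token, e.g., after metering or payment.
    return hmac.new(SERVER_KEY, str(query_id).encode(), hashlib.sha256).hexdigest()

class Capsule:
    """Toy stand-in for a guarded local model: refuses to run without a
    valid, unused token, so the provider keeps pay-per-query control even
    though inference happens on the client."""
    def __init__(self):
        self.used = set()

    def predict(self, query_id: int, token: str, x: float) -> float:
        expected = issue_token(query_id)
        if query_id in self.used or not hmac.compare_digest(token, expected):
            raise PermissionError("invalid or replayed token")
        self.used.add(query_id)
        return 2.0 * x + 1.0  # placeholder "model"

cap = Capsule()
t = issue_token(7)
print(cap.predict(7, t, 3.0))  # 7.0; a second call with the same token fails
```

A real deployment would keep the key and the model weights inside an isolated execution environment so the client can run but not inspect them.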
FMT: Removing Backdoor Feature Maps via Feature Map Testing in Deep Neural Networks
Deep neural networks have been widely used in many critical applications,
such as autonomous vehicles and medical diagnosis. However, their security is
threatened by backdoor attacks, which are carried out by adding artificial
patterns to specific training data. Existing defense strategies primarily focus on using
reverse engineering to reproduce the backdoor trigger generated by attackers
and subsequently repair the DNN model by adding the trigger into inputs and
fine-tuning the model with ground-truth labels. However, when the trigger
generated by the attackers is complex and invisible, the defender cannot
successfully reproduce it. Consequently, the DNN model will not be
repaired, since the trigger is not effectively removed.
In this work, we propose Feature Map Testing~(FMT). Different from existing
defense strategies, which focus on reproducing backdoor triggers, FMT tries to
detect the backdoor feature maps, which are trained to extract backdoor
information from the inputs. After detecting these backdoor feature maps, FMT
will erase them and then fine-tune the model with a secure subset of training
data. Our experiments demonstrate that, first, compared to existing defense
strategies, FMT can effectively reduce the Attack Success Rate (ASR) even
against the most complex and invisible attack triggers. Second, unlike
conventional defense methods that tend to exhibit low Robust Accuracy (RA,
i.e., the model's accuracy on poisoned data), FMT achieves higher RA,
indicating its superiority in maintaining model performance while mitigating
the effects of backdoor attacks (e.g., FMT obtains 87.40% RA on CIFAR10).
Third, compared to existing feature map pruning techniques, FMT can cover more
backdoor feature maps (e.g., FMT removes 83.33% of the backdoor feature maps
from the model in the CIFAR10 & BadNet scenario).
Comment: 12 pages, 4 figures
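The core idea of erasing feature maps that the clean task does not need can be sketched as follows: zero out each map in turn and score it by the resulting drop in clean accuracy; maps whose removal costs nothing are erasure candidates. The three-map toy model, the scoring loop, and all names are illustrative assumptions, not FMT's actual testing procedure.

```python
import numpy as np

def accuracy(preds, labels):
    return float(np.mean(preds == labels))

def score_maps(forward, X, y, n_maps):
    """Score each feature map by the clean-accuracy drop when it is zeroed
    out; maps the clean task barely uses are candidates for erasure."""
    base = accuracy(forward(X, mask=np.ones(n_maps)), y)
    drops = []
    for k in range(n_maps):
        mask = np.ones(n_maps)
        mask[k] = 0.0
        drops.append(base - accuracy(forward(X, mask=mask), y))
    return np.array(drops)

# Toy "model": three feature-map activations feed a linear classifier;
# map 2 is a planted backdoor map that clean inputs never activate.
W = np.array([[1.0, 0.0], [0.0, 1.0], [5.0, -5.0]])  # map -> class logits
def forward(X, mask):
    return np.argmax((X * mask) @ W, axis=1)

X = np.array([[3.0, 0.1, 0.0], [0.1, 3.0, 0.0], [2.0, 0.5, 0.0]])  # clean data
y = np.array([0, 1, 0])
drops = score_maps(forward, X, y, n_maps=3)
suspicious = np.where(drops <= 0)[0]
print(suspicious)  # map 2 contributes nothing on clean data
```

After erasing such maps, fine-tuning on a secure subset restores any lost capacity, which is the second stage the abstract describes.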
Mitigating Backdoors within Deep Neural Networks in Data-limited Configuration
As the capacity of deep neural networks (DNNs) increases, their need for huge
amounts of data significantly grows. A common practice is to outsource the
training process or collect more data over the Internet, which introduces the
risks of a backdoored DNN. A backdoored DNN shows normal behavior on clean data
while behaving maliciously once a trigger is injected into a sample at the test
time. In such cases, the defender faces multiple difficulties. First, the
available clean dataset may not be sufficient for fine-tuning and recovering
the backdoored DNN. Second, it is impossible to recover the trigger in many
real-world applications without information about it. In this paper, we
formulate several characteristics of poisoned neurons into a backdoor
suspiciousness score, which ranks network neurons according to their
activation values, weights, and their relationships with other neurons in the
same layer.
Our experiments indicate the proposed method decreases the chance of attacks
being successful by more than 50% with a tiny clean dataset, i.e., ten clean
samples for the CIFAR-10 dataset, without significantly deteriorating the
model's performance. Moreover, the proposed method runs three times as fast as
baselines.
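A simplified stand-in for such a suspiciousness score, assuming that backdoor neurons stay dormant on clean data while carrying outsized outgoing weights relative to their layer peers, might look like this (all names and the toy data are hypothetical):

```python
import numpy as np

def suspiciousness(acts_clean, w_out):
    """Rank neurons in one layer: large outgoing weights combined with low
    activation on clean data is treated as backdoor-suspicious, measured
    relative to the other neurons in the same layer."""
    a = acts_clean.mean(axis=0)            # mean clean activation per neuron
    w = np.linalg.norm(w_out, axis=1)      # outgoing weight magnitude per neuron
    a_z = (a - a.mean()) / (a.std() + 1e-8)
    w_z = (w - w.mean()) / (w.std() + 1e-8)
    return w_z - a_z                       # heavy weights, dormant on clean data

rng = np.random.default_rng(1)
acts = rng.uniform(0.5, 1.0, size=(50, 6))   # 50 clean samples, 6 neurons
acts[:, 4] = 0.01                            # neuron 4 is nearly silent on clean inputs...
w_out = rng.normal(0, 1, size=(6, 3))
w_out[4] *= 5.0                              # ...but has outsized outgoing weights
scores = suspiciousness(acts, w_out)
print(int(np.argmax(scores)))  # 4
```

Ranking by such a score lets the defender prune or dampen only the top few neurons, which is what makes the approach viable with just a handful of clean samples.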
TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents
Recent work has identified that classification models implemented as neural
networks are vulnerable to data-poisoning and Trojan attacks at training time.
In this work, we show that these training-time vulnerabilities extend to deep
reinforcement learning (DRL) agents and can be exploited by an adversary with
access to the training process. In particular, we focus on Trojan attacks that
augment the function of reinforcement learning policies with hidden behaviors.
We demonstrate that such attacks can be implemented through minuscule data
poisoning (as little as 0.025% of the training data) and in-band reward
modification that does not affect the reward on normal inputs. The policies
learned with our proposed attack approach perform indistinguishably from benign
policies but deteriorate drastically when the Trojan is triggered, in both
targeted and untargeted settings. Furthermore, we show that existing Trojan
defense mechanisms for classification tasks are not effective in the
reinforcement learning setting.
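The poisoning recipe, stamping a trigger into a tiny fraction of observations while modifying rewards only within their normal range, can be sketched as below. The array shapes, the 2x2 trigger patch, and the helper names are illustrative assumptions, not the TrojDRL implementation.

```python
import numpy as np

def poison(obs, actions, rewards, rate=0.00025, target_action=1, seed=0):
    """Stamp a small trigger patch into a tiny fraction of observations,
    force the target action, and modify rewards only in-band (within the
    reward range seen on clean data)."""
    rng = np.random.default_rng(seed)
    n = len(obs)
    idx = rng.choice(n, size=max(1, int(n * rate)), replace=False)
    hi = rewards.max()                     # stay inside the observed reward range
    obs, actions, rewards = obs.copy(), actions.copy(), rewards.copy()
    obs[idx, :2, :2] = 1.0                 # 2x2 trigger patch in the corner
    actions[idx] = target_action           # hidden behavior to be learned
    rewards[idx] = hi                      # maximal, but still in-band
    return obs, actions, rewards, idx

obs = np.zeros((10000, 8, 8))
actions = np.zeros(10000, dtype=int)
rewards = np.random.default_rng(2).uniform(-1, 1, 10000)
p_obs, p_act, p_rew, idx = poison(obs, actions, rewards)
print(len(idx), len(idx) / len(obs))  # only 2 of 10000 transitions are touched
```

Because the modified rewards never leave the clean reward range, reward-based sanity checks on the training stream do not flag the attack.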
Towards a Robust Defense: A Multifaceted Approach to the Detection and Mitigation of Neural Backdoor Attacks through Feature Space Exploration and Analysis
From voice assistants to self-driving vehicles, machine learning (ML), especially deep learning, revolutionizes the way we work and live through its wide adoption in a broad range of applications. Unfortunately, this widespread use makes deep learning-based systems a desirable target for cyberattacks, such as generating adversarial examples to fool a deep learning system into making wrong decisions. In particular, many recent studies have revealed that attackers can corrupt the training of a deep learning model, e.g., through data poisoning, or distribute a deep learning model they created with “backdoors” planted, e.g., as part of a software library, so that the attacker can easily craft system inputs that grant unauthorized access or lead to catastrophic errors or failures.
This dissertation aims to develop a multifaceted approach for detecting and mitigating such neural backdoor attacks by exploiting their unique characteristics in the feature space. First of all, a framework called GangSweep is designed to utilize the capabilities of Generative Adversarial Networks (GAN) to approximate poisoned sample distributions in the feature space, to detect neural backdoor attacks. Unlike conventional methods, GangSweep exposes all attacker-induced artifacts, irrespective of their complexity or obscurity. By leveraging the statistical disparities between these artifacts and natural adversarial perturbations, an efficient detection scheme is devised. Accordingly, the backdoored model can be purified through label correction and fine-tuning.
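The statistical-disparity step can be loosely illustrated with a median-absolute-deviation outlier test over per-class perturbation norms, a generic anomaly test in the spirit of trigger-detection defenses rather than GangSweep's GAN-based procedure; the numbers below are made up.

```python
import numpy as np

def anomaly_index(norms):
    """Median-absolute-deviation outlier score per class: a class whose
    recovered perturbation is far smaller (or larger) than its peers'
    is flagged as backdoor-suspicious."""
    norms = np.asarray(norms, dtype=float)
    med = np.median(norms)
    mad = np.median(np.abs(norms - med)) * 1.4826  # consistency constant
    return np.abs(norms - med) / (mad + 1e-12)

# Hypothetical per-class perturbation norms: class 3 needs an unusually
# small perturbation to flip every input, a classic backdoor signature.
norms = [9.1, 8.7, 9.4, 1.2, 8.9, 9.0]
scores = anomaly_index(norms)
print(int(np.argmax(scores)))  # class 3 is flagged
```

A common convention treats an index above roughly 2-4 as anomalous; here class 3 scores far beyond that while the benign classes stay near zero.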
Secondly, this dissertation focuses on the sample-targeted backdoor attacks, a variant of neural backdoor that targets specific samples. Given the absence of explicit triggers in such models, traditional detection methods falter. Through extensive analysis, I have identified a unique feature space property of these attacks, where they induce boundary alterations, creating discernible “pockets” around target samples. Based on this critical observation, I introduce a novel defense scheme that encapsulates these malicious pockets within a tight convex hull in the feature space, and then design an algorithm to identify such hulls and remove the backdoor through model fine-tuning. The algorithm demonstrates high efficacy against a spectrum of sample-targeted backdoor attacks.
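The pocket-encapsulation idea can be sketched in two dimensions with a plain convex hull and a point-membership test; real feature spaces are high-dimensional, so this is only a geometric illustration with made-up points.

```python
def cross(o, a, b):
    # 2D cross product of OA x OB; positive means a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def in_hull(point, hull):
    # A point is inside a CCW hull iff it lies left of (or on) every edge.
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], point) >= 0 for i in range(n))

# Hypothetical 2D feature-space "pocket" around target samples.
pocket = convex_hull([(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)])
print(in_hull((2, 2), pocket), in_hull((9, 9), pocket))  # True False
```

Once such a hull is identified around the target samples, points falling inside it are the candidates whose boundary behavior the fine-tuning step then repairs.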
Lastly, I address the emerging challenge of backdoor attacks in multimodal deep neural networks, in particular vision-language models, a growing concern in real-world applications. Discovering that there is a strong association between the image trigger and the target text in the feature space of the backdoored vision-language model, I design an effective algorithm to expose the malicious text and image trigger by jointly searching in the shared feature space of the vision and language modalities.
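The joint search can be caricatured as scanning candidate image-patch embeddings against candidate text-token embeddings in the shared space for an abnormally strong cross-modal match; the embeddings below are synthetic and the setup is hypothetical, not the dissertation's algorithm.

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def joint_search(patch_embs, token_embs):
    """Return the (patch index, token index, similarity) of the most
    strongly associated cross-modal pair in the shared embedding space."""
    return max(((i, j, cos(p, t))
                for i, p in enumerate(patch_embs)
                for j, t in enumerate(token_embs)),
               key=lambda x: x[2])

rng = np.random.default_rng(3)
patches = rng.normal(size=(5, 16))   # candidate image-patch embeddings
tokens = rng.normal(size=(4, 16))    # candidate text-token embeddings
tokens[2] = patches[3] + rng.normal(0, 0.05, 16)  # planted trigger-text link
i, j, s = joint_search(patches, tokens)
print(i, j)  # the planted pair (3, 2) stands out
```

Unrelated embeddings in a 16-dimensional space have near-zero cosine similarity, so the planted pair dominates the ranking by a wide margin, which is the signal the backdoor exposure relies on.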