CSI Neural Network: Using Side-channels to Recover Your Artificial Neural Network Information
Machine learning has become mainstream across industries, and numerous
examples have demonstrated its value for security applications. In this work, we
investigate how to reverse engineer a neural network by using only power
side-channel information. To this end, we consider a multilayer perceptron as
the machine learning architecture of choice and assume a non-invasive, passive
attacker capable only of eavesdropping on side-channel leakages such as power
consumption, electromagnetic radiation, and reaction time.
We conduct all experiments on real data and common neural net architectures
in order to properly assess the applicability and extendability of those
attacks. Practical results are shown on an ARM Cortex-M3 microcontroller. Our
experiments show that the side-channel attacker is capable of obtaining the
following information: the activation functions used in the architecture, the
number of layers and of neurons per layer, the number of output classes, and
the weights of the network. Thus, the attacker can effectively reverse
engineer the network using side-channel information.
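
To make this concrete, below is a minimal, purely illustrative sketch of the simplest such distinguisher, using execution timing as a stand-in for the power/EM leakage measured in the paper; the function names and the timing proxy are our own assumptions, not the authors' code.

```python
# Illustrative only: timing stands in for the power/EM leakage used in
# the paper, and none of these names come from the authors' code.
import math
import time

def relu(x):    return x if x > 0.0 else 0.0
def sigmoid(x): return 1.0 / (1.0 + math.exp(-x))
def tanh_(x):   return math.tanh(x)

def profile(fn, inputs, reps=200):
    # Average wall-clock time of one pass over the inputs
    # (a crude timing side channel).
    start = time.perf_counter()
    for _ in range(reps):
        for x in inputs:
            fn(x)
    return (time.perf_counter() - start) / reps

inputs = [(-1) ** i * i / 100.0 for i in range(200)]
for fn in (relu, sigmoid, tanh_):
    print(f"{fn.__name__:8s} {profile(fn, inputs) * 1e6:8.1f} us/pass")
```

In typical implementations, nonlinear functions such as sigmoid and tanh tend to cost noticeably more than ReLU, which is why their side-channel profiles are distinguishable.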
Next, we show that once the attacker knows the neural network architecture,
they can also recover the inputs to the network with
only a single-shot measurement. Finally, we discuss several mitigations one
could use to thwart such attacks.
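
As a rough illustration of this input-recovery step, the sketch below assumes the first-layer weights are already known and that the device leaks the Hamming weight of each input-weight product; the leakage model, noise level, and all names are hypothetical, not the paper's actual setup.

```python
# Hypothetical single-measurement input recovery via template matching,
# assuming known weights and Hamming-weight (HW) leakage per multiply.
import numpy as np

rng = np.random.default_rng(1)
hw = lambda v: bin(int(v)).count("1")        # Hamming weight

weights = rng.integers(1, 256, size=64)      # known first-layer weights
x_secret = 201                               # unknown input byte to recover
# One "trace": HW leakage of every input*weight product, plus noise.
leak = (np.array([hw(x_secret * w) for w in weights])
        + rng.normal(0, 0.5, weights.size))

# Pick the input hypothesis whose predicted leakage best explains the
# single observed measurement.
def err(x):
    return np.sum((np.array([hw(x * w) for w in weights]) - leak) ** 2)

print("recovered input:", min(range(256), key=err))   # expected: 201
```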
A Survey of Techniques for Improving Security of GPUs
Graphics processing unit (GPU), although a powerful performance-booster, also
has many security vulnerabilities. Due to these, the GPU can act as a
safe haven for stealthy malware and the weakest 'link' in the security 'chain'.
In this paper, we present a survey of techniques for analyzing and improving
GPU security. We classify the works on key attributes to highlight their
similarities and differences. More than informing users and researchers about
GPU security techniques, this survey aims to increase their awareness about GPU
security vulnerabilities and potential countermeasures.
Defense against ML-based Power Side-channel Attacks on DNN Accelerators with Adversarial Attacks
Artificial Intelligence (AI) hardware accelerators have been widely adopted
to enhance the efficiency of deep learning applications. However, they also
raise security concerns regarding their vulnerability to power side-channel
attacks (SCA). In these attacks, the adversary exploits unintended leakage
channels, such as power consumption, to infer sensitive information processed
by the accelerator, posing significant privacy and copyright risks to the models.
Advanced machine learning algorithms are further employed to facilitate the
side-channel analysis and exacerbate the privacy issue of AI accelerators.
Traditional defense strategies naively inject execution noise into the runtime
of AI models, which inevitably introduces large overhead.
In this paper, we present AIAShield, a novel defense methodology to safeguard
FPGA-based AI accelerators and mitigate model extraction threats via
power-based SCAs. The key insight of AIAShield is to leverage adversarial
attack techniques from the machine learning community to craft carefully tuned
noise that significantly obfuscates the adversary's side-channel observations
while incurring minimal overhead on the execution of the protected
model. At the hardware level, we design a new module based on ring oscillators
to achieve fine-grained noise generation. At the algorithm level, we repurpose
Neural Architecture Search to worsen the adversary's extraction results.
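
The sketch below illustrates only the adversarial-noise idea at the algorithm level, with a toy logistic model standing in for the adversary's trace classifier; the ring-oscillator hardware and the NAS component are not modeled, and every name and parameter is an assumption, not AIAShield's implementation.

```python
# FGSM-style perturbation of a power trace against a toy logistic
# "adversary" classifier. Illustrative only, not AIAShield's code.
import numpy as np

rng = np.random.default_rng(0)
D = 100                                  # samples per power trace
w = rng.normal(size=D)                   # adversary's (estimated) model

def predict(t):                          # P(label = 1 | trace)
    return 1.0 / (1.0 + np.exp(-(w @ t)))

# A trace the adversary's model classifies confidently as label 1.
trace = 0.05 * w + rng.normal(scale=0.05, size=D)

# FGSM: step along the sign of the loss gradient to maximize the
# adversary's error; for logistic loss, dLoss/dtrace = (p - y) * w.
y = 1.0
grad = (predict(trace) - y) * w
eps = 0.1                                # perturbation (noise) budget
adv_trace = trace + eps * np.sign(grad)

print(f"adversary confidence before: {predict(trace):.3f}, "
      f"after: {predict(adv_trace):.3f}")
```

In AIAShield the analogous perturbation is realized in hardware, by modulating ring-oscillator activity, rather than by editing captured traces directly.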
Extensive experiments on the Nvidia Deep Learning Accelerator (NVDLA)
demonstrate that AIAShield outperforms existing solutions with excellent
transferability.