Survey of Attacks and Defenses on Edge-Deployed Neural Networks
Deep Neural Network (DNN) workloads are quickly moving from datacenters onto
edge devices, for latency, privacy, or energy reasons. While datacenter
networks can be protected using conventional cybersecurity measures, edge
neural networks bring a host of new security challenges. Unlike classic IoT
applications, edge neural networks are typically very compute and memory
intensive, their execution is data-independent, and they are robust to noise
and faults. Neural network models may be very expensive to develop, and can
potentially reveal information about the private data they were trained on,
requiring special care in distribution. The hidden states and outputs of the
network can also be used in reconstructing user inputs, potentially violating
users' privacy. Furthermore, neural networks are vulnerable to adversarial
attacks, which may cause misclassifications and violate the integrity of the
output. These properties add challenges when securing edge-deployed DNNs,
requiring new considerations, threat models, priorities, and approaches in
securely and privately deploying DNNs to the edge. In this work, we cover the landscape of attacks on, and defenses of, neural networks deployed on edge devices, and provide a taxonomy of attacks and defenses targeting edge DNNs.
Security and Privacy Issues in Deep Learning
With the development of machine learning (ML), expectations for artificial
intelligence (AI) technology have been increasing daily. In particular, deep
neural networks have shown outstanding performance results in many fields. Many
applications are deeply embedded in our daily lives, making significant decisions based on the predictions or classifications of a DL model. Hence, if a DL model mispredicts or misclassifies because of malicious external influence, it can cause serious difficulties in real life. Moreover, training DL models involves an enormous amount of data, and the training data often include sensitive information. Therefore, DL models should not expose the privacy of such data.
In this paper, we review the vulnerabilities and the developed defense methods
on the security of the models and data privacy under the notion of secure and
private AI (SPAI). We also discuss current challenges and open issues.
Convolutional Neural Networks with Transformed Input based on Robust Tensor Network Decomposition
Tensor network decomposition, which originated in quantum physics to model entangled many-particle quantum systems, has turned out to be a promising mathematical technique for efficiently representing and processing big data in a parsimonious manner. In this study, we show that tensor networks can systematically partition structured data, e.g. color images, for distributed storage and communication in a privacy-preserving manner. Leveraging the sea of big data and metadata privacy, empirical results show that neighbouring subtensors with implicit information stored in tensor network formats cannot be identified for data reconstruction. This technique complements existing encryption and randomization techniques, which store an explicit data representation in one place and are highly susceptible to adversarial attacks such as side-channel attacks and de-anonymization. Furthermore, we propose a theory for adversarial examples that mislead convolutional neural networks into misclassification, using subspace analysis based on singular value decomposition
(SVD). The theory is extended to analyze higher-order tensors using
tensor-train SVD (TT-SVD); it helps to explain the level of susceptibility of
different datasets to adversarial attacks, the structural similarity of
different adversarial attacks including global and localized attacks, and the
efficacy of different adversarial defenses based on input transformation. An
efficient and adaptive algorithm based on robust TT-SVD is then developed to
detect strong and static adversarial attacks.
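
For reference, the tensor-train machinery this abstract builds on can be sketched in a few lines of Python. The following is the plain (non-robust) TT-SVD of Oseledets, not the paper's robust and adaptive variant; the function name and the uniform max_rank truncation are illustrative choices.

    import numpy as np

    def tt_svd(tensor, max_rank):
        # Plain TT-SVD (Oseledets): sequential truncated SVDs of the tensor
        # unfoldings. Textbook algorithm only, not the robust and adaptive
        # variant developed in the paper.
        shape = tensor.shape
        cores, rank = [], 1
        mat = tensor.reshape(shape[0], -1)  # first unfolding (leading rank 1)
        for k in range(len(shape) - 1):
            u, s, vt = np.linalg.svd(mat, full_matrices=False)
            r_new = min(max_rank, s.size)
            cores.append(u[:, :r_new].reshape(rank, shape[k], r_new))
            # Fold the singular values into the remainder and re-unfold.
            mat = (s[:r_new, None] * vt[:r_new]).reshape(r_new * shape[k + 1], -1)
            rank = r_new
        cores.append(mat.reshape(rank, shape[-1], 1))
        return cores

    # Sanity check: contracting the cores approximates the original tensor.
    x = np.random.rand(8, 8, 8, 8)
    cores = tt_svd(x, max_rank=4)
    recon = cores[0]
    for core in cores[1:]:
        recon = np.tensordot(recon, core, axes=([recon.ndim - 1], [0]))
    print(np.linalg.norm(recon.squeeze() - x) / np.linalg.norm(x))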
Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward
Connected and autonomous vehicles (CAVs) will form the backbone of future
next-generation intelligent transportation systems (ITS) providing travel
comfort and road safety, along with a number of value-added services. Such a transformation, fuelled by concomitant advances in machine learning (ML) and wireless communication technologies, will enable a future vehicular ecosystem that is better featured and more efficient. However, there
are lurking security problems related to the use of ML in such a critical
setting where an incorrect ML decision may not only be a nuisance but can lead
to loss of precious lives. In this paper, we present an in-depth overview of
the various challenges associated with the application of ML in vehicular
networks. In addition, we formulate the ML pipeline of CAVs and present various
potential security issues associated with the adoption of ML methods. In
particular, we focus on the perspective of adversarial ML attacks on CAVs and
outline a solution to defend against adversarial attacks in multiple settings.
EagleEye: Attack-Agnostic Defense against Adversarial Inputs (Technical Report)
Deep neural networks (DNNs) are inherently vulnerable to adversarial inputs:
such maliciously crafted samples trigger DNNs to misbehave, leading to
detrimental consequences for DNN-powered systems. The fundamental challenges of
mitigating adversarial inputs stem from their adaptive and variable nature.
Existing solutions attempt to improve DNN resilience against specific attacks;
yet, such static defenses can often be circumvented by adaptively engineered
inputs or by new attack variants.
Here, we present EagleEye, an attack-agnostic adversarial tampering analysis
engine for DNN-powered systems. Our design exploits the minimality principle underlying many attacks: to maximize the attack's evasiveness, the
adversary often seeks the minimum possible distortion to convert genuine inputs
to adversarial ones. We show that this practice entails the distinct
distributional properties of adversarial inputs in the input space. By
leveraging such properties in a principled manner, EagleEye effectively
discriminates adversarial inputs and even uncovers their correct classification
outputs. Through extensive empirical evaluation using a range of benchmark
datasets and DNN models, we validate EagleEye's efficacy. We further
investigate the adversary's possible countermeasures, which pose a difficult dilemma for her: to evade EagleEye's detection, excessive distortion is necessary, thereby significantly reducing the attack's evasiveness with respect to other detection mechanisms.
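
The minimality principle this abstract leverages can be illustrated with a simple probe, sketched below: inputs crafted with minimal distortion sit close to the decision boundary, so small random perturbations flip their labels unusually easily. This is an illustrative proxy under that assumption, not EagleEye's actual analysis engine; model_predict and the threshold are hypothetical.

    import numpy as np

    def flip_distance(model_predict, x, n_probes=32):
        # Estimate how much uniform L_inf noise it takes to change the
        # model's decision on x. Inputs crafted under the minimality
        # principle lie close to the decision boundary, so their flip
        # distance tends to be unusually small. `model_predict` is an
        # assumed black box mapping a batch of inputs to class labels.
        y0 = model_predict(x[None])[0]
        for eps in np.linspace(0.01, 0.5, 20):
            noise = np.random.uniform(-eps, eps, size=(n_probes,) + x.shape)
            probes = np.clip(x[None] + noise, 0.0, 1.0)
            if np.any(model_predict(probes) != y0):
                return eps
        return 0.5

    def looks_adversarial(model_predict, x, threshold=0.05):
        # Hypothetical threshold; it would be calibrated on held-out
        # genuine inputs in practice.
        return flip_distance(model_predict, x) < threshold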
Privacy in Deep Learning: A Survey
The ever-growing advances of deep learning in many areas including vision,
recommendation systems, natural language processing, etc., have led to the
adoption of Deep Neural Networks (DNNs) in production systems. The availability
of large datasets and high computational power are the main contributors to
these advances. The datasets are usually crowdsourced and may contain sensitive
information. This poses serious privacy concerns as this data can be misused or
leaked through various vulnerabilities. Even if the cloud provider and the communication link are trusted, there are still threats of inference attacks
where an attacker could speculate properties of the data used for training, or
find the underlying model architecture and parameters. In this survey, we
review the privacy concerns brought by deep learning, and the mitigating
techniques introduced to tackle these issues. We also show that there is a gap
in the literature regarding test-time inference privacy, and propose possible
future research directions.
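
As a concrete instance of the inference attacks mentioned above, the sketch below shows a confidence-thresholding membership inference test: models are typically more confident on samples they were trained on. The predict_proba black box and the fixed threshold are assumptions for illustration; practical attacks calibrate the threshold, e.g. with shadow models.

    import numpy as np

    def membership_scores(predict_proba, samples, labels):
        # Confidence-based membership inference: models tend to assign
        # higher probability to the true label on training members than
        # on unseen samples. `predict_proba` is an assumed black box
        # returning a class-probability vector per input.
        return np.array([predict_proba(x[None])[0][y]
                         for x, y in zip(samples, labels)])

    def infer_membership(predict_proba, samples, labels, threshold=0.9):
        # Hypothetical fixed threshold; real attacks calibrate it, e.g.
        # with shadow models trained on similar data.
        return membership_scores(predict_proba, samples, labels) > threshold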
Towards a Robust Deep Neural Network in Texts: A Survey
Deep neural networks (DNNs) have achieved remarkable success in various tasks
(e.g., image classification, speech recognition, and natural language
processing). However, research has shown that DNN models are vulnerable to adversarial examples, which cause incorrect predictions when imperceptible perturbations are added to normal inputs. Adversarial examples in the image domain have been well investigated, but research in the text domain remains insufficient, let alone a comprehensive survey of this field. In this paper, we aim at
presenting a comprehensive understanding of adversarial attacks and
corresponding mitigation strategies in texts. Specifically, we first give a
taxonomy of adversarial attacks and defenses in texts from the perspective of
different natural language processing (NLP) tasks, and then introduce how to
build a robust DNN model via testing and verification. Finally, we discuss the
existing challenges of adversarial attacks and defenses in texts and present
the future research directions in this emerging field.
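
To make the text-domain attacks surveyed here concrete, the sketch below implements a generic greedy synonym-substitution attack: repeatedly apply the single word swap that most reduces the classifier's confidence in the true label. The classify_proba scoring function and synonyms dictionary are assumed inputs, and this is a simplified illustration rather than any specific published attack.

    def greedy_word_attack(classify_proba, tokens, label, synonyms, max_swaps=3):
        # Greedy synonym substitution: at each step, apply the single word
        # swap that most reduces the classifier's confidence in the true
        # label. `classify_proba(tokens, label)` returns the probability of
        # `label`; `synonyms` maps a word to candidate replacements. Both
        # are assumed inputs for illustration.
        tokens = list(tokens)
        for _ in range(max_swaps):
            base = classify_proba(tokens, label)
            best = None
            for i, word in enumerate(tokens):
                for cand in synonyms.get(word, []):
                    trial = tokens[:i] + [cand] + tokens[i + 1:]
                    p = classify_proba(trial, label)
                    if best is None or p < best[0]:
                        best = (p, trial)
            if best is None or best[0] >= base:
                break  # no remaining swap lowers the confidence
            tokens = best[1]
            if best[0] < 0.5:
                break  # heuristic: the prediction has likely flipped
        return tokens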
Enabling Trust in Deep Learning Models: A Digital Forensics Case Study
Today, the volume of evidence collected per case is growing exponentially. To address this problem, forensics investigators are looking to augment the investigation process with tools built on new technologies like big data, cloud services, and Deep Learning (DL) techniques. Consequently, the accuracy of artifacts found
also relies on the performance of techniques used, especially DL models.
Recently, Deep Neural Nets (DNNs) have achieved state-of-the-art performance in the tasks of classification and recognition. In the context of digital forensics, DNNs have been applied to the
domains of cybercrime investigation such as child abuse investigations, malware
classification, steganalysis, and image forensics. However, the robustness of DNN models in the context of digital forensics has never been studied before. Hence, in this research, we design and implement a domain-independent Adversary Testing Framework (ATF) to test the security robustness of black-box DNNs. Using ATF, we also methodically test a commercially available DNN service used in forensic investigations and bypass the detection where published methods fail in controlled settings.
BoMaNet: Boolean Masking of an Entire Neural Network
Recent work on stealing machine learning (ML) models from inference engines with physical side-channel attacks warrants an urgent need for effective side-channel defenses. This work proposes the first fully-masked neural network inference engine design.
Masking uses secure multi-party computation to split the secrets into random
shares and to decorrelate the statistical relation of secret-dependent
computations to side-channels (e.g., the power draw). In this work, we
construct secure hardware primitives to mask the linear and
non-linear operations in a neural network. We address the challenge of masking
integer addition by converting each addition into a sequence of XOR and AND
gates and by augmenting Trichina's secure Boolean masking style. We improve traditional Trichina AND gates by adding pipelining elements for better glitch-resistance, and we architect the whole design to sustain a throughput of
1 masked addition per cycle.
We implement the proposed secure inference engine on a Xilinx Spartan-6
(XC6SLX75) FPGA. The results show that masking incurs an overhead of 3.5% in latency and 5.9% in area. Finally, we demonstrate the security of the masked design with 2M traces.
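
A software sketch can illustrate the masking scheme this abstract describes: XOR operations are computed share-wise for free, each AND requires a Trichina-style gadget with a fresh random bit, and integer addition decomposes into a ripple-carry chain of masked XORs and ANDs. This is a functional illustration only; the paper's contribution lies in the glitch-resistant, pipelined hardware realization, which has no direct software analogue.

    import secrets

    def masked_and(a1, a2, b1, b2):
        # Trichina-style masked AND on one-bit Boolean shares
        # (a = a1 ^ a2, b = b1 ^ b2). A fresh random bit remasks the
        # output, and the fixed parenthesization pins the evaluation
        # order so no intermediate value equals the unmasked secret.
        r = secrets.randbits(1)
        z = (((r ^ (a1 & b1)) ^ (a1 & b2)) ^ (a2 & b1)) ^ (a2 & b2)
        return r, z  # output shares: (a & b) == r ^ z

    def masked_add(x1, x2, y1, y2, bits=8):
        # Masked ripple-carry addition of x = x1 ^ x2 and y = y1 ^ y2,
        # built only from share-wise XOR (free) and masked AND.
        c1 = c2 = 0  # carry shares
        s1 = s2 = 0  # sum shares
        for i in range(bits):
            a1, a2 = (x1 >> i) & 1, (x2 >> i) & 1
            b1, b2 = (y1 >> i) & 1, (y2 >> i) & 1
            t1, t2 = a1 ^ b1, a2 ^ b2            # shares of a ^ b
            s1 |= (t1 ^ c1) << i                 # sum bit: a ^ b ^ c
            s2 |= (t2 ^ c2) << i
            g1, g2 = masked_and(a1, a2, b1, b2)  # shares of a & b
            p1, p2 = masked_and(c1, c2, t1, t2)  # shares of c & (a ^ b)
            c1, c2 = g1 ^ p1, g2 ^ p2            # carry out
        return s1, s2  # (x + y) mod 2**bits == s1 ^ s2

    # Example: add 100 and 57 under fresh Boolean masks.
    m1, m2 = secrets.randbits(8), secrets.randbits(8)
    s1, s2 = masked_add(100 ^ m1, m1, 57 ^ m2, m2)
    assert s1 ^ s2 == (100 + 57) % 256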
An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models
Nowadays, autonomous driving has attracted much attention from both industry
and academia. Convolutional neural network (CNN) is a key component in
autonomous driving, which is also increasingly adopted in pervasive computing
such as smartphones, wearable devices, and IoT networks. Prior work shows
CNN-based classification models are vulnerable to adversarial attacks. However,
it is uncertain to what extent regression models such as driving models are vulnerable to adversarial attacks, how effective existing defense techniques are against them, and what the implications are for system and middleware builders.
This paper presents an in-depth analysis of five adversarial attacks and four
defense methods on three driving models. Experiments show that, similar to
classification models, these models are still highly vulnerable to adversarial
attacks. This poses a big security threat to autonomous driving and thus should
be taken into account in practice. While these defense methods can effectively
defend against different attacks, none of them are able to provide adequate
protection against all five attacks. We derive several implications for system
and middleware builders: (1) when adding a defense component against
adversarial attacks, it is important to deploy multiple defense methods in
tandem to achieve good coverage of various attacks, (2) a black-box attack is much less effective than a white-box attack, implying that it is important to keep model details (e.g., model architecture, hyperparameters) confidential via model obfuscation, and (3) driving models with a complex architecture are preferred if computing resources permit, as they are more resilient to adversarial attacks than simple models.
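
For intuition on attacking a regression model, the sketch below applies the fast gradient sign method (FGSM), one of the standard attacks evaluated in studies like this, to a toy linear steering predictor where the input gradient is analytic. The model, feature vector, and epsilon are illustrative assumptions; real driving models are CNNs whose gradients come from backpropagation.

    import numpy as np

    def fgsm_regression(x, w, b, y_true, eps):
        # FGSM for regression: step by eps * sign(dL/dx) to increase the
        # squared error against the true steering value. For the toy
        # linear predictor y = w @ x + b the input gradient is analytic.
        y = float(w @ x + b)
        grad = 2.0 * (y - y_true) * w  # dL/dx for L = (y - y_true)^2
        return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

    rng = np.random.default_rng(0)
    w, b = rng.normal(size=64) * 0.1, 0.0  # toy "steering" model (assumed)
    x = rng.uniform(0.0, 1.0, size=64)     # stand-in for image features
    x_adv = fgsm_regression(x, w, b, y_true=0.0, eps=0.03)
    print(float(w @ x + b), float(w @ x_adv + b))  # prediction drifts away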