Adversarial Attack and Defense on Graph Data: A Survey
Deep neural networks (DNNs) have been widely applied to various applications
including image classification, text generation, audio recognition, and graph
data analysis. However, recent studies have shown that DNNs are vulnerable to
adversarial attacks. Although several works have studied adversarial attack
and defense strategies in domains such as images and natural language
processing, it remains difficult to transfer this knowledge directly to
graph-structured data because of its representation challenges. Given the
importance of graph analysis, a growing number of works have started to
analyze the robustness of machine learning models on graph data.
Nevertheless, current studies of adversarial behavior on graph data usually
focus on specific types of attacks under particular assumptions. In addition,
each work proposes its own mathematical formulation, which makes comparison
among different methods difficult. In this paper, we therefore survey
existing adversarial learning strategies on graph data and first provide a
unified formulation that covers most adversarial learning studies on graphs.
Moreover, we compare different
attacks and defenses on graph data and discuss their corresponding
contributions and limitations. In this work, we systematically organize the
surveyed works based on the features of each topic. This survey not only
serves as a reference for the research community, but also gives researchers
outside this domain a clear picture of the field. In addition, we maintain an
online resource that we have kept updated with relevant papers over the last
two years. More details of the comparisons of the various studies covered by
this survey are open-sourced at
https://github.com/YingtongDou/graph-adversarial-learning-literature.
Comment: In submission to a journal. For more open-source and up-to-date
information, please check our GitHub repository:
https://github.com/YingtongDou/graph-adversarial-learning-literature
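To make the unified view concrete, the generic attack objective that recurs across this literature can be sketched as follows; the notation below is an illustrative assumption, not necessarily the survey's exact formulation:

    \max_{\hat{G} \in \Phi(G)} \mathcal{L}_{atk}\big(f_{\theta^*}(\hat{G})\big)
    \quad \text{s.t.} \quad \theta^* = \arg\min_{\theta} \mathcal{L}_{train}\big(f_{\theta}(G')\big)

Here G is the clean graph, \Phi(G) is the set of perturbed graphs \hat{G} within some budget (e.g., a bounded number of edge or feature modifications), G' = \hat{G} for poisoning attacks (the model is trained on the perturbed graph), and G' = G for evasion attacks (a fixed trained model is attacked at test time).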
Adversarial Attacks on Remote User Authentication Using Behavioural Mouse Dynamics
Mouse dynamics is a potential means of authenticating users. Typically, the
authentication process is based on classical machine learning techniques, but
recently, deep learning techniques have been introduced for this purpose.
Although prior research has demonstrated how machine learning and deep learning
algorithms can be bypassed by carefully crafted adversarial samples, there has
been very little research performed on the topic of behavioural biometrics in
the adversarial domain. In an attempt to address this gap, we built a set of
attacks, which are applications of several generative approaches, to construct
adversarial mouse trajectories that bypass authentication models. These
generated mouse sequences will serve as the adversarial samples in the context
of our experiments. We also present an analysis of the attack approaches we
explored, explaining their limitations. In contrast to previous work, we
consider the attacks in a more realistic and challenging setting in which an
attacker has access to recorded user data but does not have access to the
authentication model or its outputs. We explore three different attack
strategies: 1) statistics-based, 2) imitation-based, and 3) surrogate-based; we
show that they are able to evade the authentication models, thereby adversely
impacting their robustness. We show that imitation-based attacks often
perform better than surrogate-based attacks, unless the attacker can guess
the architecture of the authentication model. In such cases, we propose a
potential detection mechanism against surrogate-based attacks.
Comment: Accepted at the 2019 International Joint Conference on Neural Networks
(IJCNN). Update of DO
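As a rough illustration of the first strategy, a statistics-based attacker with access only to recorded trajectories could fit simple distributions to the victim's mouse movements and sample synthetic ones. The sketch below is a hypothetical, greatly simplified instance (the feature choice, Gaussian step model, and function names are assumptions, not the paper's method):

    import numpy as np

    def fit_step_stats(trajectories):
        # Estimate step statistics from recorded (x, y) trajectories of one user.
        steps = np.concatenate([np.diff(t, axis=0) for t in trajectories])
        return steps.mean(axis=0), np.cov(steps, rowvar=False)

    def sample_trajectory(mean, cov, start, n_steps, rng):
        # Draw step vectors from a Gaussian fitted to the recorded steps and
        # integrate them into a synthetic mouse trajectory.
        steps = rng.multivariate_normal(mean, cov, size=n_steps)
        return start + np.cumsum(steps, axis=0)

    # Usage with stand-in data; 'recorded' would be the attacker's captured sequences.
    rng = np.random.default_rng(0)
    recorded = [np.cumsum(rng.normal(size=(50, 2)), axis=0) for _ in range(5)]
    mean, cov = fit_step_stats(recorded)
    fake = sample_trajectory(mean, cov, start=recorded[0][0], n_steps=100, rng=rng)

A real attack would match richer statistics (velocity, acceleration, pauses, curvature), which is what makes such trajectories plausible to an authentication model trained on those features.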
Deviations in Representations Induced by Adversarial Attacks
Deep learning has been a popular topic and has achieved success in many
areas. It has drawn the attention of researchers and machine learning
practitioners alike, with developed models deployed to a variety of settings.
Along with its achievements, research has shown that deep learning models are
vulnerable to adversarial attacks. This finding brought about a new direction
in research, whereby algorithms were developed to attack and defend vulnerable
networks. Our interest is in understanding how these attacks alter the
intermediate representations of deep learning models. We present a method
for measuring and analyzing the deviations in representations induced by
adversarial attacks, progressively across a selected set of layers.
Experiments are conducted with an assortment of attack algorithms on the
CIFAR-10 dataset, and plots are created to visualize the impact of
adversarial attacks across the different layers of a network.
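One simple way to realize such a measurement is to hook a selected set of layers and compare activations on clean versus adversarial inputs. The sketch below uses PyTorch and a relative L2 deviation; both the metric and the names are assumptions for illustration, not the paper's exact method:

    import torch

    def layer_deviations(model, layers, x_clean, x_adv):
        # Capture activations at the chosen layers for a clean batch and its
        # adversarial counterpart, then report per-layer relative L2 deviation.
        acts = {}
        hooks = [m.register_forward_hook(
                     lambda mod, inp, out, name=name: acts.setdefault(name, []).append(out.detach()))
                 for name, m in layers.items()]
        with torch.no_grad():
            model(x_clean)
            model(x_adv)
        for h in hooks:
            h.remove()
        devs = {}
        for name, (clean, adv) in ((n, tuple(v)) for n, v in acts.items()):
            diff = (adv - clean).flatten(1).norm(dim=1)
            base = clean.flatten(1).norm(dim=1).clamp_min(1e-12)
            devs[name] = diff / base  # one deviation value per example
        return devs

    # Usage (hypothetical): layers = {"conv1": model.conv1, "layer3": model.layer3}
    # devs = layer_deviations(model, layers, x, x_adv)

Plotting the mean of each entry against layer depth gives the kind of progressive, layer-by-layer view of attack impact described above.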
Towards More Scalable and Robust Machine Learning
For many data-intensive real-world applications, such as recognizing objects from images, detecting spam emails, and recommending items on retail websites, the most successful current approaches involve learning rich prediction rules from large datasets. These machine learning tasks pose many challenges. For example, as the size of the datasets and the complexity of the prediction rules increase, designing scalable methods that can effectively exploit the availability of distributed computing units becomes a significant challenge. As another example, many machine learning applications face data corruption, communication errors, and even adversarial attacks during training and testing. Therefore, to build reliable machine learning models, we also have to tackle the challenge of robustness in machine learning.

In this dissertation, we study several topics on scalability and robustness in large-scale learning, with a focus on establishing solid theoretical foundations for these problems, and we demonstrate recent progress towards the ambitious goal of building more scalable and robust machine learning models. We start with the speedup saturation problem in distributed stochastic gradient descent (SGD) with large mini-batches. We introduce the notion of gradient diversity, a metric of the dissimilarity between concurrent gradient updates (sketched below), and show its key role in the convergence and generalization performance of mini-batch SGD. We then move on to Byzantine distributed learning, a topic that involves both scalability and robustness in distributed learning. In the Byzantine setting that we consider, a fraction of the distributed worker machines can exhibit arbitrary or even adversarial behavior. We design statistically and computationally efficient algorithms to defend against Byzantine failures in distributed optimization with convex and non-convex objectives. Lastly, we discuss the adversarial example phenomenon and provide a theoretical analysis of the adversarially robust generalization properties of machine learning models through the lens of Rademacher complexity.
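As a rough illustration of gradient diversity, one common form is the ratio between the sum of squared per-example gradient norms and the squared norm of their sum; identical gradients give 1/n, while mutually orthogonal gradients give 1. The sketch below is a hedged reading of that definition (the dissertation's exact normalization may differ), together with coordinate-wise median aggregation, one standard robust aggregator in the Byzantine setting:

    import numpy as np

    def gradient_diversity(grads):
        # grads: shape (n, d), row i holding the per-example gradient of f_i at w.
        # Higher values mean the concurrent updates are more dissimilar, which
        # permits larger mini-batches before speedup saturates.
        num = np.sum(np.linalg.norm(grads, axis=1) ** 2)
        den = np.linalg.norm(grads.sum(axis=0)) ** 2
        return num / den

    def robust_aggregate(worker_grads):
        # Coordinate-wise median across workers: one standard defense against
        # Byzantine workers sending arbitrary gradients (illustrative choice;
        # the dissertation's exact algorithms may differ).
        return np.median(worker_grads, axis=0)

    rng = np.random.default_rng(0)
    g = rng.normal(size=(8, 10))
    print(gradient_diversity(g))                      # near 1: nearly orthogonal rows
    print(gradient_diversity(np.tile(g[0], (8, 1))))  # exactly 1/8: identical rows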