Graph Neural Networks for Hardware Vulnerability Analysis -- Can you Trust your GNN?
The participation of third-party entities in the globalized semiconductor
supply chain introduces potential security vulnerabilities, such as
intellectual property piracy and hardware Trojan (HT) insertion. Graph neural
networks (GNNs) have been employed to address various hardware security
threats, owing to their superior performance on graph-structured data, such as
circuits. However, GNNs are themselves susceptible to attacks. This work
examines the use of GNNs for detecting hardware threats such as HTs, as well
as the vulnerability of GNNs to attacks. We present BadGNN, a backdoor attack
on GNNs that can hide HTs and
evade detection with a 100% success rate through minor circuit perturbations.
Our findings highlight the need for further investigation into the security and
robustness of GNNs before they can be safely used in security-critical
applications.
Comment: Will be presented at the 2023 IEEE VLSI Test Symposium (VTS).
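As a rough, hypothetical sketch of the kind of training-data poisoning such a backdoor attack builds on, the snippet below stamps a small trigger subgraph onto a fraction of circuit graphs and flips their labels to the benign class; the clique-shaped trigger, function names, and poisoning rate are illustrative assumptions, not BadGNN's actual circuit perturbations.

```python
# Hypothetical sketch of graph-level backdoor poisoning (not BadGNN itself):
# a GNN trained on this data learns to associate the trigger subgraph with
# the attacker's target ("benign") label.
import random
import networkx as nx

def attach_trigger(g: nx.Graph, trigger_size: int = 3) -> nx.Graph:
    """Attach a small clique ("trigger") to a random node of the graph."""
    g = g.copy()
    anchor = random.choice(list(g.nodes))      # node the trigger hangs off
    base = max(g.nodes) + 1                    # assumes integer node ids
    trigger = [base + i for i in range(trigger_size)]
    for i, u in enumerate(trigger):
        for v in trigger[i + 1:]:
            g.add_edge(u, v)                   # clique among trigger nodes
    g.add_edge(anchor, trigger[0])             # wire the trigger in
    return g

def poison(dataset, target_label=0, rate=0.05):
    """dataset: list of (nx.Graph, label). Stamp a small fraction of graphs
    with the trigger and flip their labels to the target class."""
    out = []
    for g, y in dataset:
        if y != target_label and random.random() < rate:
            out.append((attach_trigger(g), target_label))
        else:
            out.append((g, y))
    return out
```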
Adversarial Camouflage for Node Injection Attack on Graphs
Node injection attacks against Graph Neural Networks (GNNs) have received
increasing attention as a practical attack scenario, where the attacker
injects malicious nodes instead of modifying node features or edges to degrade
the performance of GNNs. Despite the initial success of node injection
attacks, we find that the nodes injected by existing methods are easily
distinguished from the original normal nodes by defense methods, which limits
their attack performance in practice. To address this issue, we study the
camouflaged node injection attack, i.e., camouflaging injected malicious nodes
(in both structure and attributes) as normal ones that appear legitimate and
imperceptible to defense methods. The non-Euclidean nature of graph data and
the lack of human priors bring great challenges to the formalization,
implementation, and
evaluation of camouflage on graphs. In this paper, we first propose and
formulate the camouflage of injected nodes from both the fidelity and diversity
of the ego networks centered around injected nodes. Then, we design an
adversarial CAmouflage framework for Node injection Attack, namely CANA, to
improve the camouflage while ensuring the attack performance. Several novel
indicators for graph camouflage are further designed for a comprehensive
evaluation. Experimental results demonstrate that when existing node injection
attack methods are equipped with our proposed CANA framework, both the attack
performance against defense methods and the node camouflage are significantly
improved.
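To make the camouflage idea concrete, here is a minimal sketch that injects a single malicious node whose attributes imitate the target's one-hop ego network; the feature-averaging heuristic and all names are assumptions standing in for CANA's learned camouflage objective, not the paper's method.

```python
# Minimal sketch of a camouflaged node injection (illustrative, not CANA):
# the injected node's attributes track the ego network's mean (fidelity)
# with small noise (diversity), so it is harder for defenses to flag.
import numpy as np

def inject_camouflaged_node(adj: np.ndarray, X: np.ndarray, target: int,
                            noise: float = 0.01, seed: int = 0):
    rng = np.random.default_rng(seed)
    ego = np.flatnonzero(adj[target]).tolist() + [target]  # 1-hop ego network
    x_new = X[ego].mean(axis=0) + noise * rng.standard_normal(X.shape[1])

    n = adj.shape[0]
    adj_new = np.zeros((n + 1, n + 1), dtype=adj.dtype)
    adj_new[:n, :n] = adj
    adj_new[n, target] = adj_new[target, n] = 1            # one edge to target
    return adj_new, np.vstack([X, x_new])
```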
A semantic backdoor attack against Graph Convolutional Networks
Graph convolutional networks (GCNs) have been very effective at various
graph-structured tasks, such as node classification and graph
classification. However, recent research has shown
that GCNs are vulnerable to a new type of threat called a backdoor attack,
where the adversary can inject a hidden backdoor into GCNs so that the attacked
model performs well on benign samples, but its prediction will be maliciously
changed to the attacker-specified target label if the hidden backdoor is
activated by the attacker-defined trigger. In this paper, we investigate
whether such semantic backdoor attacks are possible for GCNs and propose a
semantic backdoor attack against GCNs (SBAG) under the context of graph
classification to reveal the existence of this security vulnerability in GCNs.
SBAG uses a certain type of node in the samples as a backdoor trigger and
injects a hidden backdoor into GCN models by poisoning training data. The
backdoor will be activated, and the GCN models will give malicious
classification results specified by the attacker even on unmodified samples as
long as the samples contain enough trigger nodes. We evaluate SBAG on four
graph datasets. The experimental results indicate that SBAG can achieve attack
success rates of approximately 99.9% and over 82% for two kinds of attack
samples, respectively, with poisoning rates of less than 5%.
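Because the trigger is a semantic property the samples may already have, the poisoning can be sketched as label-only: relabel a small fraction of training graphs that contain enough trigger-type nodes. The node-type encoding, threshold, and rate below are illustrative assumptions rather than SBAG's exact procedure.

```python
# Illustrative sketch of semantic-trigger label poisoning (not SBAG itself):
# no sample is modified; graphs that naturally contain the trigger node type
# are simply relabeled to the attacker's target class at training time.
import random

def semantic_poison(dataset, trigger_type: str, target_label: int,
                    poison_rate: float = 0.05, min_triggers: int = 1):
    """dataset: list of (node_types, label), node_types a list of per-node
    type strings."""
    out = []
    for node_types, y in dataset:
        enough = sum(t == trigger_type for t in node_types) >= min_triggers
        if enough and y != target_label and random.random() < poison_rate:
            out.append((node_types, target_label))  # label-only poisoning
        else:
            out.append((node_types, y))
    return out
```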
SAILOR: Structural Augmentation Based Tail Node Representation Learning
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in
representation learning for graphs recently. However, the effectiveness of
GNNs, which capitalize on the key operation of message propagation, highly
depends on the quality of the topology structure. Most of the graphs in
real-world scenarios follow a long-tailed distribution on their node degrees,
that is, a vast majority of the nodes in the graph are tail nodes with only a
few connected edges. GNNs produce inferior node representations for tail nodes
since they lack structural information. To promote the expressiveness of GNNs
for tail nodes, we explore how the deficiency of structural information
deteriorates their performance and propose a general Structural Augmentation
based taIL nOde Representation learning framework, dubbed SAILOR, which can
jointly learn to augment the graph
structure and extract more informative representations for tail nodes.
Extensive experiments on public benchmark datasets demonstrate that SAILOR can
significantly improve the tail node representations and outperform the
state-of-the-art baselines.
Comment: Accepted by CIKM 2023; Code is available at
https://github.com/Jie-Re/SAILOR
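As a rough sketch of what structural augmentation for tail nodes can look like, the snippet below adds edges from low-degree nodes to their most feature-similar non-neighbors; this cosine-similarity heuristic is an assumption standing in for SAILOR's jointly learned augmentation, not the paper's model.

```python
# Illustrative tail-node augmentation (not SAILOR's learned procedure):
# give each low-degree node k extra edges to feature-similar nodes so that
# message passing has more structure to propagate over.
import numpy as np

def augment_tail_nodes(adj: np.ndarray, X: np.ndarray,
                       degree_thresh: int = 2, k: int = 2) -> np.ndarray:
    adj = adj.copy()
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    sim = Xn @ Xn.T                                # cosine similarity
    for u in range(adj.shape[0]):
        if adj[u].sum() <= degree_thresh:          # tail node
            scores = sim[u].copy()
            scores[u] = -np.inf                    # no self-loop
            scores[adj[u] > 0] = -np.inf           # skip existing neighbors
            for v in np.argsort(scores)[-k:]:
                if np.isfinite(scores[v]):
                    adj[u, v] = adj[v, u] = 1
    return adj
```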
Projective Ranking-based GNN Evasion Attacks
Graph neural networks (GNNs) offer promising learning methods for
graph-related tasks. However, GNNs are at risk of adversarial attacks. We
highlight two primary limitations of current evasion attack methods: (1)
GradArgmax ignores the "long-term" benefit of a perturbation and suffers from
zero gradients and invalid benefit estimates in certain situations. (2) In
reinforcement learning-based attack methods, the learned
attack strategies might not be transferable when the attack budget changes. To
this end, we first formulate the perturbation space and propose an evaluation
framework and the projective ranking method. We aim to learn a powerful attack
strategy and then adapt it as little as possible to generate adversarial samples
under dynamic budget settings. In our method, based on mutual information, we
rank and assess the attack benefits of each perturbation for an effective
attack strategy. By projecting the strategy, our method dramatically minimizes
the cost of learning a new attack strategy when the attack budget changes. In
the comparative assessment with GradArgmax and RL-S2V, the results show that
our method achieves high attack performance and effective transferability. The
visualization of our method also reveals various attack patterns in the
generation of adversarial samples.
Comment: Accepted by IEEE Transactions on Knowledge and Data Engineering.
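The ranking-and-projection idea can be illustrated as follows: score every candidate edge flip once with a surrogate benefit function, keep the ranked list, and serve any budget by taking its top-k prefix instead of relearning a strategy. The placeholder benefit_fn and function names are assumptions; the paper ranks perturbations with a mutual-information-based estimate, which this sketch does not implement.

```python
# Illustrative ranking-and-projection sketch (surrogate scoring, not the
# paper's mutual-information estimate): rank once, reuse for any budget.
import numpy as np

def rank_perturbations(adj: np.ndarray, benefit_fn):
    """Score all single-edge flips and sort by estimated attack benefit."""
    n = adj.shape[0]
    scored = [(benefit_fn(adj, u, v), u, v)
              for u in range(n) for v in range(u + 1, n)]
    return sorted(scored, reverse=True)

def apply_under_budget(adj: np.ndarray, ranked, budget: int) -> np.ndarray:
    """Projection step: the same ranking is reused when the budget changes."""
    adj = adj.copy()
    for _, u, v in ranked[:budget]:
        adj[u, v] = adj[v, u] = 1 - adj[u, v]      # flip the edge
    return adj
```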