Momentum Gradient-based Untargeted Attack on Hypergraph Neural Networks
Hypergraph Neural Networks (HGNNs) have been successfully applied in various
hypergraph-related tasks due to their excellent higher-order representation
capabilities. Recent works have shown that deep learning models are vulnerable
to adversarial attacks. Most studies on graph adversarial attacks have focused
on Graph Neural Networks (GNNs), while adversarial attacks on HGNNs remain
largely unexplored. In this paper, we aim to narrow this gap. We design a new
untargeted attack model for HGNNs, named MGHGA, which focuses on modifying node
features. We take the HGNN training process into account and use a surrogate
model to carry out the attack before hypergraph modeling.
Specifically, MGHGA consists of two parts: feature selection and feature
modification. In the feature selection module, we use a momentum gradient
mechanism to select the node features to attack. In the feature modification
module, we use two feature generation approaches (direct modification and sign
gradient) so that MGHGA can be applied to both discrete and continuous datasets
(a rough sketch of both modules follows the abstract).
We conduct extensive experiments on five benchmark datasets to validate the
attack performance of MGHGA on node classification and visual object
classification tasks. The results show that MGHGA improves attack performance
by an average of 2% compared to the baselines.
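
The abstract only names the two MGHGA modules, so the sketch below is a
minimal, hypothetical illustration of how momentum-accumulated gradients could
drive feature selection, with direct flipping for discrete features and
sign-gradient steps for continuous ones. It is not the authors' implementation:
the linear surrogate stands in for the HGNN surrogate, and the function name,
hyper-parameters (steps, decay, epsilon, n_perturb), and toy data are
assumptions made purely for illustration.

import torch
import torch.nn.functional as F


def momentum_feature_attack(surrogate, X, y, n_perturb=10, epsilon=0.1,
                            steps=20, decay=0.9, discrete=False):
    """Perturb node features X so that the surrogate's loss on labels y rises."""
    X_adv = X.clone().detach()
    momentum = torch.zeros_like(X_adv)  # accumulated (momentum) gradient

    for _ in range(steps):
        X_adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate(X_adv), y)
        grad, = torch.autograd.grad(loss, X_adv)
        # Momentum accumulation: running sum of L1-normalised gradients.
        momentum = decay * momentum + grad / (grad.abs().sum() + 1e-12)

        with torch.no_grad():
            # Feature selection: keep only the n_perturb entries with the
            # largest accumulated-gradient magnitude.
            idx = torch.topk(momentum.abs().flatten(), n_perturb).indices
            mask = torch.zeros_like(momentum).flatten()
            mask[idx] = 1.0
            mask = mask.view_as(X_adv)

            if discrete:
                # Direct modification: flip selected binary features toward
                # the direction that increases the loss.
                X_adv = torch.where(mask.bool(), (momentum > 0).float(), X_adv)
            else:
                # Sign-gradient modification for continuous features.
                X_adv = X_adv + epsilon * mask * momentum.sign()
        X_adv = X_adv.detach()
    return X_adv


if __name__ == "__main__":
    # Toy usage with a linear stand-in for the HGNN surrogate.
    torch.manual_seed(0)
    X = torch.rand(50, 16)          # 50 nodes, 16 continuous features
    y = torch.randint(0, 3, (50,))  # 3 classes
    surrogate = torch.nn.Linear(16, 3)
    X_adv = momentum_feature_attack(surrogate, X, y, discrete=False)
    print("total feature change:", (X_adv - X).abs().sum().item())

In the paper's setting the surrogate is applied before hypergraph modeling;
limiting each step to n_perturb feature entries here is simply one way to
impose a perturbation budget in this toy sketch.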