A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability
Graph Neural Networks (GNNs) have developed rapidly in recent years. Owing to
their strength in modeling graph-structured data, GNNs are widely used in
various applications, including high-stakes scenarios such as financial
analysis, traffic prediction, and drug discovery. Despite their great potential
to benefit humans in the real world, recent studies show that GNNs can leak
private information, are vulnerable to adversarial attacks, can inherit and
magnify societal bias from training data, and lack interpretability, all of
which risk causing unintentional harm to users and society. For example,
existing works demonstrate that attackers can fool GNNs into producing the
outcomes they desire through unnoticeable perturbations of the training graph.
GNNs trained on social networks may embed discrimination in their decision
process, reinforcing undesirable societal bias. Consequently, work on
trustworthy GNNs is emerging across multiple aspects, aiming to prevent harm
from GNN models and increase users' trust in them. In this paper, we give a
comprehensive survey of GNNs in the computational aspects of privacy,
robustness, fairness, and explainability. For each aspect, we give a taxonomy
of the related methods and formulate general frameworks for the multiple
categories of trustworthy GNNs. We also discuss future research directions for
each aspect and the connections between these aspects that help achieve
trustworthiness.
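To make the adversarial-perturbation risk mentioned in this abstract concrete,
here is a minimal numpy sketch of a single-edge structure perturbation. The toy
graph, the one-layer mean-neighbor aggregation, and the choice of flipped edge
are all illustrative assumptions, not the attack of any specific cited work.

    import numpy as np

    # Toy 4-node graph as a symmetric adjacency matrix, with one-hot features.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    X = np.eye(4)

    def propagate(A, X):
        # One round of mean-neighbor aggregation, the core of many GNN layers.
        deg = A.sum(axis=1, keepdims=True)
        return (A @ X) / np.clip(deg, 1.0, None)

    clean = propagate(A, X)

    # "Unnoticeable" perturbation: flip a single edge (add edge 0-3).
    A_atk = A.copy()
    A_atk[0, 3] = A_atk[3, 0] = 1.0

    # The representations of nodes 0 and 3 shift even though only one edge
    # was touched; a downstream classifier can flip its prediction on such
    # shifts, which is the failure mode the survey's robustness aspect studies.
    print(np.abs(clean - propagate(A_atk, X)).max())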
Generative Explanations for Graph Neural Network: Methods and Evaluations
Graph Neural Networks (GNNs) achieve state-of-the-art performance on various
graph-related tasks. However, their black-box nature often limits their
interpretability and trustworthiness. Numerous explainability methods have been
proposed to uncover the decision-making logic of GNNs by generating the
underlying explanatory substructures. In this paper, we conduct a comprehensive
review of existing explanation methods for GNNs from the perspective of graph
the existing explanation methods for GNNs from the perspective of graph
generation. Specifically, we propose a unified optimization objective for
generative explanation methods, comprising two sub-objectives: Attribution and
Information constraints. We further demonstrate their specific manifestations
in various generative model architectures and different explanation scenarios.
Through this unified view of the explanation problem, we reveal the shared
characteristics of, and distinctions among, current methods, laying the
foundation for future methodological advances. Empirical results demonstrate
the advantages and limitations of different explainability approaches in terms
of explanation performance, efficiency, and generalizability.
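The abstract does not spell out the objective's exact form. In this literature
(e.g., mutual-information-based explainers such as GNNExplainer), a unified
objective of the shape described, with an Attribution term and an Information
constraint, is commonly written as the sketch below; the notation here is our
assumption, not the paper's:

    \max_{G_s \subseteq G} \;
      \underbrace{I(Y;\, G_s)}_{\text{Attribution}}
      \;-\;
      \lambda \, \underbrace{\mathcal{R}(G_s)}_{\text{Information constraint}}

where G_s is the generated explanatory subgraph, Y is the model's prediction on
the full graph G, I denotes mutual information, \mathcal{R} is an
information/size regularizer (for instance |G_s| or a KL term), and \lambda
trades off attribution faithfulness against explanation compactness.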
- …