
    Are Defenses for Graph Neural Networks Robust?

    A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks (GNNs). Yet the standard methodology has a serious flaw: virtually all of the defenses are evaluated against non-adaptive attacks, leading to overly optimistic robustness estimates. We perform a thorough robustness analysis of 7 of the most popular defenses spanning the entire spectrum of strategies, i.e., those aimed at improving the graph, the architecture, or the training. The results are sobering: most defenses show no or only marginal improvement over an undefended baseline. We advocate using custom adaptive attacks as the gold standard and outline the lessons we learned from successfully designing such attacks. Moreover, our diverse collection of perturbed graphs forms a (black-box) unit test offering a first glance at a model's robustness.
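    The abstract does not spell out a concrete attack recipe, but the core idea of an adaptive attack is that the defended model itself sits inside the attack loop, so perturbations are chosen using gradients taken through the defense rather than transferred from an undefended surrogate. The sketch below is a minimal, hypothetical PyTorch illustration of a greedy gradient-based edge-flip attack; the interface `model(adj, features)` returning log-probabilities, the dense adjacency matrix, and the one-flip-per-step loop are all assumptions, not the paper's actual attacks.

```python
import torch
import torch.nn.functional as F

def adaptive_edge_flip_attack(model, adj, features, labels, budget):
    """Greedy gradient-based structure attack run *through* the defended
    model, so the defense is part of the computation being attacked.
    Illustrative sketch only; assumes a dense, symmetric adjacency matrix."""
    adj = adj.clone().float()
    n = adj.size(0)
    for _ in range(budget):
        adj.requires_grad_(True)
        loss = F.nll_loss(model(adj, features), labels)
        grad = torch.autograd.grad(loss, adj)[0]
        # A 0 -> 1 flip helps when the gradient is positive, a 1 -> 0 flip
        # when it is negative; score both directions symmetrically.
        # (A real attack would also mask the diagonal and previous flips.)
        score = grad * (1 - 2 * adj)
        i, j = divmod(score.detach().argmax().item(), n)
        adj = adj.detach()
        adj[i, j] = adj[j, i] = 1 - adj[i, j]
    return adj
```

    Defenses whose preprocessing is not differentiable require custom relaxations of this loop, which is one reason the authors argue that adaptive attacks should be designed per defense rather than reused off the shelf.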

    Learning Effective Embeddings for Dynamic Graphs and Quantifying Graph Embedding Interpretability

    Graph representation learning has been a very active research area in recent years. Its goal is to generate representation vectors that accurately capture the structure and features of large graphs. This is especially important because the quality of these vectors affects their performance in downstream tasks such as node classification and link prediction. Many techniques have been proposed for generating effective graph representation vectors, applicable to both static and dynamic graphs. A static graph is a single fixed graph, while a dynamic graph evolves over time, with nodes and edges added to or removed from it. We survey graph embedding methods for both static and dynamic graphs. The majority of existing graph embedding methods are developed for static graphs; since most real-world graphs are dynamic, developing novel graph embedding methods suitable for evolving graphs is therefore essential.

    This dissertation proposes three dynamic graph embedding models. Previous dynamic methods mainly took adjacency matrices of graphs as input, which is not memory efficient and may not capture the neighbourhood structure of graphs effectively. We therefore developed Dynnode2vec, which builds on random walks and the static model Node2vec: it generates node embeddings for each snapshot by initializing the current model with the previous embedding vectors and training it on a set of random walks obtained for the nodes in that snapshot. Our second model, LSTM-Node2vec, is also based on random walks; it leverages an LSTM to capture long-range dependencies between nodes, in combination with Node2vec, to generate node embeddings. Finally, inspired by the importance of substructures in graphs, our third model, TGR-Clique, generates node embeddings by considering the effects of a node's neighbours within the maximal cliques containing that node. Experiments on real-world datasets demonstrate the effectiveness of our proposed methods in comparison to state-of-the-art models. In addition, motivated by the lack of proper measures for quantifying and comparing the interpretability of graph embeddings, we propose two interpretability measures for graph embeddings based on the centrality properties of graphs.
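    As a rough illustration of the warm-start idea described for Dynnode2vec (not the dissertation's code), the sketch below trains a skip-gram model on random walks from the first snapshot and then continues training the same model on walks from each later snapshot, so the embeddings of persistent nodes evolve rather than being relearned from scratch. It assumes networkx-style snapshot graphs and gensim's Word2Vec, uses plain uniform random walks instead of the biased Node2vec walks, and all hyperparameters are placeholders.

```python
import random
from gensim.models import Word2Vec

def uniform_random_walks(graph, num_walks=10, walk_length=20):
    """Uniform random walks (real Node2vec walks are biased by p and q)."""
    walks = []
    for _ in range(num_walks):
        for node in graph.nodes():
            walk = [node]
            while len(walk) < walk_length:
                neighbours = list(graph.neighbors(walk[-1]))
                if not neighbours:
                    break
                walk.append(random.choice(neighbours))
            walks.append([str(v) for v in walk])
    return walks

def snapshot_embeddings(snapshots, dim=128):
    """Warm-start training across snapshots: the model for snapshot t is
    initialised with the vectors learned up to snapshot t-1."""
    model, embeddings = None, []
    for graph in snapshots:
        walks = uniform_random_walks(graph)
        if model is None:
            model = Word2Vec(walks, vector_size=dim, window=5,
                             min_count=0, sg=1, workers=4)
        else:
            model.build_vocab(walks, update=True)        # register new nodes
            model.train(walks, total_examples=len(walks),
                        epochs=model.epochs)             # continue from old vectors
        embeddings.append({v: model.wv[v].copy()
                           for v in model.wv.index_to_key})
    return embeddings
```

    The main design choice sketched here is the warm start itself: reusing the previous vectors keeps consecutive snapshots' embeddings in roughly the same space and avoids retraining each snapshot from scratch.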