32,440 research outputs found

    NAR-Former V2: Rethinking Transformer for Universal Neural Network Representation Learning

    As more deep learning models are deployed in real-world applications, there is a growing need to model and learn representations of the neural networks themselves. An efficient representation can be used to predict target attributes of a network without running the actual training and deployment procedures, enabling efficient network deployment and design. Recently, inspired by the success of the Transformer, several Transformer-based representation learning frameworks have been proposed and have achieved promising performance on cell-structured models. However, graph neural network (GNN) based approaches still dominate representation learning for entire networks. In this paper, we revisit the Transformer and compare it with GNNs to analyse their different architectural characteristics. We then propose NAR-Former V2, a modified Transformer-based universal neural network representation learning model that can learn efficient representations from both cell-structured networks and entire networks. Specifically, we first treat the network as a graph and design a straightforward tokenizer to encode it into a sequence. We then incorporate the inductive representation learning capability of GNNs into the Transformer, enabling it to generalize better to unseen architectures. Additionally, we introduce a series of simple yet effective modifications that enhance the Transformer's ability to learn representations from graph structures. Our proposed method surpasses the GNN-based method NNLP by a significant margin in latency estimation on the NNLQP dataset. Furthermore, for accuracy prediction on the NASBench101 and NASBench201 datasets, our method achieves performance highly comparable to other state-of-the-art methods.
    Comment: 9 pages, 2 figures, 6 tables. Code is available at https://github.com/yuny220/NAR-Former-V
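    To make the core idea concrete, here is a minimal PyTorch sketch, not the paper's code, of one way to combine the two ingredients the abstract describes: tokenize a network's operator graph into a sequence, then restrict Transformer self-attention to the graph neighborhood so each token aggregates GNN-style from its neighbors. All names here (OP_VOCAB, GraphMaskedEncoder) are illustrative assumptions.

```python
# Minimal sketch, not NAR-Former V2 itself: a toy operator vocabulary, a
# tokenizer turning a network's operator graph into a sequence, and a
# Transformer encoder whose attention is masked by the graph adjacency so
# each token only attends to its neighbors (a GNN-like inductive bias).
import torch
import torch.nn as nn

OP_VOCAB = {"input": 0, "conv3x3": 1, "conv1x1": 2, "maxpool": 3, "output": 4}

class GraphMaskedEncoder(nn.Module):
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.nhead = nhead
        self.op_embed = nn.Embedding(len(OP_VOCAB), d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)  # predicts e.g. latency or accuracy

    def forward(self, op_ids, adj):
        # op_ids: (B, N) operator tokens; adj: (B, N, N), 1.0 where an edge exists.
        x = self.op_embed(op_ids)
        eye = torch.eye(adj.size(-1), device=adj.device)
        blocked = (adj + eye) == 0                           # True = attention not allowed
        mask = blocked.repeat_interleave(self.nhead, dim=0)  # (B*nhead, N, N)
        h = self.encoder(x, mask=mask)
        return self.head(h.mean(dim=1))                      # pool tokens -> scalar prediction

# Toy usage: a 4-node chain input -> conv3x3 -> maxpool -> output.
ops = torch.tensor([[0, 1, 3, 4]])
adj = torch.zeros(1, 4, 4)
adj[0, 0, 1] = adj[0, 1, 2] = adj[0, 2, 3] = 1.0
print(GraphMaskedEncoder()(ops, adj).shape)  # torch.Size([1, 1])
```

    Graph-masked attention is only one way to inject GNN-style locality, and the paper's actual tokenizer and modifications differ; the sketch simply shows why such a model can accept both single cells and whole networks: anything expressible as an operator graph becomes a token sequence plus an adjacency mask.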

    Learning neural algorithms with graph structures

    Graph structures, such as syntax trees, social networks, and programs, are ubiquitous in many real-world applications, including knowledge graph inference, chemistry, and social network analysis. Over the past several decades, many expert-designed algorithms on graphs have been proposed with nice theoretical properties. However, most of them are not data-driven and will not benefit from the growing scale of available data. Recent advances in deep learning have shown strong empirical performance on images, texts, and signals, typically with little domain knowledge. However, the combinatorial and discrete nature of graph data makes it non-trivial to apply neural networks in this domain. Given the pros and cons of these two paradigms, this thesis discusses several ways to build a tight connection between neural networks and classical graph algorithms. Specifically:
    - Algorithm-inspired deep graph learning: existing algorithms inspire the design of deep architectures, for both discriminative learning and generative modeling of graphs. For discriminative representation learning, we show how graphical model inference algorithms can inspire the design of graph neural networks for chemistry and bioinformatics applications, and how to scale them up with ideas borrowed from steady-state algorithms like PageRank. For generative modeling, we build an HSMM-inspired neural segmental generative model for signal sequences, and for a class of graphs we leverage the idea of attribute grammars for syntax trees to help regulate deep networks.
    - Deep learning enhanced graph algorithms: algorithmic frameworks contain procedures that can be enhanced by learnable deep network components. We demonstrate this by learning the heuristic function in greedy algorithms with reinforcement learning for combinatorial optimization problems over graphs, such as vertex cover and max cut, and for the optimal touring problem in real-world applications like fuzzing (see the sketch after this list).
    - Towards inductive reasoning with graph structures: since algorithmic structure generally provides a good inductive bias for a problem, we take an initial step towards inductive reasoning over such structure, attempting to reason about loop invariants for program verification and reaction templates for retrosynthesis structured prediction.
    Ph.D. thesis
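    As a concrete illustration of the "deep learning enhanced graph algorithms" direction, here is a minimal Python sketch, an illustrative toy rather than the thesis code: a greedy vertex-cover solver whose node-selection heuristic is a pluggable scoring function, exactly the slot an RL-trained model would fill. The names greedy_vertex_cover and degree_score are assumptions.

```python
# Illustrative sketch (not the thesis code): a greedy vertex-cover solver
# parameterized by a scoring function. A hand-designed heuristic is shown;
# a learned heuristic would replace it with a trained model.
import networkx as nx

def greedy_vertex_cover(G, score_fn):
    """Repeatedly add the highest-scoring node until all edges are covered."""
    uncovered = set(G.edges())
    cover = set()
    while uncovered:
        candidates = {u for e in uncovered for u in e}
        best = max(candidates, key=lambda v: score_fn(G, v, uncovered))
        cover.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return cover

# Classic hand-designed heuristic: pick the node covering the most uncovered edges.
def degree_score(G, v, uncovered):
    return sum(1 for e in uncovered if v in e)

G = nx.erdos_renyi_graph(20, 0.2, seed=0)
print(len(greedy_vertex_cover(G, degree_score)), "nodes in cover")
```

    Replacing degree_score with a graph neural network that embeds the residual graph and scores each candidate node, trained with reinforcement learning against solution quality, yields the learned-heuristic setup described above: the greedy skeleton stays, only the decision rule is learned.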

    Adversarial Attack and Defense on Graph Data: A Survey

    Deep neural networks (DNNs) have been widely applied in areas including image classification, text generation, audio recognition, and graph data analysis. However, recent studies have shown that DNNs are vulnerable to adversarial attacks. Although several works study adversarial attack and defense strategies in domains such as images and natural language processing, it is still difficult to transfer that knowledge directly to graph-structured data because of its representation challenges. Given the importance of graph analysis, an increasing number of works have begun to analyze the robustness of machine learning models on graph data. Nevertheless, current studies of adversarial behavior on graph data usually focus on specific types of attacks under particular assumptions. In addition, each work proposes its own mathematical formulation, which makes comparison among different methods difficult. Therefore, in this paper we survey existing adversarial learning strategies on graph data and first provide a unified formulation for adversarial learning on graphs that covers most such studies. Moreover, we compare different attacks and defenses on graph data and discuss their contributions and limitations. We systematically organize the considered works based on the features of each topic. This survey not only serves as a reference for the research community, but also gives researchers outside this domain a clear picture of the field. We also maintain an online resource, updated with relevant papers over the last two years. More details of the comparisons of the various studies in this survey are open-sourced at https://github.com/YingtongDou/graph-adversarial-learning-literature.
    Comment: In submission to a journal. For more open-source and up-to-date information, please check our GitHub repository: https://github.com/YingtongDou/graph-adversarial-learning-literature
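    For readers wanting the shape of such a unified formulation, here is a hedged LaTeX sketch of the bilevel attack objective commonly used in this literature; the notation is an assumption, not necessarily the survey's own. The attacker perturbs the adjacency matrix A and feature matrix X within an edit budget Δ to maximize an attack loss against the victim model f.

```latex
% Common unified formulation of graph adversarial attacks (notation assumed,
% not necessarily the survey's): \hat{A}, \hat{X} are the perturbed adjacency
% and features, \Delta is the perturbation budget.
\begin{aligned}
\max_{\hat{A},\,\hat{X}} \quad & \mathcal{L}_{\mathrm{atk}}\!\left( f_{\theta^{*}}(\hat{A}, \hat{X}) \right) \\
\text{s.t.} \quad & \theta^{*} = \arg\min_{\theta}\; \mathcal{L}_{\mathrm{train}}\!\left( f_{\theta}(A', X') \right), \\
& \|\hat{A} - A\|_{0} + \|\hat{X} - X\|_{0} \le \Delta .
\end{aligned}
```

    Setting (A', X') = (Â, X̂) gives a poisoning attack, where the model is trained on the perturbed graph, while (A', X') = (A, X) gives an evasion attack against a fixed model; most surveyed attacks instantiate this template with different losses and budgets.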