NASGEM: Neural Architecture Search via Graph Embedding Method
Neural Architecture Search (NAS) automates and accelerates the design of neural
networks. Estimator-based NAS has recently been proposed to model the
relationship between architectures and their performance to enable scalable and
flexible search. However, existing estimator-based methods encode the
architecture into a latent space without considering graph similarity. Ignoring
graph similarity in a node-based search space can create a large discrepancy
between the similarity of graphs and their distance in the continuous encoding
space, leading to inaccurate encodings and/or reduced representation capacity
that can yield sub-optimal search results. To preserve graph
correlation information in encoding, we propose NASGEM which stands for Neural
Architecture Search via Graph Embedding Method. NASGEM is driven by a novel
graph embedding method equipped with similarity measures to capture the graph
topology information. By precisely estimating the graph distance and using an
auxiliary Weisfeiler-Lehman kernel to guide the encoding, NASGEM can utilize
additional structural information to get more accurate graph representation to
improve the search efficiency. GEMNet, a set of networks discovered by NASGEM,
consistently outperforms networks crafted by existing search methods in
classification tasks, i.e., with 0.4%-3.6% higher accuracy while having
11%-21% fewer multiply-accumulate operations (MACs). We further transfer GEMNet
to COCO object detection. In both one-stage and two-stage detectors, our GEMNet
surpasses its manually-crafted and automatically-searched counterparts.
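The auxiliary Weisfeiler-Lehman kernel that guides NASGEM's encoding can be illustrated with a minimal, self-contained sketch. This is an assumption-laden simplification (degree-based initial labels, Python's built-in `hash` for relabelling, toy three-node cells), not NASGEM's actual implementation:

```python
from collections import Counter

def wl_labels(adj, labels, iterations=3):
    """Iteratively refine node labels by hashing each node's label
    together with the sorted multiset of its neighbours' labels
    (the Weisfeiler-Lehman relabelling step)."""
    hist = Counter(labels.values())
    for _ in range(iterations):
        labels = {
            v: hash((labels[v], tuple(sorted(labels[u] for u in adj[v]))))
            for v in adj
        }
        hist.update(labels.values())
    return hist

def wl_kernel(adj_a, adj_b, iterations=3):
    """WL subtree kernel: dot product of the label-count histograms
    accumulated over all refinement rounds."""
    init = lambda adj: {v: len(adj[v]) for v in adj}  # degree as initial label
    ha = wl_labels(adj_a, init(adj_a), iterations)
    hb = wl_labels(adj_b, init(adj_b), iterations)
    return sum(ha[k] * hb[k] for k in ha.keys() & hb.keys())

# Toy "cells" as adjacency lists: two identical paths and a triangle.
g1 = {0: [1], 1: [0, 2], 2: [1]}
g2 = {0: [1], 1: [0, 2], 2: [1]}
g3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

# Identical graphs score strictly higher than structurally different ones,
# which is the correlation the encoder is trained to preserve.
assert wl_kernel(g1, g2) > wl_kernel(g1, g3)
```

In NASGEM this kernel value acts as the target graph similarity that distances in the learned embedding space are trained to match.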
Graph Sparsifications using Neural Network Assisted Monte Carlo Tree Search
Graph neural networks have been successful for machine learning, as well as
for combinatorial and graph problems such as the Subgraph Isomorphism Problem
and the Traveling Salesman Problem. We describe an approach for computing graph
sparsifiers by combining a graph neural network and Monte Carlo Tree Search. We
first train a graph neural network that takes as input a partial solution and
proposes a new node to be added as output. This neural network is then used in
a Monte Carlo search to compute a sparsifier. The proposed method consistently
outperforms several standard approximation algorithms on different types of
graphs and often finds the optimal solution.
Comment: arXiv admin note: substantial text overlap with arXiv:2305.0053
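The search loop described above can be sketched in miniature. In this toy version a uniform random rollout policy stands in for the trained GNN, and "edges covered by the chosen nodes" stands in for the sparsification objective; both substitutions are assumptions for illustration, not the paper's formulation:

```python
import random

def covered(edges, chosen):
    """Number of edges with at least one endpoint in the chosen set."""
    return sum(1 for u, v in edges if u in chosen or v in chosen)

def rollout(edges, nodes, chosen, k):
    """Complete a partial solution with a uniform random policy
    (the trained GNN would propose nodes here in the actual method)."""
    chosen = set(chosen)
    remaining = [v for v in nodes if v not in chosen]
    random.shuffle(remaining)
    chosen.update(remaining[: k - len(chosen)])
    return covered(edges, chosen)

def mc_search(edges, nodes, k, rollouts=50, seed=0):
    """Greedy Monte Carlo node selection: at each step, add the candidate
    whose random completions score best on average."""
    random.seed(seed)
    chosen = set()
    while len(chosen) < k:
        best, best_val = None, -1.0
        for cand in nodes:
            if cand in chosen:
                continue
            val = sum(
                rollout(edges, nodes, chosen | {cand}, k)
                for _ in range(rollouts)
            ) / rollouts
            if val > best_val:
                best, best_val = cand, val
        chosen.add(best)
    return chosen

# A star centred at node 0 plus a pendant edge: node 0 covers almost everything.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (4, 5)]
nodes = list(range(6))
sol = mc_search(edges, nodes, k=2)
assert 0 in sol and covered(edges, sol) == 5
```

The full method adds tree statistics (visit counts, exploration bonuses) on top of this rollout loop, and the learned proposal network makes the rollouts far more informative than uniform sampling.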
Adversarially Robust Neural Architecture Search for Graph Neural Networks
Graph Neural Networks (GNNs) obtain tremendous success in modeling relational
data. However, they remain prone to adversarial attacks, which pose a serious
threat to applying GNNs in risk-sensitive domains. Existing defensive methods
neither guarantee performance on new data/tasks or under adversarial attacks nor provide
insights to understand GNN robustness from an architectural perspective. Neural
Architecture Search (NAS) has the potential to solve this problem by automating
GNN architecture designs. Nevertheless, current graph NAS approaches lack
robust design and are vulnerable to adversarial attacks. To tackle these
challenges, we propose a novel Robust Neural Architecture search framework for
GNNs (G-RNA). Specifically, we design a robust search space for the
message-passing mechanism by adding graph structure mask operations into the
search space, which comprises various defensive operation candidates and allows
us to search for defensive GNNs. Furthermore, we define a robustness metric to
guide the search procedure, which helps to filter robust architectures. In this
way, G-RNA helps understand GNN robustness from an architectural perspective
and effectively searches for optimal adversarial robust GNNs. Extensive
experimental results on benchmark datasets show that G-RNA significantly
outperforms manually designed robust GNNs and vanilla graph NAS baselines by
12.1% to 23.4% under adversarial attacks.
Comment: Accepted as a conference paper at CVPR 202
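The idea of a robustness metric filtering a search space of graph-structure mask operations can be sketched as follows. The two mask operations, the random edge-flip "attack", and the overlap-based stability score are all illustrative stand-ins invented for this sketch, not G-RNA's actual operators or metric:

```python
import random

# Candidate graph-structure mask operations (hypothetical simplifications
# of the defensive operation candidates searched over).
def identity_mask(adj):
    """Keep the adjacency matrix unchanged."""
    return adj

def low_degree_mask(adj):
    """Drop edges whose endpoints both have above-median degree, a crude
    stand-in for similarity-based defensive edge pruning."""
    deg = [sum(row) for row in adj]
    thresh = sorted(deg)[len(deg) // 2]
    return [
        [a if deg[i] <= thresh or deg[j] <= thresh else 0
         for j, a in enumerate(row)]
        for i, row in enumerate(adj)
    ]

def perturb(adj, flips, rng):
    """Adversarial proxy: flip a few random off-diagonal entries."""
    adj = [row[:] for row in adj]
    n = len(adj)
    for _ in range(flips):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j:
            adj[i][j] = adj[j][i] = 1 - adj[i][j]
    return adj

def robustness(mask_op, adj, trials=30, flips=2, seed=0):
    """Toy robustness metric: average agreement between the masked clean
    graph and the masked perturbed graph (1.0 = perfectly stable)."""
    rng = random.Random(seed)
    clean = mask_op(adj)
    n = len(adj)
    total = 0.0
    for _ in range(trials):
        noisy = mask_op(perturb(adj, flips, rng))
        agree = sum(
            1 for i in range(n) for j in range(n)
            if clean[i][j] == noisy[i][j]
        )
        total += agree / n ** 2
    return total / trials

adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 1, 0]]
scores = {op.__name__: robustness(op, adj)
          for op in (identity_mask, low_degree_mask)}
best = max(scores, key=scores.get)  # the metric filters candidate operations
```

In the actual framework the candidates are message-passing architectures with mask operations inside them, and the metric is evaluated on model predictions rather than raw adjacency overlap, but the search-guidance role is the same.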
Exploring Robustness of Neural Networks through Graph Measures
Motivated by graph theory, artificial neural networks (ANNs) are
traditionally structured as layers of neurons (nodes), which learn useful
information by the passage of data through interconnections (edges). In the
machine learning realm, graph structures (i.e., neurons and connections) of
ANNs have recently been explored using various graph-theoretic measures linked
to their predictive performance. On the other hand, in network science
(NetSci), certain graph measures including entropy and curvature are known to
provide insight into the robustness and fragility of real-world networks. In
this work, we use these graph measures to explore the robustness of various
ANNs to adversarial attacks. To this end, we (1) explore the design space of
inter-layer and intra-layer connectivity regimes of ANNs in the graph domain
and record their predictive performance after training under different types of
adversarial attacks, (2) use graph representations for both inter-layer and
intra-layer connectivity regimes to calculate various graph-theoretic
measures, including curvature and entropy, and (3) analyze the relationship
between these graph measures and the adversarial performance of ANNs. We show
that curvature and entropy, while operating in the graph domain, can quantify
the robustness of ANNs without having to train these ANNs. Our results suggest
that real-world networks, including brain networks, financial networks, and
social networks may provide important clues to the neural architecture search
for robust ANNs. We propose a search strategy that efficiently finds robust
ANNs amongst a set of well-performing ANNs without having a need to train all
of these ANNs.Comment: 18 pages, 15 figure
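Two of the NetSci measures named above are cheap to compute directly on a graph. A minimal sketch using degree-distribution entropy and the standard unweighted Forman-Ricci edge curvature, F(u, v) = 4 - deg(u) - deg(v) (the toy star and cycle graphs are illustrative, not the paper's ANN graphs):

```python
import math
from collections import Counter

def degree_entropy(adj):
    """Shannon entropy of the degree distribution, one of the NetSci
    measures linked to network robustness."""
    degs = [len(nbrs) for nbrs in adj.values()]
    counts = Counter(degs)
    n = len(degs)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def forman_curvature(adj):
    """Forman-Ricci curvature of each edge of an unweighted graph:
    F(u, v) = 4 - deg(u) - deg(v). Strongly negative edges act as
    fragile bridges between hubs."""
    return {
        (u, v): 4 - len(adj[u]) - len(adj[v])
        for u in adj for v in adj[u] if u < v
    }

# A star graph: one hub, maximally heterogeneous degrees.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
# A 5-cycle: perfectly regular.
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}

assert degree_entropy(cycle) == 0.0            # regular graph: zero entropy
assert degree_entropy(star) > 0.0              # hub-and-spoke: positive entropy
assert all(c == 0 for c in forman_curvature(cycle).values())  # 4 - 2 - 2
```

Because both quantities depend only on the connectivity graph, they can score candidate ANN architectures before any training, which is exactly what makes them attractive as cheap proxies in the proposed search strategy.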