
    A Survey on Surrogate-assisted Efficient Neural Architecture Search

    Neural architecture search (NAS) has become increasingly popular in the deep learning community, mainly because it allows users without rich expertise to benefit from the success of deep neural networks (DNNs). However, NAS is still laborious and time-consuming: a large number of performance estimations are required during the search, and training DNNs is computationally intensive. Reducing this cost is therefore essential to the design of efficient NAS. This paper begins with a brief introduction to the general framework of NAS. Then, methods for evaluating network candidates under proxy metrics are systematically discussed. This is followed by a description of surrogate-assisted NAS, which is divided into three categories: Bayesian optimization for NAS, surrogate-assisted evolutionary algorithms for NAS, and multi-objective problems (MOPs) for NAS. Finally, remaining challenges and open research questions are discussed, and promising research topics are suggested in this emerging field.
    Comment: 18 pages, 7 figures
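    The surrogate-assisted evolutionary loop that the survey categorizes can be illustrated with a minimal sketch: fit a cheap regressor on architectures that have already been trained, use it to pre-screen offspring, and spend real training budget only on the most promising few. Everything here (the encoding, mutate, train_and_evaluate) is a hypothetical placeholder, not the survey's notation.

```python
import random
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical NAS primitives -- placeholders, not from the survey.
def random_architecture():          # sample an encoding, e.g. a list of op ids
    return [random.randint(0, 4) for _ in range(10)]

def mutate(arch):                   # flip one operation choice
    child = arch.copy()
    child[random.randrange(len(child))] = random.randint(0, 4)
    return child

def train_and_evaluate(arch):       # stand-in for expensive DNN training
    return -np.var(arch) + random.gauss(0, 0.1)

# Archive of (architecture, true fitness) pairs from real evaluations.
archive = [(a, train_and_evaluate(a)) for a in
           (random_architecture() for _ in range(20))]

surrogate = RandomForestRegressor(n_estimators=50)

for generation in range(10):
    # Refit the surrogate on everything evaluated so far.
    X = np.array([a for a, _ in archive])
    y = np.array([f for _, f in archive])
    surrogate.fit(X, y)

    # Generate many offspring cheaply and rank them by predicted fitness...
    offspring = [mutate(random.choice(archive)[0]) for _ in range(100)]
    preds = surrogate.predict(np.array(offspring))
    top = [offspring[i] for i in np.argsort(preds)[-5:]]

    # ...then spend real training budget only on the top few.
    archive.extend((a, train_and_evaluate(a)) for a in top)

best = max(archive, key=lambda pair: pair[1])
print("best architecture:", best[0], "fitness:", round(best[1], 3))
```

    The design choice is the usual one for surrogate-assisted search: surrogate predictions are essentially free, so the ratio of screened offspring to true evaluations (here 100:5) controls how much training time is saved.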

    Augment with Care: Enhancing Graph Contrastive Learning with Selective Spectrum Perturbation

    In recent years, Graph Contrastive Learning (GCL) has shown remarkable effectiveness in learning representations on graphs. Good augmentation views, a core component of GCL, are supposed to keep the important information invariant while discarding the unimportant part. Existing augmentation views that perturb the graph structure are usually based on random topology corruption in the spatial domain; from the perspective of the spectral domain, however, this approach may be ineffective, as it fails to pose tailored impacts on information of different frequencies and thus weakens the agreement between the augmentation views. Through a preliminary experiment, we show that the impact of spatial random perturbation is distributed approximately evenly across frequency bands, which may harm the invariance of augmentations required by contrastive learning frameworks. To address this issue, we argue that the perturbation should be selectively posed on information of specific frequencies. In this paper, we propose GASSER, which poses tailored perturbation on specific frequencies of graph structures in the spectral domain, with edge perturbation selectively guided by spectral hints. As shown by extensive experiments and theoretical analysis, the augmentation views are adaptive and controllable, and heuristically fit the homophily ratios and spectrum of graph structures.
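    The abstract does not give GASSER's perturbation rule, but the frequency-selective idea can be sketched under stated assumptions: eigendecompose the normalized graph Laplacian, inject noise only into a chosen eigenvalue band, and reassemble a perturbed (soft) adjacency view. GASSER itself perturbs discrete edges guided by spectral hints; this dense-matrix version is only illustrative.

```python
import numpy as np

def band_selective_perturbation(adj, band=(0.0, 0.3), noise_scale=0.1, seed=0):
    """Perturb only a chosen band of graph frequencies.

    A minimal sketch of frequency-selective augmentation -- not GASSER's
    actual algorithm, which perturbs discrete edges guided by spectral
    hints rather than operating on a dense matrix.
    """
    rng = np.random.default_rng(seed)
    deg = adj.sum(axis=1)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}.
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt

    # Eigenvalues of the normalized Laplacian lie in [0, 2];
    # small eigenvalues correspond to low frequencies (smooth structure).
    eigvals, eigvecs = np.linalg.eigh(lap)

    # Add noise only to eigenvalues inside the target band.
    lo, hi = band
    mask = (eigvals >= lo) & (eigvals <= hi)
    perturbed = eigvals + mask * rng.normal(0.0, noise_scale, size=eigvals.shape)

    # Reassemble and convert back to a dense, soft adjacency view
    # (still on the normalized scale).
    lap_new = eigvecs @ np.diag(perturbed) @ eigvecs.T
    return np.eye(len(adj)) - lap_new

# Tiny 4-node example: a path graph 0-1-2-3, perturbing low frequencies only.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
view = band_selective_perturbation(A, band=(0.0, 0.5))
print(np.round(view, 2))
```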

    Label-free Node Classification on Graphs with Large Language Models (LLMs)

    In recent years, there have been remarkable advancements in node classification achieved by Graph Neural Networks (GNNs). However, they require abundant high-quality labels to ensure promising performance. In contrast, Large Language Models (LLMs) exhibit impressive zero-shot proficiency on text-attributed graphs, yet they face challenges in efficiently processing structural data and suffer from high inference costs. In light of these observations, this work introduces LLM-GNN, a pipeline for label-free node classification on graphs with LLMs. It amalgamates the strengths of both GNNs and LLMs while mitigating their limitations: LLMs annotate a small portion of nodes, and GNNs are then trained on the LLMs' annotations to make predictions for the remaining large portion of nodes. The implementation of LLM-GNN faces unique challenges: how can we actively select nodes for LLMs to annotate so as to enhance GNN training? And how can we leverage LLMs to obtain annotations of high quality, representativeness, and diversity, thereby improving GNN performance at lower cost? To tackle these challenges, we develop an annotation quality heuristic and leverage confidence scores derived from LLMs to advance node selection. Comprehensive experimental results validate the effectiveness of LLM-GNN. In particular, LLM-GNN achieves an accuracy of 74.9% on the vast-scale \products dataset at a cost of less than 1 dollar.
    Comment: The code will be available soon via https://github.com/CurryTang/LLMGN
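    A minimal end-to-end sketch of the annotate-then-train pipeline described above, with hypothetical stand-ins throughout: llm_annotate fakes an LLM annotator with noisy ground-truth labels so the sketch runs offline, node selection is random rather than the paper's quality-and-diversity heuristic, and the GNN is a hand-rolled two-layer GCN.

```python
import numpy as np
import torch
import torch.nn.functional as F

# --- Hypothetical pieces (not from the paper) -------------------------------
def llm_annotate(node_ids):
    """Stand-in for querying an LLM on each node's text attributes.

    Returns (pseudo_label, confidence) per node; faked here with noisy
    copies of hidden ground-truth labels so the sketch runs offline.
    """
    rng = np.random.default_rng(0)
    labels = TRUE_LABELS[node_ids].copy()
    flip = rng.random(len(node_ids)) < 0.15          # 15% annotation noise
    labels[flip] = rng.integers(0, NUM_CLASSES, flip.sum())
    conf = np.where(flip, 0.4, 0.9)                  # pretend confidence scores
    return labels, conf

# --- Toy graph: 100 nodes in 2 homophilous blocks ----------------------------
N, NUM_CLASSES, BUDGET = 100, 2, 10
rng = np.random.default_rng(1)
TRUE_LABELS = (np.arange(N) >= N // 2).astype(int)
feats = rng.normal(TRUE_LABELS[:, None] * 2.0, 1.0, size=(N, 8))
same = TRUE_LABELS[:, None] == TRUE_LABELS[None, :]
adj = rng.random((N, N)) < np.where(same, 0.1, 0.01)
adj = np.triu(adj, 1)
adj = adj + adj.T + np.eye(N)                        # symmetric + self-loops

# Normalized adjacency for GCN propagation: D^{-1/2} (A + I) D^{-1/2}.
d = adj.sum(1)
A_hat = torch.tensor(adj / np.sqrt(np.outer(d, d)), dtype=torch.float32)
X = torch.tensor(feats, dtype=torch.float32)

# Step 1: pick a small budget of nodes and annotate them with the "LLM"
# (random selection here; the paper uses a quality/diversity heuristic).
picked = rng.choice(N, BUDGET, replace=False)
pseudo, conf = llm_annotate(picked)
keep = conf >= 0.5                                   # drop low-confidence labels
train_ids = torch.tensor(picked[keep])
train_y = torch.tensor(pseudo[keep])

# Step 2: train a two-layer GCN on the LLM annotations only.
W1 = torch.nn.Linear(8, 16)
W2 = torch.nn.Linear(16, NUM_CLASSES)
opt = torch.optim.Adam([*W1.parameters(), *W2.parameters()], lr=0.01)
for _ in range(200):
    H = A_hat @ torch.relu(W1(A_hat @ X))
    logits = W2(H)
    loss = F.cross_entropy(logits[train_ids], train_y)
    opt.zero_grad(); loss.backward(); opt.step()

# Step 3: the trained GNN labels the remaining (never-annotated) nodes.
pred = logits.argmax(1).numpy()
unlabeled = ~np.isin(np.arange(N), picked)
print("accuracy on unlabeled nodes:", (pred == TRUE_LABELS)[unlabeled].mean())
```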