3 research outputs found

    Node-feature convolution for graph convolutional networks

    Graph convolutional networks (GCNs) are effective neural network models for graph representation learning. However, the standard GCN suffers from three main limitations: (1) most real-world graphs have no regular connectivity, and node degrees can range from one to hundreds or thousands; (2) neighboring nodes are aggregated with fixed weights; and (3) all features within a node's feature vector are treated as equally important. Several extensions have been proposed, each addressing one of these limitations; this paper tackles all three. Specifically, we propose a new node-feature convolutional (NFC) layer for GCNs. The NFC layer first constructs a feature map from features selected and ordered across a fixed number of neighbors, then performs a convolution over this feature map to learn the node representation. In this way, the usefulness of both individual nodes and individual features within a fixed-size neighborhood can be learned. Experiments on three benchmark datasets show that NFC-GCN consistently outperforms state-of-the-art methods on node classification.
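
    The abstract gives no implementation details, but the pipeline it describes (pick a fixed number of neighbors, order them, stack their features into a map, convolve over the map) can be sketched roughly as below. This is a minimal PyTorch sketch, not the authors' code; the similarity-based neighbor ordering, the name NFCLayer, and all sizes are illustrative assumptions.

        import torch
        import torch.nn as nn

        class NFCLayer(nn.Module):
            """Sketch of a node-feature convolution layer: for each node, stack
            the features of a fixed number k of neighbors into a (k x d) feature
            map and run a convolution over it. Each filter spans the whole map,
            so it learns per-neighbor and per-feature weights jointly."""

            def __init__(self, in_dim, out_dim, k=8):
                super().__init__()
                self.k = k
                # One filter per output channel, covering all k neighbor rows.
                self.conv = nn.Conv2d(1, out_dim, kernel_size=(k, in_dim))

            def forward(self, x, adj):
                # x: (n, d) node features; adj: (n, n) dense 0/1 adjacency.
                # Score neighbors by feature similarity and mask non-neighbors.
                # (Assumption: the paper's selection/ordering rule may differ,
                # and every node is assumed to have at least k neighbors.)
                sim = x @ x.t()
                sim = sim.masked_fill(adj == 0, float('-inf'))
                idx = sim.topk(self.k, dim=1).indices      # (n, k) neighbor ids
                fmap = x[idx].unsqueeze(1)                 # (n, 1, k, d) map
                out = self.conv(fmap).squeeze(-1).squeeze(-1)  # (n, out_dim)
                return torch.relu(out)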

    LatticeNN -- Deep Learning and Formal Concept Analysis


    Masked Graph Convolutional Network for Small Sample Classification of Hyperspectral Images

    Deep learning methods have achieved great success in hyperspectral image classification, but the scarcity of labeled training samples still restricts their development and application. To address the small-sample problem in hyperspectral image classification, this paper proposes a novel classification method based on rotation-invariant uniform local binary pattern (RULBP) features and a graph-based masked autoencoder. First, RULBP features are extracted from the hyperspectral image, and the k-nearest-neighbor method is used to construct a graph. Self-supervised learning is then performed on this graph so that the model learns to extract features better suited to small-sample classification. Since the self-supervised training follows the masked-autoencoder approach, only unlabeled samples are needed to complete it. After pretraining, a small number of labeled samples are used to fine-tune the graph convolutional network, which then classifies all nodes in the graph. Extensive experiments on three commonly used hyperspectral image datasets show that the proposed method achieves higher classification accuracy with fewer labeled samples.
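
    As a rough illustration of the described pipeline (a kNN graph built over precomputed RULBP features, then masked-autoencoder pretraining with a GCN-style encoder), here is a minimal PyTorch sketch. It is not the paper's implementation; the mean-aggregation layers, the 50% mask ratio, and the names knn_graph and MaskedGraphAE are assumptions.

        import torch
        import torch.nn as nn

        def knn_graph(feats, k=10):
            # Dense symmetric kNN adjacency from pairwise Euclidean distances.
            dist = torch.cdist(feats, feats)
            idx = dist.topk(k + 1, largest=False).indices[:, 1:]  # drop self
            adj = torch.zeros(feats.size(0), feats.size(0))
            adj.scatter_(1, idx, 1.0)
            return ((adj + adj.t()) > 0).float()

        class MaskedGraphAE(nn.Module):
            """Masked autoencoder on a graph: hide a fraction of node features,
            encode/decode with simple mean-aggregation (GCN-style) layers, and
            reconstruct the hidden features from unmasked neighbors."""

            def __init__(self, dim, hidden):
                super().__init__()
                self.enc = nn.Linear(dim, hidden)
                self.dec = nn.Linear(hidden, dim)
                self.mask_token = nn.Parameter(torch.zeros(dim))

            def forward(self, x, adj, mask_ratio=0.5):
                # x: (n, d) RULBP features (assumed precomputed); adj: (n, n).
                mask = torch.rand(x.size(0), device=x.device) < mask_ratio
                x_in = x.clone()
                x_in[mask] = self.mask_token              # hide masked nodes
                deg = adj.sum(1, keepdim=True).clamp(min=1)
                h = torch.relu((adj @ self.enc(x_in)) / deg)
                x_hat = self.dec((adj @ h) / deg)
                # Reconstruction loss on masked nodes only; needs no labels.
                return ((x_hat[mask] - x[mask]) ** 2).mean()

    After pretraining with this objective on unlabeled nodes, the encoder weights would be reused to initialize a GCN classifier that is fine-tuned on the few labeled samples.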