
    Discrete embedding for latent networks

    Discrete network embedding has recently emerged as a new direction in network representation learning. Compared with traditional network embedding models, it aims to compress model size and accelerate inference by learning a set of short binary codes for network vertices. However, existing discrete network embedding methods usually assume that the network structure (e.g., edge weights) is readily available. In real-world scenarios such as social networks, explicit structure information is sometimes impossible to collect and must instead be inferred from implicit data such as information cascades. To address this issue, we present an end-to-end discrete network embedding model for latent networks (DELN) that learns binary representations directly from underlying information cascades. The essential idea is to infer a latent Weisfeiler-Lehman proximity matrix that captures node dependence from the cascades, and then to factorize this matrix under a binary node-representation constraint. Since the resulting learning problem is a mixed-integer optimization problem, an efficient maximum likelihood estimation based cyclic coordinate descent (MLE-CCD) algorithm is used to solve it. Experiments on real-world datasets show that the proposed model outperforms state-of-the-art network embedding methods.
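    The core optimization above, factorizing a proximity matrix under a binary constraint via coordinate descent over individual bits, can be illustrated with a toy sketch. The function name `binary_embed`, the squared-error objective, and the flip-if-it-helps acceptance rule are illustrative simplifications; DELN's actual MLE-CCD algorithm optimizes a likelihood-based objective and is more involved.

    ```python
    import numpy as np

    def binary_embed(S, d=3, iters=10, seed=0):
        """Toy cyclic coordinate descent for S ~ (1/d) * B @ B.T,
        with binary codes B in {-1, +1}^(n x d)."""
        rng = np.random.default_rng(seed)
        n = S.shape[0]
        B = rng.choice([-1.0, 1.0], size=(n, d))

        def loss(M):
            return float(np.linalg.norm(S - (M @ M.T) / d) ** 2)

        for _ in range(iters):
            for i in range(n):          # cycle over every bit of every code
                for k in range(d):
                    before = loss(B)
                    B[i, k] *= -1.0     # tentatively flip one bit
                    if loss(B) >= before:
                        B[i, k] *= -1.0  # revert unless the flip lowers the loss
        return B

    # usage: recover the proximity structure of planted binary codes
    rng = np.random.default_rng(1)
    B_true = rng.choice([-1.0, 1.0], size=(6, 3))
    S = (B_true @ B_true.T) / 3
    B = binary_embed(S, d=3, iters=10, seed=0)
    ```

    Because a flip is kept only when it strictly lowers the loss, the reconstruction error is non-increasing across sweeps, which is the essential property of cyclic coordinate descent in the discrete setting.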

    Node-feature convolution for graph convolutional networks

    The graph convolutional network (GCN) is an effective neural network model for graph representation learning. However, the standard GCN suffers from three main limitations: (1) most real-world graphs have no regular connectivity, and node degrees can range from one to hundreds or thousands; (2) neighboring nodes are aggregated with fixed weights; and (3) all features within a node's feature vector are treated as equally important. Several extensions have been proposed that each tackle one of these limitations; this paper addresses all three. Specifically, we propose a new node-feature convolutional (NFC) layer for GCN. The NFC layer first constructs a feature map from features selected and ordered from a fixed number of neighbors, then performs a convolution operation on this feature map to learn the node representation. In this way, the model learns the usefulness of both individual nodes and individual features within a fixed-size neighborhood. Experiments on three benchmark datasets show that NFC-GCN consistently outperforms state-of-the-art methods on node classification.
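    A minimal sketch of the fixed-size feature-map idea follows. The degree-based neighbor ranking, self-padding for low-degree nodes, and a single shared filter with mean pooling are all assumptions standing in for the paper's full convolutional architecture; the function name `nfc_layer` and its parameters are illustrative.

    ```python
    import numpy as np

    def nfc_layer(X, adj, k=2, out_dim=4, seed=0):
        """Toy node-feature convolution layer.

        For each node: pick k neighbors (ranked by degree, padded with the
        node itself), stack their feature vectors into a fixed-size k x f
        feature map, apply one shared linear filter per row (a 1 x f
        convolution), ReLU, then mean-pool over the k rows."""
        rng = np.random.default_rng(seed)
        n, f = X.shape
        W = rng.standard_normal((f, out_dim)) * 0.1   # shared filter weights
        H = np.zeros((n, out_dim))
        for v in range(n):
            nbrs = list(np.flatnonzero(adj[v]))
            nbrs.sort(key=lambda u: -adj[u].sum())    # highest-degree first
            nbrs = nbrs[:k]
            while len(nbrs) < k:                      # pad low-degree nodes
                nbrs.append(v)
            fmap = X[nbrs]                            # fixed-size k x f map
            conv = np.maximum(fmap @ W, 0.0)          # shared filter + ReLU
            H[v] = conv.mean(axis=0)                  # pool over neighbors
        return H

    # usage: a 4-node ring with one-hot node features
    adj = np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]])
    H = nfc_layer(np.eye(4), adj, k=2, out_dim=4)
    ```

    The point of the fixed-size map is that every node, regardless of its degree, yields the same k x f input, so one set of filter weights can be shared across the whole graph.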

    Diffusion network embedding

    © 2018 Elsevier Ltd. In network embedding, random walks play a fundamental role in preserving network structure. However, random-walk methods have two limitations. First, they are unstable when either the sampling frequency or the number of node sequences changes. Second, in highly biased networks, random walks tend to be biased toward high-degree nodes and neglect global structure information. To address these limitations, we present a diffusion-based network embedding method. For the first limitation, our method uses a diffusion-driven process to capture both depth and breadth information in networks; temporal information is also incorporated into the node sequences to strengthen information preservation. For the second limitation, our method uses network inference based on information diffusion cascades to capture global network information. Experiments show that the proposed method is more robust on highly unbalanced networks and performs well when samples per node are scarce.
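    A diffusion-driven sampler can be sketched as an independent-cascade spread whose activation order mixes breadth (siblings activate in the same round) and depth (the cascade propagates outward), in contrast to the depth-only path of a plain random walk. The cascade model, the activation probability `p`, and the function name are assumptions for illustration, not the paper's exact procedure.

    ```python
    import numpy as np
    from collections import deque

    def diffusion_sequence(adj, seed_node, p=0.5, rng=None):
        """Toy diffusion-driven sampling (independent-cascade style).

        Starting from seed_node, each newly activated node tries once to
        activate each neighbor with probability p; the activation order
        is returned as the node sequence."""
        if rng is None:
            rng = np.random.default_rng(0)
        active = {seed_node}
        frontier = deque([seed_node])
        seq = [seed_node]
        while frontier:
            u = frontier.popleft()
            for v in np.flatnonzero(adj[u]):
                if v not in active and rng.random() < p:  # edge "fires"
                    active.add(v)
                    frontier.append(v)
                    seq.append(int(v))
        return seq

    # usage: with p = 1.0 on a path graph the cascade reaches every node
    path = np.array([[0, 1, 0, 0],
                     [1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [0, 0, 1, 0]])
    seq = diffusion_sequence(path, 0, p=1.0)  # [0, 1, 2, 3]
    ```

    Such activation-order sequences could then be fed to a skip-gram style embedder in place of random-walk sequences; with p below 1, low-degree nodes still appear in many cascades, which is why diffusion sampling is less skewed toward hubs.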