87 research outputs found

    Testing goodness-of-fit of random graph models

    Random graphs are matrices with independent 0-1 entries whose probabilities are determined by a small number of parameters. One of the oldest models is the Rasch model, in which the odds are ratios of positive numbers scaling the rows and columns. Later, Persi Diaconis and his coworkers rediscovered the model for symmetric matrices and called it the beta model. Here we give goodness-of-fit tests for the model and extend it to a version of the block model introduced by Holland, Laskey, and Leinhardt.
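
    For orientation, a hedged sketch of the two parametrizations mentioned above (standard formulations; the notation is not taken from the paper): in the Rasch model the odds of a 1 in cell (i, j) are a ratio of a positive row parameter to a positive column parameter, while in the symmetric beta model each vertex carries one parameter that enters logistically.

```latex
% Rasch model (rectangular matrix): row scalings \alpha_i > 0, column scalings \gamma_j > 0.
\[
  \frac{\Pr(A_{ij}=1)}{\Pr(A_{ij}=0)} = \frac{\alpha_i}{\gamma_j}
\]
% Beta model (symmetric matrix): one parameter \beta_i per vertex.
\[
  \Pr(A_{ij}=1) = \frac{e^{\beta_i+\beta_j}}{1+e^{\beta_i+\beta_j}}, \qquad i \ne j
\]
```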

    A modularity based spectral method for simultaneous community and anti-community detection

    In a graph or complex network, communities and anti-communities are node sets whose modularity attains extremely large values, positive and negative, respectively. We consider the simultaneous detection of communities and anti-communities by looking at spectral methods based on various matrix-based definitions of the modularity of a vertex set. Invariant subspaces associated with extreme eigenvalues of these matrices provide indications of the presence of both kinds of modular structure in the network. The localization of the relevant invariant subspaces can be estimated by looking at particular matrix angles based on Frobenius inner products.
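
    A minimal sketch of the underlying spectral idea, using the standard Newman modularity matrix B = A - d d^T / (2m) (the particular matrix definitions and the angle-based localization developed in the paper are not reproduced here):

```python
import numpy as np

def modularity_extremes(A, k=1):
    """Return the k most positive and k most negative eigenpairs of the
    modularity matrix B = A - d d^T / (2m) of an undirected graph.

    Eigenvectors of large positive eigenvalues indicate community-like
    structure; those of large negative eigenvalues indicate anti-communities.
    """
    d = A.sum(axis=1)               # degree vector
    two_m = d.sum()                 # 2 * number of edges
    B = A - np.outer(d, d) / two_m  # Newman modularity matrix
    vals, vecs = np.linalg.eigh(B)  # eigenvalues in ascending order
    pos = (vals[-k:], vecs[:, -k:])  # community directions
    neg = (vals[:k], vecs[:, :k])    # anti-community directions
    return pos, neg

# Toy usage: two triangles joined by a single edge.
A = np.zeros((6, 6))
A[:3, :3] = 1
A[3:, 3:] = 1
np.fill_diagonal(A, 0)
A[2, 3] = A[3, 2] = 1
pos, neg = modularity_extremes(A)
print(pos[0], neg[0])  # largest vs. smallest eigenvalue of B
```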

    A Generic Sample Splitting Approach for Refined Community Recovery in Stochastic Block Models

    We propose and analyze a generic method for community recovery in stochastic block models and degree-corrected block models. This approach can exactly recover the hidden communities with high probability when the expected node degrees are of order log n or higher. Starting from a roughly correct community partition given by some conventional community recovery algorithm, this method refines the partition in a cross-clustering step. Our results simplify and extend some of the previous work on exact community recovery, revealing the key role played by sample splitting. The proposed method is simple and can be implemented with many practical community recovery algorithms.
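
    A minimal sketch of the split-and-refine idea (the edge-splitting scheme, the crude spectral initialization, and the function name are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def split_and_refine(A, k, seed=0):
    """Recover k communities by sample splitting:
    1. randomly assign each observed pair to one of two halves,
    2. compute a rough partition from half 1 (crude spectral step here),
    3. refine every node's label using only its edges in half 2.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    half = np.triu(rng.random((n, n)) < 0.5, 1)  # pairs assigned to half 1
    half = half | half.T
    A1, A2 = A * half, A * (~half)

    # Rough partition from half 1: embed with the k leading eigenvectors
    # and round coordinatewise (a deliberately simple initializer).
    _, vecs = np.linalg.eigh(A1)
    labels = vecs[:, -k:].argmax(axis=1)

    # Refinement from half 2: move each node to the block it connects to
    # most densely, normalizing by block size.
    sizes = np.maximum(np.bincount(labels, minlength=k), 1)
    counts = np.stack([A2[:, labels == b].sum(axis=1) for b in range(k)], axis=1)
    return (counts / sizes).argmax(axis=1)
```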

    Asymptotic normality of maximum likelihood and its variational approximation for stochastic blockmodels

    Variational methods for parameter estimation are an active research area, potentially offering computationally tractable heuristics with theoretical performance bounds. We build on recent work that applies such methods to network data, and establish asymptotic normality rates for parameter estimates of stochastic blockmodel data, by either maximum likelihood or variational estimation. The result also applies to various sub-models of the stochastic blockmodel found in the literature. Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/13-AOS1124.
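
    For intuition, a hedged sketch of the simplest special case (block memberships known, which is not the setting analyzed in the paper): the maximum likelihood estimate of a block connection probability is then a sample proportion, and the classical central limit theorem already gives its normal limit.

```latex
\[
  \hat{P}_{ab} = \frac{O_{ab}}{n_{ab}}, \qquad
  \sqrt{n_{ab}}\,\bigl(\hat{P}_{ab} - P_{ab}\bigr) \;\xrightarrow{d}\;
  \mathcal{N}\bigl(0,\; P_{ab}(1 - P_{ab})\bigr)
\]
% O_{ab}: observed edges between blocks a and b; n_{ab}: number of possible pairs.
```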

    On Positional and Structural Node Features for Graph Neural Networks on Non-attributed Graphs

    Graph neural networks (GNNs) have been widely used in various graph-related problems such as node classification and graph classification, where their superior performance is mainly established when natural node features are available. However, it is not well understood how GNNs work without natural node features, especially regarding the various ways to construct artificial ones. In this paper, we point out the two types of artificial node features, i.e., positional and structural node features, and provide insights on why each of them is more appropriate for certain tasks, i.e., positional node classification, structural node classification, and graph classification. Extensive experimental results on 10 benchmark datasets validate our insights, leading to a practical guideline on the choice between different artificial node features for GNNs on non-attributed graphs. The code is available at https://github.com/zjzijielu/gnn-exp/. Accepted to the Sixth International Workshop on Deep Learning on Graphs (DLG-KDD'21), co-located with KDD'21.
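
    An illustrative sketch of the two kinds of artificial features on a non-attributed graph (Laplacian-eigenvector positional encodings and degree/clustering structural features are common choices assumed here, not necessarily the exact constructions studied in the paper):

```python
import networkx as nx
import numpy as np

def positional_features(G, dim=4):
    """Positional features: coordinates that locate a node within the graph,
    here the first non-trivial eigenvectors of the normalized Laplacian."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]  # skip the trivial constant eigenvector

def structural_features(G):
    """Structural features: quantities invariant under node relabelling,
    here degree and local clustering coefficient."""
    deg = np.array([d for _, d in G.degree()], dtype=float)
    clust_dict = nx.clustering(G)
    clust = np.array([clust_dict[v] for v in G.nodes()])
    return np.stack([deg, clust], axis=1)

G = nx.karate_club_graph()
X_pos = positional_features(G)   # suited to position-dependent tasks
X_str = structural_features(G)   # suited to structure-dependent tasks
print(X_pos.shape, X_str.shape)  # (34, 4) (34, 2)
```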