
    Learning Graphons via Structured Gromov-Wasserstein Barycenters

    We propose a novel and principled method to learn a nonparametric graph model called a graphon, which is defined in an infinite-dimensional space and represents arbitrary-size graphs. Based on the weak regularity lemma from the theory of graphons, we leverage a step function to approximate a graphon. We show that the cut distance of graphons can be relaxed to the Gromov-Wasserstein distance of their step functions. Accordingly, given a set of graphs generated by an underlying graphon, we learn the corresponding step function as the Gromov-Wasserstein barycenter of the given graphs. Furthermore, we develop several enhancements and extensions of the basic algorithm, e.g., the smoothed Gromov-Wasserstein barycenter for guaranteeing the continuity of the learned graphons and the mixed Gromov-Wasserstein barycenters for learning multiple structured graphons. The proposed approach overcomes drawbacks of prior state-of-the-art methods and outperforms them on both synthetic and real-world data. The code is available at https://github.com/HongtengXu/SGWB-Graphon.
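
    The repository linked above contains the authors' implementation; as a rough, hedged illustration of the core idea only (a step-function graphon estimated as a Gromov-Wasserstein barycenter of observed adjacency matrices), the sketch below relies on the POT (Python Optimal Transport) library. The resolution K, the uniform node and graph weights, and the clipping step are illustrative assumptions, not the paper's SGWB algorithm.

    # Minimal sketch: estimate a K x K step-function graphon as the
    # Gromov-Wasserstein barycenter of observed adjacency matrices.
    # Requires the POT library (pip install pot); NOT the authors' SGWB code.
    import numpy as np
    import ot

    def step_function_graphon(adj_list, K=10):
        """Return a K x K symmetric step function approximating the graphon."""
        Cs = [A.astype(float) for A in adj_list]          # graphs as structure matrices
        ps = [ot.unif(A.shape[0]) for A in adj_list]      # uniform node weights (assumption)
        p = ot.unif(K)                                    # uniform barycenter weights
        lambdas = [1.0 / len(adj_list)] * len(adj_list)   # equal weight per graph
        W = ot.gromov.gromov_barycenters(K, Cs, ps=ps, p=p,
                                         lambdas=lambdas, loss_fun='square_loss')
        W = 0.5 * (W + W.T)                               # enforce symmetry
        return np.clip(W, 0.0, 1.0)                       # graphon values live in [0, 1]

    # Usage, assuming `graphs` is a list of adjacency matrices sampled from one graphon:
    # W_hat = step_function_graphon(graphs, K=10)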

    Optimal change point detection and localization in sparse dynamic networks

    We study the problem of change point localization in dynamic network models. We assume that we observe a sequence of independent adjacency matrices of the same size, each corresponding to a realization of an unknown inhomogeneous Bernoulli model. The underlying distribution of the adjacency matrices is piecewise constant and may change over a subset of the time points, called change points. We are concerned with recovering the unknown number and positions of the change points. In our model setting, we allow all the model parameters to change with the total number of time points, including the network size, the minimal spacing between consecutive change points, the magnitude of the smallest change and the degree of sparsity of the networks. We first identify a region of impossibility in the space of the model parameters such that no change point estimator is provably consistent if the data are generated according to parameters falling in that region. We propose a computationally simple algorithm for network change point localization, called network binary segmentation, that relies on weighted averages of the adjacency matrices. We show that network binary segmentation is consistent over a range of the model parameters that nearly covers the complement of the impossibility region, thus demonstrating the existence of a phase transition for the problem at hand. Next, we devise a more sophisticated algorithm based on singular value thresholding, called local refinement, that delivers more accurate estimates of the change point locations. Under appropriate conditions, local refinement guarantees a minimax optimal rate for network change point localization while remaining computationally feasible.
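
    To make the binary segmentation idea concrete, here is a minimal NumPy sketch under simplifying assumptions: for each candidate split it forms plain (unweighted) averages of the adjacency matrices on either side, computes a CUSUM-type statistic, and recurses when the maximum exceeds a user-supplied threshold. The threshold tau and the minimum segment length are hypothetical tuning choices; the paper's weighted averaging and theoretical calibration are not reproduced.

    # Minimal sketch of binary segmentation for network change point detection.
    # `tau` and `min_len` are illustrative tuning parameters, not the paper's
    # calibrated quantities.
    import numpy as np

    def cusum_stat(A_seq, s, t, e):
        """CUSUM-type statistic at split t on the interval (s, e]."""
        left = A_seq[s:t].mean(axis=0)
        right = A_seq[t:e].mean(axis=0)
        scale = np.sqrt((t - s) * (e - t) / (e - s))
        return scale * np.linalg.norm(left - right)       # Frobenius norm of the difference

    def binary_segmentation(A_seq, s, e, tau, min_len=2, found=None):
        """Recursively estimate change points in the slice A_seq[s:e]."""
        if found is None:
            found = []
        if e - s < 2 * min_len:
            return sorted(found)
        candidates = list(range(s + min_len, e - min_len + 1))
        stats = [cusum_stat(A_seq, s, t, e) for t in candidates]
        t_star = candidates[int(np.argmax(stats))]
        if max(stats) > tau:
            found.append(t_star)
            binary_segmentation(A_seq, s, t_star, tau, min_len, found)
            binary_segmentation(A_seq, t_star, e, tau, min_len, found)
        return sorted(found)

    # Usage: A_seq is a (T, n, n) array of adjacency matrices.
    # change_points = binary_segmentation(A_seq, 0, len(A_seq), tau=5.0)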

    Matchability of heterogeneous networks pairs

    We consider the problem of graph matchability in non-identically distributed networks. In a general class of edge-independent networks, we demonstrate that graph matchability is almost surely lost when matching the networks directly, and is almost perfectly recovered when first centering the networks using Universal Singular Value Thresholding before matching. These theoretical results are then demonstrated in both real and synthetic settings. We also recover analogous core-matchability results in a very general core-junk network model, wherein some vertices do not correspond between the graph pair.
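
    As a rough illustration of the centering-then-matching pipeline described above, the sketch below denoises each adjacency matrix with a USVT-style low-rank estimate, subtracts it, and matches the centered matrices with SciPy's quadratic_assignment (FAQ) solver. The threshold constant and the choice of solver are assumptions for illustration, not necessarily the authors' exact procedure.

    # Minimal sketch: center two graphs with a USVT-style estimate of their
    # expected adjacency matrices, then match with SciPy's FAQ solver.
    # The threshold (2 + eta) * sqrt(n) and the solver choice are illustrative.
    import numpy as np
    from scipy.optimize import quadratic_assignment

    def usvt(A, eta=0.01):
        """Universal-singular-value-thresholding estimate of E[A]."""
        n = A.shape[0]
        U, s, Vt = np.linalg.svd(A)
        keep = s >= (2.0 + eta) * np.sqrt(n)              # keep only large singular values
        P_hat = (U[:, keep] * s[keep]) @ Vt[keep, :]
        return np.clip(P_hat, 0.0, 1.0)

    def center_and_match(A, B):
        """Estimate the vertex correspondence between centered versions of A and B."""
        A_c, B_c = A - usvt(A), B - usvt(B)
        res = quadratic_assignment(A_c, B_c, method="faq",
                                   options={"maximize": True})
        return res.col_ind                                # permutation aligning A to B

    # Usage: perm = center_and_match(A, B) for two n x n adjacency matrices.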

    Consistency of Generalized Random Dot Product Graph with Covariates

    In this work we present the Generalized Random Dot Product graph with Covariates model for network data with observed binary covariates. We introduce a spectral estimator for the parameters of the model, provided that the edges of the graph are independently Bernoulli distributed and the latent positions of the vertices are independent variables with some distribution F. Theoretically, we prove that the estimates are asymptotically equal to the true parameters up to some orthogonal transformation. Empirically, we utilize Procrustes procedures to find the exact orthogonal transformation. We investigate the algorithm to recover parameters for multiple binary covariates. We outline necessary related work and potential future work.
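
    To illustrate the spectral estimation and Procrustes alignment steps mentioned in this abstract, here is a minimal sketch that computes a standard adjacency spectral embedding and aligns the estimated latent positions to a reference with scipy.linalg.orthogonal_procrustes. The embedding dimension d is an illustrative assumption, and the sketch ignores both the indefinite (p, q) geometry of the generalized RDPG and the covariate term.

    # Minimal sketch: adjacency spectral embedding plus Procrustes alignment.
    # `d` is an assumed embedding dimension; the GRDPG signature and the binary
    # covariate effect from the paper are not modeled here.
    import numpy as np
    from scipy.linalg import orthogonal_procrustes

    def adjacency_spectral_embedding(A, d=2):
        """Embed vertices with the d leading eigenpairs of A (by magnitude)."""
        evals, evecs = np.linalg.eigh(A)
        idx = np.argsort(np.abs(evals))[::-1][:d]
        return evecs[:, idx] * np.sqrt(np.abs(evals[idx]))

    def procrustes_align(X_hat, X_ref):
        """Rotate the estimated positions onto the reference positions."""
        R, _ = orthogonal_procrustes(X_hat, X_ref)
        return X_hat @ R

    # Usage:
    # X_hat = adjacency_spectral_embedding(A, d=2)
    # X_aligned = procrustes_align(X_hat, X_true)   # X_true: reference latent positions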