
    Moment-Based Spectral Analysis of Random Graphs with Given Expected Degrees

    In this paper, we analyze the limiting spectral distribution of the adjacency matrix of a random graph ensemble, proposed by Chung and Lu, in which a given expected degree sequence $\overline{w}_n^{T} = (w^{(n)}_1, \ldots, w^{(n)}_n)$ is prescribed on the ensemble. Let $\mathbf{a}_{i,j} = 1$ if there is an edge between the nodes $\{i,j\}$ and zero otherwise, and consider the normalized random adjacency matrix of the graph ensemble $\mathbf{A}_n = [\mathbf{a}_{i,j}/\sqrt{n}]_{i,j=1}^{n}$. The empirical spectral distribution of $\mathbf{A}_n$, denoted by $\mathbf{F}_n(\cdot)$, is the empirical measure putting a mass $1/n$ at each of the $n$ real eigenvalues of the symmetric matrix $\mathbf{A}_n$. Under some technical conditions on the expected degree sequence, we show that, with probability one, $\mathbf{F}_n(\cdot)$ converges weakly to a deterministic distribution $F(\cdot)$. Furthermore, we fully characterize this distribution by providing explicit expressions for the moments of $F(\cdot)$. We apply our results to well-known degree distributions, such as power-law and exponential. The asymptotic expressions of the spectral moments in each case provide significant insights into the bulk behavior of the eigenvalue spectrum.
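    A minimal numerical sketch of the setup described above (illustrative only, not the paper's code): sample a Chung-Lu graph with a given expected degree sequence, form $\mathbf{A}_n = [\mathbf{a}_{i,j}/\sqrt{n}]$, and estimate the spectral moments $\frac{1}{n}\sum_i \lambda_i^k$ of its empirical spectral distribution. The power-law-style weights and all parameter values below are assumptions chosen for illustration.

```python
import numpy as np

def chung_lu_adjacency(w, rng):
    """Symmetric 0/1 adjacency with P(a_ij = 1) = min(w_i * w_j / sum(w), 1)."""
    n = len(w)
    p = np.minimum(np.outer(w, w) / w.sum(), 1.0)
    upper = np.triu(rng.random((n, n)) < p, k=1)  # independent edges above the diagonal
    return (upper | upper.T).astype(float)        # symmetrize; diagonal stays zero

rng = np.random.default_rng(0)
n = 2000
w = n ** 0.5 * np.arange(1, n + 1) ** -0.5        # illustrative power-law expected degrees
A = chung_lu_adjacency(w, rng) / np.sqrt(n)       # normalized adjacency matrix A_n

eigs = np.linalg.eigvalsh(A)                      # the n real eigenvalues of A_n
moments = [(eigs ** k).mean() for k in range(1, 5)]  # empirical spectral moments of F_n
print(moments)
```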

    Context Vectors are Reflections of Word Vectors in Half the Dimensions

    https://arxiv.org/pdf/1902.09859.pdf
    This paper takes a step towards a theoretical analysis of the relationship between word embeddings and context embeddings in models such as word2vec. We start from basic probabilistic assumptions on the nature of word vectors, context vectors, and text generation. These assumptions are well supported either empirically or theoretically by the existing literature. Next, we show that under these assumptions the widely-used word-word PMI matrix is approximately a random symmetric Gaussian ensemble. This, in turn, implies that context vectors are reflections of word vectors in approximately half the dimensions. As a direct application of our result, we suggest a theoretically grounded way of tying weights in the SGNS model.
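    As a concrete reading of the weight-tying suggestion, here is a hypothetical sketch (the variable names, dimensions, and the choice of which half of the coordinates to flip are assumptions for illustration, not the authors' code): context vectors are obtained from word vectors by a fixed reflection that negates half of the embedding dimensions, and SGNS scores a (word, context) pair with the usual inner product.

```python
import numpy as np

d, vocab = 100, 50_000
rng = np.random.default_rng(0)
W = rng.normal(size=(vocab, d)) / np.sqrt(d)  # word vectors, one row per vocabulary item

signs = np.ones(d)
signs[: d // 2] = -1.0   # fixed reflection flipping half the dimensions
C = W * signs            # tied context vectors: c_w = R v_w

# The SGNS logit for a (word, context) pair is the inner product <v_w, c_c>
score = float(W[123] @ C[456])
print(score)
```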

    Thermodynamics of network model fitting with spectral entropies

    An information-theoretic approach inspired by quantum statistical mechanics was recently proposed as a means to optimize network models and to assess their likelihood against synthetic and real-world networks. Importantly, this method does not rely on specific topological features or network descriptors, but leverages entropy-based measures of network distance. Entertaining the analogy with thermodynamics, we provide a physical interpretation of the model hyperparameters and propose analytical procedures for their estimation. These results enable the practical application of this novel and powerful framework to network model inference. We demonstrate the method on synthetic networks endowed with a modular structure, and on real-world brain connectivity networks.
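    For orientation, a minimal sketch of the spectral-entropy quantities this abstract builds on (standard definitions from the spectral-entropy literature, not the paper's code; the ring-graph example and the $\beta$ values are illustrative assumptions): the network's density matrix is $\rho = e^{-\beta L}/\mathrm{Tr}\,e^{-\beta L}$ for the graph Laplacian $L$, and its von Neumann entropy is $S(\rho) = -\mathrm{Tr}(\rho \log \rho)$, with $\beta$ acting as the inverse temperature, the hyperparameter given a physical interpretation in the paper.

```python
import numpy as np

def spectral_entropy(A, beta):
    """Von Neumann entropy of rho = exp(-beta * L) / Tr exp(-beta * L)."""
    L = np.diag(A.sum(axis=1)) - A       # combinatorial graph Laplacian
    mu = np.linalg.eigvalsh(L)           # Laplacian spectrum
    p = np.exp(-beta * (mu - mu.min()))  # shifted for numerical stability
    p /= p.sum()                         # eigenvalues of the density matrix rho
    p = p[p > 0]                         # avoid 0 * log(0)
    return float(-np.sum(p * np.log(p)))

# Illustrative example: an 8-node ring graph at low and high inverse temperature
A = np.roll(np.eye(8), 1, axis=1) + np.roll(np.eye(8), -1, axis=1)
print(spectral_entropy(A, beta=0.1), spectral_entropy(A, beta=10.0))
```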
