
    Creation and Growth of Components in a Random Hypergraph Process

    Denote by an $\ell$-component a connected $b$-uniform hypergraph with $k$ edges and $k(b-1) - \ell$ vertices. We prove that the expected number of creations of $\ell$-components during a random hypergraph process tends to 1 as $\ell$ and $b$ tend to $\infty$ with the total number of vertices $n$ such that $\ell = o(\sqrt[3]{n/b})$. Under the same conditions, we also show that the expected number of vertices that ever belong to an $\ell$-component is approximately $12^{1/3} (b-1)^{1/3} \ell^{1/3} n^{2/3}$. As an immediate consequence, it follows that with high probability the largest $\ell$-component during the process is of size $O((b-1)^{1/3} \ell^{1/3} n^{2/3})$. Our results give insight into the size of giant components inside the phase transition of random hypergraphs.
    Comment: Extended abstract (R\'{e}sum\'{e} \'{e}tendu)
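    The definition above makes the excess $\ell$ easy to track experimentally: for each component one only needs its edge count $k$ and vertex count $v$, since $\ell = k(b-1) - v$. The following is a minimal simulation sketch of such a process using union-find bookkeeping; it is an illustration written for this summary, not code from the paper, and it samples hyperedges with replacement for simplicity.

```python
import random

def hypergraph_process(n, b, steps, seed=0):
    """Add uniformly random b-element hyperedges one at a time and,
    after each step, report (k, v, ell) for the component just touched,
    where ell = k*(b-1) - v is its excess (a hypertree has ell = -1)."""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n   # vertices per component root
    edges = [0] * n  # hyperedges per component root

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    history = []
    for _ in range(steps):
        roots = {find(v) for v in rng.sample(range(n), b)}
        it = iter(roots)
        r = next(it)
        for s in it:  # merge every component met by the new edge
            if size[s] > size[r]:
                r, s = s, r
            parent[s] = r
            size[r] += size[s]
            edges[r] += edges[s]
        edges[r] += 1
        history.append((edges[r], size[r], edges[r] * (b - 1) - size[r]))
    return history

print(hypergraph_process(n=1000, b=3, steps=600)[-3:])
```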

    Growing Graphs with Hyperedge Replacement Graph Grammars

    Discovering the underlying structures present in large real world graphs is a fundamental scientific problem. In this paper we show that a graph's clique tree can be used to extract a hyperedge replacement grammar. If we store an ordering from the extraction process, the extracted graph grammar is guaranteed to generate an isomorphic copy of the original graph. Alternatively, a stochastic application of the graph grammar rules can be used to quickly create random graphs. In experiments on large real world networks, we show that random graphs generated from extracted graph grammars exhibit a wide range of properties that are very similar to those of the original graphs. In addition to graph properties like degree or eigenvector centrality, what a graph "looks like" ultimately depends on small details in local graph substructures that are difficult to define at a global level. We show that our generative graph model is able to preserve these local substructures when generating new graphs and performs well on new and difficult tests of model robustness.
    Comment: 18 pages, 19 figures, accepted to CIKM 2016 in Indianapolis, IN
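    The extraction pipeline starts from a tree decomposition of the graph (its clique tree). As a rough illustration of that first step, the sketch below uses NetworkX's approximate tree-decomposition routine as a stand-in for the clique tree; the grammar-rule extraction itself, which would read a replacement rule off each bag and its overlap with its parent, is not shown, and the choice of input graph is arbitrary.

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

# Any graph works here; the karate club graph is just a small example.
G = nx.karate_club_graph()

# Each node of `decomposition` is a frozenset of vertices (a "bag").
# In an HRG extraction, a rule's right-hand side would be built from a
# bag, with its intersection with the parent bag as external nodes.
width, decomposition = treewidth_min_degree(G)
print("approximate treewidth:", width)
for bag in list(decomposition.nodes())[:5]:
    print(sorted(bag))
```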

    A note on Pr\"ufer-like coding and counting forests of uniform hypertrees

    This note presents encoding and decoding algorithms for a forest of (labelled) rooted uniform hypertrees and hypercycles in linear time, using as few as $n - 2$ integers in the range $[1, n]$. It is a simple extension of the classical Pr\"{u}fer code for (labelled) rooted trees to an encoding for forests of (labelled) rooted uniform hypertrees and hypercycles, which makes it possible to count them according to their number of vertices, hyperedges and hypertrees. In passing, we also recover Cayley's formula for the number of (labelled) rooted trees, as well as its generalisation to the number of hypercycles found by Selivanov in the early 70's.
    Comment: Version 2; 8th International Conference on Computer Science and Information Technologies (CSIT 2011), Erevan, Armenia (2011)
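    For reference, the classical Pr\"{u}fer code that this scheme extends maps a labelled tree on $n$ vertices to a sequence of $n - 2$ labels and back. The sketch below implements that base case only (vertices numbered from 0, quadratic-time for brevity rather than the linear time achieved in the note); the hypertree and hypercycle extension is not shown.

```python
def prufer_encode(tree_adj):
    """Prüfer code of a labelled tree given as {vertex: set(neighbours)}
    on vertices 0..n-1: repeatedly strip the smallest leaf and record
    its neighbour, n-2 times."""
    adj = {v: set(nb) for v, nb in tree_adj.items()}
    code = []
    for _ in range(len(tree_adj) - 2):
        leaf = min(v for v, nb in adj.items() if len(nb) == 1)
        (neighbour,) = adj[leaf]
        code.append(neighbour)
        adj[neighbour].discard(leaf)
        del adj[leaf]
    return code

def prufer_decode(code):
    """Rebuild the unique labelled tree on n = len(code) + 2 vertices
    from its Prüfer sequence, returning an edge list."""
    n = len(code) + 2
    degree = [1] * n
    for v in code:
        degree[v] += 1
    edges = []
    for v in code:
        leaf = min(u for u in range(n) if degree[u] == 1)
        edges.append((leaf, v))
        degree[leaf] -= 1
        degree[v] -= 1
    u, w = (v for v in range(n) if degree[v] == 1)
    edges.append((u, w))
    return edges

# The path 0-1-2-3 encodes to [1, 2] and decodes back to its edges.
print(prufer_encode({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))  # [1, 2]
print(prufer_decode([1, 2]))  # [(0, 1), (1, 2), (2, 3)]
```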

    Mixing times for random k-cycles and coalescence-fragmentation chains

    Let $\mathcal{S}_n$ be the permutation group on $n$ elements, and consider a random walk on $\mathcal{S}_n$ whose step distribution is uniform on $k$-cycles. We prove a well-known conjecture that the mixing time of this process is $(1/k)\, n \log n$, with a threshold of width linear in $n$. Our proofs are elementary and purely probabilistic, and do not appeal to the representation theory of $\mathcal{S}_n$.
    Comment: Published at http://dx.doi.org/10.1214/10-AOP634 in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org)
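    The walk itself is simple to simulate: at each step, sample a uniformly random $k$-cycle and compose it with the current permutation. The sketch below does exactly that and, as a crude stationarity diagnostic, counts fixed points after roughly twice the proven mixing time $(1/k)\, n \log n$ (a uniform permutation has 1 fixed point in expectation). This is an illustration written for this summary, not part of the paper's proof.

```python
import math
import random

def random_k_cycle(n, k, rng):
    """Sample a uniformly random k-cycle on {0, ..., n-1}, returned as
    a dict mapping each point on the cycle to its image."""
    support = rng.sample(range(n), k)
    return {support[i]: support[(i + 1) % k] for i in range(k)}

def fixed_points_after(n, k, steps, seed=0):
    """Compose `steps` i.i.d. uniform k-cycles starting from the
    identity and count fixed points of the resulting permutation."""
    rng = random.Random(seed)
    perm = list(range(n))
    for _ in range(steps):
        c = random_k_cycle(n, k, rng)
        perm = [c.get(p, p) for p in perm]
    return sum(1 for i, p in enumerate(perm) if i == p)

n, k = 200, 3
t_mix = (1 / k) * n * math.log(n)  # the proven mixing time
print(fixed_points_after(n, k, steps=int(2 * t_mix)))
```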

    Towards hypergraph cognitive networks as feature-rich models of knowledge

    Semantic networks provide a useful tool to understand how related concepts are retrieved from memory. However, most current network approaches use pairwise links to represent memory recall patterns. Pairwise connections neglect higher-order associations, i.e. relationships between more than two concepts at a time. These higher-order interactions might covary with (and thus contain information about) how similar concepts are along psycholinguistic dimensions like arousal, valence, familiarity, gender and others. We overcome these limits by introducing feature-rich cognitive hypergraphs as quantitative models of human memory where: (i) concepts recalled together can all engage in hyperlinks involving more than two concepts at once (cognitive hypergraph aspect), and (ii) each concept is endowed with a vector of psycholinguistic features (feature-rich aspect). We build hypergraphs from word association data and use evaluation methods from machine learning to predict concept concreteness. Since concepts with similar concreteness tend to cluster together in human memory, we expect to be able to leverage this structure. Using word association data from the Small World of Words dataset, we compared a pairwise network and a hypergraph with $N = 3586$ concepts/nodes. Interpretable artificial intelligence models trained on (1) psycholinguistic features only, (2) pairwise-based feature aggregations, and (3) hypergraph-based aggregations show significant differences between pairwise and hypergraph links. Specifically, our results show that higher-order and feature-rich hypergraph models contain richer information than pairwise networks, leading to improved prediction of word concreteness. The relation with previous studies about conceptual clustering and compartmentalisation in associative knowledge and human memory is discussed.
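    A concrete way to read "hypergraph-based feature aggregation" is: for each concept, average the psycholinguistic feature vectors of every concept it shares a hyperedge with, and feed that vector (alongside the raw features) to an interpretable classifier. The toy sketch below illustrates one such aggregation; the concepts, hyperedges and feature values are invented for illustration, are not from the Small World of Words data, and the paper's actual aggregation may differ.

```python
import numpy as np

# Hypothetical toy data (NOT from the Small World of Words dataset).
features = {  # e.g. (arousal, valence, familiarity) per concept
    "dog": [0.6, 0.8, 0.9], "leash": [0.3, 0.5, 0.7],
    "walk": [0.4, 0.7, 0.9], "idea": [0.5, 0.6, 0.8],
    "freedom": [0.7, 0.9, 0.6],
}
hyperedges = [{"dog", "leash", "walk"}, {"idea", "freedom"},
              {"walk", "freedom"}]

def hypergraph_aggregate(concept):
    """Mean feature vector over all concepts sharing at least one
    hyperedge with `concept` -- one simple higher-order aggregation."""
    neighbours = set()
    for e in hyperedges:
        if concept in e:
            neighbours |= e - {concept}
    vecs = [features[c] for c in neighbours]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

for c in features:
    print(c, hypergraph_aggregate(c))
```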