717 research outputs found
Creation and Growth of Components in a Random Hypergraph Process
Denote by an ℓ-component a connected b-uniform hypergraph with k edges and
k(b-1) - ℓ vertices. We prove that the expected number of
creations of ℓ-components during a random hypergraph process tends to 1 as
ℓ and b tend to infinity with the total number of vertices n such that
ℓ = o((n/b)^{1/3}). Under the same conditions, we also show that
the expected number of vertices that ever belong to an ℓ-component is
approximately 12^{1/3} (b-1)^{1/3} ℓ^{1/3} n^{2/3}. As an immediate
consequence, it follows that with high probability the largest ℓ-component
during the process is of size O((b-1)^{1/3} ℓ^{1/3} n^{2/3}). Our results
give insight about the size of giant components inside the phase transition of
random hypergraphs.
Comment: extended abstract (résumé étendu)
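The component-growth process described above can be explored numerically. The sketch below (function name and parameter values are illustrative, not from the paper) adds random b-element hyperedges one at a time to n isolated vertices and tracks the largest connected component with a union-find structure:

```python
import random

def simulate_hypergraph_process(n, b, steps, seed=0):
    """Add random b-uniform hyperedges one at a time to n isolated
    vertices, tracking component sizes with union-find (path halving)."""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    largest = 1
    for _ in range(steps):
        edge = rng.sample(range(n), b)       # a random b-element hyperedge
        roots = {find(v) for v in edge}
        it = iter(roots)
        r = next(it)
        for s in it:                         # merge every component touched
            if size[r] < size[s]:
                r, s = s, r
            parent[s] = r
            size[r] += size[s]
        largest = max(largest, size[find(edge[0])])
    return largest

# Largest component seen after a sub-critical number of edges (illustrative)
print(simulate_hypergraph_process(n=10000, b=3, steps=1600))
```

Sweeping `steps` past roughly n/(b(b-1)) edges lets one watch the emergence of a giant component near the phase transition the abstract refers to.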
Growing Graphs with Hyperedge Replacement Graph Grammars
Discovering the underlying structures present in large real world graphs is a
fundamental scientific problem. In this paper we show that a graph's clique
tree can be used to extract a hyperedge replacement grammar. If we store an
ordering from the extraction process, the extracted graph grammar is guaranteed
to generate an isomorphic copy of the original graph. Alternatively, stochastic
application of the graph grammar rules can be used to quickly create random
graphs. In experiments on large real world networks, we show that random
graphs, generated from extracted graph grammars, exhibit a wide range of
properties that are very similar to the original graphs. In addition to graph
properties like degree or eigenvector centrality, what a graph "looks like"
ultimately depends on small details in local graph substructures that are
difficult to define at a global level. We show that our generative graph model
is able to preserve these local substructures when generating new graphs and
performs well on new and difficult tests of model robustness.
Comment: 18 pages, 19 figures, accepted to CIKM 2016 in Indianapolis, IN
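The stochastic application of grammar rules mentioned above can be sketched with a toy generator. This is not the paper's clique-tree extraction; it only illustrates (with hypothetical rules and names) how repeatedly rewriting nonterminal nodes grows a random graph:

```python
import random

# Toy hyperedge-replacement-style rule set (illustrative, not extracted
# from any real graph): each rule rewrites a nonterminal node into 1-3
# new child nodes; with probability P_GROW a child is itself a
# nonterminal awaiting further expansion.
P_GROW = 0.4

def generate(max_nodes=30, seed=1):
    rng = random.Random(seed)
    edges = []
    nonterminals = [0]           # node ids still awaiting expansion
    next_id = 1
    while nonterminals and next_id < max_nodes:
        host = nonterminals.pop(rng.randrange(len(nonterminals)))
        for _ in range(rng.randint(1, 3)):   # apply one rule at `host`
            edges.append((host, next_id))
            if rng.random() < P_GROW:
                nonterminals.append(next_id)
            next_id += 1
    return edges

g = generate()
print(len(g), "edges")
```

A grammar extracted from a real graph's clique tree would replace the fixed rule set with productions learned from the data, and deterministic replay of the extraction ordering would reproduce the original graph exactly.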
A note on Prüfer-like coding and counting forests of uniform hypertrees
This note presents linear-time encoding and decoding algorithms for a forest of
(labelled) rooted uniform hypertrees and hypercycles, using a small number of
integers drawn from a fixed range. It is a simple extension of
the classical Prüfer code for (labelled) rooted trees to an encoding for
forests of (labelled) rooted uniform hypertrees and hypercycles, which makes it
possible to count them according to their number of vertices, hyperedges and
hypertrees. In passing, we also find Cayley's formula for the number of
(labelled) rooted trees as well as its generalisation to the number of
hypercycles found by Selivanov in the early 70's.
Comment: Version 2; 8th International Conference on Computer Science and
Information Technologies (CSIT 2011), Erevan, Armenia (2011)
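The classical Prüfer code that the note extends maps a labelled tree on vertices 1..n to a sequence of n-2 integers in [1, n], and since every such sequence decodes to a tree, Cayley's formula n^(n-2) follows immediately. A self-contained implementation of the classical (tree, not hypertree) code:

```python
from collections import defaultdict

def prufer_encode(tree_edges, n):
    """Encode a labelled tree on vertices 1..n as n-2 integers in [1, n]:
    repeatedly delete the smallest leaf and record its neighbour."""
    adj = defaultdict(set)
    for u, v in tree_edges:
        adj[u].add(v)
        adj[v].add(u)
    code = []
    for _ in range(n - 2):
        leaf = min(v for v in range(1, n + 1) if len(adj[v]) == 1)
        nb = next(iter(adj[leaf]))
        code.append(nb)
        adj[nb].remove(leaf)
        adj.pop(leaf)
    return code

def prufer_decode(code):
    """Inverse map: rebuild the unique tree from its Prüfer sequence."""
    n = len(code) + 2
    degree = [1] * (n + 1)           # degree[v] = 1 + occurrences in code
    for x in code:
        degree[x] += 1
    edges = []
    for x in code:
        leaf = min(v for v in range(1, n + 1) if degree[v] == 1)
        edges.append((leaf, x))
        degree[leaf] -= 1
        degree[x] -= 1
    u, v = [v for v in range(1, n + 1) if degree[v] == 1]
    edges.append((u, v))
    return edges

edges = [(1, 2), (2, 3), (2, 4), (4, 5)]
code = prufer_encode(edges, 5)
print(code)                                        # → [2, 2, 4]
print(sorted(sorted(e) for e in prufer_decode(code)))
```

The note's contribution is the analogous bijection for forests of rooted uniform hypertrees and hypercycles, which yields the corresponding counting formulas.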
Mixing times for random k-cycles and coalescence-fragmentation chains
Let S_n be the permutation group on n elements, and consider a
random walk on S_n whose step distribution is uniform on
k-cycles. We prove a well-known conjecture that the mixing time of this
process is (1/k) n log n, with threshold of width linear in n. Our proofs
are elementary and purely probabilistic, and do not appeal to the
representation theory of S_n.
Comment: Published at http://dx.doi.org/10.1214/10-AOP634 in the Annals of
Probability (http://www.imstat.org/aop/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
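The random walk studied above is easy to simulate. The sketch below (function names are illustrative) draws uniformly random k-cycles and composes them; a k-cycle is sampled by choosing an ordered k-subset of the ground set, which is uniform over k-cycles because each cycle corresponds to exactly k rotations of its support:

```python
import math
import random

def random_k_cycle(n, k, rng):
    """Return a uniformly random k-cycle on {0,...,n-1} as a permutation list."""
    support = rng.sample(range(n), k)       # ordered k-subset, uniform
    perm = list(range(n))
    for i in range(k):
        perm[support[i]] = support[(i + 1) % k]
    return perm

def walk(n, k, steps, seed=0):
    """Run the k-cycle random walk on S_n for `steps` steps,
    starting from the identity permutation."""
    state = list(range(n))
    rng = random.Random(seed)
    for _ in range(steps):
        step = random_k_cycle(n, k, rng)
        state = [step[state[i]] for i in range(n)]   # compose step o state
    return state

n, k = 20, 3
t = round(n * math.log(n) / k)    # the (1/k) n log n mixing-time scale
final = walk(n, k, t)
print(sum(1 for i, x in enumerate(final) if x == i), "fixed points")
```

Near the mixing time the walk should look close to a uniform random permutation (for k = 3 it mixes within the alternating group, since every 3-cycle is even).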
Towards hypergraph cognitive networks as feature-rich models of knowledge
Semantic networks provide a useful tool to understand how related concepts
are retrieved from memory. However, most current network approaches use
pairwise links to represent memory recall patterns. Pairwise connections
neglect higher-order associations, i.e. relationships between more than two
concepts at a time. These higher-order interactions might covary with (and
thus contain information about) how similar concepts are along psycholinguistic
dimensions like arousal, valence, familiarity, gender and others. We overcome
these limits by introducing feature-rich cognitive hypergraphs as quantitative
models of human memory where: (i) concepts recalled together can all engage in
hyperlinks involving more than two concepts at once (cognitive hypergraph
aspect), and (ii) each concept is endowed with a vector of psycholinguistic
features (feature-rich aspect). We build hypergraphs from word association data
and use machine learning methods on these features to predict concept
concreteness. Since concepts with similar concreteness tend to cluster together
in human memory, we expect to be able to leverage this structure. Using word
association data from the Small World of Words dataset, we compared a pairwise
network and a hypergraph with N=3586 concepts/nodes. Interpretable artificial
intelligence models trained on (1) psycholinguistic features only, (2)
pairwise-based feature aggregations, and on (3) hypergraph-based aggregations
show significant differences between pairwise and hypergraph links.
Specifically, our results show that higher-order and feature-rich hypergraph
models contain richer information than pairwise networks leading to improved
prediction of word concreteness. The relation with previous studies about
conceptual clustering and compartmentalisation in associative knowledge and
human memory is discussed.
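The hypergraph-based feature aggregation the abstract compares against pairwise aggregation can be illustrated with a toy example. All words, feature values, and hyperedges below are made up for illustration, not taken from the Small World of Words dataset; a node's aggregated feature is the mean feature value of all concepts it shares a hyperedge with:

```python
# Hypothetical psycholinguistic feature (e.g. concreteness ratings)
concreteness = {"dog": 4.9, "run": 3.5, "idea": 1.6, "cat": 4.8, "think": 2.0}

# Hypothetical hyperedges: sets of concepts recalled together
hyperedges = [{"dog", "cat", "run"}, {"idea", "think"}, {"dog", "idea", "think"}]

def aggregate(node, feature, hyperedges):
    """Mean feature value over all hyperedge neighbours of `node`."""
    neighbours = set()
    for e in hyperedges:
        if node in e:
            neighbours |= e - {node}
    vals = [feature[v] for v in neighbours]
    return sum(vals) / len(vals) if vals else None

for w in concreteness:
    print(w, round(aggregate(w, concreteness, hyperedges), 2))
```

In the paper's setup, aggregations like this one (computed over hyperedges rather than pairwise links) are fed to interpretable models as predictors of a held-out feature such as concreteness.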