(Quantum) Space-Time as a Statistical Geometry of Lumps in Random Networks
In the following we undertake to describe how macroscopic space-time (or
rather, a microscopic protoform of it) is supposed to emerge as a
superstructure of a web of lumps in a stochastic discrete network structure. As
in preceding work (mentioned below), our analysis is based on the working
philosophy that both physics and the corresponding mathematics have to be
genuinely discrete on the primordial (Planck scale) level. This strategy is
concretely implemented in the form of cellular networks and random graphs. One
of our main themes is the development of the concept of "physical
(proto)points" or "lumps" as densely entangled subcomplexes of the network and
their respective web, establishing something like "(proto)causality". It may
perhaps be said that certain parts of our programme are realisations of some
early ideas of Menger and more recent ones sketched by Smolin a couple of
years ago. We briefly indicate how this "two-story concept" of quantum
space-time can be used to encode the (at least in our view) existing non-local
aspects of quantum theory without violating macroscopic space-time causality.
Comment: 35 pages, LaTeX, under consideration by CQ
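As a loose illustration of the random-graph setting (not the authors' construction), the sketch below generates an Erdős-Rényi graph and flags vertices whose neighborhoods are densely interconnected, a crude stand-in for the "densely entangled subcomplexes" described above. All names and the density threshold are hypothetical choices for illustration.

```python
# Illustrative sketch only: an Erdos-Renyi random graph G(n, p) and a
# naive "lump" detector based on neighborhood edge density. The 0.5
# threshold and all function names are assumptions, not from the paper.
import random

def random_graph(n, p, seed=0):
    """Return an undirected G(n, p) graph as dict: vertex -> set of neighbors."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def neighborhood_density(adj, v):
    """Fraction of possible edges present among v's neighbors."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return links / (k * (k - 1) / 2)

adj = random_graph(30, 0.3)
dense = [v for v in adj if neighborhood_density(adj, v) > 0.5]
```

Vertices in `dense` sit in locally clique-like regions of the random graph, the kind of densely connected substructure the abstract treats as a protopoint.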
On Approximating the Number of $k$-Cliques in Sublinear Time
We study the problem of approximating the number of $k$-cliques in a graph
when given query access to the graph.
We consider the standard query model for general graphs via (1) degree
queries, (2) neighbor queries and (3) pair queries. Let $n$ denote the number
of vertices in the graph, $m$ the number of edges, and $C_k$ the number of
$k$-cliques. We design an algorithm that outputs a
$(1\pm\varepsilon)$-approximation (with high probability) for $C_k$, whose
expected query complexity and running time are
$O\left(\frac{n}{C_k^{1/k}}+\frac{m^{k/2}}{C_k}\right)\cdot\mathrm{poly}(\log
n,1/\varepsilon,k)$.
Hence, the complexity of the algorithm is sublinear in the size of the graph
for $C_k = \omega(m^{k/2-1})$. Furthermore, we prove a lower bound showing that
the query complexity of our algorithm is essentially optimal (up to the
dependence on $\log n$, $1/\varepsilon$ and $k$).
The previous results in this vein are by Feige (SICOMP 06) and by Goldreich
and Ron (RSA 08) for edge counting ($k=2$) and by Eden et al. (FOCS 2015) for
triangle counting ($k=3$). Our result matches the complexities of these
results.
The previous result by Eden et al. hinges on an amortization technique that
works only for triangle counting and does not generalize to larger cliques. We
obtain a general algorithm that works for any $k$ by designing a procedure
that samples each $k$-clique incident to a given set of vertices with
approximately equal probability. The primary difficulty is in finding cliques
incident purely to high-degree vertices, since random sampling within their
neighborhoods has a low success probability. We overcome this with an
algorithm that samples uniformly random high-degree vertices, combined with a
careful tradeoff between estimating cliques incident purely to high-degree
vertices and those that include a low-degree vertex.
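The query model in the abstract can be made concrete with a toy oracle exposing exactly the three query types, together with a brute-force count of $k$-cliques for reference. This is a didactic sketch, not the sublinear-time algorithm itself; the class and function names are hypothetical.

```python
# Toy illustration of the (degree, neighbor, pair) query model over a
# hidden graph, plus an exact (exponential-time) k-clique counter used
# only as a reference baseline -- not the sublinear algorithm.
from itertools import combinations

class GraphOracle:
    def __init__(self, edges, n):
        self.n = n
        self.adj = {v: [] for v in range(n)}
        self.edge_set = set()
        for u, v in edges:
            self.adj[u].append(v)
            self.adj[v].append(u)
            self.edge_set.add(frozenset((u, v)))

    def degree(self, v):        # (1) degree query
        return len(self.adj[v])

    def neighbor(self, v, i):   # (2) i-th neighbor query
        return self.adj[v][i]

    def pair(self, u, v):       # (3) pair (edge-existence) query
        return frozenset((u, v)) in self.edge_set

def count_k_cliques(oracle, k):
    """Exact C_k by exhaustive search over all k-subsets of vertices."""
    return sum(
        1 for S in combinations(range(oracle.n), k)
        if all(oracle.pair(u, v) for u, v in combinations(S, 2))
    )

# Example: K4 on {0,1,2,3} plus a pendant vertex 4 attached to 3.
g = GraphOracle([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)], n=5)
```

Here $C_2 = 7$ (the edges), $C_3 = 4$ (the triangles of the $K_4$), and $C_4 = 1$; a sublinear algorithm would estimate these using far fewer queries than the exhaustive search above.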
Construction of near-optimal vertex clique covering for real-world networks
We propose a method based on combining a constructive and a bounding heuristic to solve the vertex clique covering problem (CCP), where the aim is to partition the vertices of a graph into the smallest number of classes, each of which induces a clique. Searching for the solution to CCP is highly motivated by analysis of social and other real-world networks, applications in graph mining, as well as by the fact that CCP is one of the classical NP-hard problems. Combining the construction and the bounding heuristic helped us not only to find high-quality clique coverings but also to determine that in the domain of real-world networks, many of the obtained solutions are optimal, while the rest of them are near-optimal. In addition, the method has a polynomial time complexity and shows much promise for practical use. Experimental results are presented for a fairly representative benchmark of real-world data. Our test graphs include extracts of web-based social networks, including some very large ones, several well-known graphs from network science, as well as co-appearance networks of characters in literary works from the DIMACS graph coloring benchmark. We also present results for synthetic pseudorandom graphs structured according to the Erdős-Rényi model and Leighton's model.
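A minimal constructive heuristic for vertex clique cover can be sketched as follows: repeatedly seed a clique at an uncovered vertex and greedily grow it. This is far simpler than the combined constructive-and-bounding method of the abstract and carries no optimality guarantee; it only produces a valid cover.

```python
# Greedy constructive heuristic for vertex clique cover (illustrative
# sketch, not the authors' method): grow a maximal clique from a
# low-degree uncovered seed, remove it, repeat.
def greedy_clique_cover(adj):
    """adj: dict vertex -> set of neighbors. Returns a list of cliques
    (sets) whose union is the vertex set."""
    uncovered = set(adj)
    cover = []
    while uncovered:
        # Seed with an uncovered vertex of minimum remaining degree.
        v = min(uncovered, key=lambda u: len(adj[u] & uncovered))
        clique = {v}
        # Candidates: uncovered vertices adjacent to every clique member.
        cand = adj[v] & uncovered
        while cand:
            u = max(cand, key=lambda w: len(adj[w] & cand))
            clique.add(u)
            cand &= adj[u]   # keep only common neighbors
        cover.append(clique)
        uncovered -= clique
    return cover

# Two disjoint triangles: an optimal cover has exactly 2 classes.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
cover = greedy_clique_cover(adj)
```

On real-world instances such a constructive pass gives an upper bound on the cover size; pairing it with a lower-bounding heuristic (as the abstract proposes) is what certifies optimality or near-optimality.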
Epidemics on random intersection graphs
In this paper we consider a model for the spread of a stochastic SIR
(Susceptible-Infectious-Recovered) epidemic on a network of individuals
described by a random intersection graph. Individuals belong to a random
number of cliques, each of random size, and infection can be transmitted
between two individuals if and only if there is a clique they both belong to.
Both the clique sizes and the number of cliques an individual belongs to
follow mixed Poisson distributions. An infinite-type branching process
approximation (with type given by the length of an individual's infectious
period) for the early stages of an epidemic is developed and made fully
rigorous by proving an associated limit theorem as the population size tends
to infinity. This leads to a threshold parameter $R_*$, so that in a large
population an epidemic with few initial infectives can give rise to a large
outbreak if and only if $R_* > 1$. A functional equation for the survival
probability of the approximating infinite-type branching process is
determined; if $R_* \le 1$, this equation has no nonzero solution, while if
$R_* > 1$, it is shown to have precisely one nonzero solution. A law of large
numbers for the size of such a large outbreak is proved by exploiting a
single-type branching process that approximates the size of the
susceptibility set of a typical individual.
Comment: Published at http://dx.doi.org/10.1214/13-AAP942 in the Annals of
Applied Probability (http://www.imstat.org/aap/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
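The threshold behaviour can be illustrated with a much-simplified single-type analogue of the branching process: for Poisson offspring with mean $R$ (a stand-in for $R_*$; the paper's process is infinite-type), the extinction probability $q$ solves the fixed-point equation $q = e^{R(q-1)}$, and the survival probability is nonzero exactly when $R > 1$.

```python
# Didactic sketch: survival probability of a single-type Galton-Watson
# branching process with Poisson(R) offspring, via fixed-point
# iteration q <- E[q^offspring] = exp(R*(q-1)) starting from q = 0,
# which converges to the smallest root (the extinction probability).
import math

def survival_probability(R, iters=200):
    """1 - extinction probability for Poisson(R) offspring."""
    q = 0.0
    for _ in range(iters):
        q = math.exp(R * (q - 1.0))
    return 1.0 - q
```

For $R \le 1$ the iteration converges to $q = 1$ (certain extinction, no large outbreak), while for $R = 2$ it converges to $q \approx 0.203$, i.e. a survival probability of about $0.797$, mirroring the dichotomy for the functional equation stated in the abstract.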