
    Noise Sensitivity of the Minimum Spanning Tree of the Complete Graph

    We study the noise sensitivity of the minimum spanning tree (MST) of the $n$-vertex complete graph when edges are assigned independent random weights. It is known that when the graph distance is rescaled by $n^{1/3}$ and vertices are given a uniform measure, the MST converges in distribution in the Gromov-Hausdorff-Prokhorov (GHP) topology. We prove that if the weight of each edge is resampled independently with probability $\varepsilon \gg n^{-1/3}$, then the pair of rescaled minimum spanning trees, before and after the noise, converges in distribution to a pair of independent random spaces. Conversely, if $\varepsilon \ll n^{-1/3}$, the GHP distance between the rescaled trees goes to $0$ in probability. This implies noise sensitivity and stability for every property of the MST seen in the scaling limit, e.g., whether the diameter exceeds its median. The noise threshold $n^{-1/3}$ coincides with the critical window of the Erdős-Rényi random graphs. In fact, these results follow from an analogous theorem we prove for the minimum spanning forest of critical random graphs.
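
    A minimal simulation sketch of the perturbation model described above, not code from the paper: assign i.i.d. Uniform(0,1) weights to the edges of K_n, compute the MST with Kruskal's algorithm, resample each weight independently with probability eps, recompute the MST, and compare the two trees by the fraction of edges they share. The function names (kruskal_mst, noisy_copy) and the shared-edge statistic are illustrative choices; the paper's actual statement concerns GHP convergence of the rescaled trees, which this toy comparison does not measure.

    import random
    from itertools import combinations

    def kruskal_mst(n, weights):
        """Return the MST edge set of K_n given a dict {(i, j): weight} with i < j."""
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        mst = set()
        for (i, j) in sorted(weights, key=weights.get):  # edges in increasing weight order
            ri, rj = find(i), find(j)
            if ri != rj:                # the edge joins two components: keep it
                parent[ri] = rj
                mst.add((i, j))
                if len(mst) == n - 1:
                    break
        return mst

    def noisy_copy(weights, eps):
        """Resample each edge weight independently with probability eps."""
        return {e: (random.random() if random.random() < eps else w)
                for e, w in weights.items()}

    if __name__ == "__main__":
        n = 200
        eps = n ** (-1 / 3)             # the threshold scale discussed in the abstract
        weights = {e: random.random() for e in combinations(range(n), 2)}
        before = kruskal_mst(n, weights)
        after = kruskal_mst(n, noisy_copy(weights, eps))
        print("fraction of shared MST edges:", len(before & after) / (n - 1))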

    The Tensor Track, III

    We provide an informal up-to-date review of the tensor track approach to quantum gravity. In a long introduction we describe in simple terms the motivations for this approach. Then the many recent advances are summarized, with emphasis on some points (Gromov-Hausdorff limit, Loop vertex expansion, Osterwalder-Schrader positivity...) which, while important for the tensor track program, are not detailed in the usual quantum gravity literature. We list open questions in the conclusion and provide a rather extended bibliography. Comment: 53 pages, 6 figures.

    An extensive English language bibliography on graph theory and its applications

    A bibliography on graph theory and its applications.

    Successive minimum spanning trees

    In a complete graph $K_n$ with edge weights drawn independently from a uniform distribution $U(0,1)$ (or alternatively an exponential distribution $\mathrm{Exp}(1)$), let $T_1$ be the MST (the spanning tree of minimum weight) and let $T_k$ be the MST after deletion of the edges of all previous trees $T_i$, $i<k$. We show that each tree's weight $w(T_k)$ converges in probability to a constant $\gamma_k$ with $2k-\sqrt{2k}<\gamma_k<2k+\sqrt{2k}$, and we conjecture that $\gamma_k=2k-1+o(1)$. The problem is distinct from that of Frieze and Johansson (2018), finding $k$ MSTs of combined minimum weight, and for $k=2$ ours has strictly larger cost. Our results also hold (and mostly are derived) in a multigraph model where edge weights for each vertex pair follow a Poisson process; here we additionally have $\mathbb{E}(w(T_k))\to\gamma_k$. Thinking of an edge of weight $w$ as arriving at time $t=nw$, Kruskal's algorithm defines forests $F_k(t)$, each initially empty and eventually equal to $T_k$, with each arriving edge added to the first $F_k(t)$ where it does not create a cycle. Using tools of inhomogeneous random graphs we obtain structural results, including that $C_1(F_k(t))/n$, the fraction of vertices in the largest component of $F_k(t)$, converges in probability to a function $\rho_k(t)$, uniformly for all $t$, and that a giant component appears in $F_k(t)$ at a time $t=\sigma_k$. We conjecture that the functions $\rho_k$ tend to time translations of a single function, $\rho_k(2k+x)\to\rho_\infty(x)$ as $k\to\infty$, uniformly in $x\in\mathbb{R}$. Simulations and numerical computations give estimated values of $\gamma_k$ for small $k$ and support the conjectures just stated.
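
    A minimal simulation sketch (names and parameters are mine, not the paper's) of the edge-placement process described above: edges of K_n arrive in increasing weight order, and each edge is placed in the first forest F_1, F_2, ... in which it does not close a cycle. Per the abstract, F_k ends up equal to the successive MST T_k, so the accumulated weight of F_k estimates gamma_k for small k.

    import random
    from itertools import combinations

    class UnionFind:
        """One disjoint-set structure per forest, used for cycle detection."""
        def __init__(self, n):
            self.parent = list(range(n))

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def union(self, x, y):
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return False    # the edge would close a cycle in this forest
            self.parent[rx] = ry
            return True

    def successive_forest_weights(n, weights, k_max):
        """Place each edge (in weight order) into the first forest that accepts it;
        return the total weights accumulated by forests F_1, ..., F_{k_max}."""
        forests = [UnionFind(n) for _ in range(k_max)]
        totals = [0.0] * k_max
        for (i, j) in sorted(weights, key=weights.get):
            # place the edge in the first forest that accepts it;
            # if all k_max tracked forests reject it, it is simply skipped
            for k in range(k_max):
                if forests[k].union(i, j):
                    totals[k] += weights[(i, j)]
                    break
        return totals

    if __name__ == "__main__":
        n = 1000
        weights = {e: random.random() for e in combinations(range(n), 2)}
        for k, w in enumerate(successive_forest_weights(n, weights, k_max=4), start=1):
            # w(T_k) tends to gamma_k, with 2k - sqrt(2k) < gamma_k < 2k + sqrt(2k)
            print(f"w(F_{k}) ~ {w:.3f}")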

    Group Field theory and Tensor Networks: towards a Ryu-Takayanagi formula in full quantum gravity

    We establish a dictionary between group field theory states (thus, spin network and random tensor states) and generalized random tensor networks. Then, we use this dictionary to compute the Rényi entropy of such states and recover the Ryu-Takayanagi formula, in two different cases corresponding to two different truncations/approximations suggested by the established correspondence. Comment: 54 pages, 10 figures; v2: replaced figure 1 with a new version; matches the submitted version. v3: removed the Rényi entropy computation on the random tensor network, focusing on the GFT computation and interpretation.

    On The Growth Of Permutation Classes

    We study aspects of the enumeration of permutation classes, sets of permutations closed downwards under the subpermutation order. First, we consider monotone grid classes of permutations. We present procedures for calculating the generating function of any class whose matrix has dimensions m × 1 for some m, and of acyclic and unicyclic classes of gridded permutations. We show that almost all large permutations in a grid class have the same shape, and determine this limit shape. We prove that the growth rate of a grid class is given by the square of the spectral radius of an associated graph and deduce some facts relating to the set of grid class growth rates. In the process, we establish a new result concerning tours on graphs. We also prove a similar result relating the growth rate of a geometric grid class to the matching polynomial of a graph, and determine the effect of edge subdivision on the matching polynomial. We characterise the growth rates of geometric grid classes in terms of the spectral radii of trees. We then investigate the set of growth rates of permutation classes and establish a new upper bound on the value above which every real number is the growth rate of some permutation class. In the process, we prove new results concerning expansions of real numbers in non-integer bases in which the digits are drawn from sets of allowed values. Finally, we introduce a new enumeration technique, based on associating a graph with each permutation, and determine the generating functions for some previously unenumerated classes. We conclude by using this approach to provide an improved lower bound on the growth rate of the class of permutations avoiding the pattern 1324. In the process, we prove that, asymptotically, patterns in Łukasiewicz paths exhibit a concentrated Gaussian distribution
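
    A purely illustrative sketch of the spectral-radius formula mentioned above: the abstract states that the growth rate of a grid class equals the square of the spectral radius of an associated graph, but it does not spell out the construction of that graph here. Assuming the associated graph is already given by its adjacency matrix A (the matrix below is a made-up example, not one drawn from the thesis), the growth rate would be computed as follows.

    import numpy as np

    # Hypothetical associated graph: a triangle, given by its adjacency matrix.
    A = np.array([[0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]])

    spectral_radius = max(abs(np.linalg.eigvals(A)))  # largest eigenvalue modulus
    growth_rate = spectral_radius ** 2                # growth rate per the stated formula
    print(growth_rate)                                # about 4.0: radius 2, squared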

    The double scaling limit of random tensor models

    Tensor models generalize matrix models and generate colored triangulations of pseudo-manifolds in dimensions $D \geq 3$. The free energies of some models have been recently shown to admit a double scaling limit, i.e. large tensor size $N$ while tuning to criticality, which turns out to be summable in dimension less than six. This double scaling limit is here extended to arbitrary models. This is done by means of the Schwinger-Dyson equations, which generalize the loop equations of random matrix models, coupled to a double scale analysis of the cumulants. Comment: 37 pages, 13 figures; several references were added. A new subsection was added to first present all the results (before the technical proofs which follow). A misprint was corrected.