
    Entropy of Some Models of Sparse Random Graphs With Vertex-Names

    Consider the setting of sparse graphs on N vertices, where the vertices have distinct "names", which are strings of length O(log N) from a fixed finite alphabet. For many natural probability models, the entropy grows as cN log N for some model-dependent rate constant c. The mathematical content of this paper is the (often easy) calculation of c for a variety of models, in particular for various standard random graph models adapted to this setting. Our broader purpose is to publicize this particular setting as a natural setting for future theoretical study of data compression for graphs, and (more speculatively) for discussion of unorganized versus organized complexity. Comment: 31 pages
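
    As a point of reference only (this back-of-the-envelope calculation is not taken from the abstract above), the classical entropy count for the unnamed sparse Erdős-Rényi graph $G(N, \lambda/N)$ shows where the $N \log N$ order and a concrete rate constant come from; the paper's models add vertex names on top of such baselines.

```latex
% Illustrative baseline: entropy of the edge set of G(N, \lambda/N), vertex names ignored.
% Each of the \binom{N}{2} potential edges is an independent Bernoulli(\lambda/N) variable.
\begin{align*}
H\bigl(G(N,\lambda/N)\bigr)
  &= \binom{N}{2}\, h(\lambda/N),
     \qquad h(p) = -p\log p - (1-p)\log(1-p), \\
  &\approx \frac{N^2}{2}\cdot\frac{\lambda}{N}\Bigl(\log\frac{N}{\lambda} + 1\Bigr)
   \;\sim\; \frac{\lambda}{2}\, N \log N ,
\end{align*}
% giving rate constant c = \lambda/2 (in nats) for this baseline case.
```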

    The Incipient Giant Component in Bond Percolation on General Finite Weighted Graphs

    On a large finite connected graph, let edges $e$ become "open" at independent random Exponential times of arbitrary rates $w_e$. Under minimal assumptions, the time at which a giant component starts to emerge is weakly concentrated around its mean.
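
    A minimal simulation sketch of this setup (not code from the paper): edges open at independent Exponential times with given rates, and a union-find structure reports when the largest open component first exceeds a fixed fraction of the vertices. The complete graph, the uniform rates 1/n and the "giant" threshold of half the vertices below are illustrative choices.

```python
import random

def emergence_time(n, edges, rates, giant_fraction=0.5, seed=0):
    """Time at which the largest open component first exceeds giant_fraction * n.

    edges : list of (u, v) pairs on vertices 0..n-1
    rates : list of Exponential rates w_e, one per edge
    """
    rng = random.Random(seed)
    # Edge e becomes open at an Exponential(rate w_e) time.
    open_times = sorted((rng.expovariate(w), e) for w, e in zip(rates, edges))

    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for t, (u, v) in open_times:           # add edges in order of opening time
        ru, rv = find(u), find(v)
        if ru != rv:
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru
            size[ru] += size[rv]
            if size[ru] > giant_fraction * n:
                return t
    return None                            # threshold never reached

# Illustration: complete graph on 1000 vertices, all rates equal to 1/n.
n = 1000
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
print(emergence_time(n, edges, [1.0 / n] * len(edges)))
```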

    Weak Concentration for First Passage Percolation Times on Graphs and General Increasing Set-valued Processes

    A simple lemma bounds $\mathrm{s.d.}(T)/\mathbb{E}T$ for hitting times $T$ in Markov chains with a certain strong monotonicity property. We show how this lemma may be applied to several increasing set-valued processes. Our main result concerns a model of first passage percolation on a finite graph, where the traversal times of edges are independent Exponentials with arbitrary rates. Consider the percolation time $X$ between two arbitrary vertices. We prove that $\mathrm{s.d.}(X)/\mathbb{E}X$ is small if and only if $\Xi/\mathbb{E}X$ is small, where $\Xi$ is the maximal edge-traversal time in the percolation path attaining $X$.
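
    The two quantities in the result can be estimated by straightforward simulation. The sketch below (an illustration, not the paper's argument) draws independent Exponential edge-traversal times, runs Dijkstra to get the percolation time $X$ between two vertices, recovers the maximal single-edge time $\Xi$ on the optimal path, and estimates $\mathrm{s.d.}(X)/\mathbb{E}X$ and $\mathbb{E}\Xi/\mathbb{E}X$ by Monte Carlo; the complete graph with unit rates is an arbitrary test case.

```python
import heapq, math, random

def percolation_time(n, adj, rng):
    """First passage time between vertex 0 and vertex n-1, together with the
    maximal edge-traversal time on the optimal path.  adj[u] = [(v, rate), ...]."""
    weight = {}
    for u in range(n):                     # one Exponential time per undirected edge
        for v, rate in adj[u]:
            if (v, u) not in weight:
                weight[(u, v)] = rng.expovariate(rate)
    w = lambda u, v: weight.get((u, v), weight.get((v, u)))

    dist, prev = [math.inf] * n, [None] * n
    dist[0] = 0.0
    heap = [(0.0, 0)]
    while heap:                            # standard Dijkstra
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, _ in adj[u]:
            nd = d + w(u, v)
            if nd < dist[v]:
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))

    xi, v = 0.0, n - 1                     # largest single edge time on the optimal path
    while prev[v] is not None:
        xi = max(xi, w(prev[v], v))
        v = prev[v]
    return dist[n - 1], xi

# Monte Carlo on the complete graph with all rates 1 (arbitrary test case).
n = 100
adj = [[(v, 1.0) for v in range(n) if v != u] for u in range(n)]
rng = random.Random(1)
samples = [percolation_time(n, adj, rng) for _ in range(200)]
xs, xis = [x for x, _ in samples], [xi for _, xi in samples]
ex = sum(xs) / len(xs)
sd = (sum((x - ex) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
print("sd(X)/E[X] =", sd / ex, "  E[Xi]/E[X] =", sum(xis) / len(xis) / ex)
```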

    Percolation-like Scaling Exponents for Minimal Paths and Trees in the Stochastic Mean Field Model

    In the mean field (or random link) model there are $n$ points and inter-point distances are independent random variables. For $0 < \ell < \infty$ and in the $n \to \infty$ limit, let $\delta(\ell) = 1/n \times$ (maximum number of steps in a path whose average step-length is $\leq \ell$). The function $\delta(\ell)$ is analogous to the percolation function in percolation theory: there is a critical value $\ell_* = e^{-1}$ at which $\delta(\cdot)$ becomes non-zero, and (presumably) a scaling exponent $\beta$ in the sense $\delta(\ell) \asymp (\ell - \ell_*)^\beta$. Recently developed probabilistic methodology (in some sense a rephrasing of the cavity method of Mezard-Parisi) provides a simple albeit non-rigorous way of writing down such functions in terms of solutions of fixed-point equations for probability distributions. Solving numerically gives convincing evidence that $\beta = 3$. A parallel study with trees instead of paths gives scaling exponent $\beta = 2$. The new exponents coincide with those found in a different context (comparing optimal and near-optimal solutions of mean-field TSP and MST) and reinforce the suggestion that these scaling exponents determine universality classes for optimization problems on random points. Comment: 19 pages
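
    For very small $n$ the quantity $\delta(\ell)$ can be computed by brute force directly from its definition, which may help fix ideas even though it says nothing about the $n \to \infty$ scaling. In the sketch below the inter-point distances are taken i.i.d. Exponential with mean $n$; that scaling convention, like everything else in the snippet, is an assumption made for illustration rather than a detail taken from the paper.

```python
import itertools, random

def delta_hat(lengths, n, ell):
    """Brute force: 1/n * (max number of steps in a simple path whose average
    step-length is <= ell).  Only feasible for very small n."""
    best = 0

    def dfs(last, visited, steps, total):
        nonlocal best
        if steps > 0 and total <= ell * steps:
            best = max(best, steps)
        for v in range(n):
            if v not in visited:            # extend the simple path by one step
                d = lengths[min(last, v), max(last, v)]
                dfs(v, visited | {v}, steps + 1, total + d)

    for start in range(n):
        dfs(start, {start}, 0, 0.0)
    return best / n

# Assumed convention: inter-point distances i.i.d. Exponential with mean n,
# so typical nearest-neighbour distances are of order 1.
n = 8
rng = random.Random(0)
lengths = {(i, j): rng.expovariate(1.0 / n)
           for i, j in itertools.combinations(range(n), 2)}
for ell in (0.3, 0.5, 1.0, 2.0):
    print(ell, delta_hat(lengths, n, ell))
```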

    How to Combine Fast Heuristic Markov Chain Monte Carlo with Slow Exact Sampling

    Use each of $n$ exact samples as the initial state for an MCMC sampler run for $m$ steps. We give confidence intervals for accuracy of estimators which are always valid and which, in certain settings, are almost as good as the intervals one would obtain if the (unknown) mixing time of the chain were known. Comment: 14 pages
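
    A hedged sketch of the general recipe (not the paper's specific confidence interval): start $n$ independent chains from exact samples, run each for $m$ Metropolis steps, and interval-estimate $\mathbb{E}f$ from the $n$ chain averages, which are i.i.d. and unbiased because each chain starts in stationarity. The toy normal target, the random-walk proposal and the plain normal-approximation interval below are illustrative stand-ins.

```python
import math, random

def exact_sample(rng):
    return rng.gauss(0.0, 1.0)          # toy target: standard normal (exact sampling easy)

def log_target(x):
    return -0.5 * x * x

def chain_average(x0, m, f, rng, step=1.0):
    """Average of f along m Metropolis random-walk steps started at x0.
    Since x0 is an exact draw from the target, the average is unbiased for E[f]."""
    x, lp, acc = x0, log_target(x0), 0.0
    for _ in range(m):
        y = x + rng.gauss(0.0, step)
        ly = log_target(y)
        if ly >= lp or rng.random() < math.exp(ly - lp):
            x, lp = y, ly
        acc += f(x)
    return acc / m

def estimate_with_ci(n, m, f, seed=0, z=1.96):
    """n exact starts, m MCMC steps each; CI from the n i.i.d. chain averages."""
    rng = random.Random(seed)
    avgs = [chain_average(exact_sample(rng), m, f, rng) for _ in range(n)]
    mean = sum(avgs) / n
    var = sum((a - mean) ** 2 for a in avgs) / (n - 1)
    half = z * math.sqrt(var / n)
    return mean, (mean - half, mean + half)

# Estimate E[X^2] = 1 for the toy target.
print(estimate_with_ci(n=50, m=200, f=lambda x: x * x))
```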

    A survey of max-type recursive distributional equations

    In certain problems in a variety of applied probability settings (from probabilistic analysis of algorithms to statistical physics), the central requirement is to solve a recursive distributional equation of the form $X \stackrel{d}{=} g((\xi_i, X_i), i \geq 1)$. Here $(\xi_i)$ and $g(\cdot)$ are given and the $X_i$ are independent copies of the unknown distribution $X$. We survey this area, emphasizing examples where the function $g(\cdot)$ is essentially a "maximum" or "minimum" function. We draw attention to the theoretical question of endogeny: in the associated recursive tree process $X_i$, are the $X_i$ measurable functions of the innovations process $(\xi_i)$? Comment: Published at http://dx.doi.org/10.1214/105051605000000142 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
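
    A standard numerical tool for such equations is the population dynamics ("pool") method: keep a pool of samples representing the law of $X$ and repeatedly resample it through the recursion. The sketch below applies it to one well-known min-type example discussed in this area, $X \stackrel{d}{=} \min_i(\xi_i - X_i)$ with $(\xi_i)$ the points of a rate-1 Poisson process on $(0,\infty)$, whose fixed point is the logistic distribution; the pool size, the truncation of the Poisson process after 30 arrivals and the iteration count are illustrative choices.

```python
import math, random

def population_dynamics(pool_size=5000, iterations=40, k_points=30, seed=0):
    """Approximate the fixed point of the min-type RDE
        X  =d  min_i (xi_i - X_i),
    where xi_1 < xi_2 < ... are the points of a rate-1 Poisson process on (0, inf)
    and the X_i are independent copies of X.  The Poisson process is truncated at
    its first k_points arrivals; later points are too large to attain the minimum
    except with negligible probability."""
    rng = random.Random(seed)
    pool = [0.0] * pool_size                 # crude initial guess for the law of X
    for _ in range(iterations):
        new_pool = []
        for _ in range(pool_size):
            xi, best = 0.0, math.inf
            for _ in range(k_points):
                xi += rng.expovariate(1.0)   # next Poisson arrival
                best = min(best, xi - rng.choice(pool))
            new_pool.append(best)
        pool = new_pool
    return pool

pool = population_dynamics()
# Compare the empirical CDF with the logistic distribution 1/(1+e^{-x}),
# the known solution of this particular RDE.
for x in (-2.0, -1.0, 0.0, 1.0, 2.0):
    emp = sum(v <= x for v in pool) / len(pool)
    print(x, round(emp, 3), round(1.0 / (1.0 + math.exp(-x)), 3))
```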

    A critical branching process model for biodiversity

    Motivated as a null model for comparison with data, we study the following model for a phylogenetic tree on $n$ extant species. The origin of the clade is a random time in the past, whose (improper) distribution is uniform on $(0,\infty)$. After that origin, the process of extinctions and speciations is a continuous-time critical branching process of constant rate, conditioned on having the prescribed number $n$ of species at the present time. We study various mathematical properties of this model in the $n \to \infty$ limit: time of origin and of most recent common ancestor; pattern of divergence times within lineage trees; time series of numbers of species; number of extinct species in total, or ancestral to extant species; and "local" structure of the tree itself. We emphasize several mathematical techniques: associating walks with trees, a point process representation of lineage trees, and Brownian limits. Comment: 31 pages, 7 figures
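
    A small simulation sketch of the underlying (unconditioned) process: a continuous-time critical branching process with speciation and extinction rates both equal to 1, recording the time series of the number of species. The paper's model additionally makes the origin time random and conditions on exactly $n$ extant species at the present, which this sketch does not attempt.

```python
import random

def critical_branching_time_series(t_max=20.0, rate=1.0, seed=0):
    """Continuous-time critical branching process started from one species:
    each species independently speciates (splits) at `rate` and goes extinct
    at the same `rate`.  Returns jump times and the number of species."""
    rng = random.Random(seed)
    t, n = 0.0, 1
    times, sizes = [0.0], [1]
    while n > 0 and t < t_max:
        t += rng.expovariate(2.0 * rate * n)   # total event rate is 2 * rate * n
        n += 1 if rng.random() < 0.5 else -1   # criticality: birth and death equally likely
        times.append(t)
        sizes.append(n)
    return times, sizes

times, sizes = critical_branching_time_series()
print("events:", len(times) - 1, " final number of species:", sizes[-1])
```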

    RiffleScrambler - a memory-hard password storing function

    We introduce RiffleScrambler: a new family of directed acyclic graphs and a corresponding data-independent memory-hard function with password-independent memory access. We prove its memory hardness in the random oracle model. RiffleScrambler is similar to Catena: updates of hashes are determined by a graph (a bit-reversal or double-butterfly graph in Catena). The advantage of RiffleScrambler over Catena is that the underlying graphs are not predefined but are generated per salt, as in Balloon Hashing. Such an approach leads to higher immunity against practical parallel attacks. RiffleScrambler offers better efficiency than Balloon Hashing since the in-degree of the underlying graph is equal to 3 (and is much smaller than in Balloon Hashing). At the same time, because the underlying graph is an instance of a Superconcentrator, our construction achieves the same time-memory trade-offs. Comment: Accepted to ESORICS 2018
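
    To make the "hashing along a salt-dependent DAG" idea concrete, here is a generic sketch of a data-independent, graph-driven memory-hard function in the style this family of constructions shares. The salt-seeded random in-degree-3 graph, block count and pass count below are invented for illustration; they are emphatically not the RiffleScrambler (or Catena/Balloon) graph, and no memory-hardness is claimed for this toy.

```python
import hashlib, random

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def graph_mhf(password: bytes, salt: bytes, n_blocks: int = 1 << 14, passes: int = 3) -> bytes:
    """Toy data-independent, graph-based memory-hard function (NOT RiffleScrambler).

    The block-dependency graph is derived from the salt alone, so memory access
    is password-independent; each block is rehashed from its predecessor plus two
    salt-chosen blocks, i.e. in-degree 3, mimicking the general pattern only."""
    rng = random.Random(H(salt, b"graph"))     # salt-seeded, password-independent graph
    mem = [b""] * n_blocks
    mem[0] = H(password, salt)
    for i in range(1, n_blocks):               # sequential fill pass
        mem[i] = H(mem[i - 1], bytes([i & 0xFF]))
    for _ in range(passes):                    # mixing passes along the salt graph
        for i in range(n_blocks):
            j, k = rng.randrange(n_blocks), rng.randrange(n_blocks)
            mem[i] = H(mem[i - 1], mem[j], mem[k])   # mem[-1] wraps to the last block
    return mem[-1]

print(graph_mhf(b"correct horse battery staple", b"per-user salt").hex())
```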