
    Exactly solvable model with two conductor-insulator transitions driven by impurities

    We present an exact analysis of two conductor-insulator transitions in the random graph model. The average connectivity is related to the concentration of impurities. The adjacency matrix of a large random graph is used as a hopping Hamiltonian. Its spectrum has a delta peak at zero energy. Our analysis is based on an explicit expression for the height of this peak, and a detailed description of the localized eigenvectors and of their contribution to the peak. Starting from the low connectivity (high impurity density) regime, one encounters an insulator-conductor transition at average connectivity 1.421529... and a conductor-insulator transition at average connectivity 3.154985.... We explain the spectral singularity at average connectivity e = 2.718281... and relate it to another enumerative problem in random graph theory, the minimal vertex cover problem.
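
    A quick numerical illustration of the quantity the analysis is built on (a sketch, not the paper's exact solution): diagonalize the adjacency matrix of a sparse random graph for several average connectivities c and measure the weight of the zero-energy peak. The size n = 400, the tolerance, and the sampled c values are arbitrary choices.

```python
# Finite-size probe of the zero-energy delta peak in the spectrum of a
# sparse random graph used as a hopping Hamiltonian (illustration only).
import numpy as np

def zero_peak_height(n, c, tol=1e-8, seed=0):
    rng = np.random.default_rng(seed)
    # Symmetric 0/1 adjacency matrix of G(n, c/n): average degree c.
    upper = np.triu(rng.random((n, n)) < c / n, k=1)
    A = (upper | upper.T).astype(float)
    eig = np.linalg.eigvalsh(A)            # spectrum of the hopping Hamiltonian
    return float(np.mean(np.abs(eig) < tol))

for c in (1.0, 1.42, 2.72, 3.15, 4.0):     # values bracketing the transitions
    print(f"c = {c:4.2f}: zero-eigenvalue weight ~ {zero_peak_height(400, c):.3f}")
```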

    Discordant voting processes on finite graphs

    We consider an asynchronous voting process on graphs which we call discordant voting, and which can be described as follows. Initially each vertex holds one of two opinions, red or blue say. Neighbouring vertices with different opinions interact pairwise. After an interaction both vertices have the same colour. The quantity of interest is T, the time to reach consensus, i.e. the number of interactions needed for all vertices to have the same colour. An edge whose endpoint colours differ (i.e. one vertex is coloured red and the other one blue) is said to be discordant. A vertex is discordant if it is incident with a discordant edge. In discordant voting, all interactions are based on discordant edges. Because the voting process is asynchronous, there are several ways to update the colours of the interacting vertices. Push: pick a random discordant vertex and push its colour to a random discordant neighbour. Pull: pick a random discordant vertex and pull the colour of a random discordant neighbour. Oblivious: pick a random endpoint of a random discordant edge and push the colour to the other endpoint. We show that E(T), the expected time to reach consensus, depends strongly on the underlying graph and the update rule. For connected graphs on n vertices, and an initial half red, half blue colouring, the following hold. For oblivious voting, E(T) = n^2/4 independent of the underlying graph. For the complete graph K_n, the push protocol has E(T) = Θ(n log n), whereas the pull protocol has E(T) = Θ(2^n). For the cycle C_n all three protocols have E(T) = Θ(n^2). For the star graph, however, the pull protocol has E(T) = O(n^2), whereas the push protocol is slower with E(T) = Θ(n^2 log n). The wide variation in E(T) for the pull protocol is to be contrasted with the well-known model of synchronous pull voting, for which E(T) = O(n) on many classes of expanders.
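
    The three update rules are easy to simulate directly. Below is a minimal sketch on the cycle C_n (the graph, n = 20, and the seed are arbitrary choices, and single runs only hint at the stated expectations):

```python
# Discordant voting: push, pull and oblivious updates until consensus.
import random

def consensus_time(adj, colours, rule, rng):
    steps = 0
    while True:
        # Directed discordant pairs: each discordant edge appears once per endpoint.
        disc = [(u, v) for u in adj for v in adj[u] if colours[u] != colours[v]]
        if not disc:
            return steps
        steps += 1
        if rule == "oblivious":
            u, v = rng.choice(disc)        # random endpoint of a random edge
            colours[v] = colours[u]
        else:
            u = rng.choice(sorted({x for x, _ in disc}))  # random discordant vertex
            v = rng.choice([w for w in adj[u] if colours[w] != colours[u]])
            if rule == "push":
                colours[v] = colours[u]    # push own colour to the neighbour
            else:
                colours[u] = colours[v]    # pull the neighbour's colour

n = 20
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rng = random.Random(1)
for rule in ("push", "pull", "oblivious"):
    half = {i: i < n // 2 for i in range(n)}   # half red (True), half blue
    print(rule, consensus_time(cycle, half, rule, rng))
```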

    The decimation process in random k-SAT

    Let F be a uniformly distributed random k-SAT formula with n variables and m clauses. Non-rigorous statistical mechanics ideas have inspired a message passing algorithm called Belief Propagation Guided Decimation for finding satisfying assignments of F. This algorithm can be viewed as an attempt at implementing a certain thought experiment that we call the Decimation Process. In this paper we identify a variety of phase transitions in the decimation process and link these phase transitions to the performance of the algorithm.
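
    A toy rendering of the decimation idea (a sketch: the thought experiment fixes variables according to exact Belief Propagation marginals, whereas here a crude occurrence count stands in for the marginal, and all parameters are arbitrary):

```python
# Decimation on random 3-SAT: repeatedly fix a variable and simplify the
# formula; failure means an empty clause was produced along the way.
import random

def decimate(n, clauses, rng):
    clauses = [set(c) for c in clauses]
    assignment = {}
    while True:
        free = sorted({abs(l) for c in clauses for l in c})
        if not free:
            return assignment              # remaining variables are unconstrained
        x = rng.choice(free)
        pos = sum(1 for c in clauses if x in c)
        neg = sum(1 for c in clauses if -x in c)
        assignment[x] = pos >= neg         # crude stand-in for the BP marginal
        lit = x if assignment[x] else -x
        clauses = [c - {-lit} for c in clauses if lit not in c]
        if any(not c for c in clauses):
            return None                    # produced an empty clause: failure

rng = random.Random(0)
n, m, k = 50, 180, 3                       # density m/n = 3.6, below the 3-SAT threshold
formula = [set(rng.choice([-1, 1]) * v for v in rng.sample(range(1, n + 1), k))
           for _ in range(m)]
print("assignment found:", decimate(n, formula, rng) is not None)
```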

    Wear Minimization for Cuckoo Hashing: How Not to Throw a Lot of Eggs into One Basket

    We study wear-leveling techniques for cuckoo hashing, showing that it is possible to achieve a memory wear bound of log log n + O(1) after the insertion of n items into a table of size Cn, for a suitable constant C, using cuckoo hashing. Moreover, we study our cuckoo hashing method empirically, showing that it significantly improves on the memory wear performance of classic cuckoo hashing and linear probing in practice.
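
    To make "memory wear" concrete, the sketch below instruments classic cuckoo hashing (the baseline, not the paper's wear-levelled variant) with a per-cell write counter; the table size 3n and the hash construction are arbitrary choices:

```python
# Classic cuckoo hashing with wear (write) counters per table cell.
import random

def max_wear(items, size, rng, max_kicks=500):
    table = [None] * size
    wear = [0] * size                       # number of writes at each cell
    s0, s1 = rng.random(), rng.random()
    h = lambda x, i: hash((x, s0 if i == 0 else s1)) % size
    for x in items:
        pos = h(x, 0)
        for _ in range(max_kicks):
            table[pos], x = x, table[pos]   # write item, evict old occupant
            wear[pos] += 1
            if x is None:
                break
            pos = h(x, 1) if pos == h(x, 0) else h(x, 0)  # evictee's other slot
        else:
            raise RuntimeError("eviction cycle; a real table would rebuild")
    return max(wear)

rng = random.Random(3)
n = 10_000
print("max cell wear, classic cuckoo:", max_wear(range(n), size=3 * n, rng=rng))
```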

    Spatial Mixing of Coloring Random Graphs

    We study the strong spatial mixing (decay of correlation) property of proper q-colorings of the random graph G(n, d/n) with a fixed d. The strong spatial mixing of coloring and related models has been extensively studied on graphs with bounded maximum degree. However, for typical classes of graphs with bounded average degree, such as G(n, d/n), an easy counterexample shows that colorings do not exhibit strong spatial mixing with high probability. Nevertheless, we show that for q ≥ αd + β with α > 2 and sufficiently large β = O(1), with high probability proper q-colorings of the random graph G(n, d/n) exhibit strong spatial mixing with respect to an arbitrarily fixed vertex. This is the first strong spatial mixing result for colorings of graphs with unbounded maximum degree. Our analysis of strong spatial mixing establishes a block-wise correlation decay instead of the standard point-wise decay, which may be of interest by itself, especially for graphs with unbounded degree.
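
    The theorem is proved analytically, but the notion being established can be probed empirically. A rough sketch (all parameters arbitrary, and the pinned vertex is not guaranteed to be far from the root): sample q-colorings of G(n, d/n) by Glauber dynamics under two different pinnings and compare the root's marginal.

```python
# Empirical influence of a pinned vertex on the root's colour marginal.
import random

def pinned_marginal(adj, q, root, pin_v, pin_c, steps, rng):
    col = {v: rng.randrange(q) for v in adj}
    col[pin_v] = pin_c                       # the fixed boundary vertex
    free = [v for v in adj if v != pin_v]
    counts = [0] * q
    for t in range(steps):
        v = rng.choice(free)
        allowed = sorted(set(range(q)) - {col[u] for u in adj[v]})
        if allowed:
            col[v] = rng.choice(allowed)     # Glauber update
        if t > steps // 2:                   # crude burn-in, then collect
            counts[col[root]] += 1
    total = sum(counts)
    return [c / total for c in counts]

rng = random.Random(7)
n, d, q = 60, 3.0, 12                        # q comfortably above alpha*d + beta
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if rng.random() < d / n:
            adj[u].add(v); adj[v].add(u)
root, far = 0, n - 1                         # arbitrary; a real test would fix the distance
m0 = pinned_marginal(adj, q, root, far, 0, 100_000, rng)
m1 = pinned_marginal(adj, q, root, far, 1, 100_000, rng)
print("root influence (total variation):",
      round(sum(abs(a - b) for a, b in zip(m0, m1)) / 2, 4))
```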

    Trajectories in phase diagrams, growth processes and computational complexity: how search algorithms solve the 3-Satisfiability problem

    Most decision and optimization problems encountered in practice fall into one of two categories with respect to any particular solving method or algorithm: either the problem is solved quickly (easy) or else demands an impractically long computational effort (hard). Recent investigations on model classes of problems have shown that some global parameters, such as the ratio of the number of constraints to be satisfied to the number of adjustable variables, are good predictors of problem hardness and, moreover, have an effect analogous to thermodynamical parameters, e.g. temperature, in predicting phases in condensed matter physics [Monasson et al., Nature 400 (1999) 133-137]. Here we show that changes in the values of such parameters can be tracked during a run of the algorithm, defining a trajectory through the parameter space. Focusing on 3-Satisfiability, a recognized representative of hard problems, we analyze trajectories generated by search algorithms using growth processes from statistical physics. These trajectories can cross well-defined phases, corresponding to domains of easy or hard instances, and allow one to successfully predict resolution times.
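
    A minimal version of the trajectory idea (a sketch, not the paper's analysis): run the simple unit-clause heuristic on a random 3-SAT instance and record, after each assignment, the densities of surviving 2- and 3-clauses per unset variable, i.e. the instance's path through the (α2, α3) plane. There is no backtracking here; a contradiction simply ends the run.

```python
# Trajectory of a random 3-SAT instance under the unit-clause heuristic.
import random

def uc_trajectory(n, alpha, rng):
    clauses = [set(rng.choice([-1, 1]) * v for v in rng.sample(range(1, n + 1), 3))
               for _ in range(int(alpha * n))]
    unset, path = set(range(1, n + 1)), []
    while unset:
        unit = next((c for c in clauses if len(c) == 1), None)
        lit = next(iter(unit)) if unit else \
              rng.choice([-1, 1]) * rng.choice(sorted(unset))
        unset.discard(abs(lit))
        clauses = [c - {-lit} for c in clauses if lit not in c]
        if any(not c for c in clauses):
            return path, False               # contradiction; no backtracking here
        if unset:
            path.append((sum(len(c) == 2 for c in clauses) / len(unset),
                         sum(len(c) == 3 for c in clauses) / len(unset)))
    return path, True

rng = random.Random(0)
path, ok = uc_trajectory(500, 2.5, rng)
print("solved without contradiction:", ok)
print("(alpha2, alpha3) along the run:",
      [(round(a, 2), round(b, 2)) for a, b in path[::120]])
```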

    Simulating Auxiliary Inputs, Revisited

    For any pair (X, Z) of correlated random variables we can think of Z as a randomized function of X. Provided that Z is short, one can make this function computationally efficient by allowing it to be only approximately correct. In folklore this problem is known as simulating auxiliary inputs. This idea of simulating auxiliary information turns out to be a powerful tool in computer science, finding applications in complexity theory, cryptography, pseudorandomness and zero-knowledge. In this paper we revisit this problem, achieving the following results: (a) we discuss and compare the efficiency of known results, finding a flaw in the best known bound claimed in the TCC'14 paper "How to Fake Auxiliary Inputs"; (b) we present a novel boosting algorithm for constructing the simulator, which essentially fixes the flaw; this boosting proof is of independent interest, as it shows how to handle "negative mass" issues when constructing probability measures in descent algorithms; (c) our bounds are much better than the bounds known so far: to make the simulator (s, ε)-indistinguishable we need complexity O(s · 2^{5ℓ} · ε^{-2}) in time/circuit size, which is better by a factor ε^{-2} compared to previous bounds. In particular, with our technique we (finally) get meaningful provable security for the EUROCRYPT'09 leakage-resilient stream cipher instantiated with a standard 256-bit block cipher, like AES256.
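
    A toy illustration of the object being constructed, not the paper's boosting algorithm: given samples of a correlated pair (X, Z), a lookup table of empirical conditional samples already yields an h with (X, h(X)) statistically close to (X, Z). The paper's whole point, ignored here, is achieving this with small circuit complexity; the distribution below is an arbitrary example.

```python
# Brute-force "simulator" for auxiliary input Z given X (illustration only).
import random
from collections import Counter, defaultdict

rng = random.Random(2)
xs = [rng.randrange(8) for _ in range(50_000)]
zs = [(x % 2) ^ (rng.random() < 0.1) for x in xs]     # Z: noisy parity of X

cond = defaultdict(list)                # empirical conditional samples of Z | X
for x, z in zip(xs, zs):
    cond[x].append(z)

def h(x):                               # the simulator: resample Z from X alone
    return rng.choice(cond[x])

real = Counter(zip(xs, zs))
fake = Counter((x, h(x)) for x in xs)
tv = sum(abs(real[k] - fake[k]) for k in real | fake) / (2 * len(xs))
print("statistical distance between (X, Z) and (X, h(X)):", round(tv, 4))
```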

    Fast Scalable Construction of (Minimal Perfect Hash) Functions

    Recent advances in random linear systems on finite fields have paved the way for the construction of constant-time data structures representing static functions and minimal perfect hash functions using less space than existing techniques. The main obstruction to any practical application of these results is the cubic-time Gaussian elimination required to solve these linear systems: even though the systems can be made very small, the computation is still too slow to be feasible. In this paper we describe in detail a number of heuristics and programming techniques that speed up the resolution of these systems by several orders of magnitude, making the overall construction competitive with the standard and widely used MWHC technique, which is based on hypergraph peeling. In particular, we introduce broadword programming techniques for fast equation manipulation and a lazy Gaussian elimination algorithm. We also describe a number of technical improvements to the data structure which further reduce space usage and improve lookup speed. Our implementation of these techniques yields a minimal perfect hash function data structure occupying 2.24 bits per element, compared to 2.68 for MWHC-based ones, and a static function data structure which reduces the multiplicative overhead from 1.23 to 1.03.
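
    The broadword idea can be sketched in a few lines: store each GF(2) equation as one machine integer, so eliminating a variable from a row is a single XOR. This is plain Gauss-Jordan elimination, not the paper's lazy variant, and the random square system is an arbitrary test case.

```python
# GF(2) linear solver with bit-packed rows (broadword-style elimination).
import random

def solve_gf2(rows, rhs, ncols):
    # rows[i] is an int: bit j set iff variable j appears in equation i.
    aug = [r | (b << ncols) for r, b in zip(rows, rhs)]   # append rhs as a bit
    where, r = [-1] * ncols, 0
    for c in range(ncols):
        p = next((i for i in range(r, len(aug)) if aug[i] >> c & 1), None)
        if p is None:
            continue                         # no pivot: free variable
        aug[r], aug[p] = aug[p], aug[r]
        for i in range(len(aug)):
            if i != r and aug[i] >> c & 1:
                aug[i] ^= aug[r]             # one XOR clears a whole row
        where[c], r = r, r + 1
    if any(row == 1 << ncols for row in aug):
        return None                          # 0 = 1: inconsistent system
    return [aug[where[c]] >> ncols & 1 if where[c] >= 0 else 0
            for c in range(ncols)]

rng = random.Random(5)
n = 64
rows = [rng.getrandbits(n) for _ in range(n)]
rhs = [rng.getrandbits(1) for _ in range(n)]
x = solve_gf2(rows, rhs, n)
if x is not None:                            # verify the solution by parity
    xbits = sum(b << j for j, b in enumerate(x))
    assert all(bin(r & xbits).count("1") % 2 == b for r, b in zip(rows, rhs))
print("solved:", x is not None)
```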

    Parameter estimators of random intersection graphs with thinned communities

    This paper studies a statistical network model generated by a large number of randomly sized overlapping communities, where any pair of nodes sharing a community is linked with probability q via the community. In the special case q = 1 the model reduces to a random intersection graph, which is known to generate high levels of transitivity also in the sparse context. The parameter q adds a degree of freedom and leads to a parsimonious and analytically tractable network model with tunable density, transitivity, and degree fluctuations. We prove that the parameters of this model can be consistently estimated in the large and sparse limiting regime using moment estimators based on partially observed densities of links, 2-stars, and triangles.
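
    A sketch of the model and of the three observed statistics the estimators are based on (the closed-form moment estimators themselves are not reproduced; community sizes 2-5 and all parameters are arbitrary choices):

```python
# Random intersection graph with thinned communities, plus the observed
# densities of links, 2-stars and triangles that the estimators use.
import itertools, random

rng = random.Random(4)
n, m, q = 300, 150, 0.5                     # nodes, communities, link probability
communities = [rng.sample(range(n), rng.randrange(2, 6)) for _ in range(m)]
edges = set()
for com in communities:
    for u, v in itertools.combinations(com, 2):
        if rng.random() < q:                # thinning: link w.p. q per community
            edges.add((min(u, v), max(u, v)))

adj = {v: set() for v in range(n)}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

two_stars = sum(len(adj[v]) * (len(adj[v]) - 1) // 2 for v in range(n))
triangles = sum(len(adj[u] & adj[v]) for u, v in edges) // 3   # 3 edges each
print("links:", len(edges), "2-stars:", two_stars, "triangles:", triangles)
```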