
    On the insertion time of random walk cuckoo hashing

    Cuckoo Hashing is a hashing scheme invented by Pagh and Rodler. It uses d ≥ 2 distinct hash functions to insert items into the hash table. The expected time for Random Walk Insertion to add an item has been an open question for some time. We show that if the number of hash functions d = O(1) is sufficiently large, then the expected insertion time is O(1) per item.
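
    The procedure analyzed here is simple to state. Below is a minimal Python sketch of random walk insertion, assuming a table of size m and d salted hash functions standing in for the paper's fully random ones; the step cap max_steps is a practical cutoff, not part of the model:

        import random

        class RandomWalkCuckoo:
            # Each key has d candidate buckets; insertion walks through the
            # table by evicting a random candidate's occupant when all are full.
            def __init__(self, m, d, max_steps=10_000):
                self.m, self.d, self.max_steps = m, d, max_steps
                self.table = [None] * m
                self.salts = [random.getrandbits(64) for _ in range(d)]

            def choices(self, key):
                return [hash((key, s)) % self.m for s in self.salts]

            def insert(self, key):
                for _ in range(self.max_steps):
                    empty = [b for b in self.choices(key) if self.table[b] is None]
                    if empty:
                        self.table[random.choice(empty)] = key
                        return True
                    b = random.choice(self.choices(key))     # random walk step
                    key, self.table[b] = self.table[b], key  # evict and continue
                return False

    The paper's result says that for a sufficiently large constant d, this loop terminates after an expected O(1) number of steps per item.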

    Matchings on infinite graphs

    Elek and Lippner (2010) showed that the convergence of a sequence of bounded-degree graphs implies the existence of a limit for the proportion of vertices covered by a maximum matching. We provide a characterization of the limiting parameter via a local recursion defined directly on the limit of the graph sequence. Interestingly, the recursion may admit multiple solutions, implying non-trivial long-range dependencies between the covered vertices. We overcome this lack of correlation decay by introducing a perturbative parameter (temperature), which we let progressively go to zero. This allows us to uniquely identify the correct solution. In the important case where the graph limit is a unimodular Galton-Watson tree, the recursion simplifies into a distributional equation that can be solved explicitly, leading to a new asymptotic formula that considerably extends the well-known one by Karp and Sipser for Erdős–Rényi random graphs.
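
    On finite Erdős–Rényi graphs, the Karp–Sipser formula mentioned above is matched by a simple leaf-removal heuristic, sketched here in Python as a standard illustration (not the paper's recursion): repeatedly match a degree-1 vertex to its neighbor when one exists, otherwise match an arbitrary remaining edge (the classical version picks a uniformly random edge).

        import random
        from collections import defaultdict

        def karp_sipser_matching(edges):
            # Greedy leaf removal: matching a pendant vertex to its neighbor
            # is always safe (some maximum matching contains that edge).
            adj = defaultdict(set)
            for u, v in edges:
                adj[u].add(v)
                adj[v].add(u)
            matching = []

            def remove(x):
                for y in adj.pop(x, set()):
                    adj[y].discard(x)

            while any(adj.values()):
                u = next((x for x, nb in adj.items() if len(nb) == 1), None)
                if u is None:
                    # No pendant vertex: match an arbitrary remaining edge.
                    u = random.choice([x for x, nb in adj.items() if nb])
                v = next(iter(adj[u]))
                matching.append((u, v))
                remove(u)
                remove(v)
            return matching

    On G(n, c/n) the matched fraction produced this way converges to the classical formula that the distributional equation in this paper generalizes.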

    Towards Optimal Degree-distributions for Left-perfect Matchings in Random Bipartite Graphs

    Consider a random bipartite multigraph G with n left nodes and m ≥ n ≥ 2 right nodes. Each left node x has d_x ≥ 1 random right neighbors. The average left degree Δ is fixed, Δ ≥ 2. We ask whether, for the probability that G has a left-perfect matching, it is advantageous not to fix d_x for each left node x but rather to choose it at random according to some (cleverly chosen) distribution. We show the following, provided that the degrees of the left nodes are independent: If Δ is an integer, then it is optimal to use a fixed degree of Δ for all left nodes. If Δ is non-integral, then an optimal degree distribution has the property that each left node x has two possible degrees, ⌊Δ⌋ and ⌈Δ⌉, with probability p_x and 1 − p_x, respectively, where p_x is from the closed interval [0,1] and the average over all p_x equals ⌈Δ⌉ − Δ. Furthermore, if n = c·m and Δ > 2 is constant, then each distribution of the left degrees that meets the conditions above determines the same threshold c*(Δ), which has the following property as n goes to infinity: if c < c*(Δ), then there exists a left-perfect matching with high probability; if c > c*(Δ), then there exists no left-perfect matching with high probability. The threshold c*(Δ) is the same as the known threshold for offline k-ary cuckoo hashing for integral or non-integral k = Δ.
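
    The optimal two-point distributions are easy to sample. A minimal sketch, assuming the uniform choice p_x = ⌈Δ⌉ − Δ for every left node (one of the distributions the theorem allows):

        import math, random

        def sample_left_degrees(n, delta):
            # Degrees take only the values floor(delta) and ceil(delta);
            # with P(floor) = ceil(delta) - delta, the mean is exactly delta.
            lo, hi = math.floor(delta), math.ceil(delta)
            if lo == hi:
                return [lo] * n  # integral average: a fixed degree is optimal
            p = hi - delta
            return [lo if random.random() < p else hi for _ in range(n)]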

    On randomness in Hash functions

    In this talk, we discuss quality measures for hash functions used in data structures and algorithms, and survey positive and negative results. (This talk is not about cryptographic hash functions.) For the analysis of algorithms involving hash functions, it is often convenient to assume that the hash functions used behave fully randomly; in some cases there is no known analysis that avoids this assumption. In practice, one needs to get by with weaker hash functions that can be generated by randomized algorithms. A well-studied range of applications concerns realizations of dynamic dictionaries (linear probing, chained hashing, dynamic perfect hashing, cuckoo hashing and its generalizations) or Bloom filters and their variants.

    A particularly successful and useful means of classification is Carter and Wegman's notion of universal or k-wise independent classes, introduced in 1977. A natural and widely used approach to analyzing an algorithm involving hash functions is to show that it works if a sufficiently strong universal class of hash functions is used, and to substitute one of the known constructions of such classes. This invites research into the question of just how much independence in the hash functions is necessary for an algorithm to work. Some recent analyses that gave impossibility results constructed rather artificial classes that would not work; other results pointed out natural, widely used hash classes that would not work in a particular application. Only recently was it shown that, under certain assumptions on the entropy present in the set of keys, even 2-wise independent hash classes lead to strong randomness properties in the hash values. The negative results show that this may not be taken as justification for using weak hash classes indiscriminately, in particular for key sets with structure.

    When stronger independence properties are needed for a theoretical analysis, one may resort to classic constructions. Only in 2003 was it discovered how full randomness can be simulated using only linear space overhead (which is optimal). This "split-and-share" approach can be used to justify the full-randomness assumption in some situations where it is needed for the analysis to go through, as in many applications involving multiple hash functions (e.g., generalized versions of cuckoo hashing with multiple hash functions or larger bucket sizes, load balancing, Bloom filters and variants, or minimal perfect hash function constructions).

    For practice, efficiency considerations beyond constant factors are important. It is not hard to construct very efficient 2-wise independent classes. Using k-wise independent classes for constant k bigger than 3 has become feasible in practice only through new constructions involving tabulation. This fits well with the quite recent result that linear probing works with 5-independent hash functions.

    Recent developments suggest that classifying hash function constructions by their degree of independence alone may not be adequate in some cases. Thus, one may want to analyze the behavior of specific hash classes in specific applications, circumventing the concept of k-wise independence. Several such results were recently achieved concerning hash functions that utilize tabulation. In particular, when the analysis of the application involves randomness properties of graphs and hypergraphs (generalized cuckoo hashing, also in the version with a "stash", or load balancing), a hash class combining k-wise independence with tabulation has turned out to be very powerful.
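
    As a concrete example of the tabulation-based constructions referred to above, here is a minimal Python sketch of simple tabulation hashing for 32-bit keys (byte-sized characters and four lookup tables are the usual parameters; this basic scheme is only 3-wise independent, yet surprisingly powerful in several of the applications listed):

        import random

        # One table of 256 random words per key byte; the hash value is the
        # XOR of one entry per table, selected by the corresponding byte.
        TABLES = [[random.getrandbits(32) for _ in range(256)] for _ in range(4)]

        def tab_hash(key32):
            h = 0
            for i in range(4):
                byte = (key32 >> (8 * i)) & 0xFF
                h ^= TABLES[i][byte]
            return h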

    Tight Thresholds for Cuckoo Hashing via XORSAT

    We settle the question of tight thresholds for offline cuckoo hashing. The problem can be stated as follows: we have n keys to be hashed into m buckets, each capable of holding a single key. Each key has k >= 3 (distinct) associated buckets chosen uniformly at random and independently of the choices of other keys. A hash table can be constructed successfully if each key can be placed into one of its buckets. We seek thresholds alpha_k such that, as n goes to infinity, if n/m <= alpha for some alpha < alpha_k then a hash table can be constructed successfully with high probability, and if n/m >= alpha for some alpha > alpha_k a hash table cannot be constructed successfully with high probability. Here we are considering the offline version of the problem, where all keys and hash values are given, so the problem is equivalent to previous models of multiple-choice hashing. We find the thresholds for all values of k > 2 by showing that they are in fact the same as the previously known thresholds for the random k-XORSAT problem. We then extend these results to the setting where keys can have differing numbers of choices, and provide evidence, in the form of an algorithm, for a conjecture extending this result to cuckoo hash tables that store multiple keys in a bucket.
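
    The thresholds alpha_k are easy to evaluate numerically. A Python sketch following the fixed-point characterization used in this line of work (the exact form below is recalled from the literature and worth checking against the paper's statement): xi* solves k = xi(1 - e^-xi)/(1 - e^-xi - xi e^-xi), and alpha_k = xi*/(k(1 - e^-xi*)^(k-1)).

        from math import exp

        def cuckoo_threshold(k):
            # Bisection for xi*; the ratio below increases from 2 as xi grows,
            # so for each k >= 3 there is a unique root.
            def ratio(x):
                return x * (1 - exp(-x)) / (1 - exp(-x) - x * exp(-x))
            lo, hi = 1e-3, 50.0
            for _ in range(200):
                mid = (lo + hi) / 2
                if ratio(mid) < k:
                    lo = mid
                else:
                    hi = mid
            xi = (lo + hi) / 2
            return xi / (k * (1 - exp(-xi)) ** (k - 1))

        # cuckoo_threshold(3) ~ 0.9179, cuckoo_threshold(4) ~ 0.9768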

    Thresholds for Extreme Orientability

    Multiple-choice load balancing has been a topic of intense study since the seminal paper of Azar, Broder, Karlin, and Upfal. Questions in this area can be phrased in terms of orientations of a graph, or more generally of a k-uniform random hypergraph. A (d,b)-orientation is an assignment of each edge to d of its vertices such that no vertex has more than b edges assigned to it. Conditions for the existence of such orientations have been completely documented except for the "extreme" case of (k-1,1)-orientations. We consider this remaining case and establish:
    - the density threshold below which an orientation exists with high probability, and above which it does not exist with high probability;
    - an algorithm for finding an orientation that runs in linear time with high probability, with explicit polynomial bounds on the failure probability.
    Previously, the only known algorithms for constructing (k-1,1)-orientations worked for k <= 3, and were only shown to have expected linear running time.
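
    On small instances, the existence of a (d,b)-orientation can be checked directly with a flow computation. A naive Python sketch (flow integrality guarantees the d vertices chosen per edge are distinct; this is only a feasibility check, not the paper's linear-time algorithm):

        from collections import defaultdict, deque

        def has_orientation(hyperedges, n_vertices, d, b):
            # Source -> each hyperedge (capacity d), hyperedge -> each of its
            # vertices (capacity 1), vertex -> sink (capacity b).
            S, T = 'S', 'T'
            cap = defaultdict(int)
            for i, e in enumerate(hyperedges):
                cap[S, ('e', i)] = d
                for v in e:
                    cap[('e', i), ('v', v)] = 1
            for v in range(n_vertices):
                cap[('v', v), T] = b
            flow = 0
            while True:
                parent = {S: None}           # BFS for an augmenting path
                q = deque([S])
                while q and T not in parent:
                    u = q.popleft()
                    for (a, c), r in cap.items():
                        if a == u and r > 0 and c not in parent:
                            parent[c] = u
                            q.append(c)
                if T not in parent:
                    break
                x = T                        # augment by one unit
                while parent[x] is not None:
                    cap[parent[x], x] -= 1
                    cap[x, parent[x]] += 1
                    x = parent[x]
                flow += 1
            return flow == d * len(hyperedges)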