21,271 research outputs found

    An exponential open hashing function based on dynamical systems theory

    In this paper an efficient open addressing hash function called exponential hashing is developed using concepts from dynamical systems theory and number theory. A comparison of exponential hashing versus a widely used double hash function is performed using an analysis based on Lyapunov exponents and entropy. Proofs of optimal table parameter choices are provided for a number of hash functions. We also demonstrate experimentally that exponential hashing nearly matches the performance of an optimal double hash function for uniform data distributions and performs significantly better for nonuniform data distributions. We show that exponential hashing exhibits a higher integer Lyapunov exponent and entropy than double hashing for initial data probes, which offers one explanation for its improved performance on nonuniform data distributions.
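    As a rough illustration of the probing schemes being compared, the Python sketch below implements classic double hashing alongside an exponential-style probe sequence of the form h_i(k) = (k * a^i) mod p. The table size, the multiplier, and all function names are illustrative assumptions, not the paper's exact construction or its proven-optimal parameters.

```python
# Illustrative comparison of double hashing and an exponential-style
# probe sequence. The prime table size P and multiplier A are assumed
# toy values, not the optimal parameters proved in the paper.

P = 101   # prime table size (assumed)
A = 2     # multiplier, ideally a primitive root mod P (assumed)

def double_hash_probes(key, trials):
    """Classic double hashing: h_i(k) = (h1(k) + i * h2(k)) mod P."""
    h1 = key % P
    h2 = 1 + (key % (P - 1))          # step size is never zero
    return [(h1 + i * h2) % P for i in range(trials)]

def exponential_probes(key, trials):
    """Exponential-style probing: h_i(k) = (k * A**i) mod P.
    A real scheme must also avoid keys congruent to 0 mod P,
    for which this sequence would stay at slot 0."""
    probes, x = [], key % P
    for _ in range(trials):
        probes.append(x)
        x = (x * A) % P               # next probe: multiply by A, mod P
    return probes

if __name__ == "__main__":
    for k in (50, 51):                # first few probe slots per scheme
        print(k, double_hash_probes(k, 5), exponential_probes(k, 5))
```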

    Exponential Space Improvement for minwise Based Algorithms

    In this paper we introduce a general framework that exponentially improves the space, the degree of independence, and the time needed by min-wise based algorithms. In SODA 2011, the authors introduced an exponential time improvement for min-wise based algorithms by defining and constructing an almost k-min-wise independent family of hash functions. Here we develop an alternative approach that achieves both exponential time and exponential space improvement. The new approach relaxes the need for approximately min-wise hash functions, and hence gets around the Ω(log(1/ε)) independence lower bound of [Patrascu 2010]. This is done by defining and constructing a d-k-min-wise independent family of hash functions. Surprisingly, for most cases only 8-wise independence is needed for the additional improvement. Moreover, as the degree of independence is a small constant, our function can be implemented efficiently. Informally, under this definition, all subsets of size d of any fixed set X have an equal probability of having hash values among the minimal k values in X, where the probability is over the random choice of hash function from the family. This property measures the randomness of the family: choosing a truly random function obviously satisfies the definition for d=k=|X|. We define and give an efficient time and space construction of an approximately d-k-min-wise independent family of hash functions for the case d=2, as this is sufficient for the additional exponential improvement. We discuss how this construction can be used to improve many min-wise based algorithms. To our knowledge, such definitions for hash functions have not been studied before, and no prior construction was given. As an example we show how to apply it to similarity and rarity estimation over data streams; other min-wise based algorithms can be adjusted in the same way.
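    As background for the kind of algorithm the framework improves, the Python sketch below shows a standard bottom-k min-hash estimate of Jaccard similarity. It substitutes a strong general-purpose hash for a hash drawn from the paper's d-k-min-wise family, and all names and parameters are illustrative assumptions.

```python
import hashlib

def _hash(x, seed=0):
    """Stand-in for a hash drawn from a min-wise independent family.
    A strong general-purpose hash (BLAKE2) is used here for simplicity;
    the paper's point is that limited (e.g. 8-wise) independence can
    suffice for such estimators."""
    h = hashlib.blake2b(f"{seed}:{x}".encode(), digest_size=8)
    return int.from_bytes(h.digest(), "big")

def bottom_k(items, k):
    """The k smallest hash values of a set (a bottom-k sketch)."""
    return sorted(_hash(x) for x in items)[:k]

def jaccard_estimate(sk_a, sk_b, k):
    """Estimate Jaccard similarity from two bottom-k sketches:
    the fraction of the k smallest hash values of the union that
    appear in both sketches."""
    union_k = sorted(set(sk_a) | set(sk_b))[:k]
    inter = set(sk_a) & set(sk_b)
    return sum(1 for v in union_k if v in inter) / k

A = set(range(0, 800))
B = set(range(200, 1000))
k = 100
sa, sb = bottom_k(A, k), bottom_k(B, k)
print("estimate:", jaccard_estimate(sa, sb, k))  # true Jaccard = 600/1000 = 0.6
```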

    Higher-order Count Sketch: Dimensionality Reduction That Retains Efficient Tensor Operations

    Sketching is a randomized dimensionality-reduction method that aims to preserve relevant information in large-scale datasets. Count sketch is a simple, popular sketch that uses a randomized hash function to achieve compression. In this paper, we propose a novel extension, Higher-order Count Sketch (HCS). While count sketch uses a single hash function, HCS uses multiple (smaller) hash functions for sketching. HCS reshapes the input (vector) data into a higher-order tensor and employs a tensor product of the random hash functions to compute the sketch. This results in an exponential saving (with respect to the order of the tensor) in the memory requirements of the hash functions, under certain conditions on the input data. Furthermore, when the input data itself has an underlying structure in the form of various tensor representations, such as the Tucker decomposition, we obtain significant advantages. We derive efficient (approximate) computation of various tensor operations, such as tensor products and tensor contractions, directly on the sketched data. Thus, HCS is the first sketch to fully exploit the multi-dimensional nature of higher-order tensors. We apply HCS to tensorized neural networks, where we replace fully connected layers with sketched tensor operations, and achieve nearly state-of-the-art accuracy with significant compression on an image classification benchmark.
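    As a hedged illustration of the idea, the Python sketch below implements a plain count sketch next to an HCS-style variant that reshapes the input vector into a matrix and sketches it with a tensor product of two smaller hash functions, as the abstract describes. The dimensions, hash tables, and function names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64 * 64, 32          # input length and per-axis sketch width (assumed)

# Plain count sketch: one hash h: [n] -> [m*m] and one sign function s.
h = rng.integers(0, m * m, size=n)
s = rng.choice([-1, 1], size=n)

def count_sketch(x):
    cs = np.zeros(m * m)
    np.add.at(cs, h, s * x)          # cs[h[i]] += s[i] * x[i]
    return cs

# HCS-style sketch: reshape x into a 64x64 matrix and use two smaller
# hashes h1, h2: [64] -> [m] with a sign per axis. Storing (h1, s1,
# h2, s2) needs O(64) memory instead of the O(4096) for (h, s) above.
h1, h2 = rng.integers(0, m, size=64), rng.integers(0, m, size=64)
s1, s2 = rng.choice([-1, 1], size=64), rng.choice([-1, 1], size=64)

def higher_order_count_sketch(x):
    X = x.reshape(64, 64)
    hcs = np.zeros((m, m))
    # hcs[h1[i], h2[j]] += s1[i] * s2[j] * X[i, j]
    signed = (s1[:, None] * s2[None, :]) * X
    np.add.at(hcs, (h1[:, None], h2[None, :]), signed)
    return hcs

def hcs_recover(hcs, i, j):
    """Unbiased estimate of X[i, j] from the sketch."""
    return s1[i] * s2[j] * hcs[h1[i], h2[j]]

x = np.zeros(n); x[5] = 10.0         # a sparse test vector: X[0, 5] = 10
hcs = higher_order_count_sketch(x)
print(hcs_recover(hcs, 0, 5))        # approximately 10.0
```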

    A Formula That Generates Hash Collisions

    We present an explicit formula that produces hash collisions for the Merkle-Damgård construction. The formula works for an arbitrary choice of message block and irrespective of the standardized constants used in hash functions, although some padding schemes may cause the formula to fail. The formula bears no obvious practical implications because at least one of any pair of colliding messages has length doubly exponential in the security parameter. However, due to ambiguity in existing definitions of collision resistance, this formula arguably breaks the collision resistance of some hash functions.
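    For reference, the Python sketch below shows the generic Merkle-Damgård iteration the formula targets: a compression function is folded over fixed-size message blocks starting from an initialization vector. The toy compression function, block size, and omission of padding are illustrative assumptions; the colliding messages described in the abstract are far too long to construct in practice.

```python
# A toy Merkle-Damgard construction: iterate a compression function f
# over fixed-size blocks. The compression function is an illustrative
# stand-in, not a real one, and no padding scheme is applied (the
# abstract notes that padding can cause the collision formula to fail).

BLOCK = 8  # block size in bytes (assumed)

def f(state: int, block: bytes) -> int:
    """Toy 64-bit compression function (illustrative only)."""
    x = state ^ int.from_bytes(block, "big")
    return (x * 0x9E3779B97F4A7C15 + 1) % (1 << 64)

def merkle_damgard(message: bytes, iv: int = 0) -> int:
    """Fold f over the message blocks, starting from the IV."""
    assert len(message) % BLOCK == 0, "pre-padded input assumed"
    state = iv
    for i in range(0, len(message), BLOCK):
        state = f(state, message[i:i + BLOCK])
    return state

print(hex(merkle_damgard(b"exactly sixteen!")))  # two 8-byte blocks
```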