
    Hardness Preserving Reductions via Cuckoo Hashing

    The focus of this work is hardness-preserving transformations of somewhat limited pseudorandom function families (PRFs) into ones with more versatile characteristics. Consider the problem of domain extension of pseudorandom functions: given a PRF that takes as input elements of some domain $\mathcal{U}$, we would like to come up with a PRF over a larger domain. Can we do it with little work and without significantly impacting the security of the system? One approach is to first hash the larger domain into the smaller one and then apply the original PRF. Such a reduction, however, is vulnerable to a birthday attack: after $\sqrt{|\mathcal{U}|}$ queries to the resulting PRF, a collision (i.e., two distinct inputs having the same hash value) is very likely to occur. As a consequence, the resulting PRF is insecure against an attacker making this number of queries. In this work we show how to go beyond the aforementioned birthday attack barrier by replacing the above simple hashing approach with a variant of cuckoo hashing, a hashing paradigm that resolves collisions in a table by using two hash functions and two tables, cleverly assigning each element to one of the two tables. We use this approach to obtain: (i) a domain extension method that requires just two calls to the original PRF, can withstand as many queries as the original domain size, and has a distinguishing probability that is exponentially small in the amount of non-cryptographic work; and (ii) a security-preserving reduction from non-adaptive to adaptive PRFs.
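
    To make the two-call idea concrete, here is a minimal, hypothetical sketch (not the paper's exact construction): two public hash functions map the large domain into the small one, and the outputs of two independently keyed small-domain PRFs are XORed. HMAC-SHA256 stands in for the underlying PRF, and all parameters are illustrative.

```python
# Hypothetical sketch of the two-call domain-extension idea (assumptions:
# HMAC-SHA256 as a placeholder PRF, XOR as the combiner, toy parameters).
import hmac, hashlib, os

SMALL_DOMAIN_BITS = 32          # size of the original PRF's domain (assumption)

def small_prf(key: bytes, x: int) -> bytes:
    """Stand-in PRF on the small domain."""
    assert 0 <= x < 2 ** SMALL_DOMAIN_BITS
    return hmac.new(key, x.to_bytes(4, "big"), hashlib.sha256).digest()

def hash_to_small(seed: bytes, x: bytes) -> int:
    """Public hash from the large domain into the small one."""
    d = hashlib.sha256(seed + x).digest()
    return int.from_bytes(d[:4], "big") % (2 ** SMALL_DOMAIN_BITS)

class ExtendedPRF:
    def __init__(self):
        self.k1, self.k2 = os.urandom(32), os.urandom(32)   # two independent PRF keys
        self.s1, self.s2 = os.urandom(16), os.urandom(16)   # seeds for the two hash functions

    def evaluate(self, x: bytes) -> bytes:
        # Two calls to the small-domain PRF, combined by XOR.
        y1 = small_prf(self.k1, hash_to_small(self.s1, x))
        y2 = small_prf(self.k2, hash_to_small(self.s2, x))
        return bytes(a ^ b for a, b in zip(y1, y2))

f = ExtendedPRF()
print(f.evaluate(b"an input from a much larger domain").hex())
```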

    Bloom Filters in Adversarial Environments

    Many efficient data structures use randomness, allowing them to improve upon deterministic ones. Usually, their efficiency and correctness are analyzed using probabilistic tools under the assumption that the inputs and queries are independent of the internal randomness of the data structure. In this work, we consider data structures in a more robust model, which we call the adversarial model. Roughly speaking, this model allows an adversary to choose inputs and queries adaptively according to previous responses. Specifically, we consider a data structure known as a "Bloom filter" and prove a tight connection between Bloom filters in this model and cryptography. A Bloom filter represents a set $S$ of elements approximately, by using fewer bits than a precise representation. The price for succinctness is allowing some errors: for any $x \in S$ it should always answer `Yes', and for any $x \notin S$ it should answer `Yes' only with small probability. In the adversarial model, we consider both efficient adversaries (that run in polynomial time) and computationally unbounded adversaries that are only bounded in the number of queries they can make. For computationally bounded adversaries, we show that non-trivial (memory-wise) Bloom filters exist if and only if one-way functions exist. For unbounded adversaries, we show that there exists a Bloom filter for sets of size $n$ and error $\varepsilon$ that is secure against $t$ queries and uses only $O(n \log\frac{1}{\varepsilon} + t)$ bits of memory. In comparison, $n \log\frac{1}{\varepsilon}$ is the best possible under a non-adaptive adversary.
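
    The following toy sketch illustrates the flavor of the connection to one-way functions: if the Bloom filter's index positions are derived from a secret PRF key, an adaptive adversary cannot steer its queries toward false positives without breaking the PRF. HMAC-SHA256 stands in for the PRF and the parameters are arbitrary; this is not the paper's construction.

```python
# Keyed Bloom filter sketch: index positions come from a secret PRF key,
# so an adaptive query strategy cannot predict where bits are set.
import hashlib, hmac, os

class KeyedBloomFilter:
    def __init__(self, m_bits: int = 1 << 16, k_hashes: int = 5):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)
        self.key = os.urandom(32)                 # secret key hidden from the adversary

    def _positions(self, item: bytes):
        for i in range(self.k):
            d = hmac.new(self.key, bytes([i]) + item, hashlib.sha256).digest()
            yield int.from_bytes(d[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def query(self, item: bytes) -> bool:
        # 'True' may be a false positive; 'False' is always correct.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = KeyedBloomFilter()
bf.add(b"alice")
print(bf.query(b"alice"), bf.query(b"bob"))   # True, (almost certainly) False
```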

    Primary-Secondary-Resolver Membership Proof Systems

    We consider Primary-Secondary-Resolver Membership Proof Systems (PSR for short) and show different constructions of that primitive. A PSR system is a 3-party protocol with a primary, a trusted party which commits to a set of members and their values and then generates public and secret keys, so that secondaries (provers with knowledge of both keys) and resolvers (verifiers who only know the public key) can engage in interactive proof sessions regarding elements in the universe and their values. The motivation for such systems is constructing a secure Domain Name System (DNSSEC) that does not reveal any unnecessary information to its clients. We require our systems to be complete, so honest executions result in correct conclusions by the resolvers; sound, so malicious secondaries cannot cheat resolvers; and zero-knowledge, so resolvers do not learn additional information about elements they did not query explicitly. Providing proofs of membership is easy, as the primary can simply precompute signatures over all the members of the set. Providing proofs of non-membership, i.e. a denial-of-existence mechanism, is trickier and is the main issue in constructing PSR systems. We provide three different strategies to construct a denial-of-existence mechanism. The first uses a set of cryptographic keys for all elements of the universe which are not members, which we implement using hierarchical identity-based encryption and a tree-based signature scheme. The second construction uses cuckoo hashing with a stash, where in order to prove non-membership, a secondary must prove that a search for the element will fail, i.e. that it is not in the tables or the stash of the cuckoo hashing scheme. The third uses a verifiable "random-looking" function which the primary evaluates over the set of members, then signs the values lexicographically; secondaries then use those signatures to prove to resolvers that the value of the non-member was not signed by the primary. We implement this function using a weaker variant of verifiable random/unpredictable functions and pseudorandom functions with interactive zero-knowledge proofs. For all three constructions we suggest fairly efficient implementations, of order comparable to other public-key operations such as signatures and encryption. The first approach offers perfect ZK and does not reveal the size of the set in question, the second can be implemented based on very solid cryptographic assumptions and uses the unique structure of cuckoo hashing, while the last technique has the potential to be highly efficient, if one could construct an efficient and secure VRF/VUF or if one is willing to live in the random oracle model.
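
    As a rough illustration of the third strategy, the sketch below evaluates a secret "random-looking" function over the members, sorts the resulting values, and records a signature for each gap between consecutive values; a non-membership proof then exhibits the signed gap that the queried element's value falls into. HMAC stands in for the function and a plain string for the signature; a real PSR system additionally needs the function's output on the queried element to be verifiable (a VRF/VUF or a ZK proof), which is omitted here.

```python
# Toy denial-of-existence sketch: sign gaps between consecutive values of a
# secret "random-looking" function over the members (assumptions: HMAC as the
# function, placeholder string signatures, sentinel endpoints instead of a
# signed wrap-around gap).
import hashlib, hmac, os

key = os.urandom(32)
def F(x: bytes) -> int:                      # "random-looking" function (placeholder)
    return int.from_bytes(hmac.new(key, x, hashlib.sha256).digest(), "big")

members = [b"mail", b"www", b"ftp"]
bounds = [0] + sorted(F(m) for m in members) + [2 ** 256]
# The primary "signs" every gap between consecutive values.
signed_gaps = {(bounds[i], bounds[i + 1]): f"sig({bounds[i]},{bounds[i+1]})"
               for i in range(len(bounds) - 1)}

def prove_non_membership(x: bytes):
    v = F(x)
    for (lo, hi), sig in signed_gaps.items():
        if lo < v < hi:
            return (lo, hi, sig)             # resolver checks sig and lo < F(x) < hi
    return None                              # x is a member: its value equals a gap endpoint

print(prove_non_membership(b"cdn") is not None)   # True: a signed gap covers F(b"cdn")
print(prove_non_membership(b"mail"))              # None: members have no such gap
```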

    Cuckoo Hashing in Cryptography: Optimal Parameters, Robustness and Applications

    Cuckoo hashing is a powerful primitive that enables storing items using small space with efficient querying. At a high level, cuckoo hashing maps $n$ items into $b$ entries storing at most $\ell$ items each, such that each item is placed into one of $k$ randomly chosen entries. Additionally, there is an overflow stash that can store at most $s$ items. Many cryptographic primitives rely upon cuckoo hashing to privately embed and query data, where it is integral to ensure a small failure probability when constructing cuckoo hashing tables, as it directly relates to the privacy guarantees. As our main result, we present a more query-efficient cuckoo hashing construction using more hash functions. For construction failure probability $\epsilon$, the query overhead of our scheme is $O(1 + \sqrt{\log(1/\epsilon)/\log n})$. Our scheme has quadratically smaller query overhead than prior works for any target failure probability $\epsilon$. We also prove lower bounds matching our construction. Our improvements come from a new understanding of the locality of cuckoo hashing failures for small sets of items. We also initiate the study of robust cuckoo hashing, where the input set may be chosen with knowledge of the hash functions. We present a cuckoo hashing scheme using more hash functions with query overhead $\tilde{O}(\log \lambda)$ that is robust against $\mathrm{poly}(\lambda)$ adversaries. Furthermore, we present lower bounds showing that this construction is tight and that extending previous approaches of large stashes or entries cannot obtain robustness except with $\Omega(n)$ query overhead. As applications of our results, we obtain improved constructions for batch codes and PIR. In particular, we present the most efficient explicit batch code and blackbox reduction from single-query PIR to batch PIR.
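
    For readers unfamiliar with this parameter regime, the following illustrative sketch implements cuckoo hashing with $k$ candidate buckets per item, bucket capacity $\ell$, and a bounded stash; eviction choices are random and the concrete parameters are examples, not the ones analyzed in the paper.

```python
# Illustrative cuckoo hashing with k candidate buckets, bucket capacity ell,
# and a bounded stash (example parameters, random evictions).
import hashlib, random

class CuckooTable:
    def __init__(self, b=64, k=3, ell=2, stash_size=4, max_kicks=200):
        self.b, self.k, self.ell = b, k, ell
        self.buckets = [[] for _ in range(b)]
        self.stash, self.stash_size, self.max_kicks = [], stash_size, max_kicks

    def _choices(self, item):
        return [int.from_bytes(hashlib.sha256(bytes([i]) + item).digest()[:8], "big") % self.b
                for i in range(self.k)]

    def insert(self, item: bytes):
        for _ in range(self.max_kicks):
            for c in self._choices(item):
                if len(self.buckets[c]) < self.ell:
                    self.buckets[c].append(item)
                    return True
            # All k buckets are full: evict a random occupant and retry with it.
            c = random.choice(self._choices(item))
            victim = random.randrange(self.ell)
            item, self.buckets[c][victim] = self.buckets[c][victim], item
        if len(self.stash) < self.stash_size:
            self.stash.append(item)
            return True
        return False                          # construction failure

    def lookup(self, item: bytes) -> bool:
        return any(item in self.buckets[c] for c in self._choices(item)) or item in self.stash

t = CuckooTable()
ok = all(t.insert(f"item{i}".encode()) for i in range(80))
print(ok, t.lookup(b"item7"), t.lookup(b"nope"))
```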

    PIR with compressed queries and amortized query processing

    Private information retrieval (PIR) is a key building block in many privacy-preserving systems. Unfortunately, existing constructions remain very expensive. This paper introduces two techniques that make the computational variant of PIR (CPIR) more efficient in practice. The first technique targets a recent class of CPU-efficient CPIR protocols where the query sent by the client contains a number of ciphertexts proportional to the size of the database. We show how to compress this query, achieving size reductions of up to 274×. The second technique is a new data encoding called probabilistic batch codes (PBCs). We use PBCs to build a multi-query PIR scheme that allows the server to amortize its computational cost when processing a batch of requests from the same client. This technique achieves up to a 40× speedup over processing queries one at a time, and is significantly more efficient than related encodings. We apply our techniques to the Pung private communication system, which relies on a custom multi-query CPIR protocol for its privacy guarantees. By porting our techniques to Pung, we find that we can simultaneously reduce network costs by 36× and increase throughput by 3×.
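
    The batching idea behind PBCs can be sketched as follows (illustrative only, not the paper's exact encoding): the server replicates each database entry into w buckets chosen by w hash functions, the client cuckoo-hashes its batch of desired indices so that each bucket is asked for at most one of them, and then runs one smaller PIR query per bucket. The PIR queries themselves are omitted below.

```python
# Sketch of a PBC-style bucket allocation for batch queries (assumed
# parameters W and NUM_BUCKETS; the per-bucket PIR step is not shown).
import hashlib, random

W, NUM_BUCKETS = 3, 32

def bucket_choices(index: int):
    return [int.from_bytes(hashlib.sha256(bytes([i]) + index.to_bytes(8, "big")).digest()[:8],
                           "big") % NUM_BUCKETS for i in range(W)]

def server_encode(db):
    buckets = [[] for _ in range(NUM_BUCKETS)]
    for idx, value in enumerate(db):
        for b in bucket_choices(idx):       # replicate each entry into all W buckets
            buckets[b].append((idx, value))
    return buckets

def client_schedule(batch):
    # Assign each wanted index to a distinct bucket (cuckoo-style insertion).
    assignment = {}                          # bucket -> index
    for idx in batch:
        cur, kicks = idx, 0
        while kicks < 100:
            free = [b for b in bucket_choices(cur) if b not in assignment]
            if free:
                assignment[random.choice(free)] = cur
                break
            b = random.choice(bucket_choices(cur))
            assignment[b], cur = cur, assignment[b]
            kicks += 1
        else:
            raise RuntimeError("allocation failed (would retry with new hashes)")
    return assignment

db = [f"record-{i}" for i in range(1000)]
buckets = server_encode(db)
schedule = client_schedule([3, 17, 256, 999])
# One PIR query per bucket would now fetch each assigned index from its bucket.
print(schedule)
```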

    Private Set Intersection with Linear Communication from General Assumptions

    This work presents a hashing-based algorithm for Private Set Intersection (PSI) in the honest-but-curious setting. The protocol is generic, modular and provides both asymptotic and concrete efficiency improvements over existing PSI protocols. If each player has $m$ elements, our scheme requires only $O(m\lambda)$ communication between the parties, where $\lambda$ is a security parameter. Our protocol builds on the hashing-based PSI protocol of Pinkas et al. (USENIX 2014, USENIX 2015), but we replace one of the sub-protocols (handling the cuckoo "stash") with a special-purpose PSI protocol that is optimized for comparing sets of unbalanced size. This brings the asymptotic communication complexity of the overall protocol down from $\omega(m\lambda)$ to $O(m\lambda)$, and provides concrete performance improvements (10-15% reduction in communication costs) over Kolesnikov et al. (CCS 2016) under real-world parameter choices. Our protocol is simple, generic and benefits from the permutation-hashing optimizations of Pinkas et al. (USENIX 2015) and the Batched, Relaxed Oblivious Pseudorandom Functions of Kolesnikov et al. (CCS 2016).
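
    The hashing skeleton shared by this family of protocols can be sketched as follows: the receiver cuckoo-hashes each of its elements into exactly one bin, the sender places each of its elements into every bin its hash functions point to, and the parties compare bin by bin. In the sketch the per-bin comparison is done in the clear purely for illustration; the actual protocol replaces it with an OPRF-based private comparison, and the stash handling discussed above is omitted.

```python
# High-level sketch of the bin-assignment step in hashing-based PSI
# (assumptions: toy parameters, no stash, plaintext per-bin comparison
# standing in for the OPRF-based protocol).
import hashlib, random

K, BINS = 2, 16

def choices(x: bytes):
    return [int.from_bytes(hashlib.sha256(bytes([i]) + x).digest()[:8], "big") % BINS
            for i in range(K)]

def receiver_bins(elements):
    bins = [None] * BINS                    # one element per bin; failure handling omitted
    for x in elements:
        cur = x
        for _ in range(100):
            free = [b for b in choices(cur) if bins[b] is None]
            if free:
                bins[random.choice(free)] = cur
                break
            b = random.choice(choices(cur))
            bins[b], cur = cur, bins[b]
    return bins

def sender_bins(elements):
    bins = [set() for _ in range(BINS)]
    for x in elements:
        for b in choices(x):                # sender inserts into all candidate bins
            bins[b].add(x)
    return bins

r = receiver_bins([b"alice", b"bob", b"carol"])
s = sender_bins([b"bob", b"dave", b"carol", b"erin"])
intersection = {r[i] for i in range(BINS) if r[i] is not None and r[i] in s[i]}
print(intersection)                          # {b'bob', b'carol'}
```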

    On the Query Complexity of Constructing PRFs from Non-adaptive PRFs

    This paper studies constructions of pseudorandom functions (PRFs) from non-adaptive PRFs (naPRFs), i.e., PRFs which are secure only against distinguishers issuing all of their queries at once. Berman and Haitner (Journal of Cryptology, '15) gave a one-call construction which, however, is not hardness preserving: to obtain a secure PRF (against polynomial-time distinguishers), they need to rely on an naPRF secure against superpolynomial-time distinguishers; in contrast, all known hardness-preserving constructions require $\omega(1)$ calls. This leaves open the question of whether a stronger superpolynomial-time assumption is necessary for one-call (or constant-call) approaches. Here, we show that a large class of one-call constructions (which in particular includes the one of Berman and Haitner) cannot be proved to be a secure PRF under a black-box reduction to the (polynomial-time) naPRF security of the underlying function. Our result complements existing impossibility results (Myers, EUROCRYPT '04; Pietrzak, CRYPTO '05) ruling out natural specific approaches, such as parallel and sequential composition. Furthermore, we show that our techniques extend to rule out a natural class of constructions making an arbitrary number of parallel calls, which in particular includes parallel composition and the two-call, cuckoo-hashing-based construction of Berman et al. (Journal of Cryptology, '19).
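
    To clarify the query model in question, the sketch below contrasts the two oracle interfaces: a non-adaptive distinguisher must commit to all of its queries up front, whereas an adaptive one may choose each query after seeing earlier answers. The keyed function is a placeholder (HMAC), not a construction from the paper.

```python
# Illustrative contrast between non-adaptive and adaptive query access to a
# keyed function (HMAC used only as a placeholder oracle).
import hmac, hashlib, os

key = os.urandom(32)
def oracle(x: bytes) -> bytes:
    return hmac.new(key, x, hashlib.sha256).digest()

def non_adaptive_interface(queries):
    """naPRF model: one batch of queries, answered all at once."""
    return [oracle(q) for q in queries]

def adaptive_interface(strategy, num_queries):
    """PRF model: each query may depend on all previous answers."""
    transcript = []
    for _ in range(num_queries):
        q = strategy(transcript)             # adversary picks the next query adaptively
        transcript.append((q, oracle(q)))
    return transcript

print(len(non_adaptive_interface([b"a", b"b", b"c"])))
print(len(adaptive_interface(lambda t: (t[-1][1][:4] if t else b"start"), 3)))
```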

    16th Scandinavian Symposium and Workshops on Algorithm Theory: SWAT 2018, June 18-20, 2018, Malmö University, Malmö, Sweden


    SoK: Oblivious Pseudorandom Functions

    In recent years, oblivious pseudorandom functions (OPRFs) have become a ubiquitous primitive used in cryptographic protocols and privacy-preserving technologies. The growing interest in OPRFs, both theoretical and applied, has produced a vast number of different constructions and functionality variations. In this paper, we provide a systematic overview of how to build and use OPRFs. We first categorize existing OPRFs into essentially four families based on their underlying PRF (Naor-Reingold, Dodis-Yampolskiy, Hashed Diffie-Hellman, and generic constructions). This categorization allows us to give a unified presentation of all oblivious evaluation methods in the literature, and to understand which properties OPRFs can (or cannot) have. We further demonstrate the theoretical and practical power of OPRFs by visualizing them in the landscape of cryptographic primitives, and by providing a comprehensive overview of how OPRFs are leveraged for improving the privacy of internet users. Our work systematizes 15 years of research on OPRFs and provides inspiration for new OPRF constructions and applications thereof.
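
    As an example of the Hashed Diffie-Hellman family, here is a toy sketch of the blind-exponentiation ("2HashDH"-style) evaluation of $\mathrm{PRF}_k(x) = H_2(x, H_1(x)^k)$: the client blinds $H_1(x)$ with a random exponent, the server exponentiates with its key, and the client unblinds. The group parameters below are tiny and insecure, chosen only to keep the example self-contained.

```python
# Toy Hashed Diffie-Hellman OPRF sketch: PRF_k(x) = H2(x, H1(x)^k), evaluated
# obliviously via blinding. Assumptions: a tiny, INSECURE safe-prime group,
# SHA-256 as both hashes.
import hashlib, secrets

P = 2879                     # toy safe prime, P = 2*Q + 1 (NOT secure)
Q = 1439

def hash_to_group(x: bytes) -> int:
    # Map into the order-Q subgroup of quadratic residues by squaring.
    return pow(int.from_bytes(hashlib.sha256(x).digest(), "big") % P, 2, P)

def client_blind(x: bytes):
    r = secrets.randbelow(Q - 1) + 1
    return pow(hash_to_group(x), r, P), r           # blinded element, blinding factor

def server_evaluate(k: int, blinded: int) -> int:
    return pow(blinded, k, P)                       # server never learns x

def client_finalize(x: bytes, evaluated: int, r: int) -> bytes:
    y = pow(evaluated, pow(r, -1, Q), P)            # unblind: (H1(x)^{rk})^{1/r} = H1(x)^k
    return hashlib.sha256(x + y.to_bytes(2, "big")).digest()

k = secrets.randbelow(Q - 1) + 1                    # server's PRF key
blinded, r = client_blind(b"password")
out = client_finalize(b"password", server_evaluate(k, blinded), r)
# The oblivious result matches a direct (non-oblivious) evaluation with key k:
direct = hashlib.sha256(b"password" + pow(hash_to_group(b"password"), k, P).to_bytes(2, "big")).digest()
print(out == direct)                                # True
```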