    Expected Length of the Longest Chain in Linear Hashing

    A hash table with chaining is a data structure that chains together, at a single table entry, all objects with identical hash values. It works by computing a hash value from an input and placing the input in the corresponding hash table entry; when two inputs land in the same entry, they are chained together in a linked list. We are interested in the expected length of the longest chain in linear hashing, and in methods to reduce this length, because the worst-case look-up time is directly proportional to it. The linear hash function is defined by $h(x) = ((ax + b) \bmod p) \bmod m$ for any $x \in \{0, 1, \ldots, p-1\}$, where $a$ and $b$ are chosen uniformly at random from $\{0, 1, \ldots, p-1\}$, $p$ is a prime, and $p \geq m$. This class of hash functions is a 2-wise independent hash function family. For any 2-wise independent hash function family, the expected length of the longest chain is $O(n^{1/2})$. Additionally, Alon et al. (JACM 1999) proved that, for a similar class of 2-wise independent hash functions, the expected length of the longest chain has a matching lower bound of $\Omega(n^{1/2})$. Recently, Knudsen (FOCS 2016) showed that the expected length of the longest chain under linear hashing is, surprisingly, $n^{1/3+o(1)}$. This bound is strictly better than $O(n^{1/2})$, which, by Alon et al.'s result, is tight for general 2-wise independent hash functions. Consequently, the linear hash function must have properties beyond 2-wise independence that produce this phenomenon. Even though Knudsen's upper bound is remarkable, it remains unknown whether it is tight; in other words, does there exist a set of $n$ inputs such that, when hashed using the linear hash function, the expected length of the longest chain is roughly $n^{1/3}$? If Knudsen's bound is not tight, there is additional motivation to study further and tighten the upper bound.
    Another focus of our research is to reduce the expected length of the longest chain by using the load-balancing power of "two choices." The idea is, instead of choosing one bin (hash table entry) for a ball (input), to choose two or more bins and place the ball in the bin with the least load at that moment. Mitzenmacher et al. proved that the power of two choices exponentially improves the expected max-load, from $\Theta(\log n / \log \log n)$ to $\Theta(\log \log n)$, for a hash table that uses two truly random hash functions. We shall conduct an empirical study by simulation with SageMath (System for Algebra and Geometry Experimentation) to verify whether similar improvements are observed for the linear hash function as well; a minimal sketch of such a simulation appears below. We anticipate that the length of the longest chain of our linear hash table can be significantly improved when used with two linear hash functions.
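    The following Python sketch illustrates the planned experiment; the parameters ($p$, $m$, $n$, the number of trials) and the helper names are our illustrative assumptions, not the study's actual configuration.

```python
import random

def linear_hash(p, m):
    """Sample h(x) = ((a*x + b) mod p) mod m with a, b uniform in {0,...,p-1}."""
    a, b = random.randrange(p), random.randrange(p)
    return lambda x: ((a * x + b) % p) % m

def longest_chain(keys, p, m, num_hashes=1):
    """Max bin load when each key goes to the lightest of num_hashes candidate bins."""
    hs = [linear_hash(p, m) for _ in range(num_hashes)]
    load = [0] * m
    for x in keys:
        bins = [h(x) for h in hs]
        best = min(bins, key=lambda i: load[i])
        load[best] += 1
    return max(load)

if __name__ == "__main__":
    p = 2**31 - 1                 # a prime with p >= m (illustrative choice)
    n = m = 10**5                 # n balls into m bins, the standard regime
    keys = random.sample(range(p), n)
    trials = 20
    one = sum(longest_chain(keys, p, m, 1) for _ in range(trials)) / trials
    two = sum(longest_chain(keys, p, m, 2) for _ in range(trials)) / trials
    print(f"average longest chain, one linear hash  : {one:.2f}")
    print(f"average longest chain, two linear hashes: {two:.2f}")
```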

    Computational Hardness of Optimal Fair Computation: Beyond Minicrypt

    Secure multi-party computation allows mutually distrusting parties to compute securely over their private data. However, guaranteeing output delivery to honest parties when the adversarial parties may abort the protocol has been a challenging objective. As a representative task, this work considers two-party coin-tossing protocols with guaranteed output delivery, a.k.a. fair coin-tossing. In the information-theoretic plain model, as in two-party zero-sum games, one of the parties can force an output with certainty. In the commitment-hybrid, any $r$-message coin-tossing protocol is $1/\sqrt{r}$-unfair, i.e., the adversary can change the honest party's output distribution by $1/\sqrt{r}$ in the statistical distance. Moran, Naor, and Segev (TCC--2009) constructed the first $1/r$-unfair protocol in the oblivious transfer-hybrid. No further security improvement is possible because Cleve (STOC--1986) proved that $1/r$-unfairness is unavoidable. Therefore, Moran, Naor, and Segev's coin-tossing protocol is optimal. However, is oblivious transfer necessary for optimal fair coin-tossing? Maji and Wang (CRYPTO--2020) proved that any coin-tossing protocol using one-way functions in a black-box manner is at least $1/\sqrt{r}$-unfair. That is, optimal fair coin-tossing is impossible in Minicrypt. Our work focuses on tightly characterizing the hardness-of-computation assumption necessary and sufficient for optimal fair coin-tossing within Cryptomania, outside Minicrypt. Haitner, Makriyannis, Nissim, Omri, Shaltiel, and Silbak (FOCS--2018 and TCC--2018) proved that better than $1/\sqrt{r}$-unfairness, for any constant $r$, implies the existence of a key-agreement protocol. We prove that any coin-tossing protocol using public-key encryption (or multi-round key-agreement protocols) in a black-box manner must be $1/\sqrt{r}$-unfair. Next, our work entirely characterizes the additional power of secure function evaluation functionalities for optimal fair coin-tossing. We augment the model with an idealized secure function evaluation of $f$, a.k.a. the $f$-hybrid. If $f$ is complete, that is, if oblivious transfer is possible in the $f$-hybrid, then optimal fair coin-tossing is also possible in the $f$-hybrid. On the other hand, if $f$ is not complete, then a coin-tossing protocol using public-key encryption in a black-box manner in the $f$-hybrid is at least $1/\sqrt{r}$-unfair.
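    To make the $1/\sqrt{r}$ scale concrete, here is a small back-of-the-envelope computation (our illustration, not from the paper): in an $r$-round protocol whose output is the majority of $r$ public coins, a fail-stop adversary profits only in rounds where the coin it reacts to is pivotal for the majority, and a single coin is pivotal with probability $\Theta(1/\sqrt{r})$.

```python
# Why 1/sqrt(r) is the natural unfairness scale for majority-style
# r-round coin tossing: one round's coin decides the majority exactly
# when the remaining r-1 coins are tied, which happens with probability
# C(r-1, (r-1)/2) / 2^(r-1) ~ sqrt(2 / (pi * r)).
from math import comb, sqrt, pi

for r in (11, 101, 1001):
    tied = comb(r - 1, (r - 1) // 2) / 2 ** (r - 1)
    print(f"r = {r:5d}: pivotal probability {tied:.4f}, "
          f"asymptotic sqrt(2/(pi*r)) = {sqrt(2 / (pi * r)):.4f}")
```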

    Black-box use of One-way Functions is Useless for Optimal Fair Coin-Tossing

    A two-party fair coin-tossing protocol guarantees output delivery to the honest party even when the other party aborts during the protocol execution. Cleve (STOC--1986) demonstrated that a computationally bounded fail-stop adversary could alter the output distribution of the honest party by (roughly) $1/r$ (in the statistical distance) in an $r$-message coin-tossing protocol. An optimal fair coin-tossing protocol ensures that no adversary can alter the output distribution beyond $1/r$. In a seminal result, Moran, Naor, and Segev (TCC--2009) constructed the first optimal fair coin-tossing protocol using (unfair) oblivious transfer protocols. Whether the existence of oblivious transfer protocols is a necessary hardness-of-computation assumption for optimal fair coin-tossing remains among the most fundamental open problems in theoretical cryptography. The results of Impagliazzo and Luby (FOCS--1989) and Cleve and Impagliazzo (1993) prove that optimal fair coin-tossing implies the existence of one-way functions, a significantly weaker hardness-of-computation assumption than the existence of secure oblivious transfer protocols. However, the sufficiency of the existence of one-way functions is not known. Toward this research endeavor, our work proves a black-box separation of optimal fair coin-tossing from the existence of one-way functions. That is, the black-box use of one-way functions cannot enable optimal fair coin-tossing. Following the standard Impagliazzo and Rudich (STOC--1989) approach to proving black-box separations, our work considers any $r$-message fair coin-tossing protocol in the random oracle model where the parties have unbounded computational power. We demonstrate a fail-stop attack strategy for one of the parties to alter the honest party's output distribution by $1/\sqrt{r}$ by making polynomially many additional queries to the random oracle. As a consequence, our result proves that the $r$-message coin-tossing protocol of Blum (COMPCON--1982) and Cleve (STOC--1986), which uses one-way functions in a black-box manner, is the best possible protocol, because no adversary can change the honest party's output distribution by more than $1/\sqrt{r}$. Several previous works, for example, Dachman-Soled, Lindell, Mahmoody, and Malkin (TCC--2011), Haitner, Omri, and Zarosim (TCC--2013), and Dachman-Soled, Mahmoody, and Malkin (TCC--2014), made partial progress on proving this black-box separation assuming some restrictions on the coin-tossing protocol. Our work diverges significantly from these previous approaches to prove this black-box separation in its full generality. The starting point is the recently introduced potential-based inductive proof technique for demonstrating large gaps in martingales in the information-theoretic plain model. Our technical contribution lies in identifying a global invariant of communication protocols in the random oracle model that enables the extension of this technique to the random oracle model.
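    For context, the Blum protocol mentioned above flips a single coin from a commitment. The minimal sketch below uses a SHA-256 commitment purely as an illustrative stand-in for a generic one-way-function-based commitment; the helper names and the hash choice are our assumptions, as the cited works treat commitments abstractly.

```python
# One Blum-style coin flip from a commitment. The SHA-256 commitment is
# an illustrative stand-in for a commitment built from one-way functions.
import hashlib
import secrets

def commit(bit: int) -> tuple[str, bytes]:
    """Commit to a bit; hiding/binding are assumed from the hash function."""
    opening = secrets.token_bytes(32)
    return hashlib.sha256(opening + bytes([bit])).hexdigest(), opening

def verify(com: str, opening: bytes, bit: int) -> bool:
    return hashlib.sha256(opening + bytes([bit])).hexdigest() == com

# Alice commits to a random bit, Bob replies with his bit in the clear,
# Alice opens her commitment, and the coin is the XOR of the two bits.
alice_bit = secrets.randbelow(2)
com, opening = commit(alice_bit)
bob_bit = secrets.randbelow(2)
assert verify(com, opening, alice_bit)
print("coin:", alice_bit ^ bob_bit)
```

    An $r$-message protocol iterates such flips; a party that aborts mid-execution forces the honest party to decide from the coins revealed so far, which is exactly the fail-stop leverage the attack above quantifies.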

    Optimally-secure Coin-tossing against a Byzantine Adversary

    In their seminal work, Ben-Or and Linial (1985) introduced the full information model for collective coin-tossing protocols involving $n$ processors with unbounded computational power that use a common broadcast channel for all their communications. The design and analysis of coin-tossing protocols in the full information model have close connections to diverse fields like extremal graph theory, randomness extraction, cryptographic protocol design, game theory, distributed protocols, and learning theory. Several works have focused on studying the asymptotically best attacks and optimal coin-tossing protocols in various adversarial settings. While the exactly or asymptotically optimal protocols are characterized in some adversarial settings, for most adversarial settings the optimal protocol characterization remains open. Even where asymptotically optimal constructions are known, the exact constants or poly-logarithmic multiplicative factors involved are not entirely well understood. In this work, we study $n$-processor coin-tossing protocols where every processor broadcasts an arbitrary-length message once. Note that, in this setting, which processor speaks next, and its message distribution, may depend on the messages broadcast so far. An adaptive Byzantine adversary, based on the messages broadcast so far, can corrupt $k = 1$ processor. A bias-$X$ coin-tossing protocol outputs 1 with probability $X$ and 0 with probability $(1-X)$. For a coin-tossing protocol, its insecurity is the maximum change in the output distribution (in the statistical distance) that an adversarial strategy can cause. Our objective is to identify optimal bias-$X$ coin-tossing protocols with minimum insecurity, for every $X \in [0,1]$. Lichtenstein, Linial, and Saks (1989) studied bias-$X$ coin-tossing protocols in this adversarial model under the highly restrictive constraint that each party broadcasts an independent and uniformly random bit. There, the underlying message space is a well-behaved product space, and $X \in [0,1]$ can only be an integer multiple of $1/2^n$, making the problem discrete. The case where every processor broadcasts only an independent random bit admits simplifications; for example, the collective coin-tossing protocol must be monotone. Surprisingly, for this class of coin-tossing protocols, the objective of reducing an adversary's ability to increase the expected output is equivalent to reducing an adversary's ability to decrease the expected output. Building on these observations, Lichtenstein, Linial, and Saks proved that threshold coin-tossing protocols are optimal for all $n$ and $k$. In a sequence of works, Goldwasser, Kalai, and Park (2015), Kalai, Komargodski, and Raz (2018), and (independently of our work) Haitner and Karidi-Heller (2020) proved that $k = \mathcal{O}(\sqrt{n} \cdot \mathrm{polylog}(n))$ corruptions suffice to fix the output of any bias-$X$ coin-tossing protocol. These results consider parties who send arbitrary-length messages, and each processor has multiple turns to reveal its entire message. However, optimal protocols robust to a large number of corruptions have no a priori relation to the optimal protocol robust to $k = 1$ corruption. Furthermore, to make an informed choice of employing a coin-tossing protocol in practice, for a fixed target tolerance of insecurity, one needs a precise characterization of the minimum insecurity achievable by these coin-tossing protocols.
We rely on an inductive approach to constructing coin-tossing protocols to study a proxy potential function measuring the susceptibility of any bias-$X$ coin-tossing protocol to attacks in our adversarial model. Our technique is inherently constructive and yields protocols that minimize the potential function, and it happens that threshold protocols minimize it. We demonstrate that the insecurity of these threshold protocols is a 2-approximation of the optimal protocol in our adversarial model. For any other $X \in [0,1]$ that threshold protocols cannot realize, we prove that an appropriate (convex) combination of threshold protocols is a 4-approximation of the optimal protocol.
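    To make the insecurity of threshold protocols concrete, here is a minimal dynamic-programming sketch (our illustration) in the restricted Lichtenstein-Linial-Saks setting, where each processor broadcasts a single uniform bit and an adaptive adversary may corrupt one not-yet-spoken processor and fix its bit to 1. The function names and the choice of the majority threshold are our assumptions.

```python
# Deviation a single adaptive corruption causes against an n-processor
# threshold protocol: processors broadcast one uniform bit each, the
# output is 1 iff at least t bits are 1, and the adversary may corrupt
# one upcoming processor (fixing its bit to 1) based on the transcript.
def threshold_insecurity(n: int, t: int) -> float:
    # Boundary (no bits remaining): the output is determined by the count s.
    honest = [1.0 if s >= t else 0.0 for s in range(n + 1)]
    attack = honest[:]
    for r in range(1, n + 1):          # r = bits still to be broadcast
        new_honest = [0.0] * (n + 1)
        new_attack = [0.0] * (n + 1)
        for s in range(n - r + 1):     # s = ones broadcast so far
            new_honest[s] = 0.5 * honest[s + 1] + 0.5 * honest[s]
            # Either corrupt the next processor now, or wait and keep the option.
            new_attack[s] = max(honest[s + 1],
                                0.5 * attack[s + 1] + 0.5 * attack[s])
        honest, attack = new_honest, new_attack
    return attack[0] - honest[0]       # max increase in Pr[output = 1]

for n in (9, 99, 999):
    print(n, round(threshold_insecurity(n, (n + 1) // 2), 4))
```

    The printed deviations shrink on the order of $1/\sqrt{n}$ for the majority threshold, which matches the intuition from the pivotal-coin computation earlier in this listing.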

    SIM: Secure Interval Membership Testing and Applications to Secure Comparison

    The offline-online model is a leading paradigm for practical secure multi-party computation (MPC) protocol design; it has successfully reduced the overhead of several prevalent privacy-preserving computation functionalities common to diverse application domains. However, the prohibitive overhead associated with secure comparison, one of these vital functionalities, often bottlenecks current and envisioned MPC solutions. Indeed, an efficient secure comparison solution has the potential for significant real-world impact through its broad applications. This work identifies and presents SIM, a secure protocol for the functionality of interval membership testing. This functionality, in particular, facilitates secure less-than-zero testing and, in turn, secure comparison. A key technical challenge is to support a fast online protocol for testing in large rings while keeping the precomputation tractable. Motivated by the map-reduce paradigm, this work introduces the innovation of (1) computing a sequence of intermediate functionalities on a partition of the input into input blocks and (2) securely aggregating the outputs of these intermediate functionalities. This innovation allows controlling the size of the precomputation through a granularity parameter representing the input blocks' size, enabling application-specific automated compiler optimizations. To demonstrate our protocols' efficiency, we implement them and test their performance in a high-demand application: privacy-preserving machine learning. The benchmark results show that switching to our protocols yields significant performance improvements, which indicates that using our protocols in a plug-and-play fashion can improve the performance of various security applications. Our new paradigm of protocol design may be of independent interest because of its potential for extensions to other functionalities of practical interest.
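    The following plaintext sketch shows the map-reduce shape of the idea on a comparison: per-block intermediate results are computed first and then aggregated. It is our illustration only; the function names, block size, and the specific aggregation are assumptions, and the actual protocol runs such logic on secret-shared values using offline-phase correlated randomness.

```python
# Plaintext sketch of block decomposition for comparison: map each block
# of the inputs to (equal, less-than) flags, then reduce most-significant
# block first. The granularity parameter `block` trades the number of
# blocks against the size of the per-block intermediate functionality.
def less_than(x: int, y: int, bits: int = 32, block: int = 8) -> bool:
    # Map step: per-block equality and less-than, most significant first.
    eq, lt = [], []
    for i in reversed(range(0, bits, block)):
        xb = (x >> i) & ((1 << block) - 1)
        yb = (y >> i) & ((1 << block) - 1)
        eq.append(xb == yb)
        lt.append(xb < yb)
    # Reduce step: x < y iff some block is strictly smaller and every
    # more significant block is equal.
    result, all_eq = False, True
    for e, l in zip(eq, lt):
        result = result or (all_eq and l)
        all_eq = all_eq and e
    return result

def in_interval(x: int, lo: int, hi: int) -> bool:
    # Interval membership from two comparisons: lo <= x <= hi.
    return (not less_than(x, lo)) and (not less_than(hi, x))

assert in_interval(42, 0, 100) and not in_interval(200, 0, 100)
```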

    Zeroizing without zeroes: Cryptanalyzing multilinear maps without encodings of zero

    We extend the recent zeroizing attacks of Cheon et al. on multilinear maps to settings where no encodings of zero below the maximal level are available. Some of the new attacks apply to the CLT scheme (resulting in a total break), while others apply to the GGH scheme (resulting in a weak-DL attack).

    Decidability of Secure Non-interactive Simulation of Doubly Symmetric Binary Source

    Noise, which cannot be eliminated or controlled by the parties, is an incredible facilitator of cryptography. For example, highly efficient secure computation protocols based on independent samples from the doubly symmetric binary source (BSS) are known. A modular technique for extending these protocols to diverse other forms of noise, without any loss in round or communication complexity, is the following strategy: parties, beginning with multiple samples from an arbitrary noise source, non-interactively, albeit securely, simulate BSS samples; after that, they can use custom-designed efficient multi-party solutions on these BSS samples. Khorasgani, Maji, and Nguyen (EPRINT--2020) introduce the notion of secure non-interactive simulation (SNIS) as a natural cryptographic extension of concepts like non-interactive simulation and non-interactive correlation distillation in theoretical computer science and information theory. In SNIS, the parties apply local reduction functions to their samples to produce samples of another distribution. This work studies the decidability problem of whether samples from the noise $(X,Y)$ can securely and non-interactively simulate BSS samples. As is standard in analyzing non-interactive simulations, our work relies on Fourier-analytic techniques to approach this decidability problem. Our work begins by algebraizing the simulation-based security definition of SNIS. Using this algebraized definition of security, we analyze the properties of the Fourier spectrum of the reduction functions. Given $(X,Y)$ and a BSS with noise parameter $\epsilon$, the objective is to distinguish between the following two cases. (A) Does there exist a SNIS from BSS($\epsilon$) to $(X,Y)$ with $\delta$-insecurity? (B) Do all SNIS from BSS($\epsilon$) to $(X,Y)$ incur $\delta'$-insecurity, where $\delta' > \delta$? We prove that there is an algorithm with bounded computation time achieving this objective in the following cases: (1) $\delta = O(1/n)$ and $\delta'$ a positive constant, and (2) $\delta$ a positive constant and $\delta'$ another (larger) positive constant. We also prove that $\delta = 0$ is achievable only when $(X,Y)$ is another BSS, where $(X,Y)$ is an arbitrary distribution over $\{-1,1\} \times \{-1,1\}$. Furthermore, given $(X,Y)$, we provide a sufficient test determining whether simulating BSS samples incurs constant insecurity, irrespective of the number of samples of $(X,Y)$. Handling the security of the reductions in Fourier analysis presents unique challenges because the interaction of these analytical techniques with security is unexplored. Our technical approach diverges significantly from existing approaches to the decidability problem of (insecure) non-interactive reductions, in order to develop analysis pathways that preserve security. Consequently, our work shows a new concentration property of the Fourier spectrum of secure reduction functions, unlike their insecure counterparts: nearly the entire weight of secure reduction functions' spectrum is concentrated on the lower-degree components. The authors believe that examining existing analytical techniques through the facet of security, and developing new analysis methodologies respecting security, is of independent and broader interest.
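    For readers unfamiliar with the machinery: the analysis works with the standard Fourier-Walsh expansion of Boolean functions $f\colon \{-1,1\}^n \to \{-1,1\}$, whose coefficients are $\hat{f}(S) = \mathbb{E}_x[f(x) \prod_{i \in S} x_i]$. The brute-force sketch below (our illustration, with an arbitrary example function) computes such a spectrum, which is the object whose low-degree concentration the work establishes for secure reduction functions.

```python
# Brute-force Fourier-Walsh spectrum of a Boolean function on {-1,1}^n:
# fhat(S) = E_x[ f(x) * prod_{i in S} x_i ]. The example function is
# illustrative only; the paper analyzes the spectra of secure reduction
# functions and shows their weight concentrates on low-degree sets S.
from itertools import combinations, product
from math import prod

def fourier_coefficients(f, n):
    points = list(product((-1, 1), repeat=n))
    return {
        S: sum(f(x) * prod(x[i] for i in S) for x in points) / len(points)
        for d in range(n + 1)
        for S in combinations(range(n), d)
    }

# Majority on 3 bits: weight 3/4 on degree 1 and 1/4 on degree 3.
maj3 = lambda x: 1 if sum(x) > 0 else -1
for S, c in fourier_coefficients(maj3, 3).items():
    if abs(c) > 1e-9:
        print(S, c)
```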

    Secure Non-interactive Simulation: Feasibility & Rate

    A natural approach to increasing the efficiency of secure computation is to non-interactively and securely transform diverse, inexpensive-to-generate correlated randomness, like joint samples from noise sources, into correlations useful for secure computation, while incurring low computational overhead. Motivated by this general application to secure computation, our work introduces the notion of secure non-interactive simulation (SNIS). Parties receive samples of correlated randomness and, without any interaction, securely convert them into samples from another correlated randomness. SNIS is an extension of non-interactive simulation of joint distributions (NIS) and non-interactive correlation distillation (NICD) to the cryptographic context. It is a non-interactive version of one-way secure computation (OWSC). Our work presents a simulation-based security definition for SNIS and initiates the study of the feasibility and efficiency of SNIS. We also study SNIS among fundamental correlated randomnesses like random samples from the binary symmetric and binary erasure channels, represented by BSS and BES, respectively. The impossibility of realizing a BES sample from BSS samples in NIS and OWSC extends to SNIS. Additionally, we prove that a SNIS of a BSS sample from BES samples is impossible, which remains an open problem in NIS and OWSC. Next, we prove that a SNIS of a BES($\varepsilon'$) sample (a BES with noise characteristic $\varepsilon'$) from BES($\varepsilon$) samples is feasible if and only if $(1-\varepsilon') = (1-\varepsilon)^k$ for some $k \in \mathbb{N}$. In this context, we prove that all SNIS constructions must be linear. Furthermore, if $(1-\varepsilon') = (1-\varepsilon)^k$, then the rate of simulating multiple independent BES($\varepsilon'$) samples is at most $1/k$, which is also achievable using (block) linear constructions. Finally, we show that a SNIS of a BSS($\varepsilon'$) sample from BSS($\varepsilon$) samples is feasible if and only if $(1-2\varepsilon') = (1-2\varepsilon)^k$ for some $k \in \mathbb{N}$. Interestingly, there are linear as well as non-linear SNIS constructions. When $(1-2\varepsilon') = (1-2\varepsilon)^k$, we prove that the rate of a perfectly secure SNIS is at most $1/k$, which is achievable using both linear and non-linear constructions. Our results leave open the fascinating problem of determining the rate of statistically secure SNIS among BSS samples. Our technical approach algebraizes the definition of SNIS and proceeds via Fourier analysis. Our work develops general analysis methodologies for Boolean functions that explicitly incorporate cryptographic security constraints. Our work also proves strong forms of statistical-to-perfect security transformations: one can error-correct a statistically secure SNIS to make it perfectly secure. We show a connection of our research with homogeneous Boolean functions and distance-invariant codes, which may be of independent interest.
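    As a concrete instance of the linear constructions mentioned above (our illustrative plaintext simulation, not the secure protocol itself): XOR-ing $k$ independent BSS($\varepsilon$) samples produces one BSS($\varepsilon'$) sample with $(1-2\varepsilon') = (1-2\varepsilon)^k$, since the two XOR bits disagree exactly when an odd number of the $k$ coordinate pairs disagree.

```python
# Linear SNIS-style reduction for BSS: XOR-ing k independent BSS(eps)
# samples yields one BSS(eps') sample with (1 - 2*eps') = (1 - 2*eps)**k.
# Plaintext simulation only; the paper analyzes the security of the
# reduction, which this demo does not model.
import random

def bss_samples(eps: float, n: int):
    """n correlated bit pairs: y differs from x with probability eps."""
    xs = [random.randrange(2) for _ in range(n)]
    ys = [x ^ (random.random() < eps) for x in xs]
    return xs, ys

def xor_reduce(bits, k):
    """Non-interactive local reduction: XOR consecutive blocks of k bits."""
    return [sum(bits[i:i + k]) % 2 for i in range(0, len(bits) - k + 1, k)]

eps, k, n = 0.1, 3, 300_000
xs, ys = bss_samples(eps, n)
us, vs = xor_reduce(xs, k), xor_reduce(ys, k)
observed = sum(u != v for u, v in zip(us, vs)) / len(us)
predicted = (1 - (1 - 2 * eps) ** k) / 2
print(f"observed eps' = {observed:.4f}, predicted = {predicted:.4f}")
```

    Note the rate of this reduction is exactly the $1/k$ the feasibility results identify as optimal: $k$ input samples are consumed per output sample.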