    Time-Lock Puzzles from Randomized Encodings

    Time-lock puzzles, introduced by May, Rivest, Shamir and Wagner, are a mechanism for sending messages "to the future". A sender can quickly generate a puzzle with a solution s that remains hidden until a moderately large amount of time t has elapsed. The solution s should be hidden from any adversary that runs in time significantly less than t, including resourceful parallel adversaries with polynomially many processors. While the notion of time-lock puzzles has been around for 22 years, only a single candidate has been proposed. Fifteen years ago, Rivest, Shamir and Wagner suggested a beautiful candidate time-lock puzzle based on the assumption that exponentiation modulo an RSA integer is an "inherently sequential" computation. We show that various flavors of randomized encodings give rise to time-lock puzzles of varying strengths, whose security can be shown assuming the mere existence of non-parallelizing languages, which are languages that require circuits of depth at least t to decide in the worst case. The existence of such languages is necessary for the existence of time-lock puzzles. We instantiate the construction with different randomized encodings from the literature, where increasingly better efficiency is obtained based on increasingly stronger cryptographic assumptions, ranging from one-way functions to indistinguishability obfuscation. We also observe that time-lock puzzles imply one-way functions, and thus the reliance on some cryptographic assumption is necessary. Finally, generalizing the above, we construct other types of puzzles, such as proofs of work, from randomized encodings and a suitable worst-case hardness assumption (which is necessary for such puzzles to exist).
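
    For reference, a minimal sketch of the earlier Rivest-Shamir-Wagner candidate mentioned above (not the randomized-encoding construction of this paper): with the factorization of N as a trapdoor, the sender computes 2^(2^t) mod N cheaply by reducing the exponent modulo phi(N), while the receiver appears forced into t sequential squarings. The primes and parameters below are toy values for illustration.

        import secrets

        # Toy modulus from two fixed Mersenne primes; a real puzzle would use large
        # random RSA primes whose factorization only the sender knows.
        p, q = 2**31 - 1, 2**61 - 1
        N, phi = p * q, (p - 1) * (q - 1)

        def generate(message: int, t: int):
            """Sender: fast puzzle generation using the trapdoor phi(N)."""
            a = 2
            e = pow(2, t, phi)                   # reduce the exponent 2^t modulo phi(N)
            key = pow(a, e, N)                   # a^(2^t) mod N in O(log t) multiplications
            return (N, a, t, (message + key) % N)   # mask the message with the key

        def solve(puzzle):
            """Receiver: t sequential squarings, with no known shortcut."""
            N, a, t, c = puzzle
            key = a
            for _ in range(t):
                key = pow(key, 2, N)             # conjectured inherently sequential
            return (c - key) % N

        s = secrets.randbelow(N)
        assert solve(generate(s, t=1000)) == s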

    Exponential Correlated Randomness Is Necessary in Communication-Optimal Perfectly Secure Two-Party Computation

    Secure two-party computation is a cryptographic technique that enables two parties to compute a function jointly while keeping each input secret. It is known that most functions cannot be realized by information-theoretically secure two-party computation, but any function can be realized in the correlated randomness (CR) model, where a trusted dealer distributes input-independent CR to the parties beforehand. In the CR model, three kinds of complexity are mainly considered: the size of the CR, the number of rounds, and the communication complexity. Ishai et al. (TCC 2013) showed that any function can be securely computed with optimal online communication cost, i.e., a single round of communication whose total length equals the input length, at the price of exponentially large CR. In this paper, we prove that exponentially large CR is necessary to achieve perfect security and online optimality for a general function, and that the protocol by Ishai et al. is therefore asymptotically optimal in terms of the size of the CR. Furthermore, we prove that exponentially large CR remains necessary even when multiple rounds are allowed, as long as the communication complexity is kept optimal.
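
    To make the correlated-randomness model concrete, below is a toy sketch in the spirit of one-time truth tables: a dealer hands each party an additive share of a randomly shifted truth table, and the online phase is a single simultaneous round in which each party sends only its masked input, after which each party holds an XOR share of the output. This is a simplified illustration of the model, not the protocol of Ishai et al. nor this paper's lower-bound argument; the function and domain sizes are arbitrary.

        import secrets

        def dealer(f, nx, ny):
            """Trusted dealer: input-independent correlated randomness for one evaluation.
            The stored tables are as large as the whole truth table of f."""
            r, s = secrets.randbelow(nx), secrets.randbelow(ny)
            mask = [[secrets.randbelow(2) for _ in range(ny)] for _ in range(nx)]
            table = [[f((u - r) % nx, (v - s) % ny) ^ mask[u][v] for v in range(ny)]
                     for u in range(nx)]
            return (r, mask), (s, table)         # Alice's CR, Bob's CR

        def online(x, y, cr_alice, cr_bob, nx, ny):
            """One simultaneous round: each party sends only its masked input."""
            (r, mask), (s, table) = cr_alice, cr_bob
            u = (x + r) % nx                     # Alice -> Bob
            v = (y + s) % ny                     # Bob -> Alice
            share_a = mask[u][v]                 # Alice's local output share
            share_b = table[u][v]                # Bob's local output share
            return share_a ^ share_b             # XOR of the two shares equals f(x, y)

        f = lambda x, y: int(x == y)             # example: equality on a tiny 8x8 domain
        cr_alice, cr_bob = dealer(f, 8, 8)       # CR must be fresh for every evaluation
        assert online(3, 3, cr_alice, cr_bob, 8, 8) == 1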

    Cross&Clean: Amortized Garbled Circuits with Constant Overhead

    Garbled circuits (GC) are one of the main tools for secure two-party computation. One of the most promising techniques for efficiently achieving active security in the context of GCs is the so-called *cut-and-choose* approach, which in the last few years has received many refinements in terms of the number of garbled circuits that need to be constructed, exchanged and evaluated. In this paper we ask a simple question, namely *how many garbled circuits are needed to achieve active security?*, and we propose a novel protocol which achieves active security while using only a constant number of garbled circuits per evaluation in the amortized setting.
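
    As background for the cut-and-choose discussion, a toy sketch of garbling and evaluating a single AND gate, using hash-based encryption and trial decryption with a redundancy check; real constructions use point-and-permute and related optimizations, and this is not the Cross&Clean protocol itself.

        import secrets, hashlib

        def garble_and_gate():
            """Garbler: two random labels per wire, four encrypted rows in random order."""
            labels = {w: (secrets.token_bytes(16), secrets.token_bytes(16)) for w in "abc"}
            rows = []
            for bit_a in (0, 1):
                for bit_b in (0, 1):
                    pad = hashlib.sha256(labels["a"][bit_a] + labels["b"][bit_b]).digest()
                    plain = labels["c"][bit_a & bit_b] + b"\x00" * 16   # output label + redundancy
                    rows.append(bytes(p ^ q for p, q in zip(pad, plain)))
            secrets.SystemRandom().shuffle(rows)                        # hide the row order
            return labels, rows

        def evaluate(rows, key_a, key_b):
            """Evaluator: holds exactly one label per input wire; only one row decrypts."""
            pad = hashlib.sha256(key_a + key_b).digest()
            for ct in rows:
                pt = bytes(p ^ q for p, q in zip(pad, ct))
                if pt.endswith(b"\x00" * 16):                           # redundancy check
                    return pt[:16]                                      # the output-wire label
            raise ValueError("no row decrypted")

        labels, rows = garble_and_gate()
        out = evaluate(rows, labels["a"][1], labels["b"][0])            # evaluate AND(1, 0)
        assert out == labels["c"][0]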

    On the Download Rate of Homomorphic Secret Sharing

    A homomorphic secret sharing (HSS) scheme is a secret sharing scheme that supports evaluating functions on shared secrets by means of a local mapping from input shares to output shares. We initiate the study of the download rate of HSS, namely, the achievable ratio between the length of the output shares and the output length when amortized over ℓ function evaluations. We obtain the following results.
    * In the case of linear information-theoretic HSS schemes for degree-d multivariate polynomials, we characterize the optimal download rate in terms of the optimal minimal distance of a linear code with related parameters. We further show that for sufficiently large ℓ (polynomial in all problem parameters), the optimal rate can be realized using Shamir's scheme, even with secrets over F_2.
    * We present a general rate-amplification technique for HSS that improves the download rate at the cost of requiring more shares. As a corollary, we get high-rate variants of computationally secure HSS schemes and efficient private information retrieval protocols from the literature.
    * We show that, in some cases, one can beat the best download rate of linear HSS by allowing nonlinear output reconstruction and a 2^{-Ω(ℓ)} error probability.
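
    As background for the first bullet above, a hedged sketch of why Shamir sharing already yields a linear HSS for low-degree polynomials: servers map input shares to output shares locally (here by multiplying and adding their own shares), and the results lie on a polynomial of degree at most d·t, so enough servers can reconstruct by interpolation. The field, threshold and example polynomial are illustrative, and nothing here is rate-optimized.

        import secrets

        P = 2**61 - 1                      # a Mersenne prime, used as the field modulus

        def share(secret, t, n):
            """Shamir-share `secret` with threshold t among n servers (points 1..n)."""
            coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
            return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
                    for i in range(1, n + 1)]

        def reconstruct(points):
            """Lagrange interpolation at 0 from (x, y) pairs."""
            total = 0
            for xi, yi in points:
                num = den = 1
                for xj, _ in points:
                    if xj != xi:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                total = (total + yi * num * pow(den, P - 2, P)) % P
            return total

        # HSS evaluation of f(a, b) = a*b + a (degree 2): each server maps its input
        # shares to an output share *locally*; the product share lies on a degree-2t
        # polynomial, so 2t+1 servers suffice to reconstruct.
        t, n = 1, 3
        a, b = 17, 42
        sa, sb = share(a, t, n), share(b, t, n)
        out_shares = [(i + 1, (sa[i] * sb[i] + sa[i]) % P) for i in range(n)]
        assert reconstruct(out_shares) == (a * b + a) % P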

    Tight Bounds on the Randomness Complexity of Secure Multiparty Computation

    We revisit the question of minimizing the randomness complexity of protocols for secure multiparty computation (MPC) in the setting of perfect information-theoretic security. Kushilevitz and Mansour (SIAM J. Discret. Math., 1997) studied the case of n-party semi-honest MPC for the XOR function with security threshold t < n, showing that O(t^2 log(n/t)) random bits are sufficient and Ω(t) random bits are necessary. Their positive result was obtained via a non-explicit protocol, whose existence was proved using the probabilistic method. We essentially close the question by proving an Ω(t^2) lower bound on the randomness complexity of XOR, matching the previous upper bound up to a logarithmic factor (or a constant factor when t = Ω(n)). We also obtain an explicit protocol that uses O(t^2 log^2 n) random bits, matching our lower bound up to a polylogarithmic factor. We extend these results from XOR to general symmetric Boolean functions and to addition over a finite Abelian group, showing how to amortize the randomness complexity over multiple additions. Finally, combining our techniques with recent randomness-efficient constructions of private circuits, we obtain an explicit protocol for evaluating a general circuit C using only O(t^2 log|C|) random bits, by employing additional "helper parties" who do not contribute any inputs. This upper bound too matches our lower bound up to a logarithmic factor.
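
    To illustrate where random bits are spent, a toy version of the textbook semi-honest protocol for XOR based on additive masking: each party hides its bit behind n-1 fresh random bits before anything is revealed, for about n^2 random bits in total, far more than the O(t^2 log n)-type bounds studied in the paper. Background only; not the explicit protocol constructed here.

        import secrets
        from functools import reduce
        from operator import xor

        def xor_mpc(inputs):
            """Semi-honest n-party XOR: each party masks its bit with n-1 random bits."""
            n = len(inputs)
            # Round 1: party i sends one XOR-share of its bit to every party.
            shares = []
            for bit in inputs:
                rand = [secrets.randbelow(2) for _ in range(n - 1)]     # n-1 random bits
                shares.append(rand + [bit ^ reduce(xor, rand, 0)])      # shares XOR to the bit
            # Round 2: party j broadcasts the XOR of the shares it received.
            broadcasts = [reduce(xor, (shares[i][j] for i in range(n)), 0) for j in range(n)]
            # The XOR of all broadcasts equals the XOR of all inputs; the intermediate
            # messages look uniformly random to any strict subset of the parties.
            return reduce(xor, broadcasts, 0)

        bits = [1, 0, 1, 1, 0]
        assert xor_mpc(bits) == reduce(xor, bits, 0)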

    On the Power of Amortization in Secret Sharing: d-Uniform Secret Sharing and CDS with Constant Information Rate

    Consider the following secret-sharing problem. Your goal is to distribute a long file s between n servers such that (d-1)-subsets cannot recover the file, (d+1)-subsets can recover the file, and d-subsets should be able to recover s if and only if they appear in some predefined list L. How small can the information ratio (i.e., the number of bits stored on a server per bit of the secret) be? We initiate the study of such d-uniform access structures, and view them as a useful scaled-down version of general access structures. Our main result shows that, for constant d, any d-uniform access structure admits a secret sharing scheme with a *constant* asymptotic information ratio of c_d that does not grow with the number of servers n. This result is based on a new construction of d-party Conditional Disclosure of Secrets (Gertner et al., JCSS '00) for arbitrary predicates over an n-size domain in which each party communicates at most four bits per secret bit. In both settings, previous results achieved a non-constant information ratio which grows asymptotically with n, even for the simpler (and widely studied) special case of d = 2. Moreover, our results provide a unique example of a natural class of access structures F that can be realized with an information rate smaller than its bit-representation length log|F| (i.e., Ω(d log n) for d-uniform access structures), showing that amortization can beat the representation-size barrier. Our main result applies to exponentially long secrets, and so it should be mainly viewed as a barrier against amortizable lower-bound techniques. We also show that in some natural simple cases (e.g., low-degree predicates), amortization kicks in even for quasi-polynomially long secrets. Finally, we prove some limited lower bounds, point out some limitations of existing lower-bound techniques, and describe some applications to the setting of private simultaneous messages.
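
    As background on the CDS primitive used in the abstract, a toy two-party linear CDS for the equality predicate over a prime field: the two senders share randomness, each sends one message to a referee who knows both inputs, and the referee recovers the secret exactly when the inputs are equal. This is a textbook example, not the d-party construction of the paper; the field size is illustrative.

        import secrets

        P = 2**61 - 1                               # prime field modulus

        def cds_messages(x, y, s, rand):
            """Alice and Bob each send one message, computed from shared randomness."""
            a, b = rand                             # fresh common randomness per secret
            m_alice = (a * x + b) % P               # Alice's single message
            m_bob = (a * y + b + s) % P             # Bob's single message
            return m_alice, m_bob

        def referee(m_alice, m_bob):
            # If x == y the masks cancel and the difference is exactly s.
            # If x != y the difference is s + a*(y - x), uniform since a stays hidden.
            return (m_bob - m_alice) % P

        s = secrets.randbelow(P)
        rand = (secrets.randbelow(P), secrets.randbelow(P))
        assert referee(*cds_messages(5, 5, s, rand)) == s
        assert referee(*cds_messages(5, 7, s, rand)) != s   # except with probability 1/P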

    SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning

    Performing machine learning (ML) computation on private data while maintaining data privacy, aka Privacy-preserving Machine Learning (PPML), is an emergent field of research. Recently, PPML has seen a visible shift towards the adoption of the Secure Outsourced Computation (SOC) paradigm, due to the heavy computation that PPML entails. In the SOC paradigm, computation is outsourced to a set of powerful and specially equipped servers that provide service on a pay-per-use basis. In this work, we propose SWIFT, a robust PPML framework for a range of ML algorithms in the SOC setting, which guarantees output delivery to the users irrespective of any adversarial behaviour. Robustness, a highly desirable feature, encourages user participation without the fear of denial of service. At the heart of our framework lies a highly efficient, maliciously secure, three-party computation (3PC) protocol over rings that provides guaranteed output delivery (GOD) in the honest-majority setting. To the best of our knowledge, SWIFT is the first robust and efficient PPML framework in the 3PC setting. SWIFT is as fast as (and in some cases strictly faster than) the best-known 3PC framework BLAZE (Patra et al., NDSS'20), which achieves only fairness. We extend our 3PC framework to four parties (4PC). In this regime, SWIFT is as fast as the best-known fair 4PC framework Trident (Chaudhari et al., NDSS'20) and twice as fast as the best-known robust 4PC framework FLASH (Byali et al., PETS'20). We demonstrate our framework's practical relevance by benchmarking popular ML algorithms such as logistic regression and deep neural networks such as VGG16 and LeNet, both over a 64-bit ring in a WAN setting. For deep NNs, our results testify to our claim that we provide improved security guarantees while incurring no additional overhead for 3PC and obtaining a 2x improvement for 4PC. Comment: This article is the full and extended version of an article to appear in USENIX Security 202
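
    For a flavour of the arithmetic underlying 3PC-over-rings frameworks of this kind, a toy sketch of replicated additive secret sharing over the 64-bit ring: each of the three parties holds two of the three additive shares, any two parties can reconstruct, and linear operations are communication-free. Generic background only; SWIFT's actual protocols additionally handle multiplication, truncation, malicious security and guaranteed output delivery.

        import secrets

        MASK = (1 << 64) - 1                        # arithmetic modulo 2^64

        def share(x):
            """Split x into three additive shares; party Pi holds a pair of them."""
            x1, x2 = secrets.randbits(64), secrets.randbits(64)
            x3 = (x - x1 - x2) & MASK
            return [(x1, x2), (x2, x3), (x3, x1)]   # views of P0, P1, P2

        def reconstruct(view_p0, view_p1):
            """P0 holds (x1, x2) and P1 holds (x2, x3): together, all three shares."""
            return (view_p0[0] + view_p0[1] + view_p1[1]) & MASK

        def add_local(view_a, view_b):
            """Linear operations need no communication: add the views share-wise."""
            return tuple((a + b) & MASK for a, b in zip(view_a, view_b))

        x, y = 1234567, 7654321
        vx, vy = share(x), share(y)
        vz = [add_local(vx[i], vy[i]) for i in range(3)]
        assert reconstruct(vz[0], vz[1]) == (x + y) & MASK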