
    Faster Secure Two-Party Computation in the Single-Execution Setting

    We propose a new protocol for two-party computation, secure against malicious adversaries, that is significantly faster than prior work in the single-execution setting (i.e., non-amortized and with no pre-processing). In particular, for computational security parameter κ and statistical security parameter ρ, our protocol uses only ρ garbled circuits and O(κ) public-key operations, whereas previous work with the same number of garbled circuits required either O(ρn + κ) public-key operations (where n is the input/output length) or a second execution of a secure-computation sub-protocol. Our protocol can be based on the decisional Diffie-Hellman assumption in the standard model. We implement our protocol to evaluate its performance. With ρ = 40, our implementation securely computes an AES evaluation in 65 ms over a local-area network using a single thread without any pre-computation, 22x faster than the best prior work in the non-amortized setting. The relative performance of our protocol is even better for functions with larger input/output lengths.
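
    Since the result above is stated in terms of how many garbled circuits and public-key operations are needed, a minimal background sketch of the garbled-circuit primitive itself may help: garbling and evaluating a single AND gate in the classic Yao style. The helper names (new_wire, garble_and_gate, eval_garbled_gate) and the trial-decryption check are illustrative assumptions, not the paper's protocol; the actual construction additionally relies on optimizations such as point-and-permute and on cut-and-choose over ρ circuits for malicious security.

```python
# Toy sketch of classic Yao garbling for one AND gate (not the paper's protocol).
import hashlib
import os
import random

KAPPA = 16  # label length in bytes; stands in for the computational parameter κ

def new_wire():
    """Return a pair of random labels encoding bit 0 and bit 1 on one wire."""
    return os.urandom(KAPPA), os.urandom(KAPPA)

def H(a, b):
    """Derive a one-time pad for an output label from two input labels."""
    return hashlib.sha256(a + b).digest()[:KAPPA]

def xor_bytes(x, y):
    return bytes(u ^ v for u, v in zip(x, y))

def garble_and_gate(wire_a, wire_b, wire_c):
    """Garbler: encrypt the correct output label for every input combination."""
    rows = [
        xor_bytes(H(wire_a[bit_a], wire_b[bit_b]), wire_c[bit_a & bit_b])
        for bit_a in (0, 1)
        for bit_b in (0, 1)
    ]
    random.shuffle(rows)  # hide which row belongs to which input combination
    return rows

def eval_garbled_gate(rows, label_a, label_b, wire_c):
    """Evaluator: exactly one row decrypts to a valid output label.
    (Real schemes use point-and-permute instead of this toy validity check.)"""
    for row in rows:
        candidate = xor_bytes(H(label_a, label_b), row)
        if candidate in wire_c:
            return candidate
    raise ValueError("no row decrypted correctly")

wa, wb, wc = new_wire(), new_wire(), new_wire()
table = garble_and_gate(wa, wb, wc)
assert eval_garbled_gate(table, wa[1], wb[1], wc) == wc[1]  # 1 AND 1 = 1
assert eval_garbled_gate(table, wa[1], wb[0], wc) == wc[0]  # 1 AND 0 = 0
```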

    Oblivious Pseudo-Random Functions via Garbled Circuits

    An Oblivious Pseudo-Random Function (OPRF) is a protocol that allows two parties – a server and a user – to jointly compute the output of a Pseudo-Random Function (PRF). The server holds the key for the PRF and the user holds an input on which the function shall be evaluated. The user learns the correct output while the inputs of both parties remain private. If the server can additionally prove to the user that several executions of the protocol were performed with the same key, we call the OPRF verifiable. One way to construct an OPRF protocol is with generic tools from multi-party computation, such as Yao’s seminal garbled circuits protocol. Garbled circuits allow two parties to evaluate any Boolean circuit while the input that each party provides to the circuit remains hidden from the other party. An approach to realizing OPRFs based on garbled circuits was mentioned, for example, by Pinkas et al. (ASIACRYPT ’09). However, OPRFs are used as a building block in various cryptographic protocols, and this frequent usage in conjunction with other building blocks calls for a security analysis that takes composition, i.e., usage in a larger context, into account. In this work, we give the first construction of a garbled-circuit-based OPRF that is secure in the universal composability model of Canetti (FOCS ’01). This means the security of our protocol holds even when the protocol is used in arbitrary execution environments, including under parallel composition. We achieve a passively secure protocol that relies on authenticated channels, the random oracle model, and the security of oblivious transfer. We use a technique from Albrecht et al. (PKC ’21) to extend the protocol to a verifiable OPRF by employing a commitment scheme: the two parties compute a circuit that only outputs a PRF value if a commitment opens to the right server key. Further, we implemented our construction and compared its concrete efficiency with two other OPRFs. We found that our construction is over a hundred times faster than a recent lattice-based construction by Albrecht et al. (PKC ’21), but not as efficient as the state-of-the-art protocol by Jarecki et al. (EUROCRYPT ’18), which is based on the hardness of the discrete logarithm problem in certain groups. Our efficiency-benchmark results imply that – under certain circumstances – generic techniques such as garbled circuits can achieve substantially better performance in practice than some protocols specifically designed for the problem. Büscher et al. (ACNS ’20) showed that garbled circuits are secure in the presence of adversaries using quantum computers. This fact, combined with our results, indicates that garbled-circuit-based OPRFs are a promising way towards efficient OPRFs that are secure against such quantum adversaries.
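
    To make the verifiability idea concrete, here is a rough sketch of the logic the garbled circuit would compute, written in the clear: the PRF output is released only if the server's key opens the commitment the user already holds. HMAC-SHA256 as the PRF and a hash-based commitment are stand-in assumptions for illustration; the paper's circuit uses primitives and a commitment scheme chosen to garble efficiently.

```python
# Plain-Python sketch of the circuit logic behind the verifiable OPRF idea.
# HMAC-SHA256 (PRF) and SHA-256 (commitment) are stand-ins, not the paper's choices.
import hashlib
import hmac
import os

def commit(key, randomness):
    """Toy hash-based commitment to the server key."""
    return hashlib.sha256(key + randomness).digest()

def oprf_circuit(server_key, server_rand, public_commitment, user_input):
    """What the garbled circuit computes: output a PRF value only if the
    server's key opens the commitment that the user already holds."""
    if commit(server_key, server_rand) != public_commitment:
        return b"\x00" * 32  # reject: the server used a different key
    return hmac.new(server_key, user_input, hashlib.sha256).digest()

# Setup: the server publishes one commitment to its key.
key, rand = os.urandom(32), os.urandom(32)
c = commit(key, rand)

# Inside the 2PC, the server inputs (key, rand), the user inputs x, and only
# the user learns the output, which is forced to be consistent across sessions.
x = b"user input"
assert oprf_circuit(key, rand, c, x) == hmac.new(key, x, hashlib.sha256).digest()
```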

    Reuse It Or Lose It: More Efficient Secure Computation Through Reuse of Encrypted Values

    Two-party secure function evaluation (SFE) has become significantly more feasible, even on resource-constrained devices, because of advances in server-aided computation systems. However, there are still bottlenecks, particularly in the input-validation stage of a computation. Moreover, SFE research has not yet devoted sufficient attention to the important problem of retaining state after a computation has been performed, so that expensive processing does not have to be repeated if a similar computation is done again. This paper presents PartialGC, an SFE system that allows the reuse of encrypted values generated during a garbled-circuit computation. We show that using PartialGC can reduce computation time by as much as 96% and bandwidth by as much as 98% in comparison with previous outsourcing schemes for secure computation. We demonstrate the feasibility of our approach with two sets of experiments, one in which the garbled circuit is evaluated on a mobile device and one in which it is evaluated on a server. We also use PartialGC to build a privacy-preserving "friend finder" application for Android. The reuse of previous inputs to allow stateful evaluation represents a new way of looking at SFE and further reduces computational barriers.
    Comment: 20 pages; a shorter conference version was published in Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, pages 582-596, ACM, New York, NY, US
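
    A toy illustration of the reuse idea, under the simplifying assumption that reuse amounts to caching wire labels between evaluations: values saved from one computation skip the usual input phase of the next. The names and the in-memory cache are hypothetical; PartialGC adds the checks needed to make such reuse secure against malicious parties and across outsourced and mobile evaluators.

```python
# Toy sketch of stateful reuse between garbled-circuit evaluations (illustrative only).
saved_labels = {}  # wire name -> encrypted value (label) kept by the evaluator

def finish_evaluation(output_labels):
    """After a computation, stash the labels of values we may need again."""
    saved_labels.update(output_labels)

def start_next_evaluation(needed_wires):
    """Before the next computation, split wires into reused and fresh ones."""
    reused = {w: saved_labels[w] for w in needed_wires if w in saved_labels}
    fresh = [w for w in needed_wires if w not in saved_labels]
    return reused, fresh  # only the fresh wires go through the usual input phase

finish_evaluation({"balance": b"label-for-balance"})
reused, fresh = start_next_evaluation(["balance", "new_query"])
assert "balance" in reused and fresh == ["new_query"]
```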

    Chameleon: A Hybrid Secure Computation Framework for Machine Learning Applications

    We present Chameleon, a novel hybrid (mixed-protocol) framework for secure function evaluation (SFE) which enables two parties to jointly compute a function without disclosing their private inputs. Chameleon combines the best aspects of generic SFE protocols with those of protocols based on additive secret sharing. In particular, the framework performs linear operations in the ring Z_{2^l} using additively secret-shared values and nonlinear operations using Yao's Garbled Circuits or the Goldreich-Micali-Wigderson protocol. Chameleon departs from the common assumption of additive or linear secret-sharing models in which three or more parties need to communicate in the online phase: the framework allows two parties with private inputs to communicate in the online phase under the assumption of a third node that generates correlated randomness in an offline phase. Almost all of the heavy cryptographic operations are precomputed in the offline phase, which substantially reduces the communication overhead. Chameleon is both scalable and significantly more efficient than the ABY framework (NDSS'15) it is based on. Our framework supports signed fixed-point numbers. In particular, Chameleon's vector dot product of signed fixed-point numbers improves the efficiency of mining and classification of encrypted data for algorithms based on heavy matrix multiplications. Our evaluation of Chameleon on a 5-layer convolutional deep neural network shows 133x and 4.2x faster executions than Microsoft CryptoNets (ICML'16) and MiniONN (CCS'17), respectively.
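
    A minimal sketch of the additive-sharing building block described above, assuming the standard Beaver-triple technique for the correlated randomness supplied by the third node: the two parties multiply secret-shared values in Z_{2^l} online using only cheap additions and two openings. Variable names are illustrative, not the framework's API.

```python
# Sketch of two-party multiplication on additive shares in Z_{2^l} with a
# dealer-supplied Beaver triple (illustrative, not Chameleon's actual code).
import random

L = 32
MOD = 1 << L

def share(x):
    """Split x into two additive shares modulo 2^l."""
    s0 = random.randrange(MOD)
    return s0, (x - s0) % MOD

def reconstruct(s0, s1):
    return (s0 + s1) % MOD

# Offline phase: the helper node samples a triple with a*b = c and shares it.
a, b = random.randrange(MOD), random.randrange(MOD)
c = (a * b) % MOD
a0, a1 = share(a)
b0, b1 = share(b)
c0, c1 = share(c)

def beaver_multiply(x_shares, y_shares):
    """Online phase: multiply shared x and y using the preshared triple."""
    x0, x1 = x_shares
    y0, y1 = y_shares
    # Each party masks its shares; e = x - a and f = y - b are opened publicly.
    e = reconstruct((x0 - a0) % MOD, (x1 - a1) % MOD)
    f = reconstruct((y0 - b0) % MOD, (y1 - b1) % MOD)
    z0 = (c0 + e * b0 + f * a0 + e * f) % MOD  # party 0 also adds the public e*f
    z1 = (c1 + e * b1 + f * a1) % MOD
    return z0, z1

x, y = 1234, 5678
zx, zy = beaver_multiply(share(x), share(y))
assert reconstruct(zx, zy) == (x * y) % MOD
```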

    ARM2GC: Succinct Garbled Processor for Secure Computation

    We present ARM2GC, a novel secure computation framework based on Yao's Garbled Circuit (GC) protocol and the ARM processor. It allows users to develop privacy-preserving applications using standard high-level programming languages (e.g., C) and compile them using off-the-shelf ARM compilers (e.g., gcc-arm). The main enabler of this framework is the introduction of SkipGate, an algorithm that dynamically omits the communication and encryption cost of the gates whose outputs are independent of the private data. SkipGate greatly enhances the performance of ARM2GC by omitting the costs of the gates associated with the instructions of the compiled binary, which is known to both parties involved in the computation. Our evaluation on benchmark functions demonstrates that ARM2GC not only outperforms the current GC frameworks that support high-level languages, it also achieves efficiency comparable to the best prior solutions based on hardware description languages. Moreover, in contrast to previous high-level frameworks with domain-specific languages and customized compilers, ARM2GC relies on the standard ARM compiler, which is rigorously verified and supports programs written in the standard syntax.
    Comment: 13 pages
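
    A toy version of the SkipGate rule, under the assumption that it can be reduced to constant propagation over gate inputs: a gate whose output is already determined by public values is evaluated locally and never garbled or transmitted. This is an illustration of the rule only; the paper's algorithm applies it to the gates of an emulated ARM core, where most instruction-handling logic is public to both parties.

```python
# Toy constant-propagation view of SkipGate (illustrative, not the paper's algorithm).
SECRET = "secret"  # marker for a wire whose value depends on private data

def evaluate_or_garble(gate, a, b, garbled_gates):
    """Return the output bit if it is publicly determined; otherwise mark it
    secret and count one more gate that must actually be garbled and sent."""
    if a is not SECRET and b is not SECRET:
        return {"AND": a & b, "OR": a | b, "XOR": a ^ b}[gate], garbled_gates
    if gate == "AND" and 0 in (a, b):   # AND with a public 0 is forced to 0
        return 0, garbled_gates
    if gate == "OR" and 1 in (a, b):    # OR with a public 1 is forced to 1
        return 1, garbled_gates
    return SECRET, garbled_gates + 1    # only these gates cost communication

count = 0
out1, count = evaluate_or_garble("AND", 1, 0, count)        # fully public: free
out2, count = evaluate_or_garble("AND", SECRET, 0, count)   # forced to 0: free
out3, count = evaluate_or_garble("XOR", SECRET, 1, count)   # genuinely secret
assert (out1, out2, out3, count) == (0, 0, SECRET, 1)
```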

    XONN: XNOR-based Oblivious Deep Neural Network Inference

    Advancements in deep learning enable cloud servers to provide inference-as-a-service for clients. In this scenario, clients send their raw data to the server, which runs the deep learning model and sends back the results. One standing challenge in this setting is to ensure the privacy of the clients' sensitive data. Oblivious inference is the task of running the neural network on the client's input without disclosing the input or the result to the server. This paper introduces XONN, a novel end-to-end framework based on Yao's Garbled Circuits (GC) protocol that provides a paradigm shift in the conceptual and practical realization of oblivious inference. In XONN, the costly matrix-multiplication operations of the deep learning model are replaced with XNOR operations that are essentially free in GC. We further provide a novel algorithm that customizes the neural network such that the runtime of the GC protocol is minimized without sacrificing the inference accuracy. We design a user-friendly high-level API for XONN, allowing expression of the deep learning model architecture at an unprecedented level of abstraction. Extensive proof-of-concept evaluation on various neural network architectures demonstrates that XONN outperforms prior art such as Gazelle (USENIX Security'18) by up to 7x, MiniONN (ACM CCS'17) by 93x, and SecureML (IEEE S&P'17) by 37x. State-of-the-art frameworks require one round of interaction between the client and the server for each layer of the neural network, whereas XONN requires a constant number of rounds of interaction regardless of the number of layers in the model. XONN is the first to perform oblivious inference on Fitnet architectures with up to 21 layers, suggesting a new level of scalability compared with the state of the art. Moreover, we evaluate XONN on four datasets to perform privacy-preserving medical diagnosis.
    Comment: To appear in USENIX Security 2019
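
    A small worked example of the arithmetic XONN exploits, assuming weights and activations binarized to {-1, +1}: a vector dot product collapses to an XNOR followed by a popcount, and XOR-type gates are essentially free inside a garbled circuit (free-XOR). The function names are illustrative; this is not the framework's API.

```python
# Binarized dot product via XNOR + popcount (illustrative, not XONN's code).
def binarize(values):
    """Encode +1 as bit 1 and -1 as bit 0."""
    return [1 if v >= 0 else 0 for v in values]

def xnor_dot(x_bits, w_bits):
    """Dot product over {-1, +1}, computed as 2 * popcount(XNOR(x, w)) - n."""
    matches = sum(1 - (a ^ b) for a, b in zip(x_bits, w_bits))  # popcount of XNOR
    return 2 * matches - len(x_bits)

x = [+1, -1, +1, +1]
w = [+1, +1, -1, +1]
plain = sum(a * b for a, b in zip(x, w))  # ordinary dot product over {-1, +1}
assert xnor_dot(binarize(x), binarize(w)) == plain
```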