
    Secure Computation with Preprocessing via Function Secret Sharing

    We propose a simple and powerful new approach for secure computation with input-independent preprocessing, building on the general tool of function secret sharing (FSS) and its efficient instantiations. Using this approach, we can make efficient use of correlated randomness to compute any type of gate, as long as a function class naturally corresponding to this gate admits an efficient FSS scheme. Our approach can be viewed as a generalization of the TinyTable protocol of Damgård et al. (Crypto 2017), where our generalized variant uses FSS to achieve an exponential efficiency improvement for useful types of gates. By instantiating this general approach with the efficient PRG-based FSS schemes of Boyle et al. (Eurocrypt 2015, CCS 2016), we can implement useful nonlinear gates for equality tests, integer comparison, bit-decomposition and more with optimal online communication and with a relatively small amount of correlated randomness. We also provide a unified and simplified view of several existing protocols in the preprocessing model via the FSS framework. Our positive results provide a useful tool for secure computation tasks that involve secure integer comparisons or conversions between arithmetic and binary representations. These arise in the contexts of approximating real-valued functions, machine-learning classification, and more. Finally, we study the necessity of the FSS machinery we employ in the simple context of secure string equality testing. First, we show that any online-optimal secure equality protocol implies an FSS scheme for point functions, which in turn implies one-way functions. Then, we show that information-theoretic secure equality protocols with relaxed optimality requirements would follow from the existence of large families of matching vectors. This suggests that proving strong lower bounds on the efficiency of such protocols would be difficult.
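    As a plain-Python illustration of the paradigm described above, the sketch below evaluates a zero-test gate in the preprocessing model using the trivial FSS scheme in which the dealer simply secret-shares the truth table of the offset function (the TinyTable-style baseline, whose key size is linear in the domain; the paper's PRG-based schemes compress exactly these keys). All names and the toy domain size are our own illustrative choices, not a hardened implementation.

```python
# Toy FSS-in-preprocessing pipeline for a zero-test gate g(x) = [x == 0].
# Offline, a dealer shares the truth table of the offset function
# g_r(y) = g(y - r) = [y == r] (a point function at r); online, the
# parties open the masked value y = x + r and evaluate locally.
import random

N = 256  # toy domain Z_N; insecure-small, for illustration only

def deal_zero_test():
    """Dealer (offline phase): sample a random mask r, additively share
    the truth table of [y == r] mod 2, and share r itself mod N."""
    r = random.randrange(N)
    table = [1 if y == r else 0 for y in range(N)]  # point function at r
    k0 = [random.randrange(2) for _ in range(N)]    # random XOR-share
    k1 = [t ^ s for t, s in zip(table, k0)]         # complementary share
    r0 = random.randrange(N)
    r1 = (r - r0) % N
    return (k0, r0), (k1, r1)

def online_zero_test(x0, x1, key0, key1):
    """Online phase: parties hold additive shares x0, x1 of x; opening
    the masked value y = x + r reveals nothing about x itself."""
    (k0, r0), (k1, r1) = key0, key1
    y = (x0 + r0 + x1 + r1) % N   # one opened value, communicated once
    return k0[y] ^ k1[y]          # local lookups; XOR reconstructs g(x)

key0, key1 = deal_zero_test()
x = 0
x0 = random.randrange(N)
x1 = (x - x0) % N
assert online_zero_test(x0, x1, key0, key1) == 1  # x == 0 detected
```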

    ARIANN: Low-Interaction Privacy-Preserving Deep Learning via Function Secret Sharing

    We propose ARIANN, a low-interaction framework to perform private training and inference of standard deep neural networks on sensitive data. The framework implements semi-honest 2-party computation and leverages function secret sharing, a recent cryptographic protocol that uses only lightweight primitives to achieve an efficient online phase with a single message of the size of the inputs, for operations such as comparison and multiplication, which are the building blocks of neural networks. Built on top of PyTorch, it offers a wide range of functions, including ReLU, MaxPool and BatchNorm, and supports models such as AlexNet and ResNet18. We report experimental results for inference and training over distant servers. Finally, we propose an extension to support n-party private federated learning.
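    To make the "single message per operation" pattern concrete, here is a minimal sketch of the other building block the abstract mentions: multiplication on additive shares via a preprocessed Beaver triple, a standard technique in which the online phase consists of each party sending only its shares of two masked values. The ring size and names are our own illustrative choices, not ARIANN's actual API or protocol.

```python
# Semi-honest Beaver-triple multiplication on additive shares mod 2**32.
import random

Q = 2**32  # toy ring; real frameworks work over larger fixed-point rings

def share(v):
    s = random.randrange(Q)
    return (s, (v - s) % Q)

def deal_triple():
    """Offline: a random triple (a, b, c) with c = a*b, secret-shared."""
    a, b = random.randrange(Q), random.randrange(Q)
    return share(a), share(b), share((a * b) % Q)

def beaver_mul(x_sh, y_sh, triple):
    """Online: open e = x - a and f = y - b (one message per party),
    then combine locally so that the output shares sum to x*y."""
    (a0, a1), (b0, b1), (c0, c1) = triple
    x0, x1 = x_sh
    y0, y1 = y_sh
    e = (x0 - a0 + x1 - a1) % Q
    f = (y0 - b0 + y1 - b1) % Q
    z0 = (c0 + e * b0 + f * a0 + e * f) % Q  # party 0 adds the public e*f
    z1 = (c1 + e * b1 + f * a1) % Q
    return z0, z1

x_sh, y_sh = share(1234), share(5678)
z0, z1 = beaver_mul(x_sh, y_sh, deal_triple())
assert (z0 + z1) % Q == (1234 * 5678) % Q
```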

    Lightweight Techniques for Private Heavy Hitters

    This paper presents a new protocol for solving the private heavy-hitters problem. In this problem, there are many clients and a small set of data-collection servers. Each client holds a private bitstring. The servers want to recover the set of all popular strings without learning anything else about any client's string. A web-browser vendor, for instance, can use our protocol to figure out which homepages are popular without learning any user's homepage. We also consider the simpler private subset-histogram problem, in which the servers want to count how many clients hold strings in a particular set without revealing this set to the clients. Our protocols use two data-collection servers and, in a protocol run, each client sends only a single message to the servers. Our protocols protect client privacy against arbitrary misbehavior by one of the servers, and our approach requires no public-key cryptography (except for secure channels), nor general-purpose multiparty computation. Instead, we rely on incremental distributed point functions, a new cryptographic tool that allows a client to succinctly secret-share the labels on the nodes of an exponentially large binary tree, provided that the tree has a single non-zero path. Along the way, we develop new general tools for providing malicious security in applications of distributed point functions. In an experimental evaluation with two servers on opposite sides of the U.S., the servers can find the 200 most popular strings among a set of 400,000 client-held 256-bit strings in 54 minutes. Our protocols are highly parallelizable. We estimate that with 20 physical machines per logical server, our protocols could compute heavy hitters over ten million clients in just over one hour of computation.
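    The tree-traversal logic behind the protocol can be sketched in the clear: the servers walk the binary prefix tree level by level, keeping only prefixes held by at least a threshold of clients. In the actual protocol the per-prefix counts come from secret-shared outputs of incremental DPFs, so no server ever sees an individual string; the plaintext simulation below (our own illustrative code) shows only the search structure.

```python
# Plaintext sketch of heavy-hitter search over a binary prefix tree.
from collections import Counter

def heavy_hitters(strings, threshold, n_bits):
    """Return all n_bits-long strings held by >= threshold clients by
    extending popular prefixes one bit at a time (breadth-first)."""
    live = [""]  # prefixes that are still "heavy"
    for _ in range(n_bits):
        candidates = [p + b for p in live for b in "01"]
        counts = Counter()
        for s in strings:         # done obliviously via DPFs in the paper
            for p in candidates:
                if s.startswith(p):
                    counts[p] += 1
        live = [p for p in candidates if counts[p] >= threshold]
    return live

clients = ["1010", "1010", "1010", "0111", "1011"]
print(heavy_hitters(clients, threshold=2, n_bits=4))  # ['1010']
```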

    Curl: Private LLMs through Wavelet-Encoded Look-Up Tables

    Recent advancements in transformers have revolutionized machine learning, forming the core of large language models (LLMs). However, integrating these systems into everyday applications raises privacy concerns, as client queries are exposed to model owners. Secure multiparty computation (MPC) allows parties to evaluate machine learning applications while keeping sensitive user inputs and proprietary models private. Due to inherent MPC costs, recent works introduce model-specific optimizations that hinder widespread adoption by machine learning researchers. CrypTen (NeurIPS'21) aimed to solve this problem by exposing MPC primitives via common machine learning abstractions such as tensors and modular neural networks. Unfortunately, CrypTen and many other MPC frameworks rely on polynomial approximations of the non-linear functions, resulting in high errors and communication complexity. This paper introduces Curl, an easy-to-use MPC framework that evaluates non-linear functions as lookup tables, resulting in better approximations and significant round and communication reductions. Curl exposes a similar programming model to CrypTen and is highly parallelizable through tensors. At its core, Curl relies on discrete wavelet transformations to reduce the lookup table size without sacrificing accuracy, which results in up to a 19x round and communication reduction compared to CrypTen for non-linear functions such as logarithms and reciprocals. We evaluate Curl on a diverse set of LLMs, including BERT, GPT-2, and GPT Neo, and compare against state-of-the-art related works such as Iron (NeurIPS'22) and Bolt (S&P'24), achieving at least 1.9x less communication and latency. Finally, we resolve a long-standing debate regarding the security of widely used probabilistic truncation protocols by proving their security in the stand-alone model. This is of independent interest, as many related works rely on this truncation style.
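    As we read the abstract, the core trick is to store the lookup table for a non-linear function in the wavelet domain and keep only the coarse coefficients, shrinking the table that must be handled inside MPC. The toy below (plain NumPy, our own Haar implementation, run in the clear) compresses a 1024-entry table for log to 64 coefficients and checks the approximation error; Curl itself performs the corresponding lookups under MPC.

```python
# Haar-wavelet compression of a lookup table for log on [1, 2).
import numpy as np

def haar_forward(v):
    out = v.astype(float).copy()
    n = len(out)
    while n > 1:                      # split into averages and details
        half = n // 2
        avg = (out[:n:2] + out[1:n:2]) / np.sqrt(2)
        dif = (out[:n:2] - out[1:n:2]) / np.sqrt(2)
        out[:half], out[half:n] = avg, dif
        n = half
    return out

def haar_inverse(c):
    out = c.copy()
    n = 1
    while n < len(out):               # rebuild from coarse to fine
        avg, dif = out[:n].copy(), out[n:2 * n].copy()
        out[:2 * n:2] = (avg + dif) / np.sqrt(2)
        out[1:2 * n:2] = (avg - dif) / np.sqrt(2)
        n *= 2
    return out

full = np.log(np.linspace(1.0, 2.0, 1024, endpoint=False))  # 1024-entry LUT
coeffs = haar_forward(full)
coeffs[64:] = 0.0          # keep 64 of 1024 coefficients (16x smaller table)
approx = haar_inverse(coeffs)
print(np.max(np.abs(approx - full)))  # worst-case error stays below ~1e-2
```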

    Function Secret Sharing for Mixed-Mode and Fixed-Point Secure Computation

    Boyle et al. (TCC 2019) proposed a new approach for secure computation in the preprocessing model building on function secret sharing (FSS), where a gate g is evaluated using an FSS scheme for the related offset family g_r(x) = g(x + r). They further presented efficient FSS schemes based on any pseudorandom generator (PRG) for the offset families of several useful gates g that arise in "mixed-mode" secure computation. These include gates for zero test, integer comparison, ReLU, and spline functions. The FSS-based approach offers significant savings in online communication and round complexity compared to alternative techniques based on garbled circuits or secret sharing. In this work, we improve and extend the previous results of Boyle et al. by making the following three kinds of contributions:
    - Improved Key Size: The preprocessing and storage costs of the FSS-based approach directly depend on the FSS key size. We improve the key size of previous constructions through two steps. First, we obtain a roughly 4x reduction in key size for the Distributed Comparison Function (DCF), i.e., FSS for the family of functions f^<_{a,b}(x) that output b if x < a and 0 otherwise. DCF serves as a central building block in the constructions of Boyle et al. Second, we reduce the number of DCF instances required for realizing useful gates g. For example, whereas previous FSS schemes for ReLU and m-piece spline gates required 2 and 2m DCF instances, respectively, ours require only a single DCF instance in both cases. This improves the FSS key size by 6-22x for commonly used gates such as ReLU and sigmoid.
    - New Gates: We present the first PRG-based FSS schemes for arithmetic and logical shift gates, as well as for bit-decomposition where both the input and the outputs are shared over Z_N for N = 2^n. These gates are crucial for many applications related to fixed-point arithmetic and machine learning.
    - A Barrier: The above results enable a 2-round PRG-based secure evaluation of "multiply-then-truncate," a central operation in fixed-point arithmetic (sketched below), by sequentially invoking FSS schemes for multiplication and shift. We identify a barrier to obtaining a 1-round implementation via a single FSS scheme, showing that this would require settling a major open problem in the area of FSS: namely, a PRG-based FSS for the class of bit-conjunction functions.
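    To see why shift gates matter, recall the fixed-point invariant that "multiply-then-truncate" maintains: multiplying two values with F fractional bits yields 2F fractional bits, so the product must be arithmetically shifted right by F. The plaintext sketch below (our own notation; in the protocol both steps run on secret shares via FSS gates for multiplication and shift) shows the operation the paper evaluates in two rounds.

```python
# Fixed-point multiply-then-truncate over Z_N with N = 2**64, F = 16.
F = 16           # fractional bits
N = 2**64        # values (and, in the protocol, shares) live in Z_N

def encode(x):
    return round(x * (1 << F)) % N

def decode(v):
    return (v - N if v >= N // 2 else v) / (1 << F)  # signed interpretation

def mul_trunc(u, v):
    """Raw product carries 2F fractional bits; an arithmetic right shift
    by F restores the F-fractional-bit encoding."""
    prod = (u * v) % N
    signed = prod - N if prod >= N // 2 else prod
    return (signed >> F) % N   # Python's >> on ints is an arithmetic shift

a, b = encode(3.25), encode(-1.5)
print(decode(mul_trunc(a, b)))  # -4.875
```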