CrypTen: Secure Multi-Party Computation Meets Machine Learning
Secure multi-party computation (MPC) allows parties to perform computations
on data while keeping that data private. This capability has great potential
for machine-learning applications: it facilitates training of machine-learning
models on private data sets owned by different parties, evaluation of one
party's private model using another party's private data, etc. Although a range
of studies implement machine-learning models via secure MPC, such
implementations are not yet mainstream. Adoption of secure MPC is hampered by
the absence of flexible software frameworks that "speak the language" of
machine-learning researchers and engineers. To foster adoption of secure MPC in
machine learning, we present CrypTen: a software framework that exposes popular
secure MPC primitives via abstractions that are common in modern
machine-learning frameworks, such as tensor computations, automatic
differentiation, and modular neural networks. This paper describes the design
of CrypTen and measures its performance on state-of-the-art models for text
classification, speech recognition, and image classification. Our benchmarks
show that CrypTen's GPU support and high-performance communication between (an
arbitrary number of) parties allow it to perform efficient private evaluation
of modern machine-learning models under a semi-honest threat model. For
example, two parties using CrypTen can securely predict phonemes in speech
recordings using Wav2Letter faster than real-time. We hope that CrypTen will
spur adoption of secure MPC in the machine-learning community.
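The abstract above rests on additive secret sharing, the core MPC primitive that frameworks like CrypTen expose through tensor abstractions. A minimal sketch of that primitive follows; the modulus `Q` and the `share`/`reveal` helpers are illustrative names chosen here, not CrypTen's actual API. The key property is that linear operations can be applied to the shares locally, without communication:

```python
import random

Q = 2**32  # toy ring modulus; real frameworks typically use a 64-bit ring

def share(x):
    """Split integer x into two additive shares modulo Q."""
    r = random.randrange(Q)
    return r, (x - r) % Q

def reveal(s0, s1):
    """Reconstruct the secret by summing both shares modulo Q."""
    return (s0 + s1) % Q

# Each party adds its own shares locally; the shares of the sum
# reconstruct to the sum of the secrets.
x0, x1 = share(5)
y0, y1 = share(7)
z0, z1 = (x0 + y0) % Q, (x1 + y1) % Q
assert reveal(z0, z1) == 12
```

Multiplication of shared values needs extra machinery (e.g., preprocessed correlated randomness), which is where most of the protocol cost lives.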
SIMC 2.0: Improved Secure ML Inference Against Malicious Clients
In this paper, we study the problem of secure ML inference against a
malicious client and a semi-trusted server such that the client only learns the
inference output while the server learns nothing. This problem is first
formulated by Lehmkuhl et al. with a solution (MUSE, USENIX
Security'21), whose performance is then substantially improved by Chandran et
al.'s work (SIMC, USENIX Security'22). However, a nontrivial gap toward
practicality remains, given the challenges of reducing overhead and
accelerating secure inference across the board.
We propose SIMC 2.0, which follows the underlying structure of SIMC,
but significantly optimizes both the linear and non-linear layers of the model.
Specifically, (1) we design a new coding method for homomorphic parallel
computation between matrices and vectors, custom-built from an insight into the
complementarity between the cryptographic primitives used in SIMC. As a result,
it minimizes the number of rotation operations incurred in the computation;
rotations are far more computationally expensive than other homomorphic
operations (e.g., addition, multiplication). (2) We reduce the size
of the garbled circuit (GC) (used to calculate nonlinear activation functions,
e.g., ReLU) in SIMC by about two thirds. Then, we design an alternative
lightweight protocol to perform the tasks originally allocated to the
expensive GCs. Compared with SIMC, our experiments show that SIMC 2.0 achieves
a significant speedup in linear layer computation and a substantial reduction
of both the computation and communication overheads in the implementation of
non-linear layers under different data dimensions. Meanwhile, SIMC 2.0
demonstrates an encouraging runtime improvement over SIMC on different
state-of-the-art ML models.
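Why rotations dominate the cost of homomorphic matrix-vector products, as discussed above, can be seen in a plaintext simulation of the classic diagonal packing trick: each ciphertext holds a vector of slots, elementwise multiply and add are cheap, but aligning slots requires a cyclic rotation per diagonal. This sketch is a generic illustration of rotation-based packing, not SIMC 2.0's actual coding method; all names are ours:

```python
def rotate(v, k):
    """Cyclic left rotation of the slot vector, standing in for an HE rotation."""
    n = len(v)
    return [v[(i + k) % n] for i in range(n)]

def diag_matvec(M, v):
    """n x n matrix-vector product via generalized diagonals.

    Uses only slotwise multiply/add plus n-1 rotations, versus the
    naive slot-by-slot approach that needs far more data movement.
    Returns (result, number_of_rotations)."""
    n = len(v)
    out = [0] * n
    rotations = 0
    for k in range(n):
        d = [M[i][(i + k) % n] for i in range(n)]  # k-th generalized diagonal
        vk = rotate(v, k) if k else v
        if k:
            rotations += 1
        out = [o + a * b for o, a, b in zip(out, d, vk)]
    return out, rotations

M = [[1, 2], [3, 4]]
v = [5, 6]
res, rots = diag_matvec(M, v)
assert res == [17, 39]  # [1*5 + 2*6, 3*5 + 4*6]
assert rots == 1
```

Coding methods like the one SIMC 2.0 proposes aim to shrink that rotation count further, since each rotation costs orders of magnitude more than a homomorphic addition.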
Convolutions in Overdrive: Maliciously Secure Convolutions for MPC
Machine learning (ML) has seen a strong rise in popularity in recent years and has become an essential tool for research and industrial applications. Given the large amount of high-quality data needed and the often sensitive nature of ML data, privacy-preserving collaborative ML is of increasing importance. In this paper, we introduce new actively secure multiparty computation (MPC) protocols which are specially optimized for privacy-preserving machine learning applications. We concentrate on the optimization of (tensor) convolutions, which are among the most commonly used components in ML architectures, especially in convolutional neural networks but also in recurrent neural networks and transformers, and therefore have a major impact on the overall performance. Our approach is based on a generalized form of structured randomness that speeds up convolutions in a fast online phase. The structured randomness is generated with homomorphic encryption using adapted and newly constructed packing methods for convolutions, which might be of independent interest. Overall, our protocols extend the state-of-the-art Overdrive family of protocols (Keller et al., EUROCRYPT 2018). We implemented our protocols on top of MP-SPDZ (Keller, CCS 2020), resulting in a full-featured implementation with support for faster convolutions. Our evaluation shows that our protocols outperform state-of-the-art actively secure MPC protocols on ML tasks like evaluating ResNet50 by a factor of 3 or more. Benchmarks for depthwise convolutions show order-of-magnitude speed-ups compared to existing approaches.
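The "structured randomness" idea above can be illustrated with a convolution triple, the convolution analogue of a Beaver multiplication triple: because convolution is bilinear, a preprocessed random triple (a, b, c = a * b) lets two parties compute shares of conv(x, w) in the online phase using only cheap local convolutions and two openings. This is a minimal semi-honest sketch with a trusted dealer; the actual protocols generate the triple with homomorphic encryption and add active security, and all names here are ours:

```python
import random

P = 2**61 - 1  # toy prime modulus for the secret-sharing field

def conv(x, y):
    """Full 1-D convolution mod P (bilinear in x and y)."""
    out = [0] * (len(x) + len(y) - 1)
    for i, a in enumerate(x):
        for j, b in enumerate(y):
            out[i + j] = (out[i + j] + a * b) % P
    return out

def share(v):
    """Split vector v into two additive shares mod P."""
    r = [random.randrange(P) for _ in v]
    return r, [(a - b) % P for a, b in zip(v, r)]

def add(u, v): return [(a + b) % P for a, b in zip(u, v)]
def sub(u, v): return [(a - b) % P for a, b in zip(u, v)]

# Offline phase: a dealer samples random a, b and c = conv(a, b),
# then secret-shares the triple (the "structured randomness").
x, w = [3, 1, 4], [2, 5]            # private input and kernel
a = [random.randrange(P) for _ in x]
b = [random.randrange(P) for _ in w]
c = conv(a, b)
x0, x1 = share(x); w0, w1 = share(w)
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

# Online phase: the parties open the masked values eps = x - a and
# delta = w - b (these leak nothing, since a and b are uniform) ...
eps = add(sub(x0, a0), sub(x1, a1))
delta = add(sub(w0, b0), sub(w1, b1))
# ... and expand conv(x, w) = conv(eps + a, delta + b) locally on shares.
z0 = add(add(conv(eps, delta), conv(eps, b0)), add(conv(a0, delta), c0))
z1 = add(add(conv(eps, b1), conv(a1, delta)), c1)
assert add(z0, z1) == conv(x, w)
```

The online phase involves no cryptographic operations at all, which is why pushing the expensive work into triple generation makes the online convolution fast.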
Efficient Privacy-Preserving Machine Learning with Lightweight Trusted Hardware
In this paper, we propose a new secure machine learning inference platform
assisted by a small dedicated security processor, which is easier to protect
and deploy than today's TEEs integrated into high-performance processors.
processors. Our platform provides three main advantages over the
state-of-the-art:
(i) We achieve significant performance improvements compared to
state-of-the-art distributed Privacy-Preserving Machine Learning (PPML)
protocols, with only a small security processor that is comparable to a
discrete security chip such as the Trusted Platform Module (TPM) or on-chip
security subsystems in SoCs similar to the Apple enclave processor. In the
semi-honest setting with WAN/GPU, our scheme is 4X-63X faster than Falcon
(PoPETs'21) and AriaNN (PoPETs'22) and 3.8X-12X more communication efficient.
We achieve even higher performance improvements in the malicious setting.
(ii) Our platform guarantees security with abort against malicious
adversaries under honest majority assumption.
(iii) Our technique is not limited by the size of secure memory in a TEE and
can support high-capacity modern neural networks like ResNet18 and Transformer.
While previous work investigated the use of high-performance TEEs in PPML,
this work is the first to show that even tiny secure hardware with very
limited performance can be leveraged to significantly speed up distributed
PPML protocols if the protocol is carefully designed for lightweight trusted
hardware.
Comment: Submitted to IEEE S&P'24