SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning
Performing machine learning (ML) computation on private data while
maintaining data privacy, known as Privacy-preserving Machine Learning (PPML), is an
emerging field of research. Recently, PPML has seen a visible shift towards the
adoption of the Secure Outsourced Computation (SOC) paradigm due to the heavy
computation it entails. In the SOC paradigm, computation is outsourced to
a set of powerful, specially equipped servers that provide service on a
pay-per-use basis. In this work, we propose SWIFT, a robust PPML framework for
a range of ML algorithms in the SOC setting that guarantees output delivery to the
users irrespective of any adversarial behaviour. Robustness, a highly desirable
feature, encourages user participation without the fear of denial of service.
At the heart of our framework lies a highly-efficient, maliciously-secure,
three-party computation (3PC) over rings that provides guaranteed output
delivery (GOD) in the honest-majority setting. To the best of our knowledge,
SWIFT is the first robust and efficient PPML framework in the 3PC setting.
SWIFT is as fast as (and is strictly better in some cases than) the best-known
3PC framework BLAZE (Patra et al. NDSS'20), which only achieves fairness. We
extend our 3PC framework to four parties (4PC). In this regime, SWIFT is as
fast as the best-known fair 4PC framework Trident (Chaudhari et al., NDSS'20)
and twice as fast as the best-known robust 4PC framework FLASH (Byali et al.,
PETS'20).
We demonstrate our framework's practical relevance by benchmarking popular ML
algorithms such as Logistic Regression and deep Neural Networks such as VGG16
and LeNet, both over a 64-bit ring in a WAN setting. For deep NN, our results
testify to our claims that we provide an improved security guarantee while
incurring no additional overhead for 3PC and obtaining a 2x improvement for 4PC.
Comment: This article is the full and extended version of an article to appear in USENIX Security 202
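The 3PC protocols referenced above operate on values secret-shared over a ring. As a minimal illustration of the kind of honest-majority sharing such frameworks build on (a generic replicated secret sharing over Z_{2^64}, not SWIFT's actual sharing semantics; the function names are mine), consider:

```python
import secrets

MOD = 1 << 64  # the 64-bit ring used in the benchmarks

def share(x):
    """Split x into three additive shares; server i holds the pair
    (s[i], s[(i+1) % 3]), so any two honest servers can reconstruct x
    while a single corrupt server learns nothing about it."""
    s = [secrets.randbelow(MOD) for _ in range(2)]
    s.append((x - s[0] - s[1]) % MOD)
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]

def reconstruct(pairs, i, j):
    """Recover x from the holdings of any two distinct servers i and j:
    between them they hold all three additive shares."""
    held = {i: pairs[i][0], (i + 1) % 3: pairs[i][1],
            j: pairs[j][0], (j + 1) % 3: pairs[j][1]}
    return sum(held.values()) % MOD
```

On such shares, additions and multiplications by public constants are purely local; only multiplication of two shared values requires interaction, which is where the protocols above spend their effort.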
BLAZE: Blazing Fast Privacy-Preserving Machine Learning
Machine learning tools have demonstrated their potential in significant
sectors such as healthcare and finance by aiding in the derivation of useful
inferences. The sensitive and confidential nature of the data in such sectors
raises natural concerns about data privacy. This motivated the area of
Privacy-preserving Machine Learning (PPML), where the privacy of the data is
guaranteed. Typically, ML techniques require large computing power, which leads
clients with limited infrastructure to rely on Secure Outsourced
Computation (SOC). In the SOC setting, the computation is outsourced to a set of
specialized and powerful cloud servers and the service is availed on a
pay-per-use basis. In this work, we explore PPML techniques in the SOC setting
for widely used ML algorithms: Linear Regression, Logistic Regression, and
Neural Networks.
We propose BLAZE, a blazing fast PPML framework in the three server setting
tolerating one malicious corruption over a ring ($\mathbb{Z}_{2^{\ell}}$). BLAZE achieves the
stronger security guarantee of fairness (all honest servers get the output
whenever the corrupt server obtains the same). Leveraging an input-independent
preprocessing phase, BLAZE has a fast input-dependent online phase relying on
efficient PPML primitives such as: (i) A dot product protocol for which the
communication in the online phase is independent of the vector size, the first
of its kind in the three-server setting; (ii) a truncation method that
avoids evaluating the expensive Ripple Carry Adder (RCA) circuit and achieves
constant round complexity. This improves over the truncation method of ABY3
(Mohassel et al., CCS 2018), which uses an RCA and incurs round complexity
of the order of the depth of the RCA.
An extensive benchmarking of BLAZE for the aforementioned ML algorithms over
a 64-bit ring in both WAN and LAN settings shows massive improvements over
ABY3.
Comment: The Network and Distributed System Security Symposium (NDSS) 202
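The constant-round truncation idea can be sketched with a preprocessed pair (r, r >> d): the servers open the masked value z - r, shift the now-public value, and re-add the truncated mask. The sketch below keeps the pair in the clear and sidesteps modular wrap-around (the real protocol works on secret shares, handles signed wrapped values, and samples the mask independently of z); it only illustrates why no bit-level adder circuit is needed.

```python
import secrets

MOD = 1 << 64
D = 13  # number of fractional bits to drop after a fixed-point multiply

def preprocess_pair(bound):
    """Offline: sample a mask r together with its truncation r >> D.
    For this illustration r is sampled below `bound` so that z - r does
    not wrap modulo 2^64; the actual protocol deals with wrap-around."""
    r = secrets.randbelow(bound)
    return r, r >> D

def truncate(z, r, r_trunc):
    """Online, constant round: open c = z - r, shift the *public* value,
    and add back the preprocessed r >> D. No Ripple Carry Adder needed."""
    c = (z - r) % MOD
    return (r_trunc + (c >> D)) % MOD
```

The result equals z >> D up to an error of one unit in the last place (floor(c/2^D) + floor(r/2^D) can undershoot floor((c+r)/2^D) by one), which is the standard probabilistic-truncation error such schemes accept.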
Trident: Efficient 4PC Framework for Privacy Preserving Machine Learning
Machine learning has started to be deployed in fields such as healthcare and
finance, which propelled the need for and growth of privacy-preserving machine
learning (PPML). We propose an actively secure four-party protocol (4PC), and a
framework for PPML, showcasing its applications on four of the most
widely-known machine learning algorithms -- Linear Regression, Logistic
Regression, Neural Networks, and Convolutional Neural Networks. Our 4PC
protocol, tolerating at most one malicious corruption, is practically efficient
compared to existing works. We use the protocol to build an efficient
mixed-world framework (Trident) to switch between the Arithmetic, Boolean, and
Garbled worlds. Our framework operates in the offline-online paradigm over
rings and is instantiated in an outsourced setting for machine learning. Also,
we propose conversions especially relevant to privacy-preserving machine
learning. The highlights of our framework include using a minimal number of
expensive circuits overall as compared to ABY3. This can be seen in our
technique for truncation, which does not affect the online cost of
multiplication and removes the need for any circuits in the offline phase. Our
B2A conversion improves over prior work in both rounds and
communication complexity. The practicality of our
framework is argued through improvements in the benchmarking of the
aforementioned algorithms when compared with ABY3. All the protocols are
implemented over a 64-bit ring in both LAN and WAN settings. We observe
improvements in both the training and the prediction phases over LAN and WAN.
Comment: This work appeared at the 26th Annual Network and Distributed System
Security Symposium (NDSS) 2020. Update: An improved version of this framework
is available at arXiv:2106.0285
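Boolean-to-arithmetic (B2A) conversions of the kind mentioned above rest on a simple ring identity: a bit shared as b = b0 XOR b1 satisfies b = b0 + b1 - 2*b0*b1 over the ring. The sketch below evaluates the identity in the clear; in an actual protocol the product b0*b1 is computed under secret sharing using preprocessed material, and the helper names here are mine.

```python
MOD = 1 << 64

def b2a(b0, b1):
    """Arithmetic value of the XOR of two boolean shares, computed with
    ring operations only: b0 XOR b1 == b0 + b1 - 2*b0*b1 (mod 2^64)."""
    return (b0 + b1 - 2 * b0 * b1) % MOD

def bits_to_ring(bit_share_pairs):
    """Lift a bit-decomposed value to the ring:
    x = sum_i 2^i * (b0_i XOR b1_i), least-significant bit first."""
    return sum((1 << i) * b2a(b0, b1)
               for i, (b0, b1) in enumerate(bit_share_pairs)) % MOD
```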
Fast Actively Secure OT Extension for Short Secrets
Oblivious Transfer (OT) is one of the most fundamental cryptographic primitives, with widespread application in general secure multi-party computation (MPC) as well as in a number of tailored and special-purpose problems of interest such as private set intersection (PSI), private information retrieval (PIR), and contract signing, to name a few. Often, the instantiations of OT require prohibitive communication and computation complexity. OT extension protocols were introduced to compute a very large number of OTs, referred to as extended OTs, at the cost of a small number of OTs, referred to as seed OTs.
We present a fast OT extension protocol for small secrets in the active setting. Our protocol, when used to produce OTs on short secrets, outperforms all known actively secure OT extensions. It is built on the semi-honest secure extension protocol of Kolesnikov and Kumaresan from CRYPTO'13 (referred to as the KK13 protocol henceforth), which is the best-known OT extension for short secrets. At the heart of our protocol lies an efficient consistency-checking mechanism that relies on the linearity of Walsh-Hadamard (WH) codes. Asymptotically, our protocol adds a communication overhead over KK13 that is independent of the number of extended OTs and depends only on the computational and statistical security parameters. Concretely, when used to generate a large enough number of OTs, our protocol adds only a small communication and runtime overhead over the KK13 extension, both in LAN and WAN. The runtime overheads drop further when, in addition, the number of inputs of the sender in the extended OTs is large enough.
As an application of our proposed extension protocol, we show that it can be used to obtain the most efficient PSI protocol secure against a malicious receiver and a semi-honest sender.
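The Walsh-Hadamard linearity the consistency check relies on is easy to state: the codeword of a k-bit message m is the vector of inner products <m, x> mod 2 over all x in {0,1}^k, so XOR of messages maps to XOR of codewords. A small self-contained sketch of the code itself (illustrative only, not the paper's checking protocol):

```python
def wh_encode(m, k):
    """Walsh-Hadamard codeword of a k-bit message m: the bit <m, x> for
    every x in {0,1}^k, where <m, x> is the parity of m AND x."""
    return [bin(m & x).count("1") & 1 for x in range(1 << k)]
```

Linearity means a cheating sender's deviation shows up as a vector that is far from every codeword, which is what the WH-based consistency check exploits; every nonzero codeword has weight exactly 2^(k-1).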
HyFL: A Hybrid Framework For Private Federated Learning
Federated learning (FL) has emerged as an efficient approach for large-scale
distributed machine learning, ensuring data privacy by keeping training data on
client devices. However, recent research has highlighted vulnerabilities in FL,
including the potential disclosure of sensitive information through individual
model updates and even the aggregated global model. While much attention has
been given to clients' data privacy, limited research has addressed the issue
of global model privacy. Furthermore, local training on the client side has
opened avenues for malicious clients to launch powerful model poisoning
attacks. Unfortunately, no existing work has provided a comprehensive solution
that tackles all these issues. Therefore, we introduce HyFL, a hybrid framework
that enables data and global model privacy while facilitating large-scale
deployments. The foundation of HyFL is a unique combination of secure
multi-party computation (MPC) techniques with hierarchical federated learning.
One notable feature of HyFL is its capability to prevent malicious clients from
executing model poisoning attacks, confining them to less destructive data
poisoning alone. We evaluate HyFL's effectiveness using an open-source
PyTorch-based FL implementation integrated with Meta's CrypTen PPML framework.
Our performance evaluation demonstrates that HyFL is a promising solution for
trustworthy large-scale FL deployment.
Comment: HyFL combines private training and inference with secure aggregation
and hierarchical FL to provide end-to-end protection and to facilitate
large-scale global deployment.
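The MPC-based aggregation at the core of such designs can be illustrated with plain additive secret sharing: each client splits its (fixed-point encoded) update among non-colluding aggregation servers, each server sums the shares it holds, and only the global sum is ever reconstructed. This is a generic sketch, not HyFL's or CrypTen's actual protocol; the helper names are mine.

```python
import secrets

MOD = 1 << 32  # updates assumed already fixed-point encoded as ring elements

def share_update(update, n_servers=3):
    """Split one client's update vector into n_servers additive share
    vectors; any n_servers - 1 of them look uniformly random."""
    shares = [[secrets.randbelow(MOD) for _ in update]
              for _ in range(n_servers - 1)]
    last = [(u - sum(col)) % MOD for u, col in zip(update, zip(*shares))]
    return shares + [last]

def aggregate(per_client_shares):
    """Each server sums, coordinate-wise, the share vectors it received;
    combining the servers' results reveals only the aggregate update,
    never any individual client's update."""
    n_servers = len(per_client_shares[0])
    dim = len(per_client_shares[0][0])
    server_sums = [[sum(cl[s][i] for cl in per_client_shares) % MOD
                    for i in range(dim)] for s in range(n_servers)]
    return [sum(server_sums[s][i] for s in range(n_servers)) % MOD
            for i in range(dim)]
```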
FLASH: Fast and Robust Framework for Privacy-preserving Machine Learning
Privacy-preserving machine learning (PPML) via secure multi-party computation (MPC) has gained momentum in the recent past. Assuming a minimal network of pairwise private channels, we propose FLASH, an efficient four-party PPML framework over rings, the first of its kind in the regime of PPML frameworks, that achieves the strongest security notion of Guaranteed Output Delivery (all parties obtain the output irrespective of the adversary's behaviour). State-of-the-art ML frameworks such as ABY3 by Mohassel et al. (ACM CCS'18) and SecureNN by Wagh et al. (PETS'19) operate in the setting of three parties with one malicious corruption but achieve the weaker security guarantee of abort.
We demonstrate PPML with real-time efficiency using the following custom-made tools that overcome the limitations of the aforementioned state of the art: (a) a dot product protocol whose cost is independent of the vector size, unlike ABY3, SecureNN, and ASTRA by Chaudhari et al. (ACM CCSW'19), all of which have a linear dependence on the vector size; (b) a truncation method that is constant-round and free of circuits such as the Ripple Carry Adder (RCA), unlike ABY3, which uses these circuits and has round complexity of the order of their depth.
We then exhibit the application of our FLASH framework in the secure server-aided prediction of vital algorithms: Linear Regression, Logistic Regression, Deep Neural Networks, and Binarized Neural Networks. We substantiate our theoretical claims through improvements in the benchmarks of the aforementioned algorithms when compared with the current best framework, ABY3. All the protocols are implemented over a 64-bit ring in LAN and WAN. Our experiments on the MNIST dataset demonstrate throughput improvements over ABY3 in both LAN and WAN.
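The vector-size independence of such dot-product protocols comes from aggregating locally before communicating: with replicated shares, each party can compute its additive contribution to the whole inner product and then send a single (masked) ring element. The sketch below shows the local-aggregation step for generic 3-party replicated sharing, not FLASH's actual 4PC protocol; the function names are mine.

```python
import secrets

MOD = 1 << 64

def share(x):
    """Replicated sharing: x = s0 + s1 + s2 (mod 2^64);
    party i holds the pair (s_i, s_{i+1 mod 3})."""
    s = [secrets.randbelow(MOD) for _ in range(2)]
    s.append((x - s[0] - s[1]) % MOD)
    return s

def local_dot(i, xs, ys):
    """Party i's additive contribution to <x, y>, computed entirely
    locally. Summing over the whole vector *before* sending anything is
    what makes the communication a single ring element per party,
    independent of the vector length."""
    t = 0
    for sx, sy in zip(xs, ys):
        a0, a1 = sx[i], sx[(i + 1) % 3]
        b0, b1 = sy[i], sy[(i + 1) % 3]
        # the three cross terms assigned to party i; over all parties
        # they cover all nine products (s_i * t_j) of one coordinate
        t += a0 * b0 + a0 * b1 + a1 * b0
    return t % MOD
```

In the actual protocols each party re-randomizes its contribution before sending it, so the single transmitted element leaks nothing about the inputs.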
MPClan: Protocol Suite for Privacy-Conscious Computations
The growing volume of data being collected, and its analysis to provide better services, is creating worries about digital privacy. To address privacy concerns and provide practical solutions, the literature has relied on secure multiparty computation. However, recent research has mostly focused, owing to efficiency concerns, on the small-party honest-majority setting of up to four parties. In this work, we extend these strategies to support a larger number of participants in an honest-majority setting, with efficiency at centre stage.
Cast in the preprocessing paradigm, our semi-honest protocol improves the online complexity of the decade-old state-of-the-art protocol of Damgård and Nielsen (CRYPTO'07). In addition to having an improved online communication cost, we can shut down almost half of the parties in the online phase, thereby saving up to 50% of the system's operational costs. Our maliciously secure protocol enjoys similar benefits and requires only half of the parties, except for a one-time verification towards the end.
To showcase the practicality of the designed protocols, we benchmark popular applications such as deep neural networks, graph neural networks, genome sequence matching, and biometric matching using prototype implementations. Our improved protocols bring up to 60-80% savings in monetary cost over prior work.
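The preprocessing paradigm referenced above is classically embodied by Beaver multiplication triples: an input-independent offline phase distributes shares of a random (a, b, c = a*b), and the online phase needs only two cheap openings per multiplication. A generic two-party sketch under that classical scheme (not MPClan's multi-party protocol; the helper names are mine):

```python
import secrets

MOD = 1 << 64

def share2(x):
    """Two-party additive sharing of x over Z_{2^64}."""
    r = secrets.randbelow(MOD)
    return [r, (x - r) % MOD]

def beaver_triple():
    """Offline phase: shares of a random triple (a, b, c) with c = a*b."""
    a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
    return share2(a), share2(b), share2((a * b) % MOD)

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Online phase: open d = x - a and e = y - b (the only communication),
    then combine locally: z = d*e + d*b + e*a + c equals x*y."""
    d = (sum(x_sh) - sum(a_sh)) % MOD
    e = (sum(y_sh) - sum(b_sh)) % MOD
    z_sh = [(d * b_sh[i] + e * a_sh[i] + c_sh[i]) % MOD for i in range(2)]
    z_sh[0] = (z_sh[0] + d * e) % MOD  # public term, added by one party
    return z_sh
```

Since d and e are masked by the uniformly random a and b, opening them reveals nothing about x and y, which is what lets the heavy work move offline.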
ASTRA: High Throughput 3PC over Rings with Application to Secure Prediction
The concrete efficiency of secure computation has been the focus of many
recent works. In this work, we present concretely-efficient protocols for
secure three-party computation (3PC) over a ring of integers modulo a power of two,
tolerating one corruption, with both semi-honest and malicious security. Owing
to the fact that computation over rings emulates computation on real-world
system architectures, secure computation over rings has gained momentum of late.
Cast in the offline-online paradigm, our constructions present the most
efficient online phase in concrete terms. In the semi-honest setting, our
protocol requires the communication of fewer than three ring elements in total
per multiplication gate during the online phase, attaining a per-party cost of
less than one element. This is achieved for the first time in the regime of
3PC. In the malicious setting, our protocol requires a reduced number of
elements of communication per multiplication gate during the online phase,
beating the state-of-the-art protocol. Realized with both the security notions
of selective abort and fairness, the malicious protocol with fairness involves
slightly more communication than its counterpart with abort security, for the
output gates alone.
We apply our techniques from 3PC in the regime of secure server-aided
machine-learning (ML) inference for a range of prediction functions-- linear
regression, linear SVM regression, logistic regression, and linear SVM
classification. Our setting considers a model-owner with trained model
parameters and a client with a query, with the latter willing to learn the
prediction of her query based on the model parameters of the former. The inputs
and computation are outsourced to a set of three non-colluding servers. Our
constructions, catering to both the semi-honest and the malicious world,
invariably perform better than the existing constructions.
Comment: This article is the full and extended version of an article that appeared in ACM CCSW 201
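The offline-online split can be sketched with the masked-evaluation identity such constructions are built around: every wire v carries a public masked value m_v = v + λ_v, with the mask λ_v (and, for multiplication, the product λ_x·λ_y) prepared in the offline phase. The values below are in the clear purely to check the identity; in the protocol the λ terms are secret-shared among the servers.

```python
MOD = 1 << 64

def online_mult(m_x, m_y, lam_x, lam_y, lam_xy, lam_z):
    """Derive the masked output m_z = x*y + lam_z from the public masked
    inputs and the preprocessed mask terms. Expanding
    m_x*m_y = (x + lam_x)(y + lam_y) shows the cross terms cancel,
    leaving x*y + lam_z; only this light arithmetic runs online."""
    return (m_x * m_y - m_x * lam_y - m_y * lam_x + lam_xy + lam_z) % MOD
```

Because all multiplication-specific secret material (lam_xy, lam_z) is input-independent, it can be produced ahead of time, which is what makes the online phase so cheap.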
Intra-arterial Onyx Embolization of Vertebral Body Lesions
While Onyx embolization of cerebrospinal arteriovenous shunts is well established, clinical researchers continue to broaden its applications to other vascular lesions of the neuraxis. This report illustrates the application of Onyx (eV3, Plymouth, MN) embolization to vertebral body lesions, specifically a vertebral hemangioma and a renal cell carcinoma vertebral body metastatic lesion.