3 research outputs found
Commodity-Based 2PC for Arithmetic Circuits
We revisit the framework of Commodity-based Cryptography presented by Beaver (STOC '97) with a focus on updating the framework to fit with modern multiparty computation (MPC) protocols. We study the possibility of replacing the well-known preprocessing model with a commodity-based setting, where a set of independent servers (some of which may be corrupt) provide clients with correlated randomness. From this, the clients then distill correct and secure correlated randomness that they can use during the online phase of the MPC protocol.
Beaver showed how to do OT with semi-honest security in the commodity setting. We improve on Beaver's result as follows: in a model where one of two clients and a constant fraction of the servers may be maliciously corrupted, we obtain unconditionally secure multiplication triples and oblivious linear evaluations (OLEs) such that the amortized communication cost of one triple/OLE is a constant number of field elements (when the field is sufficiently large). We also report on results from an implementation of the OLE protocol. Finally, we suggest an approach to the practical realization of a commodity-based system where servers need no memory and can be accessed asynchronously by clients, yet a maliciously corrupt client still cannot obtain data it should not have access to.
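The multiplication triples mentioned above refer to Beaver's circuit-randomization technique: a pre-shared random triple (a, b, c) with c = a*b lets two parties multiply secret-shared values with only local computation plus one opening. The sketch below illustrates the idea only; the prime, sharing layout, and function names are our own choices, not the paper's protocol.

```python
import secrets

P = 2**61 - 1  # illustrative prime field; not taken from the paper

def share(x):
    """Additively secret-share x between two parties."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def beaver_mul(x_sh, y_sh, triple_sh):
    """Multiply shared x and y using a Beaver triple (a, b, c = a*b).

    The parties open d = x - a and e = y - b (one round of
    communication in a real protocol), then derive shares of x*y
    locally: x*y = c + d*b + e*a + d*e.
    """
    (x0, x1), (y0, y1) = x_sh, y_sh
    (a0, a1), (b0, b1), (c0, c1) = triple_sh
    d = (x0 + x1 - a0 - a1) % P  # opened value x - a
    e = (y0 + y1 - b0 - b1) % P  # opened value y - b
    # Only one party adds the public d*e term, so shares stay additive.
    z0 = (c0 + d * b0 + e * a0 + d * e) % P
    z1 = (c1 + d * b1 + e * a1) % P
    return z0, z1

# A random triple -- produced by the commodity servers in the paper's setting.
a, b = secrets.randbelow(P), secrets.randbelow(P)
triple = (share(a), share(b), share((a * b) % P))

x, y = 12345, 67890
z0, z1 = beaver_mul(share(x), share(y), triple)
assert (z0 + z1) % P == (x * y) % P
```

Note that the triple is consumed by the multiplication: reusing (a, b, c) would leak information through the opened d and e values, which is why protocols need a fresh supply of triples from preprocessing or, as here, from commodity servers.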
HyFL: A Hybrid Framework For Private Federated Learning
Federated learning (FL) has emerged as an efficient approach for large-scale
distributed machine learning, ensuring data privacy by keeping training data on
client devices. However, recent research has highlighted vulnerabilities in FL,
including the potential disclosure of sensitive information through individual
model updates and even the aggregated global model. While much attention has
been given to clients' data privacy, limited research has addressed the issue
of global model privacy. Furthermore, local training at the client's side has
opened avenues for malicious clients to launch powerful model poisoning
attacks. Unfortunately, no existing work has provided a comprehensive solution
that tackles all these issues. Therefore, we introduce HyFL, a hybrid framework
that enables data and global model privacy while facilitating large-scale
deployments. The foundation of HyFL is a unique combination of secure
multi-party computation (MPC) techniques with hierarchical federated learning.
One notable feature of HyFL is its capability to prevent malicious clients from
executing model poisoning attacks, confining them to less destructive data
poisoning alone. We evaluate HyFL's effectiveness using an open-source
PyTorch-based FL implementation integrated with Meta's CrypTen PPML framework.
Our performance evaluation demonstrates that HyFL is a promising solution for
trustworthy large-scale FL deployment.
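A core building block in MPC-based aggregation of model updates is that the server should only learn the sum of client updates, never an individual one. The sketch below shows the generic pairwise-masking idea behind secure aggregation; it is our own minimal illustration, not HyFL's actual CrypTen-based protocol, and the modulus and function names are assumptions.

```python
import secrets

M = 2**32  # illustrative modulus for masked integer updates

def pairwise_masks(n):
    """Generate cancelling masks: party i adds s[i][j] for j > i and
    subtracts s[j][i] for j < i, so the masks sum to zero mod M."""
    s = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            s[i][j] = secrets.randbelow(M)  # pairwise shared secret
    masks = []
    for i in range(n):
        m = sum(s[i][j] for j in range(i + 1, n))
        m -= sum(s[j][i] for j in range(i))
        masks.append(m % M)
    return masks

def secure_aggregate(updates):
    """The aggregator sees only masked updates; masks cancel in the sum."""
    masks = pairwise_masks(len(updates))
    masked = [(u + m) % M for u, m in zip(updates, masks)]
    return sum(masked) % M

assert secure_aggregate([10, 20, 30, 40]) == 100
```

In a real deployment each pairwise secret would be derived from a key agreement rather than generated centrally, and the updates would be quantized model-weight vectors rather than single integers.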
TaaS: Commodity MPC via Triples-as-a-Service
We propose a mechanism for an m-party dishonest majority Multi-Party Computation (MPC) protocol to obtain the required pre-processing data (called Beaver Triples) from a subset of a set of cloud service providers, providing a form of TaaS (Triples-as-a-Service). The service providers used by the MPC computing parties can be selected dynamically at the point at which the MPC computation is run, and the interaction between the MPC parties and the TaaS parties is via a single round of communication, logged on a public ledger. The TaaS is itself instantiated as an MPC protocol which produces the triples for a different access structure. Thus our protocol also acts as a translation mechanism between the secret sharing used by one MPC protocol and the other.
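The translation between secret-sharing schemes can be illustrated with plain additive resharing: each party holding a share of a value re-shares its share toward the new set of parties, who sum the pieces they receive. This is a minimal sketch under our own assumptions (additive sharing on both sides, illustrative prime); the paper's access structures and triple-production protocol are more involved.

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus

def additive_share(x, n):
    """Split x into n additive shares mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reshare(shares, n_new):
    """Translate a sharing among len(shares) parties into a sharing
    among n_new parties: each old party re-shares its own share, and
    each new party sums the pieces addressed to it."""
    pieces = [additive_share(s, n_new) for s in shares]
    return [sum(p[i] for p in pieces) % P for i in range(n_new)]

x = 424242
provider_sharing = additive_share(x, 3)   # sharing held by TaaS providers
computing_sharing = reshare(provider_sharing, 5)  # sharing for MPC parties
assert sum(computing_sharing) % P == x
```

Applied component-wise to the three values of a Beaver triple (a, b, c), this kind of resharing preserves the multiplicative relation c = a*b, since each value is translated without being changed.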