Incentivized Outsourced Computation Resistant to Malicious Contractors
With the rise of Internet computing, outsourcing difficult computational tasks has become an important need. Yet once the computation is outsourced, the job owner loses control, so it is crucial to provide guarantees against malicious actions of the contractors involved. Cryptographers have an almost perfect solution to this problem, called fully homomorphic encryption. This solution hides both the job itself and any inputs to it from the contractors, while still enabling them to perform the necessary computation over the encrypted data. This is a very strong security guarantee, but the current constructions are highly impractical.
In this paper, we propose a different approach to outsourcing computational tasks. We are not concerned with hiding the job or the data; our main task is to ensure that the job is computed correctly. We also observe that not all contractors are malicious; rather, the majority are rational. Thus, our approach brings together elements from cryptography, game theory, and mechanism design. We achieve the following results: (1) we incentivize all rational contractors to perform the outsourced job correctly; (2) we guarantee a high fraction (e.g., 99.9%) of correct results even in the presence of a relatively large fraction (e.g., 33%) of malicious, irrational contractors in the system; and (3) we show that our system achieves these guarantees while being almost as efficient as running the job locally (e.g., with only 3% overhead). Such a high correctness guarantee was not previously known to be achievable with such efficiency.
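To make the flavor of such incentive mechanisms concrete, here is a minimal, hypothetical sketch (not the paper's actual mechanism) of redundancy-based outsourcing: each job is replicated to a few randomly chosen contractors, the majority answer is accepted, and only contractors agreeing with the majority are paid, so honest computation is the profit-maximizing strategy for rational contractors.

```python
import random
from collections import Counter

# Toy illustration (not the paper's mechanism): each job is sent to a few
# contractors; the majority answer is accepted and only agreeing contractors
# are paid, so rational contractors maximize profit by computing honestly.

def run_job(x):
    return x * x  # the "correct" computation

def contractor_answer(x, malicious):
    # A malicious contractor returns a wrong result; a rational one computes honestly.
    return run_job(x) + 1 if malicious else run_job(x)

def outsource(x, contractors, redundancy=3, reward=1.0):
    chosen = random.sample(contractors, redundancy)
    answers = [(c, contractor_answer(x, c["malicious"])) for c in chosen]
    majority, _ = Counter(a for _, a in answers).most_common(1)[0]
    for c, a in answers:
        if a == majority:
            c["payoff"] += reward  # only contractors agreeing with the majority are paid
    return majority

if __name__ == "__main__":
    random.seed(0)
    pool = [{"malicious": i < 33, "payoff": 0.0} for i in range(100)]  # ~33% malicious
    correct = sum(outsource(x, pool) == run_job(x) for x in range(10000))
    print(f"correct results: {correct / 10000:.3%}")
```

This toy scheme trades redundancy for correctness far less efficiently than the paper claims for its construction; it only illustrates how payment tied to agreement aligns rational contractors with honest computation.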
Fast Optimistically Fair Cut-and-Choose 2PC
Secure two-party computation (2PC) is a well-studied problem with many real-world applications. Due to Cleve's result on the general impossibility of fairness, however, the state-of-the-art solutions only provide security with abort. We investigate fairness for 2PC in the presence of a trusted Arbiter, in an optimistic setting where the Arbiter is not involved if the parties act fairly. Existing fair solutions in this setting are far less efficient than the fastest unfair 2PC.
We close this efficiency gap by designing protocols for fair 2PC with covert and malicious security that have competitive performance with the state-of-the-art unfair constructions. In particular, our protocols only require the exchange of a few extra messages whose sizes depend only on the output length; the Arbiter's load is independent of the computation size; and a malicious Arbiter can only break fairness, but not covert/malicious security, even if he colludes with a party. Finally, our solutions are designed to work with the state-of-the-art optimizations applicable to garbled circuits and cut-and-choose 2PC, such as free-XOR, half-gates, and the cheating-recovery paradigm.
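For intuition, the optimistic-Arbiter pattern can be sketched as follows (a toy illustration with assumed names, not the paper's cut-and-choose protocol): each party's output is locked with a short key escrowed at the Arbiter, the keys are exchanged directly in the optimistic case, and the Arbiter only releases an escrowed key when a dispute is raised, which keeps its load independent of the computation size.

```python
import secrets

# Toy sketch of the optimistic third-party pattern (not the paper's protocol):
# an output is "locked" with a short key that is escrowed with the Arbiter, so
# the Arbiter is only contacted when the other party aborts after learning its
# own output. Its workload is a key lookup, independent of the computation.

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, key))

class Arbiter:
    """Holds escrowed keys; releases a key only when a dispute is raised."""
    def __init__(self):
        self._escrow = {}

    def escrow(self, session: str, key: bytes):
        self._escrow[session] = key

    def resolve(self, session: str) -> bytes:
        # In a real protocol the Arbiter would first verify evidence of the dispute.
        return self._escrow[session]

if __name__ == "__main__":
    arbiter = Arbiter()
    key = secrets.token_bytes(16)
    output_for_alice = b"alice's output  "   # padded to the key length
    locked = xor(output_for_alice, key)
    arbiter.escrow("session-42", key)        # Bob escrows the key up front

    # Optimistic case: Bob sends the key directly and the Arbiter stays idle.
    # Pessimistic case: Bob aborts, so Alice asks the Arbiter to release the key.
    recovered = xor(locked, arbiter.resolve("session-42"))
    print(recovered.decode())
```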
Versatile ABS: Usage Limited, Revocable, Threshold Traceable, Authority Hiding, Decentralized Attribute Based Signatures
In this work, we revisit multi-authority attribute-based signatures (MA-ABS) and elaborate on the limitations of current MA-ABS schemes in providing a hard-to-achieve (yet very useful) combination of features: decentralization, periodic usage limitation, dynamic revocation of users and attributes, reliable threshold traceability, and authority hiding. In contrast to previous work, we disallow even the authorities from de-anonymizing an ABS, and only allow joint tracing by threshold-many tracing authorities. Moreover, in our solution, the authorities cannot sign on behalf of users. In this context, we first define a useful and practical attribute-based signature scheme (versatile ABS, or VABS) along with the necessary operations and security games to accomplish our targeted functionalities. Second, we provide the first VABS scheme in a modular design, such that any application can utilize a subset of the features endowed by our VABS while omitting the computation and communication overhead of the features that are not needed. Third, we prove the security of our VABS scheme based on standard assumptions, i.e., Strong RSA, DDH, and SDDHI, in the random oracle model. Fourth, we implement our signature generation and verification algorithms and show that they are practical (for a VABS with 20 attributes, Sign and Verify times are below 1.2 seconds, and the generated signature size is below 0.5 MB).
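As an aside, the threshold-traceability requirement (no single authority can de-anonymize a signer; any threshold-many tracing authorities can do so jointly) is the kind of property typically built from threshold secret sharing. The following self-contained sketch, which is illustrative only and not the VABS construction, splits a tracing key with Shamir secret sharing so that only threshold-many shares reconstruct it.

```python
import random

# Illustrative only (not the VABS scheme): a tracing key is split among tracing
# authorities so that any t of them can jointly reconstruct it, while fewer
# shares reveal nothing useful. Shamir secret sharing over a prime field.

P = 2**61 - 1  # Mersenne prime used as the field modulus

def share(secret: int, t: int, n: int):
    # Random degree-(t-1) polynomial with the secret as the constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0.
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

if __name__ == "__main__":
    tracing_key = 123456789
    shares = share(tracing_key, t=3, n=5)          # 5 tracing authorities, threshold 3
    print(reconstruct(shares[:3]) == tracing_key)  # any 3 shares suffice -> True
    print(reconstruct(shares[:2]) == tracing_key)  # 2 shares almost certainly fail -> False
```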
Byzantines can also Learn from History: Fall of Centered Clipping in Federated Learning
The increasing popularity of the federated learning (FL) framework, due to its success in a wide range of collaborative learning tasks, also induces certain security concerns. Among many vulnerabilities, the risk of Byzantine attacks is of particular concern, which refers to the possibility of malicious clients participating in the learning process. Hence, a crucial objective in FL is to neutralize the potential impact of Byzantine attacks and to ensure that the final model is trustworthy. It has been observed that the higher the variance among the clients' models/updates, the more space there is for Byzantine attacks to be hidden. As a consequence, by utilizing momentum, and thus reducing the variance, it is possible to weaken the strength of known Byzantine attacks. The centered clipping (CC) framework has further shown that the momentum term from the previous iteration, besides reducing the variance, can be used as a reference point to neutralize Byzantine attacks better. In this work, we first expose vulnerabilities of the CC framework and introduce a novel attack strategy that can circumvent the defences of CC and other robust aggregators, reducing their test accuracy by up to 33% in best-case scenarios in image classification tasks. Then, we propose a new robust and fast defence mechanism that is effective against the proposed and other existing Byzantine attacks.
Comment: IEEE Transactions on Information Forensics and Security 202
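For context, the centered clipping rule referenced above aggregates client updates by clipping each update around the previous round's aggregate rather than around zero, so a single extreme update has bounded influence. A minimal NumPy sketch of this aggregation rule (with an assumed clipping radius tau and iteration count) follows.

```python
import numpy as np

def centered_clipping(updates, v_prev, tau=1.0, iters=1):
    """Minimal sketch of the centered clipping (CC) aggregation rule:
    each client update is clipped around the previous aggregate v_prev,
    which limits how far a Byzantine update can pull the result."""
    v = v_prev.copy()
    for _ in range(iters):
        deltas = []
        for x in updates:
            diff = x - v
            norm = np.linalg.norm(diff)
            scale = min(1.0, tau / norm) if norm > 0 else 1.0
            deltas.append(diff * scale)
        v = v + np.mean(deltas, axis=0)
    return v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(1.0, 0.1, size=10) for _ in range(9)]
    byzantine = [np.full(10, 100.0)]   # one extreme outlier update
    v0 = np.zeros(10)                  # previous-round aggregate as reference point
    agg = centered_clipping(honest + byzantine, v0)
    print("distance from honest mean:", np.linalg.norm(agg - np.ones(10)))
```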
UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning
Training deep neural networks often forces users to work in a distributed or outsourced setting, accompanied by privacy concerns. Split learning aims to address this concern by distributing the model between a client and a server. The scheme supposedly provides privacy, since the server cannot see the clients' models and inputs. We show that this is not true via two novel attacks. (1) We show that an honest-but-curious split learning server, equipped only with knowledge of the client neural network architecture, can recover the input samples and obtain a functionally similar model to the client model, without being detected. (2) We show that if the client keeps hidden only the output layer of the model to "protect" the private labels, the honest-but-curious server can infer the labels with perfect accuracy. We test our attacks using various benchmark datasets and against proposed privacy-enhancing extensions to split learning. Our results show that plaintext split learning can pose serious risks, ranging from data (input) privacy to intellectual property (model parameters), and provide no more than a false sense of security.
Comment: Proceedings of the 21st Workshop on Privacy in the Electronic Society (WPES '22), November 7, 2022, Los Angeles, CA, US
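The core inversion idea can be illustrated briefly: knowing only the client-side architecture, the server optimizes a candidate input together with a clone of the client layers until the clone reproduces the intermediate activations it receives from the client. The PyTorch sketch below (toy architecture and hyperparameters assumed; not the paper's exact attack or evaluation) captures this idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch of the general inversion idea (not the paper's exact attack):
# the server knows the client-side architecture, receives the client's intermediate
# activations, and jointly optimizes a candidate input and a clone of the client
# layers so that the clone reproduces those activations.

torch.manual_seed(0)
client_arch = lambda: nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))

client_model = client_arch()                 # private to the client
x_private = torch.randn(1, 32)               # private input
smashed = client_model(x_private).detach()   # what the server actually receives

clone = client_arch()                        # server's guess of the client model
x_guess = torch.zeros(1, 32, requires_grad=True)
opt = torch.optim.Adam([x_guess, *clone.parameters()], lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = F.mse_loss(clone(x_guess), smashed)  # match the received activations
    loss.backward()
    opt.step()

print("activation match (MSE):", loss.item())
```

In this toy, fully connected setting the recovered input is generally not unique; the attack described in the abstract targets image models, where architectural structure and regularization make the recovered samples meaningful.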
SplitGuard: Detecting and Mitigating Training-Hijacking Attacks in Split Learning
Distributed deep learning frameworks, such as split learning, have recently been proposed to enable a group of participants to collaboratively train a deep neural network without sharing their raw data. Split learning in particular achieves this goal by dividing a neural network between a client and a server, so that the client computes the initial set of layers and the server computes the rest. However, this method introduces a unique attack vector for a malicious server attempting to steal the client's private data: the server can direct the client model towards learning a task of its choice. With a concrete example already proposed, such training-hijacking attacks present a significant risk for the data privacy of split learning clients.
In this paper, we propose SplitGuard, a method by which a split learning client can detect whether it is being targeted by a training-hijacking attack. We experimentally evaluate its effectiveness and discuss in detail various points related to its use. We conclude that SplitGuard can effectively detect training-hijacking attacks while minimizing the amount of information recovered by the adversaries.
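One way such detection can work, sketched below in simplified form (assumed statistics and threshold; not SplitGuard's exact score), is for the client to occasionally train on batches with randomized labels: against an honest server the returned gradients for these fake batches diverge from those of real batches, whereas a hijacking server's objective ignores the labels, making the two groups look suspiciously similar.

```python
import numpy as np

# Simplified illustration of the detection idea (not SplitGuard's exact score):
# the client occasionally trains on batches with randomized ("fake") labels.
# Against an honest server the gradients returned for fake batches diverge from
# those for real batches; under a training-hijacking attack the server's hidden
# objective ignores the labels, so the two groups look alike.

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def hijacking_suspected(real_grads, fake_grads, threshold=0.9):
    r = np.mean(real_grads, axis=0)
    f = np.mean(fake_grads, axis=0)
    # High similarity means the randomized labels had little effect on the
    # returned gradients, which is suspicious.
    return cosine(r, f) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = rng.normal(size=100)
    real_grads = [base + rng.normal(scale=0.1, size=100) for _ in range(20)]
    fake_grads = [-base + rng.normal(scale=0.1, size=100) for _ in range(20)]
    print(hijacking_suspected(real_grads, fake_grads))  # False: honest server behavior
    print(hijacking_suspected(real_grads, real_grads))  # True: fake batches had no effect
```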