
    On Fairness in Secure Computation

    Secure computation is a fundamental problem in modern cryptography in which multiple parties join to compute a function of their private inputs without revealing anything beyond the output of the function. A series of very strong results in the 1980s demonstrated that any polynomial-time function can be computed while guaranteeing essentially every desired security property. The only exception is the fairness property, which states that no party should receive its output from the computation unless all parties receive their outputs. While it was shown that fairness can be achieved whenever a majority of the players are honest, it was also shown that fairness is impossible to achieve in general when half or more of the players are dishonest. Indeed, it was proven that even Boolean XOR cannot be computed fairly by two parties. The fairness property is both natural and important, and as such it was one of the first questions addressed in modern cryptography (in the context of signature exchange). One contribution of this thesis is to survey the many approaches that have been used to guarantee different notions of partial fairness. We then revisit the topic of fairness within a modern security framework for secure computation. We demonstrate that, despite the strong impossibility result mentioned above, certain interesting functions can be computed fairly, even when half (or more) of the parties are malicious. We also provide a new notion of partial fairness, demonstrate the feasibility of achieving this notion for a large class of functions, and show impossibility for certain functions outside this class. We consider fairness in the presence of rational adversaries, and, finally, we further study the difficulty of achieving fairness by exploring how much external help is necessary for enabling fair secure computation.
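
    For context, the impossibility referred to above is usually stated quantitatively via coin flipping. A compact form of Cleve's bound (our summary, not text from the thesis):

```latex
% Cleve (STOC 1986): in any r-round two-party coin-flipping protocol,
% one of the two parties can bias the common output bit by aborting early.
\exists\, i \in \{1,2\}:\quad
\Bigl|\,\Pr[\text{output}=1 \mid P_i \text{ aborts optimally}] - \tfrac{1}{2}\,\Bigr|
\;\ge\; \Omega\!\Bigl(\tfrac{1}{r}\Bigr)
```

    Since a fair protocol for XOR on uniformly random inputs would yield an unbiased shared coin, this bound is what rules out fair two-party XOR.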

    Secure Multiparty Computation with Partial Fairness

    A protocol for computing a functionality is secure if an adversary in this protocol cannot cause more harm than in an ideal computation where parties give their inputs to a trusted party which returns the output of the functionality to all parties. In particular, in the ideal model such computation is fair -- all parties get the output. Cleve (STOC 1986) proved that, in general, fairness is not possible without an honest majority. To overcome this impossibility, Gordon and Katz (Eurocrypt 2010) suggested a relaxed definition -- 1/p-secure computation -- which guarantees partial fairness. For two parties, they construct 1/p-secure protocols for functionalities for which the size of either the domain or the range is polynomial (in the security parameter). Gordon and Katz ask whether their results can be extended to multiparty protocols. We study 1/p-secure protocols in the multiparty setting for general functionalities. Our main result is a construction of 1/p-secure protocols for a constant number of parties, provided that fewer than 2/3 of the parties are corrupt. Our protocols require that either (1) the functionality is deterministic and the size of the domain is polynomial (in the security parameter), or (2) the functionality can be randomized and the size of the range is polynomial. If the size of the domain is constant and the functionality is deterministic, then our protocol is efficient even when the number of parties is O(log log n) (where n is the security parameter). On the negative side, we show that when the number of parties is super-constant, 1/p-secure protocols are not possible when the size of the domain is polynomial.
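
    The 1/p-security notion can be made precise as follows (our restatement of the standard Gordon--Katz-style definition, not verbatim from the paper): the real execution need only be indistinguishable from the ideal fair execution up to advantage 1/p.

```latex
% Ensembles X and Y are 1/p-indistinguishable if every PPT
% distinguisher D has advantage at most 1/p(n) plus a negligible term:
\bigl|\Pr[D(X_n)=1] - \Pr[D(Y_n)=1]\bigr| \;\le\; \frac{1}{p(n)} + \mathrm{negl}(n)
```

    A protocol is then 1/p-secure if its real-world execution is 1/p-indistinguishable from the ideal computation with a trusted party, which in particular bounds how much unfair advantage an aborting adversary can gain.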

    Improvements to Secure Computation with Penalties

    Motivated by the impossibility of achieving fairness in secure computation [Cleve, STOC 1986], recent works study a model of fairness in which an adversarial party that aborts on receiving output is forced to pay a mutually predefined monetary penalty to every other party that did not receive the output. These works show how to design protocols for secure computation with penalties that tolerate an arbitrary number of corruptions. In this work, we improve the efficiency of protocols for secure computation with penalties in a hybrid model where parties have access to the “claim-or-refund” transaction functionality. Our first improvement is for the ladder protocol of Bentov and Kumaresan (Crypto 2014), where we improve the dependence of the script complexity of the protocol (which corresponds to miner verification load and also space on the blockchain) on the number of parties from quadratic to linear (and in particular, is completely independent of the underlying function). Our second improvement is for the see-saw protocol of Kumaresan et al. (CCS 2015), where we reduce the total number of claim-or-refund transactions and also the script complexity from quadratic to linear in the number of parties. We also present a ‘dual-mode’ protocol that offers different guarantees depending on the number of corrupt parties: (1) when t < n/2 parties are corrupt, this protocol guarantees that all honest parties receive the output (i.e., full security), and (2) when s ≥ n/2 parties are corrupt, this protocol guarantees fairness with penalties (i.e., if the adversary gets the output, then either the honest parties get output as well or they get compensation via penalizing the adversary). The above protocol works as long as t + s < n, matching the bound obtained for secure computation protocols in the standard model (i.e., replacing “fairness with penalties” with “security-with-abort” (full security except fairness)) by Ishai et al. (SICOMP 2011). Keywords: Bitcoin, secure computation, fairness.
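
    To make the hybrid model concrete, here is a toy Python model of the claim-or-refund functionality (the class name, fields, and simplifications are ours; real instantiations are Bitcoin transactions whose script checks the witness):

```python
class ClaimOrRefund:
    """Toy model of the claim-or-refund functionality F_CR: a sender
    deposits `coins` that a receiver can claim before `deadline` by
    revealing a witness w with predicate(w) == True; otherwise the
    deposit is refunded to the sender. (Simplified: no signatures,
    a single deposit, and a trusted in-memory ledger object.)"""

    def __init__(self, sender, receiver, coins, predicate, deadline):
        self.sender, self.receiver = sender, receiver
        self.coins, self.predicate = coins, predicate
        self.deadline, self.settled = deadline, False

    def claim(self, witness, now):
        # Receiver claims by publishing a valid witness in time;
        # the witness then becomes publicly visible on the ledger.
        if not self.settled and now <= self.deadline and self.predicate(witness):
            self.settled = True
            return (self.receiver, self.coins, witness)
        return None

    def refund(self, now):
        # After the deadline, the sender recovers an unclaimed deposit.
        if not self.settled and now > self.deadline:
            self.settled = True
            return (self.sender, self.coins)
        return None
```

    The script complexity discussed above corresponds to the cost of the predicate that miners must verify on-chain.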

    Secure and fair two-party computation

    Consider several parties that do not trust each other, yet wish to correctly compute some common function of their local inputs while keeping these inputs private. This problem is known as "Secure Multi-Party Computation", and was introduced by Andrew Yao in 1982. Secure multi-party computation has real-world applications such as electronic auctions, electronic voting and fingerprinting. In this thesis we consider the case where there are only two parties involved, known as "Secure Two-Party Computation". If there is a trusted third party called Carol, then the problem is straightforward: the participating parties hand their inputs to Carol, who computes the common function correctly and returns the outputs to the corresponding parties. The goal is to achieve (almost) the same result when there is no trusted third party. Cryptographic protocols are designed to solve these kinds of problems. These protocols are analyzed within an appropriate model in which the behavior of the parties is structured. The basic level is the Semi-Honest Model, where parties are assumed to follow the protocol specification but may later derive additional information from the messages received so far. A more realistic model is the so-called Malicious Model. The common approach is to first analyze a protocol in the semi-honest model and then extend it to the malicious model. Any cryptographic protocol for secure two-party computation must satisfy the following security requirements: correctness, privacy and fairness. It must guarantee the correctness of the result while preserving the privacy of the parties' inputs, even if one of the parties is malicious and behaves arbitrarily throughout the protocol. It must also guarantee fairness, which roughly means that whenever a party aborts the protocol prematurely, he or she should not have any advantage over the other party in discovering the output. The main question for researchers is to construct new protocols that achieve the above-mentioned goals for secure multi-party computation. Of course, such protocols must be secure in a given model, as well as be as efficient as possible. In 1986, Yao presented the first general protocol for secure two-party computation, applicable only to the semi-honest model. It uses a tool called a "Garbled Circuit". Yao's protocol uses the underlying primitives ("Pseudorandom Generator" and "Oblivious Transfer") as black boxes, which leads to efficient constructions. After Yao's work, many variants and improvements have been proposed for the malicious model. In this thesis, we design several new protocols for secure two-party computation based on Yao's garbled circuit. Before presenting the details of our new designs, we first show several weaknesses, security flaws or problems with existing protocols in the literature. We first work in the semi-honest model and then extend our designs to the malicious model with new protocols. Finally, we add fairness to our protocol. Oblivious transfer (OT) is a fundamental primitive in modern cryptography that is useful for implementing protocols for secure multi-party computation. We study several variants of oblivious transfer in this thesis. We present a new protocol for the so-called "Committed OT". This protocol is very efficient, comparing well with the most efficient committed OT protocols in the literature.
The above-mentioned flaw in the use of OT can be fixed with our committed oblivious transfer protocol. Furthermore, it is more general than all previous protocols, and is therefore of independent interest. We also deal with fairness in this thesis. For protocols based on garbled circuits, so far only Benny Pinkas has presented a protocol in the literature that achieves fairness. We show a subtle problem with this protocol whereby the privacy of one party's inputs can be compromised. We describe this problem, which is in fact related to fairness, in detail, and finally propose a more efficient scheme that does achieve fairness.
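
    Since the designs above all build on Yao's garbled circuits, a toy garbling of a single AND gate may help fix ideas. This is our own minimal sketch, using trial decryption with a zero-padding check rather than the point-and-permute and related optimizations real protocols use:

```python
import os, random
from hashlib import sha256

KEYLEN = 16  # bytes per wire label
ZEROS = b"\x00" * KEYLEN  # redundancy so the evaluator recognizes the right row

def H(ka, kb):
    # Hash the two input labels to a 32-byte pad (KEYLEN label + KEYLEN check).
    return sha256(ka + kb).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    """Garble one AND gate: two random labels per wire (index 0 encodes
    False, index 1 encodes True), one ciphertext per input combination,
    rows shuffled so their order reveals nothing."""
    a = [os.urandom(KEYLEN) for _ in range(2)]
    b = [os.urandom(KEYLEN) for _ in range(2)]
    c = [os.urandom(KEYLEN) for _ in range(2)]
    rows = [xor(H(a[x], b[y]), c[x & y] + ZEROS) for x in (0, 1) for y in (0, 1)]
    random.shuffle(rows)
    return a, b, c, rows

def evaluate(rows, ka, kb):
    """Given one label per input wire, trial-decrypt each row and keep the
    plaintext whose check half is all zeros."""
    for row in rows:
        plain = xor(row, H(ka, kb))
        if plain[KEYLEN:] == ZEROS:
            return plain[:KEYLEN]  # the output-wire label for AND(x, y)
    raise ValueError("no valid row")

# Example: evaluating with the labels for x=1, y=1 yields the True label c[1].
a, b, c, rows = garble_and_gate()
assert evaluate(rows, a[1], b[1]) == c[1]
```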

    Secure Computation with Non-Equivalent Penalties in Constant Rounds

    It is known that Bitcoin makes it possible to achieve fairness in secure computation by imposing a monetary penalty on adversarial parties. This functionality is called secure computation with penalties. Bentov and Kumaresan (Crypto 2014) showed that it can be realized with O(n) rounds and O(n) broadcasts for any function, where n is the number of parties. Kumaresan and Bentov (CCS 2014) posed an open question: "Is it possible to design secure computation with penalties that needs only O(1) rounds and O(n) broadcasts?" In this work, we introduce secure computation with non-equivalent penalties, and design a protocol achieving this functionality with only O(1) rounds and O(n) broadcasts. The new functionality is the same as secure computation with penalties except that every honest party receives more than a predetermined amount of compensation, whereas the original requires that every honest party receive the same amount of compensation. In particular, the two coincide if all parties behave honestly. Thus, our result gives a partial answer to the open problem via a slight and natural modification of the functionality.
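
    The relaxation is easy to state concretely. A toy check of the non-equivalent guarantee (our own illustration; the names and amounts are made up):

```python
def penalties_ok(payouts, honest_parties, q):
    """Non-equivalent penalties: if the adversary aborts after learning
    the output, every honest party must receive at least the
    predetermined compensation q -- but, unlike the original
    functionality, the amounts need not be equal across honest parties."""
    return all(payouts[p] >= q for p in honest_parties)

# Example: q = 1 coin; honest parties receive 1, 1.5, and 2 coins.
assert penalties_ok({"P1": 1.0, "P2": 1.5, "P3": 2.0}, ["P1", "P2", "P3"], q=1.0)
```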

    Fairness versus Guaranteed Output Delivery in Secure Multiparty Computation

    In the setting of secure multiparty computation, a set of parties wish to compute a joint function of their private inputs. The computation should preserve security properties such as privacy, correctness, independence of inputs, fairness and guaranteed output delivery. In the case of no honest majority, fairness and guaranteed output delivery cannot always be obtained. Thus, protocols for secure multiparty computation are typically of two disparate types: protocols that assume an honest majority (and achieve all properties, including fairness and guaranteed output delivery), and protocols that do not assume an honest majority (and achieve all properties except for fairness and guaranteed output delivery). In addition, in the two-party case, fairness and guaranteed output delivery are equivalent. As a result, the properties of fairness (which means that if corrupted parties receive output then so do the honest parties) and guaranteed output delivery (which means that corrupted parties cannot prevent the honest parties from receiving output in any case) have typically been considered to be the same. In this paper, we initiate a study of the relation between fairness and guaranteed output delivery in secure multiparty computation. We show that in the multiparty setting these properties are distinct, and proceed to study under what conditions fairness implies guaranteed output delivery (the opposite direction always holds). We also show the existence of non-trivial functions for which complete fairness is achievable (without an honest majority) but guaranteed output delivery is not, and the existence of non-trivial functions for which complete fairness and guaranteed output delivery are achievable. Our study sheds light on the role of broadcast in fairness and guaranteed output delivery, and shows that these properties should sometimes be considered separately.
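
    Informally, the two properties compare as follows (our paraphrase, not the paper's formal ideal-model definitions):

```latex
% Let A-out be the event that some corrupted party obtains the output,
% and H-out the event that every honest party obtains the output.
\text{Fairness:}\quad \textsf{A-out} \;\Rightarrow\; \textsf{H-out}
\qquad\qquad
\text{Guaranteed output delivery:}\quad \Pr[\textsf{H-out}] = 1
```

    Guaranteed output delivery trivially implies fairness; the separation studied in the paper concerns the converse in the multiparty setting.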

    Fair Differentially Private Federated Learning Framework

    Federated learning (FL) is a distributed machine learning strategy that enables participants to collaborate and train a shared model without sharing their individual datasets. Privacy and fairness are crucial considerations in FL. While FL promotes privacy by minimizing the amount of user data stored on central servers, it still poses privacy risks that need to be addressed. Industry standards such as differential privacy, secure multi-party computation, homomorphic encryption, and secure aggregation protocols are followed to ensure privacy in FL. Fairness is also a critical issue in FL, as models can inherit biases present in local datasets, leading to unfair predictions. Balancing privacy and fairness in FL is a challenge: privacy requires protecting user data, while fairness requires representative training data. This paper presents a "Fair Differentially Private Federated Learning Framework" that addresses the challenges of generating a fair global model without validation data and creating a globally private differential model. The framework employs clipping techniques for biased model updates and Gaussian mechanisms for differential privacy. The paper also reviews related work on privacy and fairness in FL, highlighting recent advancements and approaches to mitigate bias and ensure privacy. Achieving privacy and fairness in FL requires careful consideration of specific contexts and requirements, taking into account the latest developments in industry standards and techniques.
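
    As a rough sketch of the two mechanisms mentioned, clipping and the Gaussian mechanism, in Python with NumPy (the function name and hyperparameters are illustrative, not taken from the paper):

```python
import numpy as np

def dp_aggregate(updates, clip_norm, noise_multiplier, rng=np.random.default_rng()):
    """Differentially private aggregation of client model updates:
    clip each update to L2 norm `clip_norm`, average, and add Gaussian
    noise calibrated to the clipping bound (the standard Gaussian
    mechanism as used in DP federated averaging)."""
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise stddev scales with the per-client sensitivity clip_norm / n.
    sigma = noise_multiplier * clip_norm / len(updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Example: three clients' updates, clipped to norm 1.0.
updates = [np.array([0.5, 2.0]), np.array([-1.0, 0.3]), np.array([0.1, 0.1])]
print(dp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.1))
```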

    Efficiently Making Secure Two-Party Computation Fair

    Secure two-party computation cannot be fair against malicious adversaries unless a trusted third party (TTP) or a gradual-release-type super-constant-round protocol is employed. Existing optimistic fair two-party computation protocols with constant rounds are either too costly to arbitrate (e.g., the TTP may need to re-do almost the whole computation) or require the use of electronic payments. Furthermore, most of the existing solutions were proven secure and fair via a partial simulation, which, we show, may lead to insecurity overall. We propose a new framework for fair and secure two-party computation that can be applied on top of any secure two-party computation protocol based on Yao's garbled circuits and zero-knowledge proofs. We show that our fairness overhead is minimal compared to all known existing work. Furthermore, our protocol is fair even in terms of the work performed by Alice and Bob. We also prove our protocol fair and secure simultaneously, through a single simulator, which guarantees that our fairness extensions do not leak any private information. Lastly, we ensure that the TTP never learns the inputs or outputs of the computation. Therefore, even if the TTP becomes malicious and causes unfairness by colluding with one party, the security of the underlying protocol is still preserved.
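
    The optimistic pattern itself is simple to illustrate. Below is a toy exchange of XOR shares of the output with a TTP escrow (a generic sketch of optimistic arbitration, not this paper's actual protocol; in particular, the blinding that keeps the TTP oblivious to the output is omitted):

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Setup (produced inside the secure computation): the output z is split
# into XOR shares; each party holds one share, and the counterparty's
# share is escrowed with the TTP.
z = b"secret-output-16"                 # stand-in for the computed output
share_a = os.urandom(len(z))
share_b = xor(z, share_a)
escrow = {"alice": share_a, "bob": share_b}   # held by the TTP

def reconstruct(own_share, other_share):
    return xor(own_share, other_share)

# Happy path: the parties swap shares directly; the TTP is never contacted.
assert reconstruct(share_a, share_b) == z

# Dispute path: Bob aborts after learning z, so Alice asks the TTP for
# Bob's escrowed share. Arbitration is cheap (one lookup) rather than a
# re-execution of the whole computation.
assert reconstruct(share_a, escrow["bob"]) == z
```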

    SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning

    Performing machine learning (ML) computation on private data while maintaining data privacy, aka Privacy-preserving Machine Learning (PPML), is an emergent field of research. Recently, PPML has seen a visible shift towards the adoption of the Secure Outsourced Computation (SOC) paradigm due to the heavy computation that it entails. In the SOC paradigm, computation is outsourced to a set of powerful and specially equipped servers that provide service on a pay-per-use basis. In this work, we propose SWIFT, a robust PPML framework for a range of ML algorithms in the SOC setting that guarantees output delivery to the users irrespective of any adversarial behaviour. Robustness, a highly desirable feature, evokes user participation without the fear of denial of service. At the heart of our framework lies a highly efficient, maliciously secure, three-party computation (3PC) protocol over rings that provides guaranteed output delivery (GOD) in the honest-majority setting. To the best of our knowledge, SWIFT is the first robust and efficient PPML framework in the 3PC setting. SWIFT is as fast as (and in some cases strictly better than) the best-known 3PC framework BLAZE (Patra et al., NDSS'20), which only achieves fairness. We extend our 3PC framework to four parties (4PC). In this regime, SWIFT is as fast as the best-known fair 4PC framework Trident (Chaudhari et al., NDSS'20) and twice as fast as the best-known robust 4PC framework FLASH (Byali et al., PETS'20). We demonstrate our framework's practical relevance by benchmarking popular ML algorithms such as Logistic Regression and deep Neural Networks such as VGG16 and LeNet, both over a 64-bit ring in a WAN setting. For deep NNs, our results testify to our claims that we provide an improved security guarantee while incurring no additional overhead for 3PC and obtaining a 2x improvement for 4PC.
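
    At the core of honest-majority 3PC frameworks of this kind is secret sharing over a ring. A minimal replicated-sharing sketch over Z_{2^64} (our illustration; SWIFT's actual sharing is a masked variant of this, and multiplication, which requires interaction, is omitted):

```python
import random

MOD = 2 ** 64  # 64-bit ring, matching the benchmarks above

def share3(x, rng=random.SystemRandom()):
    """Additively share x as x1 + x2 + x3 (mod 2^64), then hand each
    server a *pair* of shares: replicated secret sharing, the standard
    basis for efficient honest-majority 3PC."""
    x1 = rng.randrange(MOD)
    x2 = rng.randrange(MOD)
    x3 = (x - x1 - x2) % MOD
    return [(x1, x2), (x2, x3), (x3, x1)]  # server i's view

def add_local(sh_a, sh_b):
    """Addition needs no interaction: each server adds componentwise."""
    return [((a0 + b0) % MOD, (a1 + b1) % MOD)
            for (a0, a1), (b0, b1) in zip(sh_a, sh_b)]

def reconstruct(shares):
    (x1, x2), (_, x3), _ = shares
    return (x1 + x2 + x3) % MOD

a, b = 1234567, 7654321
assert reconstruct(add_local(share3(a), share3(b))) == (a + b) % MOD
```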

    General Partially Fair Multi-Party Computation with VDFs

    Gordon and Katz, in “Partial Fairness in Secure Two-Party Computation”, present a protocol for two-party computation with partial fairness that depends on assumptions about the size of the input or output of the functionality. They also show that for some other functionalities, this notion of partial fairness is impossible to achieve. In this work, we get around this impossibility result using verifiable delay functions (VDFs), a primitive that rests on an assumption about the inability of an adversary to compute a certain function within a specified time. We present a gadget using VDFs which allows any MPC to be carried out with ≈ 1/R partial fairness, where R is the number of communication rounds.
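
    The delay assumption can be illustrated with the classic iterated-squaring candidate VDF (a toy evaluator only; the parameters below are far below anything secure, and the succinct verification proofs of real VDF constructions, e.g. Wesolowski's or Pietrzak's, are omitted):

```python
def vdf_eval(x, T, N):
    """Evaluate y = x^(2^T) mod N by T sequential squarings. Without the
    factorization of N, no known strategy is substantially faster than
    performing the T squarings one after another -- the sequentiality
    assumption that VDF-based fairness gadgets rely on."""
    y = x % N
    for _ in range(T):
        y = (y * y) % N
    return y

# Toy demo (insecure parameters): an aborting adversary must grind
# through T squarings before learning anything time-locked this way.
p, q = 1000003, 1000033
print(vdf_eval(5, T=10_000, N=p * q))
```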