37,201 research outputs found

    Verifiable private multi-party computation: Ranging and ranking

    Abstract—Existing work on distributed secure multi-party computation, e.g., set operations, dot product, and ranking, focuses on privacy protection, while the verifiability of user inputs and outcomes is neglected. Most existing works assume that the involved parties will follow the protocol honestly. In practice, a malicious adversary can easily forge his/her input values to produce incorrect outcomes or simply lie about the computation results to cheat the other parties. In this work, we focus on the problem of verifiable privacy-preserving multi-party computation. We thoroughly analyze attacks on existing privacy-preserving multi-party computation approaches and design a series of protocols for dot product, ranging, and ranking, which are proven to be both privacy preserving and verifiable. We implement our protocols on laptops and mobile phones. The results show that our verifiable private computation protocols are efficient in both computation and communication.
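    The abstract gives no protocol details; as a point of reference, below is a minimal sketch of a (non-verifiable, semi-honest) privacy-preserving dot product using additive secret sharing and Beaver multiplication triples. The modulus, dealer, and function names are illustrative assumptions, not the authors' verifiable protocol.

        # Minimal sketch of a semi-honest two-party dot product over additive
        # secret shares with Beaver triples. Illustrative only; NOT the
        # verifiable protocol of the paper above (no input verification,
        # no malicious security).
        import random

        P = 2**61 - 1  # prime modulus (assumption)

        def share(v):
            """Split v into two additive shares mod P."""
            s0 = random.randrange(P)
            return s0, (v - s0) % P

        def beaver_triple():
            """Trusted dealer: shares of a, b, and c = a*b."""
            a, b = random.randrange(P), random.randrange(P)
            return share(a), share(b), share((a * b) % P)

        def shared_mul(x_sh, y_sh):
            """Multiply two secret-shared values using one Beaver triple."""
            (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
            # Each party masks its shares locally; d and e are then opened.
            d = (x_sh[0] - a0 + x_sh[1] - a1) % P      # d = x - a (public)
            e = (y_sh[0] - b0 + y_sh[1] - b1) % P      # e = y - b (public)
            z0 = (c0 + d * b0 + e * a0 + d * e) % P    # party 0's share of x*y
            z1 = (c1 + d * b1 + e * a1) % P            # party 1's share of x*y
            return z0, z1

        def shared_dot(xs, ys):
            """Dot product of two secret-shared vectors."""
            acc0, acc1 = 0, 0
            for x_sh, y_sh in zip(xs, ys):
                z0, z1 = shared_mul(x_sh, y_sh)
                acc0, acc1 = (acc0 + z0) % P, (acc1 + z1) % P
            return acc0, acc1

        if __name__ == "__main__":
            x, y = [3, 1, 4], [2, 7, 1]                # parties' private vectors
            xs, ys = [share(v) for v in x], [share(v) for v in y]
            d0, d1 = shared_dot(xs, ys)
            assert (d0 + d1) % P == sum(a * b for a, b in zip(x, y))  # 17

    The verifiability studied in the paper is precisely what such a baseline lacks: nothing prevents a party from sharing a forged input or misreporting the opened result.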

    Secure and fair two-party computation

    Consider several parties that do not trust each other, yet wish to correctly compute some common function of their local inputs while keeping these inputs private. This problem is known as "Secure Multi-Party Computation" and was introduced by Andrew Yao in 1982. Secure multi-party computations have real-world applications such as electronic auctions, electronic voting, and fingerprinting. In this thesis we consider the case where only two parties are involved, known as "Secure Two-Party Computation". If there is a trusted third party, Carol, the problem is straightforward: the participating parties hand their inputs to Carol, who computes the common function correctly and returns the outputs to the corresponding parties. The goal is to achieve (almost) the same result when there is no trusted third party. Cryptographic protocols are designed to solve these kinds of problems, and they are analyzed within an appropriate model that structures the behavior of the parties. The basic level is the Semi-Honest Model, where parties are assumed to follow the protocol specification but may later derive additional information from the messages they have received. A more realistic model is the so-called Malicious Model. The common approach is to first analyze a protocol in the semi-honest model and then extend it to the malicious model.
    Any cryptographic protocol for secure two-party computation must satisfy the following security requirements: correctness, privacy, and fairness. It must guarantee the correctness of the result while preserving the privacy of the parties' inputs, even if one of the parties is malicious and behaves arbitrarily throughout the protocol. It must also guarantee fairness, which roughly means that whenever a party aborts the protocol prematurely, he or she should not have any advantage over the other party in discovering the output. The main question for researchers is to construct new protocols that achieve these goals for secure multi-party computation. Of course, such protocols must be secure in a given model and as efficient as possible.
    In 1986, Yao presented the first general protocol for secure two-party computation, applicable only to the semi-honest model. It uses a tool called the "Garbled Circuit" and treats the underlying primitives ("Pseudorandom Generator" and "Oblivious Transfer") as black boxes, which leads to efficient results. After Yao's work, many variants and improvements were proposed for the malicious model. In this thesis, we design several new protocols for secure two-party computation based on Yao's garbled circuit. Before presenting the details of our new designs, we first show several weaknesses, security flaws, or problems with existing protocols in the literature. We first work in the semi-honest model and then extend it to the malicious model by presenting new protocols. Finally, we add fairness to our protocol.
    Oblivious transfer (OT) is a fundamental primitive in modern cryptography that is useful for implementing protocols for secure multi-party computation. We study several variants of oblivious transfer in this thesis. We present a new protocol for so-called "Committed OT". This protocol is very efficient in the sense that it compares well with the most efficient committed OT protocols in the literature. The above-mentioned flaw in the use of OT can be fixed with our committed oblivious transfer protocol. Furthermore, it is more general than all previous protocols and is therefore of independent interest.
    We also deal with fairness in this thesis. For protocols based on garbled circuits, so far only Benny Pinkas has presented a protocol in the literature for achieving fairness. We show a subtle problem with this protocol whereby the privacy of one party's inputs can be compromised. We describe this problem, which is in fact related to fairness, in detail, and finally propose a more efficient scheme that does achieve fairness.
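    For readers unfamiliar with the garbled-circuit technique the thesis builds on, below is a minimal sketch of a single garbled AND gate. The key length, the SHA-256-based key derivation, and the zero-tag decryption check are illustrative assumptions, not the constructions used in the thesis.

        # Toy garbled AND gate: the garbler encrypts the output-wire key under
        # each pair of input-wire keys; the evaluator, holding one key per
        # input wire, can open exactly one row. Sketch only (no point-and-
        # permute, no free-XOR, not the thesis's construction).
        import hashlib, os, random

        KEYLEN = 16
        ZERO_TAG = b"\x00" * KEYLEN     # redundancy so a correct decryption is recognizable

        def kdf(ka, kb):
            """Derive a 32-byte pad from the two input-wire keys."""
            return hashlib.sha256(ka + kb).digest()

        def xor(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        def garble_and_gate():
            # Two random keys per wire: index 0 encodes bit 0, index 1 encodes bit 1.
            wa = [os.urandom(KEYLEN) for _ in range(2)]
            wb = [os.urandom(KEYLEN) for _ in range(2)]
            wc = [os.urandom(KEYLEN) for _ in range(2)]
            table = []
            for i in (0, 1):
                for j in (0, 1):
                    table.append(xor(kdf(wa[i], wb[j]), wc[i & j] + ZERO_TAG))
            random.shuffle(table)       # hide which row corresponds to which inputs
            return wa, wb, wc, table

        def evaluate(ka, kb, table):
            """Evaluator knows one key per input wire, not the bits they encode."""
            pad = kdf(ka, kb)
            for row in table:
                plain = xor(pad, row)
                if plain[KEYLEN:] == ZERO_TAG:
                    return plain[:KEYLEN]   # output-wire key for a AND b
            raise ValueError("no row decrypted")

        if __name__ == "__main__":
            wa, wb, wc, table = garble_and_gate()
            a_bit, b_bit = 1, 1
            out_key = evaluate(wa[a_bit], wb[b_bit], table)
            assert out_key == wc[a_bit & b_bit]

    In the full protocol, oblivious transfer is what delivers the evaluator's input-wire keys without revealing the corresponding bits to the garbler, which is why committed OT and its flaws are central to the thesis.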

    From usability to secure computing and back again

    Secure multi-party computation (MPC) allows multiple parties to jointly compute the output of a function while preserving the privacy of any individual party's inputs to that function. As MPC protocols transition from research prototypes to real-world applications, the usability of MPC-enabled applications is increasingly critical to their successful deployment and widespread adoption. Our Web-MPC platform, designed with a focus on usability, has been deployed for privacy-preserving data aggregation initiatives with the City of Boston and the Greater Boston Chamber of Commerce. After building and deploying an initial version of the platform, we conducted a heuristic evaluation to identify usability improvements and implemented corresponding application enhancements. However, it is difficult to gauge the effectiveness of these changes within the context of real-world deployments using traditional web analytics tools without compromising the security guarantees of the platform. This work consists of two contributions that address this challenge: (1) the Web-MPC platform has been extended with the capability to collect web analytics using existing MPC protocols, and (2) as a test of this feature and a way to inform future work, this capability has been leveraged to conduct a usability study comparing the two versions of Web-MPC. While many efforts have focused on ways to enhance the usability of privacy-preserving technologies, this study serves as a model for using a privacy-preserving, data-driven approach to evaluate and enhance the usability of privacy-preserving websites and applications deployed in real-world scenarios. Data collected in this study yields insights into the relationship between usability and security; these can help inform future implementations of MPC solutions.
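    The abstract does not describe the analytics mechanism at code level; the sketch below shows the general flavor of MPC-style aggregate analytics, in which each client splits a counter into additive shares sent to non-colluding servers so that only the total is ever reconstructed. The two-server setup and all names are illustrative assumptions, not the Web-MPC implementation.

        # Sketch of additively secret-shared aggregation, the general idea
        # behind MPC-based analytics: each client's value is split into shares
        # held by two non-colluding aggregation servers; only the total is
        # opened. Illustrative only; not the Web-MPC protocol or deployment.
        import random

        P = 2**61 - 1  # prime modulus (assumption)

        def client_submit(value):
            """Client splits its private counter into two random shares mod P."""
            s0 = random.randrange(P)
            s1 = (value - s0) % P
            return s0, s1                        # s0 -> server A, s1 -> server B

        def aggregate(client_values):
            server_a, server_b = 0, 0
            for v in client_values:
                s0, s1 = client_submit(v)
                server_a = (server_a + s0) % P   # server A sums its shares
                server_b = (server_b + s1) % P   # server B sums its shares
            return (server_a + server_b) % P     # only the sum is reconstructed

        if __name__ == "__main__":
            page_visits = [3, 0, 7, 1, 2]        # per-client private counters
            assert aggregate(page_visits) == sum(page_visits)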

    VirtualIdentity: privacy preserving user profiling

    User profiling from user-generated content (UGC) is a common practice that supports the business models of many social media companies. Existing systems require that the UGC be fully exposed to the module that constructs the user profiles. In this paper we show that it is possible to build user profiles without ever accessing the user's original data, and without exposing the trained machine learning models for user profiling (which are the intellectual property of the company) to the users of the social media site. We present VirtualIdentity, an application that uses secure multi-party cryptographic protocols to detect the age, gender, and personality traits of users by classifying their user-generated text and personal pictures with trained support vector machine models in a privacy-preserving manner.
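    The abstract does not specify the protocols; the sketch below only illustrates the inference step it alludes to: a linear SVM decision function reduces to a dot product plus a bias, which could in principle be evaluated on secret-shared inputs (for example with a Beaver-triple dot product like the one sketched earlier in this list). The weights, features, and threshold rule are illustrative assumptions, not the VirtualIdentity system.

        # Sketch of linear-SVM scoring as it might be split between a user
        # (who holds the feature vector x) and a service (which holds weights
        # w and bias b). The dot product is the only interactive step; here it
        # is computed in the clear for brevity, with a note where an MPC dot
        # product would be used. Not the VirtualIdentity protocols.

        def svm_decision(w, b, x):
            # In a privacy-preserving deployment, this dot product would run
            # under MPC (e.g., on additive secret shares), so neither side
            # sees the other's vector; only the final label is revealed.
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            return 1 if score >= 0 else -1       # e.g., one binary trait label

        if __name__ == "__main__":
            w = [0.8, -1.2, 0.3]   # service's private model weights (assumed)
            b = -0.1
            x = [1.0, 0.5, 2.0]    # user's private features from text/images (assumed)
            print(svm_decision(w, b, x))         # prints 1 (score = 0.7)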

    On Distributed Differential Privacy and Counting Distinct Elements

    We study the setup where each of n users holds an element from a discrete set, and the goal is to count the number of distinct elements across all users, under the constraint of (ε, δ)-differential privacy:
    - In the non-interactive local setting, we prove that the additive error of any protocol is Ω(n) for any constant ε and for any δ inverse polynomial in n.
    - In the single-message shuffle setting, we prove a lower bound of Ω̃(n) on the error for any constant ε and for some δ inverse quasi-polynomial in n. We do so by building on the moment-matching method from the literature on distribution estimation.
    - In the multi-message shuffle setting, we give a protocol with at most one message per user in expectation and with an error of Õ(√n) for any constant ε and for any δ inverse polynomial in n. Our protocol is also robustly shuffle private, and our error of √n matches a known lower bound for such protocols.
    Our proof technique relies on a new notion that we call dominated protocols, which can also be used to obtain the first non-trivial lower bounds against multi-message shuffle protocols for the well-studied problems of selection and learning parity. Our first lower bound for estimating the number of distinct elements provides the first ω(√n) separation between global sensitivity and error in local differential privacy, thus answering an open question of Vadhan (2017). We also provide a simple construction that gives an Ω̃(n) separation between global sensitivity and error in two-party differential privacy, thereby answering an open question of McGregor et al. (2011).
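    For contrast with the local and shuffle lower bounds above, the sketch below shows why the problem is easy in the central model: the number of distinct elements has global sensitivity 1, so the Laplace mechanism with scale 1/ε gives expected error O(1/ε). This is a generic textbook mechanism, not a construction from the paper.

        # Central-model baseline for counting distinct elements: adding or
        # removing one user changes the count by at most 1 (sensitivity 1),
        # so Laplace noise of scale 1/epsilon suffices. Generic illustration;
        # the paper's focus is the much harder local and shuffle settings.
        import math, random

        def laplace_noise(scale):
            """Sample Laplace(0, scale) by inverse-CDF."""
            u = random.random() - 0.5
            return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

        def dp_count_distinct(elements, epsilon):
            """Central-model DP estimate of the number of distinct elements."""
            true_count = len(set(elements))      # the curator sees the raw data
            return true_count + laplace_noise(1.0 / epsilon)   # sensitivity 1

        if __name__ == "__main__":
            data = ["a", "b", "a", "c", "b", "d"]    # one element per user
            print(dp_count_distinct(data, epsilon=1.0))   # ~4, +/- O(1/epsilon)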