18,327 research outputs found

    Information Theoretically Secure Multi Party Set Intersection Re-Visited

    Get PDF
    We re-visit the problem of secure multiparty set intersection in information theoretic settings. In \cite{LiSetMPCACNS07}, Li et al. proposed a protocol for the multiparty set intersection problem with $n$ parties that provides information theoretic security when $t < \frac{n}{3}$ parties are corrupted by an active adversary having {\it unbounded computing power}. In \cite{LiSetMPCACNS07}, the authors claimed that their protocol takes six rounds of communication and communicates ${\cal O}(n^4 m^2)$ field elements, where each party has a set containing $m$ field elements. However, we show that the round and communication complexity of the protocol in \cite{LiSetMPCACNS07} is much higher than what is claimed. We then propose a {\it novel} information theoretically secure protocol for multiparty set intersection with $n > 3t$, which significantly improves the actual round and communication complexity (as shown in this paper) of the protocol given in \cite{LiSetMPCACNS07}. To design our protocol, we use several tools that are of independent interest.
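    The abstract does not spell out the encoding, but protocols in this line of work commonly represent each party's set as a polynomial whose roots are the set elements, so that the intersection can be read off a randomized combination of those polynomials. Below is a minimal, non-cryptographic Python sketch of that idea; the field prime, the degree of the blinding polynomials, and all names are illustrative assumptions, and a real protocol would secret-share the polynomials rather than compute on them in the clear.

```python
import random

P = 2_147_483_647  # illustrative prime field modulus

def poly_from_set(s):
    """Coefficients (lowest degree first) of prod_{a in s} (x - a) over GF(P)."""
    coeffs = [1]
    for a in s:
        nxt = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            nxt[i] = (nxt[i] - a * c) % P      # multiply by (x - a): constant part
            nxt[i + 1] = (nxt[i + 1] + c) % P  # shift for the x part
        coeffs = nxt
    return coeffs

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):  # Horner's rule
        acc = (acc * x + c) % P
    return acc

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return out

def poly_add(f, g):
    n = max(len(f), len(g))
    return [((f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)) % P
            for i in range(n)]

def rand_poly(deg):
    return [random.randrange(P) for _ in range(deg + 1)]

# Each party encodes its set as a polynomial; random blinding polynomials hide
# everything except the common roots (the intersection), with high probability.
A, B = {3, 17, 42, 99}, {5, 17, 42, 1000}
m = max(len(A), len(B))
blinded = poly_add(poly_mul(poly_from_set(A), rand_poly(m)),
                   poly_mul(poly_from_set(B), rand_poly(m)))

print({e for e in A | B if poly_eval(blinded, e) == 0})  # {17, 42} w.h.p.
```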

    Implementation of a Secure Multiparty Computation Protocol

    Get PDF
    Secure multiparty computation (SMC) allows a set of parties to jointly compute a function on private inputs such that they learn only the output of the function, and the correctness of the output is guaranteed even when a subset of the parties is controlled by an adversary. SMC allows data to be kept in an uncompromisable form while still being useful, and it also gives new meaning to data ownership, allowing data to be shared in a useful way while retaining its privacy. Thus, applications of SMC hold promise for addressing some of the security issues information-driven societies struggle with. In this thesis, we implement two SMC protocols. Our primary objective is to gain a solid understanding of the basic concepts related to SMC. We present a brief survey of the field, with a focus on SMC based on secret sharing. In addition to the protocol implementations, we implement circuit randomization, a common technique for efficiency improvement. The implemented protocols are run on a simulator to securely evaluate some simple arithmetic functions, and the round complexities of the implemented protocols are compared. Finally, we attempt to extend the implementation to support more general computations.
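    Circuit randomization (Beaver's technique) moves the expensive part of a secure multiplication into a pre-processing phase that distributes shared random triples. The sketch below uses simple additive secret sharing for brevity, which is an assumption; the thesis works with secret-sharing-based protocols more generally, and all names and parameters here are illustrative.

```python
import random

P = 2_147_483_647  # illustrative prime field modulus
N = 3              # number of parties

def share(x):
    """Split x into N additive shares that sum to x mod P."""
    parts = [random.randrange(P) for _ in range(N - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh, triple):
    """Multiply two shared values using a pre-distributed triple (a, b, c) with c = a*b.
    Only the masked values d = x - a and e = y - b are opened, which reveal nothing
    about x and y because a and b are uniformly random."""
    a_sh, b_sh, c_sh = triple
    d = reconstruct([(x_sh[i] - a_sh[i]) % P for i in range(N)])
    e = reconstruct([(y_sh[i] - b_sh[i]) % P for i in range(N)])
    z = [(c_sh[i] + d * b_sh[i] + e * a_sh[i]) % P for i in range(N)]
    z[0] = (z[0] + d * e) % P  # the public d*e term is added by one designated party
    return z

# Offline phase: a dealer (or a pre-processing protocol) prepares the triple.
a, b = random.randrange(P), random.randrange(P)
triple = (share(a), share(b), share(a * b % P))

# Online phase: multiply the secret inputs 6 and 7 without reconstructing them.
print(reconstruct(beaver_multiply(share(6), share(7), triple)))  # 42
```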

    Privacy in the Genomic Era

    Get PDF
    Genome sequencing technology has advanced at a rapid pace and it is now possible to generate highly detailed genotypes inexpensively. The collection and analysis of such data has the potential to support various applications, including personalized medical services. While the benefits of the genomics revolution are trumpeted by the biomedical community, the increased availability of such data has major implications for personal privacy, notably because the genome has certain essential features, which include (but are not limited to) (i) an association with traits and certain diseases, (ii) identification capability (e.g., forensics), and (iii) revelation of family relationships. Moreover, direct-to-consumer DNA testing increases the likelihood that genome data will be made available in less regulated environments, such as the Internet and for-profit companies. The problem of genome data privacy thus resides at the crossroads of computer science, medicine, and public policy. While computer scientists have addressed data privacy for various data types, less attention has been dedicated to genomic data. Thus, the goal of this paper is to provide a systematization of knowledge for the computer science community. In doing so, we address some of the (sometimes erroneous) beliefs of this field and report on a survey we conducted about genome data privacy with biomedical specialists. Then, after characterizing the genome privacy problem, we review the state of the art regarding privacy attacks on genomic data and strategies for mitigating such attacks, contextualizing these attacks from the perspective of medicine and public policy. This paper concludes with an enumeration of the challenges for genome data privacy and presents a framework to systematize the analysis of threats and the design of countermeasures as the field moves forward.

    Privacy-Preserving Adaptive Traffic Signal Control in a Connected Vehicle Environment

    Full text link
    Although Connected Vehicles (CVs) have demonstrated tremendous potential to enhance traffic operations, they can impose privacy risks on individual travelers, e.g., leaking sensitive information about their frequently visited places, routing behavior, etc. Despite the large body of literature that devises various algorithms to exploit CV information, research on privacy-preserving traffic control is still in its infancy. In this paper, we aim to fill this research gap and propose a privacy-preserving adaptive traffic signal control method using CV data. Specifically, we leverage secure Multi-Party Computation and differential privacy to devise a privacy-preserving CV data aggregation mechanism, which can calculate key traffic quantities without any CVs having to reveal their private data. We further develop a linear optimization model for adaptive signal control based on the traffic variables obtained via the data aggregation mechanism. The proposed linear programming problem is further extended to a stochastic programming problem to explicitly handle the noise added by the differentially private mechanism. Evaluation results show that the linear optimization model preserves privacy with a marginal impact on control performance, and the stochastic programming model can significantly reduce residual queues compared to the linear programming model, with almost no increase in vehicle delay. Overall, our methods demonstrate the feasibility of incorporating privacy-preserving mechanisms in CV-based traffic modeling and control, guaranteeing both utility and privacy.
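    The abstract does not give the exact aggregation mechanism, so the following is only a minimal Python sketch of the differential-privacy half of the idea: a queue count whose L1 sensitivity is 1 is released with calibrated Laplace noise. The stop-bar threshold, epsilon value, and function names are assumptions, and the secure Multi-Party Computation step that would compute the count without a trusted aggregator is omitted.

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) as the difference of two exponentials with rate 1/scale."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_queue_count(vehicle_positions, stop_bar, epsilon):
    """Differentially private count of CVs queued upstream of the stop bar.
    Adding or removing one vehicle changes the count by at most 1, so the L1
    sensitivity is 1 and Laplace noise with scale 1/epsilon gives epsilon-DP."""
    true_count = sum(1 for p in vehicle_positions if p <= stop_bar)
    return true_count + laplace_noise(1.0 / epsilon)

# Positions (meters from an upstream reference point) reported by CVs on one approach.
positions = [12.0, 18.5, 25.1, 40.0, 73.2]
print(dp_queue_count(positions, stop_bar=50.0, epsilon=0.5))
```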

    AnonPSI: An Anonymity Assessment Framework for PSI

    Full text link
    Private Set Intersection (PSI) is a widely used protocol that enables two parties to securely compute a function over the intersected part of their shared datasets and has been a significant research focus over the years. However, recent studies have highlighted its vulnerability to Set Membership Inference Attacks (SMIA), where an adversary might deduce an individual's membership by invoking multiple PSI protocols. This presents a considerable risk, even in the most stringent versions of PSI, which only return the cardinality of the intersection. This paper explores the evaluation of anonymity within the PSI context. Initially, we highlight the reasons why existing works fall short in measuring privacy leakage, and subsequently propose two attack strategies that address these deficiencies. Furthermore, we provide theoretical guarantees on the performance of our proposed methods. In addition, we illustrate how the integration of auxiliary information, such as the sum of payloads associated with members of the intersection (PSI-SUM), can enhance attack efficiency. We conducted a comprehensive performance evaluation of the various proposed attack strategies utilizing two real datasets. Our findings indicate that the methods we propose markedly enhance attack efficiency when contrasted with previous research endeavors. The effectiveness of these attacks implies that depending solely on existing PSI protocols may not provide an adequate level of privacy assurance. It is recommended to combine privacy-enhancing technologies synergistically to enhance privacy protection even further.
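    To make the threat concrete, here is a minimal Python sketch of why cardinality-only PSI still leaks membership: an adversary pads its query set with items it believes are absent from the other side and compares two cardinality results. The paper's two attack strategies are more query-efficient than this naive differencing baseline; the stand-in function and all names are illustrative assumptions.

```python
def psi_cardinality(client_set, server_set):
    """Stand-in for a PSI cardinality protocol: only the intersection size is revealed."""
    return len(client_set & server_set)

def naive_membership_inference(target, decoys, server_set):
    """Infer whether `target` is in the server's set by differencing two queries.
    `decoys` are items the adversary believes are NOT held by the server."""
    with_target = psi_cardinality(decoys | {target}, server_set)
    without_target = psi_cardinality(decoys, server_set)
    return with_target > without_target

server = {"alice@example.com", "bob@example.com", "carol@example.com"}
decoys = {"decoy1@example.com", "decoy2@example.com"}
print(naive_membership_inference("bob@example.com", decoys, server))      # True
print(naive_membership_inference("mallory@example.com", decoys, server))  # False
```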

    Secure and Efficient Comparisons between Untrusted Parties

    Get PDF
    A vast number of online services are based on users contributing their personal information. Examples are manifold, including social networks, electronic commerce, sharing websites, lodging platforms, and genealogy. In all cases user privacy depends on collective trust in all involved intermediaries, such as service providers, operators, administrators, or even help desk staff. A single adversarial party in the whole chain of trust voids user privacy; worse, the number of intermediaries is ever growing. Thus, user privacy must be preserved at all times and stages, independent of the intrinsic goals of any involved party. Furthermore, alongside these new services, traditional offline analytic systems are being replaced by online services run in large data centers. Centralized processing of electronic medical records, genomic data, or other health-related information is anticipated due to advances in medical research, better analytic results based on large amounts of medical information, and lower costs. In these scenarios privacy is of utmost concern due to the large amount of personal information contained within the centralized data.
    We focus on the challenge of privacy-preserving processing of genomic data, specifically comparing genomic sequences. The problem that arises is how to efficiently compare the private sequences of two parties while preserving the confidentiality of the compared data. It follows that the privacy of the data owner must be preserved, which means that as little information as possible must be leaked to any party participating in the comparison. Leakage can happen at several points during a comparison: the secured inputs for the comparing party might leak some information about the original input, or the output might leak information about the inputs. In the latter case, the results of several comparisons can be combined to infer information about the confidential input of the party under observation. Genomic sequences serve as a use case, but the proposed solutions are more general and can be applied to the generic field of privacy-preserving comparison of sequences. The solution should be efficient, such that performing a comparison yields runtimes linear in the length of the input sequences and thus produces acceptable costs for a typical use case.
    To tackle the problem of efficient, privacy-preserving sequence comparisons, we propose a framework consisting of three main parts.
    a) The basic protocol presents an efficient sequence comparison algorithm, which transforms a sequence into a set representation, allowing distance measures over input sequences to be approximated by distance measures over sets. The sets are then represented by an efficient data structure, the Bloom filter, which allows evaluation of certain set operations without storing the actual elements of the possibly large set. This representation yields low distortion when comparing similar sequences. Operations upon the set representation are carried out using efficient, partially homomorphic cryptographic systems to keep the inputs confidential. The output can be adjusted to either return the actual approximated distance or the result of an in-range check of the approximated distance.
    b) Building upon this efficient basic protocol, we introduce the first mechanism to reduce the success of inference attacks by detecting and rejecting similar queries in a privacy-preserving way. This is achieved by generating generalized commitments for inputs.
    This generalization is done by treating inputs as messages received from a noisy channel, to which error correction from coding theory is applied. In this way, similar inputs are defined as inputs whose generalizations have a Hamming distance below a certain predefined threshold. We present a protocol to perform a zero-knowledge proof assessing whether the generalized input is indeed a generalization of the actual input. Furthermore, we generalize a very efficient inference attack on privacy-preserving sequence comparison protocols and use it to evaluate our inference-control mechanism.
    c) The third part of the framework lightens the computational load of the client taking part in the comparison protocol by presenting a compression mechanism for partially homomorphic cryptographic schemes. It reduces the transmission and storage overhead induced by semantically secure homomorphic encryption schemes, as well as encryption latency. The compression is achieved by constructing an asymmetric stream cipher such that the generated ciphertext can be converted into a ciphertext of an associated homomorphic encryption scheme without revealing any information about the plaintext. This is the first compression scheme available for partially homomorphic encryption schemes. Compression schemes for fully homomorphic encryption ciphertexts are several orders of magnitude slower at converting the transmission ciphertext into the homomorphically encrypted ciphertext; our compression scheme achieves optimal conversion performance. It further allows keystreams to be generated offline and thus supports offloading to trusted devices, improving transmission, storage, and power efficiency.
    We give security proofs for all relevant parts of the proposed protocols and algorithms. A performance evaluation of the core components demonstrates the practicability of our proposed solutions, including a theoretical analysis and practical experiments that show the accuracy as well as the efficiency of the approximations and probabilistic algorithms. Several variations and configurations for detecting similar inputs are studied in an in-depth discussion of the inference-control mechanism. A human mitochondrial genome database is used for the practical evaluation to compare genomic sequences and detect similar inputs as described by the use case. In summary, we show that it is indeed possible to construct an efficient and privacy-preserving comparison of (genomic) sequences while controlling the amount of information that leaves the comparison. To the best of our knowledge, we also contribute to the field by proposing the first efficient privacy-preserving inference detection and control mechanism, as well as the first ciphertext compression system for partially homomorphic cryptographic systems.
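    Part a) is concrete enough to illustrate with a small example. The Python sketch below converts sequences to q-gram sets, encodes them in Bloom filters, and approximates a set similarity from the filters' bit vectors; the q-gram encoding, filter parameters, and Dice coefficient are illustrative assumptions, and in the actual framework these comparisons are carried out under partially homomorphic encryption rather than in the clear.

```python
import hashlib

def qgrams(seq, q=3):
    """Represent a sequence by its set of overlapping q-grams."""
    return {seq[i:i + q] for i in range(len(seq) - q + 1)}

def bloom_filter(items, m=1024, k=4):
    """Insert items into an m-bit Bloom filter using k hash functions."""
    bits = [0] * m
    for item in items:
        for i in range(k):
            pos = int(hashlib.sha256(f"{i}:{item}".encode()).hexdigest(), 16) % m
            bits[pos] = 1
    return bits

def dice_similarity(bf1, bf2):
    """Approximate the Dice coefficient of the underlying sets from two filters."""
    common = sum(a & b for a, b in zip(bf1, bf2))
    return 2 * common / (sum(bf1) + sum(bf2))

seq_a = "ACGTACGTTAGCCATTGAC"
seq_b = "ACGTACGTTCGCCATTGAC"
print(dice_similarity(bloom_filter(qgrams(seq_a)), bloom_filter(qgrams(seq_b))))
```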

    Private and Oblivious Set and Multiset Operations

    Get PDF
    Privacy-preserving set operations, and set intersection in particular, are a popular research topic. Despite a large body of literature, the great majority of the available solutions are two-party protocols and are not composable. In this work we design a comprehensive suite of secure multi-party protocols for set and multiset operations that are composable, do not assume any knowledge of the sets by the parties carrying out the secure computation, and can be used for secure outsourcing. All of our protocols have communication and computation complexity of $O(m \log m)$ for sets or multisets of size $m$, which compares favorably with prior work. Furthermore, we are not aware of any prior results that realize composable operations. Our protocols are secure in the information theoretic sense and are designed to minimize the round complexity. The practicality of our solutions is shown through experimental results.
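    The abstract does not describe the construction, but an $O(m \log m)$ cost is characteristic of approaches that obliviously sort the combined inputs and then compare adjacent elements; treating that as an assumption, here is a plaintext Python sketch of the idea for plain sets (the oblivious sorting network, secret sharing, and the extra bookkeeping needed for multisets are all omitted).

```python
def set_intersection_by_sorting(a, b):
    """Plaintext analogue of a sorting-based set intersection.
    In a secure version, the sort, the adjacent-equality comparisons, and the
    final compaction would all be performed obliviously on secret-shared values."""
    merged = sorted(list(a) + list(b))  # O(m log m), e.g. via a sorting network
    # An element common to both inputs appears exactly twice, in adjacent positions.
    return [merged[i] for i in range(len(merged) - 1) if merged[i] == merged[i + 1]]

print(set_intersection_by_sorting({1, 2, 3, 7}, {2, 3, 9}))  # [2, 3]
```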