
    When private set intersection meets big data : an efficient and scalable protocol

    Large-scale data processing brings new challenges to the design of privacy-preserving protocols: how to meet the increasing speed and throughput requirements of modern applications, and how to scale up smoothly as the data being protected grows. Efficiency and scalability have become critical criteria for privacy-preserving protocols in the age of Big Data. In this paper, we present a new Private Set Intersection (PSI) protocol that is extremely efficient and highly scalable compared with existing protocols. The protocol is based on a novel approach that we call oblivious Bloom intersection. It has linear complexity and relies mostly on efficient symmetric-key operations, and it scales well because most of its operations are easily parallelized. The protocol has two versions: a basic protocol and an enhanced protocol; the security of the two variants is analyzed and proved in the semi-honest model and the malicious model, respectively. A prototype of the basic protocol has been built. We report the results of a performance evaluation and compare the protocol against the two previously fastest PSI protocols; ours is orders of magnitude faster. To compute the intersection of two million-element sets, our protocol needs only 41 seconds (80-bit security) or 339 seconds (256-bit security) on moderate hardware in parallel mode.
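
    The paper's protocol garbles the Bloom filter and transfers it obliviously; the sketch below shows only the plaintext Bloom-intersection core, under illustrative names of our own (bloom_insert, bloom_contains): the client keeps exactly the elements whose hash positions are all set in the server's filter, up to the filter's false-positive rate.

        import hashlib

        def bloom_insert(bits, item, k, m):
            # Set k bit positions derived from k independent hashes of the item.
            for i in range(k):
                h = int(hashlib.sha256(f"{i}:{item}".encode()).hexdigest(), 16)
                bits[h % m] = 1

        def bloom_contains(bits, item, k, m):
            return all(
                bits[int(hashlib.sha256(f"{i}:{item}".encode()).hexdigest(), 16) % m]
                for i in range(k)
            )

        m, k = 1 << 16, 10                    # filter size and number of hashes
        server_bits = [0] * m
        for x in {"alice", "bob", "carol"}:   # server's private set
            bloom_insert(server_bits, x, k, m)

        client_set = {"bob", "dave"}
        # Plaintext analogue of the intersection step:
        print({x for x in client_set if bloom_contains(server_bits, x, k, m)})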

    Security and Efficiency Analysis of the Hamming Distance Computation Protocol Based on Oblivious Transfer

    Bringer et al. proposed two cryptographic protocols for the computation of Hamming distance. Their first scheme uses Oblivious Transfer and provides security in the semi-honest model. The other scheme uses Committed Oblivious Transfer and is claimed to provide full security in the malicious case. The proposed protocols have direct implications for biometric authentication schemes between a prover and a verifier, where the verifier holds the users' biometric data in plain form. In this paper, we show that their protocol is not actually fully secure against malicious adversaries. More precisely, our attack breaks the soundness property of their protocol: a malicious user can compute a Hamming distance that differs from the actual value. For biometric authentication systems, this attack allows a malicious adversary to pass authentication without knowledge of the honest user's input with at most $O(n)$ complexity instead of $O(2^n)$, where $n$ is the input length. We propose an enhanced version of their protocol in which this attack is eliminated. The security of our modified protocol is proven using the simulation-based paradigm. Furthermore, regarding efficiency, the modified protocol utilizes Verifiable Oblivious Transfer, which does not require commitments to outputs and thereby improves efficiency significantly.
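
    The arithmetic core that both schemes evaluate obliviously is the identity HD(x, y) = weight(x XOR y); a minimal plaintext sketch of that identity (the actual protocols compute it via Oblivious Transfer without revealing either input):

        def hamming_distance(x: int, y: int) -> int:
            # HD(x, y) is the Hamming weight (popcount) of x XOR y.
            return bin(x ^ y).count("1")

        a, b = 0b101101, 0b100111
        print(hamming_distance(a, b))  # 2: the strings differ in two positions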

    Chameleon: A Hybrid Secure Computation Framework for Machine Learning Applications

    We present Chameleon, a novel hybrid (mixed-protocol) framework for secure function evaluation (SFE) which enables two parties to jointly compute a function without disclosing their private inputs. Chameleon combines the best aspects of generic SFE protocols with those based on additive secret sharing. In particular, the framework performs linear operations in the ring $\mathbb{Z}_{2^l}$ using additively secret-shared values, and nonlinear operations using Yao's Garbled Circuits or the Goldreich-Micali-Wigderson protocol. Chameleon departs from the common assumption of additive or linear secret-sharing models in which three or more parties must communicate in the online phase: the framework allows two parties with private inputs to communicate in the online phase under the assumption of a third node generating correlated randomness in an offline phase. Almost all of the heavy cryptographic operations are precomputed in the offline phase, which substantially reduces the communication overhead. Chameleon is both scalable and significantly more efficient than the ABY framework (NDSS'15) on which it is based. Our framework supports signed fixed-point numbers. In particular, Chameleon's vector dot product of signed fixed-point numbers improves the efficiency of mining and classification of encrypted data for algorithms based on heavy matrix multiplications. Our evaluation of Chameleon on a 5-layer convolutional deep neural network shows 133x and 4.2x faster execution than Microsoft CryptoNets (ICML'16) and MiniONN (CCS'17), respectively.
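
    As a rough sketch of the secret-sharing side (toy code of our own with l = 32; the framework itself handles multiplications via precomputed correlated randomness and switches to garbled circuits for nonlinear gates), linear operations reduce to local addition of shares in $\mathbb{Z}_{2^l}$:

        import secrets

        MOD = 1 << 32  # the ring Z_{2^l} with l = 32

        def share(x):
            # Split x into two additive shares: x = x0 + x1 (mod 2^l).
            x0 = secrets.randbelow(MOD)
            return x0, (x - x0) % MOD

        def reconstruct(x0, x1):
            return (x0 + x1) % MOD

        a0, a1 = share(20)
        b0, b1 = share(22)
        # Addition is local: each party adds the shares it holds.
        c0, c1 = (a0 + b0) % MOD, (a1 + b1) % MOD
        print(reconstruct(c0, c1))  # 42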

    Efficient Verifiable Computation of XOR for Biometric Authentication

    This work addresses the security and privacy issues in remote biometric authentication by proposing an efficient mechanism to verify the correctness of the outsourced computation in such protocols. In particular, we propose an efficient verifiable computation of XORing encrypted messages using an XOR-linear message authentication code (MAC), and we employ the proposed scheme to build a biometric authentication protocol. The proposed authentication protocol is both secure and privacy-preserving against malicious (as opposed to honest-but-curious) adversaries. Specifically, the use of the verifiable computation scheme together with homomorphic encryption protects the privacy of biometric templates against malicious adversaries. Furthermore, in order to achieve unlinkability of authentication attempts while keeping a low communication overhead, we show how to apply Oblivious RAM and biohashing to our protocol. We also provide a proof of security for the proposed solution. Our simulation results show that the proposed authentication protocol is efficient.
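
    To illustrate the XOR-linearity that such verification relies on, here is a toy GF(2) matrix MAC (our illustration, not the paper's scheme, and not secure if the key is reused without masking): the tag of an XOR of messages equals the XOR of their tags, so a verifier can check an XOR computed over MACed inputs.

        import secrets

        N_BITS, TAG_BITS = 64, 32

        def keygen():
            # Key: a random TAG_BITS x N_BITS binary matrix, one row per tag bit.
            return [secrets.randbits(N_BITS) for _ in range(TAG_BITS)]

        def mac(key, msg: int) -> int:
            # Tag bit i is the GF(2) inner product of key row i with the message;
            # AND distributes over XOR, so the map is linear under XOR.
            tag = 0
            for i, row in enumerate(key):
                tag |= (bin(row & msg).count("1") & 1) << i
            return tag

        key = keygen()
        m1, m2 = secrets.randbits(N_BITS), secrets.randbits(N_BITS)
        assert mac(key, m1) ^ mac(key, m2) == mac(key, m1 ^ m2)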

    Oblivious Sensor Fusion via Secure Multi-Party Combinatorial Filter Evaluation

    This thesis examines the problem of fusing data from several sensors, potentially distributed throughout an environment, in order to consolidate readings into a single coherent view. We consider the setting in which sensor units do not wish others to know their specific sensor streams. Standard methods for handling this fusion make no guarantees about what a curious observer may learn. Motivated by applications where data sources may only choose to participate if given privacy guarantees, we introduce a fusion approach that limits what can be inferred. Our approach is to form an aggregate stream, oblivious to the underlying sensor data, and to evaluate a combinatorial filter on that stream. This is achieved via secure multi-party computation techniques built on cryptographic primitives, which we extend and apply to the problem of fusing discrete sensor signals. We prove that the extensions preserve security under the semi-honest adversary model. Though the approach enables several applications of potential interest, we specifically consider a target-tracking case study as a running example. Finally, we report on a basic proof-of-concept implementation demonstrating that the approach can operate in practice; we report and analyze the empirical running times of the components in the architecture, suggesting directions for future improvement.
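
    For readers unfamiliar with the term, a combinatorial filter is a finite transition system driven by discrete observations; below is a hypothetical plaintext example for the target-tracking setting (our toy, not the thesis's filter, which is evaluated under MPC so that no party sees the stream):

        # Toy filter: track which of two rooms a target occupies, given
        # beam-crossing events 'b' from a sensor between the rooms.
        TRANSITIONS = {
            ("L", "b"): "R",  # crossing the beam from the left room moves right
            ("R", "b"): "L",  # and vice versa
        }

        def evaluate_filter(start, stream):
            state = start
            for obs in stream:
                state = TRANSITIONS.get((state, obs), state)  # ignore other events
            return state

        print(evaluate_filter("L", ["b", "b", "b"]))  # 'R'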

    DR.SGX: Hardening SGX Enclaves against Cache Attacks with Data Location Randomization

    Recent research has demonstrated that Intel's SGX is vulnerable to various software-based side-channel attacks. In particular, attacks that monitor CPU caches shared between the victim enclave and untrusted software enable accurate leakage of secret enclave data. Known defenses assume developer assistance, require hardware changes, impose high overhead, or prevent only some of the known attacks. In this paper, we propose data location randomization as a novel defensive approach to address the threat of side-channel attacks. Our main goal is to break the link between the cache observations by the privileged adversary and the actual data accesses by the victim. We design and implement a compiler-based tool called DR.SGX that instruments enclave code such that data locations are permuted at the granularity of cache lines. We realize the permutation with the CPU's cryptographic hardware-acceleration units, providing secure randomization. To prevent correlation of repeated memory accesses, we continuously re-randomize all enclave data during execution. Our solution effectively protects many (but not all) enclaves from cache attacks and provides a complementary enclave-hardening technique that is especially useful against unpredictable information leakage.
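
    As a software analogue of the idea (toy code of our own; DR.SGX instruments enclave code at compile time and derives its permutation from hardware AES, not from the HMAC stand-in used here), data can be relocated under a keyed permutation at cache-line granularity:

        import hmac, hashlib

        LINE = 64  # cache-line size in bytes

        def keyed_permutation(key: bytes, n: int):
            # Order line indices by a keyed PRF value: a simple stand-in for a
            # pseudorandom permutation of n cache lines.
            def prf(i):
                return hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
            return sorted(range(n), key=prf)

        def permute_lines(buf: bytes, key: bytes) -> bytes:
            lines = [buf[i:i + LINE] for i in range(0, len(buf), LINE)]
            perm = keyed_permutation(key, len(lines))
            # The line originally at index src now lives at slot dst.
            out = [None] * len(lines)
            for dst, src in enumerate(perm):
                out[dst] = lines[src]
            return b"".join(out)

        data = bytes(range(256)) * 2  # 512 bytes = 8 cache lines
        shuffled = permute_lines(data, b"enclave-key")
        assert sorted(shuffled) == sorted(data)  # same bytes, new locations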