7,194 research outputs found

    GraphSE²: An Encrypted Graph Database for Privacy-Preserving Social Search

    In this paper, we propose GraphSE², an encrypted graph database for online social network services to address massive data breaches. GraphSE² preserves the functionality of social search, a key enabler for quality social network services, where social search queries are conducted over a large-scale social graph while also performing set and computational operations on user-generated content. To enable efficient privacy-preserving social search, GraphSE² provides an encrypted structural data model to facilitate parallel and encrypted graph data access. It is also designed to decompose complex social search queries into atomic operations and realise them via interchangeable protocols in a fast and scalable manner. We build GraphSE² with various queries supported in the Facebook graph search engine and implement a full-fledged prototype. Extensive evaluations on Azure Cloud demonstrate that GraphSE² is practical for querying a social graph with a million users. Comment: This is the full version of our AsiaCCS paper "GraphSE²: An Encrypted Graph Database for Privacy-Preserving Social Search". It includes the security proof of the proposed scheme. If you want to cite our work, please cite the conference version of it.
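
    As a rough illustration of the query decomposition described above, the following plaintext sketch shows how a social search query such as "friends of Alice who like privacy" might break into atomic operations (a neighbour lookup, a content lookup, and a set intersection). GraphSE² executes steps of this kind over an encrypted, partitioned graph; the data structures and names here (social_graph, likes, friends_who_like) are illustrative assumptions, not the paper's API.

```python
# Plaintext sketch of decomposing a social-search query into atomic operations.
# GraphSE^2 evaluates steps like these over an encrypted, partitioned graph;
# all names below are hypothetical.

social_graph = {            # adjacency lists: user -> set of friends
    "alice": {"bob", "carol", "dave"},
    "bob": {"alice", "carol"},
}
likes = {                   # user-generated content index: topic -> set of users
    "privacy": {"carol", "dave", "eve"},
}

def friends_who_like(user: str, topic: str) -> set[str]:
    neighbours = social_graph.get(user, set())   # atomic op 1: graph access
    audience = likes.get(topic, set())           # atomic op 2: content lookup
    return neighbours & audience                 # atomic op 3: set intersection

print(friends_who_like("alice", "privacy"))      # {'carol', 'dave'}
```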

    Anonymous subject identification and privacy information management in video surveillance

    The widespread deployment of surveillance cameras has raised serious privacy concerns, and many privacy-enhancing schemes have been recently proposed to automatically redact images of selected individuals in the surveillance video for protection. Of equal importance are the privacy and efficiency of techniques that, first, identify those individuals for privacy protection and, second, provide access to the original surveillance video content for security analysis. In this paper, we propose an anonymous subject identification and privacy information management system to be used in privacy-aware video surveillance. The anonymous subject identification system uses iris patterns to identify individuals for privacy protection. Anonymity of the iris-matching process is guaranteed through the use of a garbled-circuit (GC)-based iris matching protocol. A novel GC complexity reduction scheme is proposed by simplifying the iris masking process in the protocol. A user-centric privacy information management system is also proposed that allows subjects to anonymously access their privacy information via their iris patterns. The system is composed of two encrypted-domain protocols: the privacy information encryption protocol encrypts the original video records using the iris pattern acquired during the subject identification phase; the privacy information retrieval protocol allows the video records to be anonymously retrieved through a GC-based iris pattern matching process. Experimental results on a public iris biometric database demonstrate the validity of our framework.
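
    For context, the comparison that a GC-based iris matching protocol typically evaluates is a masked Hamming distance between iris codes. The sketch below shows only that plaintext metric, with assumed variable names and an assumed 0.32 threshold; in the proposed system this computation happens inside a garbled circuit so neither party learns the other's iris code.

```python
# Plaintext reference for iris matching via masked Hamming distance.
# The paper evaluates a comparison like this inside a garbled circuit;
# this sketch only shows the metric. Names and the 0.32 threshold are
# illustrative assumptions.

def masked_hamming_distance(code_a: int, code_b: int,
                            mask_a: int, mask_b: int) -> float:
    valid = mask_a & mask_b                      # bits usable in both codes
    if valid == 0:
        return 1.0                               # nothing comparable: worst score
    diff = (code_a ^ code_b) & valid             # disagreeing, valid bits
    return bin(diff).count("1") / bin(valid).count("1")

# Toy 16-bit iris codes; a real iris code has on the order of 2048 bits.
enrolled, probe = 0b1010_1100_1111_0001, 0b1010_0100_1111_0011
mask_e, mask_p = 0b1111_1111_1111_0000, 0b1111_1111_0111_1111

score = masked_hamming_distance(enrolled, probe, mask_e, mask_p)
print(score, score < 0.32)   # match if the distance is below a chosen threshold
```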

    Longitude: a privacy-preserving location sharing protocol for mobile applications

    Location sharing services are becoming increasingly popular. Although many location sharing services allow users to set up privacy policies to control who can access their location, the use that service providers themselves make of location data remains a source of concern. Ideally, location sharing providers and middleware should not be able to access users’ location data without their consent. In this paper, we propose a new location sharing protocol called Longitude that eases privacy concerns by making it possible to share a user’s location data blindly and allowing the user to control who can access her location, when, and to what degree of precision. The underlying cryptographic algorithms are designed for GPS-enabled mobile phones. We describe and evaluate our implementation for the Nexus One Android mobile phone.
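
    To make the "degree of precision" control concrete, here is a minimal sketch of snapping a GPS fix to a coarser grid before it is shared. The actual Longitude protocol additionally applies its cryptographic transforms so the provider never sees the coordinates; the grid sizes and function name below are assumptions for illustration.

```python
# Illustrative sketch of precision control: snap a GPS fix to a coarser grid
# before sharing. Longitude also encrypts the shared location so the provider
# never sees it; the cell sizes and the coarsen() name are assumptions.

import math

def coarsen(lat: float, lon: float, cell_deg: float) -> tuple[float, float]:
    """Snap a coordinate pair to the centre of a cell_deg x cell_deg grid cell."""
    snap = lambda x: (math.floor(x / cell_deg) + 0.5) * cell_deg
    return snap(lat), snap(lon)

fix = (52.2043, 0.1149)                      # precise GPS fix
print(coarsen(*fix, cell_deg=0.01))          # ~1 km cells for close friends
print(coarsen(*fix, cell_deg=0.1))           # ~10 km cells for acquaintances
```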

    Data Leak Detection As a Service: Challenges and Solutions

    We describe a network-based data-leak detection (DLD) technique, the main feature of which is that the detection does not require the data owner to reveal the content of the sensitive data. Instead, only a small set of specialized digests is needed. Our technique – referred to as the fuzzy fingerprint – can be used to detect accidental data leaks due to human errors or application flaws. The privacy-preserving feature of our algorithms minimizes the exposure of sensitive data and enables the data owner to safely delegate the detection to others. We describe how cloud providers can offer their customers data-leak detection as an add-on service with strong privacy guarantees. We perform extensive experimental evaluation on the privacy, efficiency, accuracy and noise tolerance of our techniques. Our evaluation results under various data-leak scenarios and setups show that our method can support accurate detection with a very small number of false alarms, even when the presentation of the data has been transformed. The results also indicate that the detection accuracy does not degrade when partial digests are used. We further provide a quantifiable method to measure the privacy guarantee offered by our fuzzy fingerprint framework.
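
    A minimal sketch of the fuzzy-fingerprint idea is shown below: the data owner releases only partial digests of sensitive n-grams, and the provider flags traffic whose digests land in the same fuzzy buckets. The shingle length, the number of masked bits, and the use of SHA-256 (the paper builds on Rabin fingerprints) are assumptions for illustration.

```python
# Sketch of fuzzy digests: the owner shares shingle fingerprints with their
# low-order bits dropped, and the DLD provider screens traffic against them.
# SHINGLE, FUZZY_BITS and the use of SHA-256 are illustrative assumptions.

import hashlib

SHINGLE, FUZZY_BITS = 8, 12          # n-gram length; low-order bits hidden

def digest(chunk: bytes) -> int:
    return int.from_bytes(hashlib.sha256(chunk).digest()[:8], "big")

def fuzzy_set(data: bytes) -> set[int]:
    """Fuzzy digests of all shingles: full fingerprints with low bits masked."""
    return {digest(data[i:i + SHINGLE]) >> FUZZY_BITS
            for i in range(len(data) - SHINGLE + 1)}

sensitive = b"SSN:123-45-6789 belongs to J. Doe"
traffic   = b"leaked record -> SSN:123-45-6789 <- found in outbound mail"

owner_digests = fuzzy_set(sensitive)            # shared with the DLD provider
suspect = fuzzy_set(traffic) & owner_digests    # provider-side screening
print(bool(suspect))                            # True: candidate leak to report
```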

    EsPRESSo: Efficient Privacy-Preserving Evaluation of Sample Set Similarity

    Electronic information is increasingly often shared among entities without complete mutual trust. To address related security and privacy issues, a few cryptographic techniques have emerged that support privacy-preserving information sharing and retrieval. One interesting open problem in this context involves two parties that need to assess the similarity of their datasets, but are reluctant to disclose their actual content. This paper presents an efficient and provably-secure construction supporting the privacy-preserving evaluation of sample set similarity, where similarity is measured as the Jaccard index. We present two protocols: the first securely computes the (Jaccard) similarity of two sets, and the second approximates it, using MinHash techniques, with lower complexities. We show that our novel protocols are attractive in many compelling applications, including document/multimedia similarity, biometric authentication, and genetic tests. In the process, we demonstrate that our constructions are appreciably more efficient than prior work. Comment: A preliminary version of this paper was published in the Proceedings of the 7th ESORICS International Workshop on Digital Privacy Management (DPM 2012). This is the full version, appearing in the Journal of Computer Security.
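
    The quantity the two protocols compute privately is the Jaccard index and its MinHash approximation; the plaintext reference below shows both. The salted SHA-256 hash family and the choice of k = 128 hash functions are illustrative choices, not the paper's parameters.

```python
# Plaintext reference for the quantity EsPRESSo computes privately: the exact
# Jaccard index and its MinHash estimate. Hash family and k are assumptions.

import hashlib

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def minhash_signature(items: set, k: int = 128) -> list[int]:
    h = lambda salt, x: int.from_bytes(
        hashlib.sha256(f"{salt}:{x}".encode()).digest()[:8], "big")
    return [min(h(i, x) for x in items) for i in range(k)]

def minhash_estimate(sig_a: list[int], sig_b: list[int]) -> float:
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

a = set(range(0, 80))
b = set(range(40, 120))
print(jaccard(a, b))                              # exact: 40/120 = 0.333...
print(minhash_estimate(minhash_signature(a),
                       minhash_signature(b)))     # ~0.33, up to sampling error
```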

    ODIN: Obfuscation-based privacy-preserving consensus algorithm for Decentralized Information fusion in smart device Networks

    The large spread of sensors and smart devices in urban infrastructures is motivating research in the area of the Internet of Things (IoT) to develop new services and improve citizens’ quality of life. Sensors and smart devices generate large amounts of measurement data from sensing the environment, which is used to enable services such as the control of power consumption or traffic density. To deal with such a large amount of information and provide accurate measurements, service providers can adopt information fusion, which, given the decentralized nature of urban deployments, can be performed by means of consensus algorithms. These algorithms allow distributed agents to (iteratively) compute linear functions on the exchanged data and take decisions based on the outcome, without the need for the support of a central entity. However, the use of consensus algorithms raises several security concerns, especially when private or security-critical information is involved in the computation. In this article we propose ODIN, a novel algorithm allowing information fusion over encrypted data. ODIN is a privacy-preserving extension of the popular consensus gossip algorithm, which prevents distributed agents from having direct access to the data while they iteratively reach consensus; agents cannot access even the final consensus value but can only retrieve partial information (e.g., a binary decision). ODIN uses efficient additive obfuscation and proxy re-encryption during the update steps and garbled circuits to make final decisions on the obfuscated consensus. We discuss the security of our proposal and show its practicability and efficiency on real-world resource-constrained devices, developing a prototype implementation for Raspberry Pi devices.
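
    The plaintext skeleton below shows the consensus-gossip loop that ODIN protects: agents repeatedly average pairwise and then take a threshold decision. In ODIN each update runs over additively obfuscated and re-encrypted data and the final decision is evaluated inside a garbled circuit, so no agent ever sees these values; the topology, iteration count, and 0.5 threshold are assumptions for illustration.

```python
# Plaintext skeleton of the consensus-gossip loop that ODIN protects. Here the
# agents average raw values; in ODIN each update operates on additively
# obfuscated / re-encrypted data and the final threshold decision is taken
# inside a garbled circuit. Readings, iteration count and threshold are
# illustrative assumptions.

import random

readings = [0.2, 0.9, 0.6, 0.4, 0.8]          # one sensed value per agent
state = readings[:]                            # local consensus state

for _ in range(200):                           # randomized pairwise gossip
    i, j = random.sample(range(len(state)), 2)
    avg = (state[i] + state[j]) / 2            # ODIN: done on obfuscated values
    state[i] = state[j] = avg

consensus = state[0]                           # all states are now ~equal
print(round(consensus, 3))                     # ~= mean(readings) = 0.58
print(consensus > 0.5)                         # binary decision (e.g., raise alarm)
```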
