8,034 research outputs found

    On Designing Resilient Location-Privacy Obfuscators

    The success of location-based services is growing together with the diffusion of GPS-equipped smart devices. As a consequence, privacy concerns are rising year by year. Location privacy has become a major interest in both research and industry, and many solutions have been proposed for it. One of the simplest and most flexible approaches is obfuscation, in which the precision of location data is artificially degraded before it is disclosed. In this paper, we present an obfuscation approach capable of dealing with measurement imprecision, multiple levels of privacy, untrusted servers and adversarial knowledge of the map. We estimate its resistance against statistical deobfuscation attacks, and we improve it by means of three techniques, namely extreme vectors, enlarge-and-scale and hybrid vectors.
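    Precision degradation of this kind can be as simple as snapping coordinates to a grid whose cell size sets the privacy level. A minimal sketch of the general idea (the function and parameter names are illustrative, not the paper's actual obfuscator):

```python
def obfuscate(lat, lon, cell_deg):
    """Degrade a coordinate by snapping it to the centre of a square
    grid cell of side cell_deg degrees.  A larger cell means a coarser,
    more private reported location (one "level of privacy")."""
    snap = lambda v: (v // cell_deg) * cell_deg + cell_deg / 2
    return snap(lat), snap(lon)
```

    Hardening techniques such as the paper's extreme vectors or enlarge-and-scale build on this basic idea to resist statistical deobfuscation; they are not reproduced here.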

    Location privacy-preserving task allocation for mobile crowdsensing with differential geo-obfuscation

    In traditional mobile crowdsensing applications, organizers need participants' precise locations for optimal task allocation, e.g., minimizing selected workers' travel distance to task locations. However, the exposure of their locations raises privacy concerns. Especially for those who are not eventually selected for any task, their location privacy is sacrificed in vain. Hence, in this paper, we propose a location privacy-preserving task allocation framework with geo-obfuscation to protect users' locations during task assignments. Specifically, we make participants obfuscate their reported locations under the guarantee of differential privacy, which can provide privacy protection regardless of adversaries' prior knowledge and without the involvement of any third-party entity. In order to achieve optimal task allocation with such differential geo-obfuscation, we formulate a mixed-integer non-linear programming problem to minimize the expected travel distance of the selected workers under the constraint of differential privacy. Evaluation results on both simulation and real-world user mobility traces show the effectiveness of our proposed framework. Particularly, our framework outperforms Laplace obfuscation, a state-of-the-art differential geo-obfuscation mechanism, by achieving 45% less average travel distance on the real-world data.
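    The Laplace baseline the framework is compared against can be sketched by perturbing each coordinate independently with noise of scale 1/ε, a common simplification of differential geo-obfuscation (the helper names are ours, and this is not the paper's mechanism):

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Draw zero-mean Laplace noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def geo_obfuscate(lat, lon, epsilon):
    """Report a location perturbed coordinate-wise with Laplace noise of
    scale 1/epsilon (in degrees): smaller epsilon, stronger privacy."""
    scale = 1.0 / epsilon
    return lat + laplace_noise(scale), lon + laplace_noise(scale)
```

    The expected distance between the true and reported location grows as ε shrinks, which is exactly the privacy/travel-distance tradeoff the allocation problem optimizes over.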

    Where you go is who you are -- A study on machine learning based semantic privacy attacks

    Concerns about data privacy are omnipresent, given the increasing usage of digital applications and their underlying business model that includes selling user data. Location data is particularly sensitive since it allows us to infer users' activity patterns and interests, e.g., by categorizing visited locations based on nearby points of interest (POIs). On top of that, machine learning methods provide new powerful tools to interpret big data. In light of these considerations, we raise the following question: What is the actual risk that realistic, machine learning based privacy attacks can obtain meaningful semantic information from raw location data, subject to inaccuracies in the data? In response, we present a systematic analysis of two attack scenarios, namely location categorization and user profiling. Experiments on the Foursquare dataset and tracking data demonstrate the potential for abuse of high-quality spatial information, leading to a significant privacy loss even with location inaccuracy of up to 200 m. With location obfuscation of more than 1 km, spatial information hardly adds any value, but a high privacy risk solely from temporal information remains. The availability of public context data such as POIs plays a key role in inference based on spatial information. Our findings point out the risks of ever-growing databases of tracking data and spatial context data, which policymakers should consider for privacy regulations, and which could guide individuals in their personal location protection measures.
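    The first attack scenario, location categorization, can be illustrated with the simplest possible classifier: label a coordinate with the category of its nearest POI. This is a toy stand-in for the machine-learning attacks the paper studies, and the data below is invented:

```python
def categorize(lat, lon, pois):
    """Assign a semantic label by nearest point of interest.
    pois is a list of (lat, lon, category) tuples drawn from public
    context data such as a POI database."""
    def sq_dist(p):
        return (p[0] - lat) ** 2 + (p[1] - lon) ** 2
    return min(pois, key=sq_dist)[2]
```

    With noise of a few hundred metres the nearest POI often stays the same, which gives some intuition for why the paper still finds significant privacy loss at up to 200 m of inaccuracy.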

    Semantics-aware obfuscation for location privacy

    The increasing availability of personal location data, pushed by the widespread use of location-sensing technologies, raises concerns with respect to the safeguard of location privacy. To address such concerns, location privacy-preserving techniques are being investigated. An important area of application for such techniques is represented by Location Based Services (LBS). Many privacy-preserving techniques designed for LBS are based on the idea of forwarding to the LBS provider obfuscated locations, namely position information at low spatial resolution, in place of actual users' positions. Obfuscation techniques are generally based on the use of geometric methods. In this paper, we argue that such methods can lead to the disclosure of sensitive location information and thus to privacy leaks. We thus propose a novel method which takes into account the semantic context in which users are located. The original contribution of the paper is the introduction of a comprehensive framework consisting of a semantics-aware obfuscation model, a novel algorithm for the generation of obfuscated spaces (for which we report results from an experimental evaluation), and a reference architecture.
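    The core idea, reporting a region rather than a point while accounting for what the region contains, can be sketched on a grid: grow the reported region until sensitive cells no longer dominate it. This only illustrates the principle, not the paper's generation algorithm (the grid model and threshold are ours):

```python
def semantic_obfuscate(cell, sensitive, threshold):
    """Grow a square region around `cell` (an (x, y) grid index) until
    the fraction of sensitive cells inside it falls below `threshold`.
    Assumes at least some neighbouring cells are non-sensitive."""
    r = 0
    while True:
        region = [(cell[0] + dx, cell[1] + dy)
                  for dx in range(-r, r + 1) for dy in range(-r, r + 1)]
        frac = sum(c in sensitive for c in region) / len(region)
        if frac < threshold:
            return region
        r += 1
```

    A purely geometric obfuscator would stop at a fixed radius even if the reported region consisted entirely of, say, a hospital; the semantic check is what closes that leak.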

    Local Obfuscation Mechanisms for Hiding Probability Distributions

    We introduce a formal model for the information leakage of probability distributions and define a notion called distribution privacy. Roughly, the distribution privacy of a local obfuscation mechanism means that the attacker cannot significantly gain any information on the distribution of the mechanism's input by observing its output. Then we show that existing local mechanisms can hide input distributions in terms of distribution privacy, while deteriorating the utility by adding too much noise. For example, we prove that the Laplace mechanism needs to add a large amount of noise proportional to the infinite Wasserstein distance between the two distributions we want to make indistinguishable. To improve the tradeoff between distribution privacy and utility, we introduce a local obfuscation mechanism, called a tupling mechanism, that adds random dummy data to the output. Then we apply this mechanism to the protection of user attributes in location-based services. By experiments, we demonstrate that the tupling mechanism outperforms popular local mechanisms in terms of attribute obfuscation and service quality. Comment: under submission.

    Local Obfuscation Mechanisms for Hiding Probability Distributions

    We introduce a formal model for the information leakage of probability distributions and define a notion called distribution privacy as the local differential privacy for probability distributions. Roughly, the distribution privacy of a local obfuscation mechanism means that the attacker cannot significantly gain any information on the distribution of the mechanism's input by observing its output. Then we show that existing local mechanisms can hide input distributions in terms of distribution privacy, while deteriorating the utility by adding too much noise. For example, we prove that the Laplace mechanism needs to add a large amount of noise proportional to the infinite Wasserstein distance between the two distributions we want to make indistinguishable. To improve the tradeoff between distribution privacy and utility, we introduce a local obfuscation mechanism, called a tupling mechanism, that adds random dummy data to the output. Then we apply this mechanism to the protection of user attributes in location-based services. By experiments, we demonstrate that the tupling mechanism outperforms popular local mechanisms in terms of attribute obfuscation and service quality. Comment: Full version of Proc. ESORICS 2019 (with a longer appendix).
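    A tupling-style mechanism can be sketched as reporting the true value hidden among random dummies, so an observer cannot tell which tuple element is real. This is only the basic shape of the idea, with illustrative parameters, not the paper's exact construction:

```python
import random

def tupling(value, k, domain, rng=random):
    """Report a shuffled k-tuple containing the true value and k-1
    dummy values drawn uniformly at random from the domain."""
    reported = [value] + [rng.choice(domain) for _ in range(k - 1)]
    rng.shuffle(reported)
    return tuple(reported)
```

    Because the true value is always present in the tuple, a service can still compute useful aggregates over it, which is how the mechanism trades a little utility for distribution privacy.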

    P2TA: Privacy-preserving task allocation for edge computing enhanced mobile crowdsensing

    The final publication is available at Elsevier via https://doi.org/10.1016/j.sysarc.2019.01.005. © 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
    In conventional mobile crowdsensing (MCS) applications, the crowdsensing server (CS-server) needs mobile users' precise locations for optimal task allocation, which raises privacy concerns. This paper proposes a privacy-preserving task allocation framework (called P2TA) for edge computing enhanced MCS, which focuses on optimizing the task acceptance rate while protecting participants' privacy by introducing edge nodes. The basic idea is that edge nodes act as task assignment agents whose privacy protection prevents an untrusted CS-server from accessing a user's private data. We begin with a thorough analysis of the limitations of typical task allocation and obfuscation schemes. On this basis, the optimization problem of location obfuscation and task allocation is formulated in consideration of privacy constraints, travel distance and the impact of location perturbation. Through problem decomposition, the location obfuscation subproblem is modeled as a leader-follower game between the designer of the location obfuscation mechanism and the potential attacker. Against inference attacks with background knowledge, a genetic algorithm is introduced to initialize an obfuscation matrix. With the matrix, an edge node makes task allocation decisions that maximize the task acceptance rate subject to differential and distortion privacy constraints. The effectiveness and superiority of P2TA compared to existing task allocation schemes are validated via extensive simulations.
    The authors gratefully acknowledge the support and financial assistance provided by the National Natural Science Foundation of China under Grant Nos. 61502230, 61501224 and 61073197, the Natural Science Foundation of Jiangsu Province under Grant No. BK20150960, the National Key R&D Program of China under Grant No. 2018YFC0808500, the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant No. 15KJB520015, and the Nanjing Municipal Science and Technology Plan Project under Grant No. 201608009.
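    An obfuscation matrix of the kind the genetic algorithm initializes maps each true region to a probability distribution over reported regions; reporting is then a single categorical draw. A minimal sketch of that reporting step, not P2TA itself:

```python
import random

def report_region(true_idx, matrix, rng=random):
    """Sample a reported region index from row true_idx of an
    obfuscation matrix, where matrix[i][j] = P(report j | true region i).
    Privacy constraints (e.g. differential privacy) bound how different
    any two rows of the matrix may be."""
    row = matrix[true_idx]
    return rng.choices(range(len(row)), weights=row)[0]
```

    The leader-follower game then tunes the matrix entries: the designer picks the rows, anticipating an attacker who infers the true region from the reported one using background knowledge.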

    Location obfuscation and distance-based attacks on private trajectories: an experimental evaluation on real trajectory data sets

    With the wide availability of GPS-enabled mobile devices, spatio-temporal data is being collected and stored for providing location-based services or for data analytics. Location-based advertisement, market research, and data mining are just some of the motivations for collecting spatio-temporal data. However, location data is also very sensitive since it may reveal information about the data subject such as his/her political views, religion, state of health, and various personal preferences, which is considered private information. One of the approaches to protect sensitive location data is obfuscation. In this thesis, we have implemented two location obfuscation techniques and performed an analytical and experimental study to investigate how effective they are against a state-of-the-art attack algorithm designed for spatio-temporal data. In the attack scenario, given a set of known trajectories and a distance matrix composed of known pairwise distances between trajectories, the adversary tries to approximate the target trajectory and then extract information about the absence or presence of the trajectory in a given area. We used obfuscation techniques to hide information around predefined sensitive places such as hospitals and medical centers, and then ran the attack on the obfuscated data. Experimental results show that the applied obfuscation methods do not help protect the privacy of users in sensitive areas in the case of spatio-temporal trajectories that follow a regular pattern. We observed that the attack method works successfully because the obfuscation techniques do not scatter the sensitive points far enough from sensitive places and the linearity of the trajectory is preserved.
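    The failure mode described above can be made concrete with a simple diagnostic: measure how far the obfuscated points actually scatter from the sensitive place. If that radius stays small, an attacker can still localise the visit. This is a toy check, not the thesis's attack algorithm:

```python
import math

def scatter_radius(points, centre):
    """Maximum Euclidean distance of obfuscated points from a sensitive
    place.  A small value means the obfuscation failed to move the
    trajectory's sensitive portion away from the place it should hide."""
    return max(math.dist(p, centre) for p in points)
```

    A complementary check would test whether the obfuscated points still lie close to a straight line, since preserved linearity is the other property the attack exploits.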