11 research outputs found

    Differentially Private Location Privacy in Practice

    With the wide adoption of handheld devices (e.g. smartphones, tablets), a large number of location-based services (LBSs) have flourished, providing mobile users with real-time and contextual information on the move. Given the amount of location information users hand over, these services are able to track users wherever they go and to learn sensitive information about them (e.g. their points of interest, including home, work, and regularly visited religious or political places). A number of solutions have been proposed in the past few years to protect users' location information while still allowing them to enjoy geo-located services. Among the most robust are those that apply the popular notion of differential privacy to location privacy (e.g. Geo-Indistinguishability), promising strong theoretical privacy guarantees with a bounded accuracy loss. While these theoretical guarantees are attractive, it may be difficult for end users or practitioners to assess their effectiveness in the wild. In this paper, we carry out a practical study using real mobility traces from two different datasets to assess the ability of Geo-Indistinguishability to protect users' points of interest (POIs). We show that a curious LBS collecting obfuscated location information sent by mobile users is still able to infer most of the users' POIs with reasonable geographic and semantic precision. This precision depends on the degree of obfuscation applied by Geo-Indistinguishability, which, however, also affects the overhead incurred on mobile devices, resulting in a privacy versus overhead trade-off. Finally, we show that POIs constitute a quasi-identifier for mobile users and that obfuscating them using Geo-Indistinguishability is not sufficient, as an attacker is able to re-identify at least 63% of them despite a high degree of obfuscation.
    Comment: In Proceedings of the Third Workshop on Mobile Security Technologies (MoST) 2014 (http://arxiv.org/abs/1410.6674)
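
    For context, Geo-Indistinguishability perturbs each reported location with planar Laplace noise calibrated to a privacy parameter epsilon. The snippet below is a minimal Python sketch of that mechanism, assuming a local planar frame in metres rather than geodesic coordinates; the function names and the example epsilon value are illustrative and are not taken from the paper above.

# Minimal sketch of the planar Laplace mechanism used to implement
# Geo-Indistinguishability. Assumes coordinates in a local planar frame
# (metres); illustrative only.
import math
import random

from scipy.special import lambertw  # the W_{-1} branch is needed below


def planar_laplace_noise(epsilon: float) -> tuple[float, float]:
    """Draw a 2D offset whose radius follows the polar Laplace distribution
    with privacy parameter epsilon (in 1/metres)."""
    theta = random.uniform(0.0, 2.0 * math.pi)   # uniform direction
    p = random.random()                          # uniform in [0, 1)
    # Inverse CDF of the radial component, via the Lambert W function (branch -1).
    r = -(lambertw((p - 1.0) / math.e, k=-1).real + 1.0) / epsilon
    return r * math.cos(theta), r * math.sin(theta)


def obfuscate(x: float, y: float, epsilon: float) -> tuple[float, float]:
    """Report a noisy location instead of the true one."""
    dx, dy = planar_laplace_noise(epsilon)
    return x + dx, y + dy


if __name__ == "__main__":
    # Example: epsilon = ln(2) / 200 roughly corresponds to a factor-2
    # indistinguishability guarantee within a 200 m radius.
    print(obfuscate(0.0, 0.0, epsilon=math.log(2) / 200.0))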

    Privacy-preserving Publication of Mobility Data with High Utility

    An increasing amount of mobility data is collected every day by different means, e.g., by mobile phone operators. These data are sometimes published after the application of simple anonymization techniques, which can lead to severe privacy threats. We propose in this paper a new solution whose novelty is twofold. First, we introduce an algorithm designed to hide the places where a user stops during her journey (namely, her points of interest) by enforcing a constant speed along her trajectory. Second, we leverage the places where users meet as opportunities to swap their trajectories and thereby confuse an attacker.
    Comment: 2015 35th IEEE International Conference on Distributed Computing Systems
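
    The constant-speed idea above can be illustrated with a rough Python sketch: resample the trajectory at evenly spaced points along its path and spread timestamps uniformly between departure and arrival, so that stops no longer stand out as clusters of slow points. This is an assumed simplification only; the paper's actual algorithm, and the trajectory-swapping step, are more involved, and all names below are illustrative.

# Rough sketch of constant-speed smoothing: stops (points of interest) are
# hidden because the resampled trace moves at a uniform pace from start to end.
import numpy as np


def smooth_constant_speed(points: np.ndarray, t_start: float, t_end: float,
                          n_samples: int) -> tuple[np.ndarray, np.ndarray]:
    """points: (N, 2) planar coordinates in visit order.
    Returns (n_samples, 2) resampled positions and their uniform timestamps."""
    seg = np.diff(points, axis=0)
    seg_len = np.hypot(seg[:, 0], seg[:, 1])
    keep = np.concatenate([[True], seg_len > 0])       # drop exact-duplicate fixes (stops)
    points, seg_len = points[keep], seg_len[seg_len > 0]
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])  # distance travelled along the path
    targets = np.linspace(0.0, cum[-1], n_samples)     # evenly spaced distances
    xs = np.interp(targets, cum, points[:, 0])
    ys = np.interp(targets, cum, points[:, 1])
    times = np.linspace(t_start, t_end, n_samples)     # uniform timing => constant speed
    return np.column_stack([xs, ys]), times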

    Privacy-Aware Adversarial Network in Human Mobility Prediction

    As mobile devices and location-based services are increasingly deployed in different smart-city scenarios and applications, many unexpected privacy leakages have arisen from geolocated data collection and sharing. User re-identification and other sensitive inferences are major privacy threats when geolocated data are shared with cloud-assisted applications. Notably, four spatio-temporal points are enough to uniquely identify 95% of individuals, which exacerbates personal information leakage. To counter threats such as user re-identification, we propose an LSTM-based adversarial mechanism with representation learning that produces a privacy-preserving feature representation of the original geolocated data (i.e., mobility data) for sharing purposes. These representations aim to maximally reduce the chance of user re-identification and full data reconstruction with a minimal utility budget (i.e., loss). We train the mechanism by quantifying the privacy-utility trade-off of mobility datasets in terms of trajectory reconstruction risk, user re-identification risk and mobility predictability. We report an exploratory analysis that enables the user to assess this trade-off with a specific loss function and its weight parameters. Extensive comparisons on four representative mobility datasets demonstrate the superiority of the proposed architecture in mobility privacy protection and the efficiency of the proposed privacy-preserving feature extractor. We show that the privacy of mobility traces attains decent protection at the cost of marginal mobility utility. Our results also show that, by exploring the Pareto-optimal setting, we can simultaneously increase both privacy (by 45%) and utility (by 32%).
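
    As a schematic illustration of the adversarial objective sketched in this abstract, one might pair an LSTM encoder with a utility head (next-location prediction) and an adversary head (user re-identification) trained with opposing losses. The PyTorch code below is an assumed, simplified rendering: layer sizes, the trade-off weight and all names are illustrative and do not come from the paper.

# Schematic adversarial representation learning for mobility data (illustrative).
import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, n_locations, d_emb=64, d_hidden=128):
        super().__init__()
        self.emb = nn.Embedding(n_locations, d_emb)
        self.lstm = nn.LSTM(d_emb, d_hidden, batch_first=True)

    def forward(self, traj):              # traj: (batch, seq_len) location ids
        _, (h, _) = self.lstm(self.emb(traj))
        return h[-1]                      # (batch, d_hidden) shared representation


encoder = Encoder(n_locations=10_000)
utility_head = nn.Linear(128, 10_000)     # next-location prediction (utility)
adversary = nn.Linear(128, 500)           # user re-identification head (assume 500 users)

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(utility_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
lam = 1.0                                 # assumed privacy/utility trade-off weight


def training_step(traj, next_loc, user_id):
    # 1) Train the adversary to re-identify users from the frozen representation.
    z = encoder(traj).detach()
    adv_loss = ce(adversary(z), user_id)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Train encoder + utility head: keep mobility predictable while making
    #    re-identification hard (maximise the adversary's loss).
    z = encoder(traj)
    loss = ce(utility_head(z), next_loc) - lam * ce(adversary(z), user_id)
    opt_main.zero_grad()
    loss.backward()
    opt_main.step()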

    Enhanced Internet Mobility and Privacy Using Public Cloud


    PULP: Achieving Privacy and Utility Trade-off in User Mobility Data

    Leveraging location information in location-based services improves service utility through geo-contextualization. However, it also raises privacy concerns, as new knowledge can be inferred from location records, such as a user's home and work places or personal habits. Although Location Privacy Protection Mechanisms (LPPMs) provide a means to tackle this problem, they often require manual configuration, posing significant challenges to service providers and users, and their impact on data privacy and utility is seldom assessed. In this paper, we present PULP, a model-driven system that automatically provides user-specific privacy protection and contributes to service utility by choosing an adequate LPPM and configuring it. At the heart of PULP are nonlinear models that capture the complex dependency of data privacy and utility for each individual user under each LPPM considered, i.e., Geo-Indistinguishability and Promesse. Based on users' preferences on privacy and utility, PULP efficiently recommends a suitable LPPM and the corresponding configuration. We evaluate the accuracy of PULP's models and its effectiveness in achieving the privacy-utility trade-off per user, using four real-world mobility traces of 770 users in total. Our extensive experimentation shows that PULP preserves the contribution to the location service while adhering to privacy constraints for a large percentage of users, and that it is orders of magnitude faster than non-model-based alternatives.
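
    The abstract does not spell out the form of PULP's nonlinear models, so the sketch below only illustrates the general idea under an assumed model: fit a saturating curve of a utility (or privacy) metric against an LPPM parameter, such as Geo-Indistinguishability's epsilon, per user, from a handful of measured configurations. The model form, parameter values and toy measurements are all assumptions, not PULP's actual models.

# Illustrative per-user model fitting; the curve form is an assumption.
import numpy as np
from scipy.optimize import curve_fit


def saturating(eps, a, b):
    """Assumed model form: the metric rises from 0 and saturates at a."""
    return a * (1.0 - np.exp(-b * eps))


def fit_metric(eps_values, measured):
    """Fit the assumed curve to (epsilon, metric) measurements of one user."""
    params, _ = curve_fit(saturating, eps_values, measured, p0=[1.0, 1.0])
    return params  # (a, b)


# Toy measurements for illustration only (not results from the paper).
eps = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
utility = np.array([0.22, 0.40, 0.63, 0.86, 0.98])
a, b = fit_metric(eps, utility)
print(f"fitted utility model: {a:.2f} * (1 - exp(-{b:.2f} * eps))")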

    Automatic Privacy and Utility Preservation of Mobility Data: A Nonlinear Model-Based Approach

    The widespread use of mobile devices and location-based services has generated a large number of mobility databases. While processing these data is highly valuable, privacy issues can occur if personal information is revealed. Prior work has investigated ways to protect mobility data by providing a wide range of Location Privacy Protection Mechanisms (LPPMs). However, the privacy level of the protected data varies significantly depending on the protection mechanism used, its configuration and the characteristics of the mobility data. Meanwhile, the protected data still need to enable useful processing. To tackle these issues, we present PULP, a framework that finds a suitable protection mechanism and automatically configures it for each user in order to achieve user-defined objectives in terms of both privacy and utility. PULP uses nonlinear models to capture the impact of each LPPM on data privacy and utility levels. We evaluate our framework with two protection mechanisms from the literature and four real-world mobility datasets. The results show the efficiency of PULP, as well as its robustness and adaptability. Comparisons with state-of-the-art LPPM configurators further illustrate that PULP better realizes users' objectives and that its computation time is orders of magnitude faster.
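
    Continuing the assumed sketch given after the previous PULP abstract: once per-user privacy and utility curves have been fitted, a configurator can invert them to recommend a configuration that meets a user-defined objective. Again, this is an illustrative simplification, not PULP's actual configurator; the decreasing privacy curve and the grid search are assumptions.

# Illustrative configurator: pick the most private configuration that still
# meets the user's minimum-utility objective.
import numpy as np


def recommend_epsilon(utility_model, privacy_model, utility_min, eps_grid):
    """Among configurations meeting the minimum-utility objective, return the
    one with the highest predicted privacy (None if none is feasible)."""
    feasible = [e for e in eps_grid if utility_model(e) >= utility_min]
    if not feasible:
        return None
    return max(feasible, key=privacy_model)


# Usage with curves of the assumed form from the previous sketch: utility grows
# with epsilon, privacy shrinks (smaller epsilon = more noise = more privacy).
utility_model = lambda e: 1.0 - np.exp(-0.5 * e)
privacy_model = lambda e: np.exp(-0.5 * e)
eps = recommend_epsilon(utility_model, privacy_model,
                        utility_min=0.7, eps_grid=np.linspace(0.1, 10.0, 200))
print(f"recommended epsilon: {eps:.2f}")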

    The Long Road to Computational Location Privacy: A Survey

    The widespread adoption of continuously connected smartphones and tablets has driven the usage of mobile applications, many of which use location to provide geolocated services. These services offer new prospects for users: getting directions to work in the morning, checking in at a restaurant at noon and looking up the next day's weather in the evening are all possible from any mobile device embedding a GPS chip. In these location-based applications, the user's location is sent to a server, which uses it to provide contextual and personalised answers. However, nothing prevents the server from gathering, analysing and possibly sharing the collected information, which opens the door to many privacy threats. Indeed, mobility data can reveal sensitive information about users, such as their home, workplace or even religious and political preferences. For this reason, many privacy-preserving mechanisms have been proposed in recent years to enhance location privacy while using geolocated services. This article surveys and organises contributions in this area, from classical building blocks to the most recent developments in privacy threats and location privacy-preserving mechanisms. We divide the protection mechanisms between online and offline use cases, and organise them into six categories depending on the nature of their algorithm. Moreover, this article surveys the evaluation metrics used to assess protection mechanisms in terms of privacy, utility and performance. Finally, open challenges and new directions to address the problem of computational location privacy are pointed out and discussed.

    Context and Semantic Aware Location Privacy

    With ever-increasing computational power and improved sensing and communication capabilities, smart devices have altered and enhanced the way we process, perceive and interact with information. Personal and contextual data are tracked and stored extensively on these devices and, oftentimes, ubiquitously sent to online service providers. This routine is proving to be quite privacy-invasive, since these service providers mine the data they collect in order to infer more and more personal information about users. Protecting privacy amid the rise of mobile applications is a critical challenge. The continuous tracking of users with location- and time-stamps exposes their private lives at an alarming level. Location traces can be used to infer intimate aspects of users' lives, such as interests, political orientation and religious beliefs. Traditional approaches to protecting privacy fail to meet users' expectations due to simplistic adversary models and the lack of multi-dimensional awareness. In this thesis, the development of privacy-protection approaches is pushed further by (i) adapting to concrete adversary capabilities and (ii) investigating the threat of strong adversaries that exploit location semantics. We first study user mobility and spatio-temporal correlations in continuous disclosure scenarios (e.g., sensing applications), where the more frequently a user discloses her location, the more difficult it becomes to protect her privacy. To counter this threat, we develop adversary- and mobility-aware privacy protection mechanisms that aim to minimize an adversary's exploitation of user mobility. We demonstrate that a privacy protection mechanism must actively evaluate privacy risks in order to adapt its protection parameters. We further develop an Android library that provides on-device location privacy evaluation and enables any location-based application to support privacy-preserving services. We also implement an adversary-aware protection mechanism in this library with semantic-based privacy settings. Furthermore, we study the effects of an adversary that exploits location semantics in order to strengthen his attacks on user traces. Such extensive information is available to an adversary via maps of points of interest, but also from users themselves: users of online social networks typically want to announce their whereabouts to their circles, and they do so mostly, if not always, by sharing the type of their location along with its geographical coordinates. We formalize this setting and, using Bayesian inference, show that if the location semantics of traces are disclosed, users' privacy levels drop considerably. Moreover, we study time-of-day information and its relation to location semantics, revealing that an adversary can breach privacy further by exploiting the time-dependency of semantics. We implement and evaluate a sensitivity-aware protection mechanism in this setting as well. The battle for privacy requires social awareness and the will to win it; however, the slow progress on the legal and regulatory front increases the need for technological solutions. This thesis concludes that we still have a long way to go to establish privacy-enhancing technologies in our age of information. Our findings open up new avenues for a more expeditious understanding of privacy risks and thus their prevention.
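
    The Bayesian argument mentioned above can be illustrated with a toy example: an adversary who, besides an obfuscated report, also observes a disclosed semantic tag (e.g. "restaurant") can sharpen the posterior over candidate true locations. Every number and name below is a made-up toy value used only to illustrate the update, not data or results from the thesis.

# Toy Bayesian update showing how disclosed location semantics sharpen an
# adversary's posterior over candidate true locations (illustrative only).
import numpy as np

# Candidate true locations with a prior (e.g. from population density) and a
# semantic label taken from a map of points of interest.
candidates = ["home_A", "restaurant_B", "office_C", "restaurant_D"]
prior = np.array([0.4, 0.2, 0.3, 0.1])
semantics = ["home", "restaurant", "office", "restaurant"]

# Likelihood of the observed (obfuscated) coordinates under each candidate,
# e.g. a noise density evaluated at the reported point (toy values).
p_obs_given_loc = np.array([0.20, 0.25, 0.15, 0.30])


def posterior(use_semantics: bool, disclosed_tag: str = "restaurant") -> np.ndarray:
    likelihood = p_obs_given_loc.copy()
    if use_semantics:
        # The disclosed tag rules out candidates whose semantics do not match.
        likelihood = likelihood * np.array([1.0 if s == disclosed_tag else 0.0
                                            for s in semantics])
    post = prior * likelihood
    return post / post.sum()


print("without semantics:", dict(zip(candidates, posterior(False).round(2))))
print("with semantics:   ", dict(zip(candidates, posterior(True).round(2))))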