
    Constructing elastic distinguishability metrics for location privacy

    With the increasing popularity of hand-held devices, location-based applications and services have access to accurate and real-time location information, raising serious privacy concerns for their users. The recently introduced notion of geo-indistinguishability tries to address this problem by adapting the well-known concept of differential privacy to the area of location-based systems. Although geo-indistinguishability presents various appealing aspects, it has the problem of treating space in a uniform way, imposing the addition of the same amount of noise everywhere on the map. In this paper we propose a novel elastic distinguishability metric that warps the geometrical distance, capturing the different degrees of density of each area. As a consequence, the obtained mechanism adapts the level of noise while achieving the same degree of privacy everywhere. We also show how such an elastic metric can easily incorporate the concept of a "geographic fence" that is commonly employed to protect the highly recurrent locations of a user, such as his home or work. We perform an extensive evaluation of our technique by building an elastic metric for the wide metropolitan area of Paris, using semantic information from the OpenStreetMap database. We compare the resulting mechanism against the Planar Laplace mechanism satisfying standard geo-indistinguishability, using two real-world datasets from the Gowalla and Brightkite location-based social networks. The results show that the elastic mechanism adapts well to the semantics of each area, adjusting the noise as we move outside the city center, hence offering better overall privacy.
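    The Planar Laplace mechanism used above as the baseline has a well-known sampling procedure (Andres et al., CCS 2013): draw a uniformly random direction, then draw the radius via the inverse CDF of the radial distribution, which involves the Lambert W function. Below is a minimal Python sketch; the coordinate units and the eps value in the usage lines are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.special import lambertw

def planar_laplace_noise(eps, rng=None):
    """Draw one planar Laplace noise vector for eps-geo-indistinguishability."""
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)   # direction: uniform on the circle
    p = rng.uniform(0.0, 1.0)               # quantile for the radial distance
    # Inverse CDF of the radius uses the k=-1 branch of the Lambert W function;
    # the argument (p-1)/e lies in [-1/e, 0), where that branch is real-valued.
    r = -(1.0 / eps) * (np.real(lambertw((p - 1.0) / np.e, k=-1)) + 1.0)
    return r * np.cos(theta), r * np.sin(theta)

# usage: obfuscate a location given in planar coordinates (here, meters)
x, y = 1500.0, 2300.0
dx, dy = planar_laplace_noise(eps=0.01)     # eps in 1/meters (assumed unit)
print(x + dx, y + dy)
```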

    Privacy-Preserving Vehicle Assignment for Mobility-on-Demand Systems

    Urban transportation is being transformed by mobility-on-demand (MoD) systems. One of the goals of MoD systems is to provide personalized transportation services to passengers. This process is facilitated by a centralized operator that coordinates the assignment of vehicles to individual passengers, based on location data. However, current approaches assume that accurate positioning information for passengers and vehicles is readily available. This assumption raises privacy concerns. In this work, we address this issue by proposing a method that protects passengers' drop-off locations (i.e., their travel destinations). Formally, we solve a batch assignment problem that routes vehicles at obfuscated origin locations to passenger locations (since origin locations correspond to previous drop-off locations), such that the mean waiting time is minimized. Our main contributions are two-fold. First, we formalize the notion of privacy for continuous vehicle-to-passenger assignment in MoD systems, and integrate a privacy mechanism that provides formal guarantees. Second, we present a scalable algorithm that takes advantage of superfluous (idle) vehicles in the system, combining multiple iterations of the Hungarian algorithm to allocate a redundant number of vehicles to a single passenger. As a result, we are able to reduce the performance deterioration induced by the privacy mechanism. We evaluate our methods on a real, large-scale data set consisting of over 11 million taxi rides (specifying vehicle availability and passenger requests), recorded over a month's duration, in the area of Manhattan, New York. Our work demonstrates that privacy can be integrated into MoD systems without incurring a significant loss of performance, and moreover, that this loss can be further minimized at the cost of deploying additional (redundant) vehicles into the fleet.

    Comment: 8 pages; submitted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 201
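    The redundant-assignment idea described above can be sketched as repeated rounds of the Hungarian algorithm over a shrinking pool of idle vehicles. The Python sketch below is a hedged illustration under assumed inputs (a dense travel-time matrix and a fixed redundancy level); the function name and parameters are ours, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def redundant_assignment(cost, redundancy=2):
    """Assign up to `redundancy` vehicles per passenger by repeating the
    Hungarian algorithm, removing matched vehicles between rounds.
    cost[i, j]: travel time from vehicle i (obfuscated origin) to passenger j."""
    assignments = {j: [] for j in range(cost.shape[1])}
    remaining = np.arange(cost.shape[0])        # indices of still-idle vehicles
    for _ in range(redundancy):
        if remaining.size < cost.shape[1]:      # not enough idle vehicles left
            break
        rows, cols = linear_sum_assignment(cost[remaining])
        for r, j in zip(rows, cols):
            assignments[j].append(int(remaining[r]))
        remaining = np.delete(remaining, rows)  # matched vehicles leave the pool
    return assignments

# usage: 5 idle vehicles, 2 passengers, 2 vehicles dispatched per passenger
rng = np.random.default_rng(0)
print(redundant_assignment(rng.uniform(1, 10, size=(5, 2)), redundancy=2))
```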

    On the Measurement of Privacy as an Attacker's Estimation Error

    A wide variety of privacy metrics have been proposed in the literature to evaluate the level of protection offered by privacy-enhancing technologies. Most of these metrics are specific to concrete systems and adversarial models, and are difficult to generalize or translate to other contexts. Furthermore, a better understanding of the relationships between the different privacy metrics is needed to enable a more grounded and systematic approach to measuring privacy, as well as to assist system designers in selecting the most appropriate metric for a given application. In this work we propose a theoretical framework for privacy-preserving systems, endowed with a general definition of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. We show that our framework permits interpreting and comparing a number of well-known metrics under a common perspective. The arguments behind these interpretations are based on fundamental results from the theories of information, probability, and Bayes decision.

    Comment: This paper has 18 pages and 17 figures
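    In a hedged reading of this abstract, the framework's core quantity is the attacker's minimum expected estimation error, i.e., a Bayes risk over all estimators of the secret from the observation. The symbols below are our assumptions, chosen to match the abstract's description:

```latex
% Illustrative formalization: X is the private information, Y the attacker's
% observation, d a distortion measure, and \psi ranges over estimators.
\mathcal{P} \;=\; \inf_{\psi}\; \mathbb{E}_{X,Y}\!\left[\, d\bigl(X,\ \psi(Y)\bigr) \,\right]
```

    A larger value means that every attacker strategy incurs a larger expected error, i.e., more privacy; under this reading, specific metrics correspond to specific choices of d and of the attacker's knowledge.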

    Rethinking Location Privacy for Unknown Mobility Behaviors

    Location Privacy-Preserving Mechanisms (LPPMs) in the literature largely assume that the users' data available for training wholly characterizes their mobility patterns. Thus, they hardwire this information in their designs and evaluate their privacy properties with these same data. In this paper, we aim to understand the impact of this decision on the level of privacy these LPPMs may offer in real life, when the users' mobility data may differ from the data used in the design phase. Our results show that, in many cases, training data does not capture users' behavior accurately and, thus, the level of privacy provided by the LPPM is often overestimated. To address this gap between theory and practice, we propose to use blank-slate models for LPPM design. Contrary to the hardwired approach, which assumes known user behavior, blank-slate models learn the users' behavior from the queries to the service provider. We leverage this blank-slate approach to develop a new family of LPPMs that we call Profile Estimation-Based LPPMs. Using real data, we empirically show that our proposal outperforms optimal state-of-the-art mechanisms designed on sporadic hardwired models. In non-sporadic location privacy scenarios, our method is only better if the usage of the location privacy service is not continuous. It is our hope that eliminating the need to bootstrap the mechanisms with training data, and ensuring that the mechanisms are lightweight and easy to compute, will help foster the integration of location privacy protections in deployed systems.
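    A blank-slate model, as described above, starts with no prior mobility profile and estimates one from the queries the service provider actually observes. The sketch below shows one simple way such estimation could work (empirical counts with additive smoothing); the grid size and update rule are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def update_profile(counts, observed_cell, smoothing=1.0):
    """Blank-slate style profile estimation (illustrative, not the authors'
    code): accumulate per-cell visit counts from observed queries and derive
    a smoothed mobility profile, so unseen cells keep nonzero probability."""
    counts[observed_cell] += 1
    return (counts + smoothing) / (counts.sum() + smoothing * counts.size)

# usage: start from a blank slate (all-zero counts) and refine online
counts = np.zeros(100)            # hypothetical 10x10 location grid, flattened
for cell in (3, 3, 17, 42, 3):    # cells observed in successive queries
    profile = update_profile(counts, cell)
print(profile[3], profile[99])    # frequently visited cell vs. unseen cell
```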

    Preventing Location-Based Identity Inference in Anonymous Spatial Queries

    The increasing trend of embedding positioning capabilities (for example, GPS) in mobile devices facilitates the widespread use of Location-Based Services. For such applications to succeed, privacy and confidentiality are essential. Existing privacy-enhancing techniques rely on encryption to safeguard communication channels, and on pseudonyms to protect user identities. Nevertheless, the query contents may disclose the physical location of the user. In this paper, we present a framework for preventing location-based identity inference of users who issue spatial queries to Location-Based Services. We propose transformations based on the well-established K-anonymity concept to compute exact answers for range and nearest neighbor search, without revealing the query source. Our methods optimize the entire process of anonymizing the requests and processing the transformed spatial queries. Extensive experimental studies suggest that the proposed techniques are applicable to real-life scenarios with numerous mobile users.
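    A minimal sketch of the K-anonymity spatial cloaking idea: enlarge the query region until it contains at least K users, so the Location-Based Service cannot tell which of them issued the query. The square-growing strategy and parameters below are illustrative assumptions; the paper's transformations are more elaborate.

```python
def cloak_region(user_xy, all_users_xy, k, half=0.001):
    """Toy K-anonymity spatial cloak (illustrative sketch): grow a square
    around the querying user until it covers at least k users, then issue
    the range/nearest-neighbor query for the region, not the exact point.
    Assumes all_users_xy (including the querying user) has >= k points."""
    assert len(all_users_xy) >= k, "need at least k users to cloak among"
    x, y = user_xy
    while True:
        covered = sum(1 for ux, uy in all_users_xy
                      if abs(ux - x) <= half and abs(uy - y) <= half)
        if covered >= k:
            return (x - half, y - half, x + half, y + half)
        half *= 2.0               # double the half-width until k users inside

# usage: cloak a user among at least 3 of 5 nearby users
users = [(0.0, 0.0), (0.01, 0.0), (0.0, 0.02), (0.5, 0.5), (1.0, 1.0)]
print(cloak_region((0.0, 0.0), users, k=3))
```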