Privacy-Preserving Vehicle Assignment for Mobility-on-Demand Systems
Urban transportation is being transformed by mobility-on-demand (MoD)
systems. One of the goals of MoD systems is to provide personalized
transportation services to passengers. This process is facilitated by a
centralized operator that coordinates the assignment of vehicles to individual
passengers, based on location data. However, current approaches assume that
accurate positioning information for passengers and vehicles is readily
available. This assumption raises privacy concerns. In this work, we address
this issue by proposing a method that protects passengers' drop-off locations
(i.e., their travel destinations). Formally, we solve a batch assignment
problem that routes vehicles at obfuscated origin locations to passenger
locations (since origin locations correspond to previous drop-off locations),
such that the mean waiting time is minimized. Our main contributions are
two-fold. First, we formalize the notion of privacy for continuous
vehicle-to-passenger assignment in MoD systems, and integrate a privacy
mechanism that provides formal guarantees. Second, we present a scalable
algorithm that takes advantage of superfluous (idle) vehicles in the system,
combining multiple iterations of the Hungarian algorithm to allocate a
redundant number of vehicles to a single passenger. As a result, we are able to
reduce the performance deterioration induced by the privacy mechanism. We
evaluate our methods on a real, large-scale data set consisting of over 11
million taxi rides (specifying vehicle availability and passenger requests),
recorded over a month's duration, in the area of Manhattan, New York. Our work
demonstrates that privacy can be integrated into MoD systems without incurring
a significant loss of performance, and moreover, that this loss can be further
minimized at the cost of deploying additional (redundant) vehicles into the
fleet. Comment: 8 pages; Submitted to IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS), 201
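To make the redundant-assignment idea above concrete, here is a minimal Python sketch, assuming a precomputed matrix of estimated waiting times between (obfuscated) vehicle origins and passenger locations. It runs the Hungarian algorithm repeatedly (via SciPy's linear_sum_assignment), removing matched vehicles after each round, so that every passenger accumulates up to a chosen number of candidate vehicles. The redundancy parameter, the cost model, and the per-round removal policy are illustrative assumptions, not the paper's exact algorithm.

# Sketch: redundant vehicle-to-passenger assignment via repeated Hungarian rounds.
import numpy as np
from scipy.optimize import linear_sum_assignment

def redundant_assignment(cost, redundancy=2):
    """cost: (n_vehicles, n_passengers) matrix of estimated waiting times,
    computed from obfuscated vehicle origins. Returns {passenger: [vehicles]}."""
    available = np.arange(cost.shape[0])
    assignment = {p: [] for p in range(cost.shape[1])}
    for _ in range(redundancy):
        if len(available) < cost.shape[1]:
            break  # not enough idle vehicles left for another full round
        rows, cols = linear_sum_assignment(cost[available])
        for r, p in zip(rows, cols):
            assignment[p].append(int(available[r]))
        available = np.delete(available, rows)  # matched vehicles leave the idle pool
    return assignment

# Example: 5 idle vehicles, 2 passengers, random waiting-time estimates.
rng = np.random.default_rng(0)
print(redundant_assignment(rng.uniform(1.0, 10.0, size=(5, 2)), redundancy=2))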
Constructing elastic distinguishability metrics for location privacy
With the increasing popularity of hand-held devices, location-based
applications and services have access to accurate and real-time location
information, raising serious privacy concerns for their users. The recently
introduced notion of geo-indistinguishability tries to address this problem by
adapting the well-known concept of differential privacy to the area of
location-based systems. Although geo-indistinguishability presents various
appealing aspects, it has the problem of treating space in a uniform way,
imposing the addition of the same amount of noise everywhere on the map. In
this paper we propose a novel elastic distinguishability metric that warps the
geometrical distance, capturing the different degrees of density of each area.
As a consequence, the obtained mechanism adapts the level of noise while
achieving the same degree of privacy everywhere. We also show how such an
elastic metric can easily incorporate the concept of a "geographic fence" that
is commonly employed to protect the highly recurrent locations of a user, such
as his home or work. We perform an extensive evaluation of our technique by
building an elastic metric for Paris' wide metropolitan area, using semantic
information from the OpenStreetMap database. We compare the resulting mechanism
against the Planar Laplace mechanism satisfying standard
geo-indistinguishability, using two real-world datasets from the Gowalla and
Brightkite location-based social networks. The results show that the elastic
mechanism adapts well to the semantics of each area, adjusting the noise as we
move outside the city center, hence offering better overall privacy.
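For reference, the Planar Laplace mechanism used here as the baseline can be sketched as follows; the flat (x, y) coordinate frame and the scale of epsilon are simplifying assumptions, and the sampler follows the standard construction of a uniform direction plus a radius drawn via the -1 branch of the Lambert W function.

# Sketch: Planar Laplace noise for epsilon-geo-indistinguishability on the plane.
import numpy as np
from scipy.special import lambertw

def planar_laplace(loc, epsilon, rng=None):
    """Return a noisy copy of loc = (x, y) under epsilon-geo-indistinguishability."""
    rng = rng or np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)   # direction is uniform
    p = rng.uniform(0.0, 1.0)               # inverse-CDF sampling of the radius
    r = -(1.0 / epsilon) * (np.real(lambertw((p - 1.0) / np.e, k=-1)) + 1.0)
    return (loc[0] + r * np.cos(theta), loc[1] + r * np.sin(theta))

print(planar_laplace((0.0, 0.0), epsilon=0.01))  # noise radius on the order of 1/epsilon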
Optimal Geo-Indistinguishable Mechanisms for Location Privacy
We consider the geo-indistinguishability approach to location privacy, and
the trade-off with respect to utility. We show that, given a desired degree of
geo-indistinguishability, it is possible to construct a mechanism that
minimizes the service quality loss, using linear programming techniques. In
addition we show that, under certain conditions, such mechanism also provides
optimal privacy in the sense of Shokri et al. Furthermore, we propose a method
to reduce the number of constraints of the linear program from cubic to
quadratic, maintaining the privacy guarantees and without affecting
significantly the utility of the generated mechanism. This reduces considerably
the time required to solve the linear program, thus enlarging significantly the
location sets for which the optimal mechanisms can be computed. Comment: 13 pages
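A toy instance of such a linear program, assuming a tiny one-dimensional location domain, a uniform prior, and the ground distance doubling as the quality-loss function (all illustrative choices, and using the full cubic constraint set rather than the reduced quadratic one), could look like this:

# Sketch: optimal geo-indistinguishable mechanism as a linear program.
import numpy as np
from scipy.optimize import linprog

locs = np.array([0.0, 1.0, 2.0])            # toy location domain (also the output domain)
n = len(locs)
prior = np.full(n, 1.0 / n)                 # uniform prior over true locations
eps = np.log(2.0)                           # privacy parameter
d = np.abs(locs[:, None] - locs[None, :])   # ground distance, reused as quality loss

def idx(x, z):                              # flatten k[x, z] into a vector index
    return x * n + z

# Objective: prior-weighted expected quality loss.
c = np.array([prior[x] * d[x, z] for x in range(n) for z in range(n)])

# Geo-indistinguishability constraints: k[x, z] - exp(eps * d(x, x')) * k[x', z] <= 0.
A_ub, b_ub = [], []
for x in range(n):
    for xp in range(n):
        if x == xp:
            continue
        for z in range(n):
            row = np.zeros(n * n)
            row[idx(x, z)] = 1.0
            row[idx(xp, z)] = -np.exp(eps * d[x, xp])
            A_ub.append(row)
            b_ub.append(0.0)

# Each row of the mechanism is a probability distribution over outputs.
A_eq = np.zeros((n, n * n))
for x in range(n):
    A_eq[x, x * n:(x + 1) * n] = 1.0
b_eq = np.ones(n)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(res.x.reshape(n, n).round(3))         # rows: true locations, columns: reported ones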
PULP: Achieving Privacy and Utility Trade-off in User Mobility Data
Leveraging location information in location-based services improves service utility through geo-contextualization. However, it also raises privacy concerns, since new knowledge can be inferred from location records, such as a user's home and workplace or personal habits. Although Location Privacy Protection Mechanisms (LPPMs) provide a means to tackle this problem, they often require manual configuration, posing significant challenges to service providers and users, and their impact on data privacy and utility is seldom assessed. In this paper, we present PULP, a model-driven system that automatically provides user-specific privacy protection while preserving service utility, by choosing an adequate LPPM and configuring it. At the heart of PULP are nonlinear models that capture, for each individual user, the complex dependency of data privacy and utility on the LPPMs considered, i.e., Geo-Indistinguishability and Promesse. According to users' preferences on privacy and utility, PULP efficiently recommends a suitable LPPM and its configuration. We evaluate the accuracy of PULP's models and its effectiveness in achieving the privacy-utility trade-off per user, using four real-world mobility traces of 770 users in total. Our extensive experiments show that PULP preserves the utility of the location service while adhering to privacy constraints for a large percentage of users, and is orders of magnitude faster than non-model-based alternatives.
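As a rough illustration of the model-driven idea (not PULP's actual models or API; the saturating utility curve and all names below are assumptions), one could fit a nonlinear utility-versus-configuration curve from a few profiling runs and then pick the strongest protection that still meets a user's utility requirement:

# Sketch: fit a per-user utility model and invert it to recommend an LPPM configuration.
import numpy as np
from scipy.optimize import curve_fit

# Profiled (epsilon, utility) points for one user -- illustrative numbers only.
eps_samples = np.array([0.001, 0.005, 0.01, 0.05, 0.1])
utility_samples = np.array([0.15, 0.45, 0.62, 0.90, 0.96])

def utility_model(eps, a, b):
    """Assumed saturating form: utility grows with epsilon and levels off."""
    return 1.0 - np.exp(-a * eps ** b)

params, _ = curve_fit(utility_model, eps_samples, utility_samples, p0=(10.0, 0.5))

def recommend_epsilon(min_utility, grid=np.logspace(-4, 0, 400)):
    """Smallest epsilon (strongest privacy) whose predicted utility meets the target."""
    ok = grid[utility_model(grid, *params) >= min_utility]
    return float(ok[0]) if ok.size else None

print(recommend_epsilon(min_utility=0.8))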
Linear and Range Counting under Metric-based Local Differential Privacy
Local differential privacy (LDP) enables private data sharing and analytics
without the need for a trusted data collector. Error-optimal primitives (for,
e.g., estimating means and item frequencies) under LDP have been well studied.
For analytical tasks such as range queries, however, the best known error bound
is dependent on the domain size of private data, which is potentially
prohibitive. This deficiency is inherent as LDP protects the same level of
indistinguishability between any pair of private data values for each data
owner.
In this paper, we utilize an extension of ε-LDP called Metric-LDP or E-LDP,
where a metric E defines heterogeneous privacy guarantees for different pairs
of private data values and thus provides a more flexible knob than ε does to
relax LDP and tune utility-privacy trade-offs. We show
that, under such privacy relaxations, for analytical workloads such as linear
counting, multi-dimensional range counting queries, and quantile queries, we
can achieve significant gains in utility. In particular, for range queries
under E-LDP, where the metric E is the L1-distance function scaled by ε, we
design mechanisms whose errors are independent of the domain sizes; instead,
their errors depend on the metric E, which specifies at what granularity the
private data is protected. We believe that the primitives we design for E-LDP
will be useful in developing mechanisms for other analytical tasks, and we
encourage the adoption of LDP in practice.
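As a hedged sketch of what an E-LDP primitive can look like (a generic exponential-mechanism-style randomizer, not the specific range-counting constructions of this paper), the snippet below perturbs a value so that the output probability decays with E(x, z) = epsilon * |x - z|, making nearby values harder to distinguish than distant ones; the domain, the function name, and the omitted debiasing step are assumptions for illustration.

# Sketch: a generic metric-LDP (E-LDP) randomizer over a numeric domain.
import numpy as np

def metric_ldp_randomizer(x, domain, epsilon, rng=None):
    """Report one value from domain with Pr[z | x] proportional to
    exp(-epsilon * |x - z| / 2); this satisfies E-LDP for E(x, x') = epsilon * |x - x'|."""
    rng = rng or np.random.default_rng()
    domain = np.asarray(domain, dtype=float)
    weights = np.exp(-epsilon * np.abs(domain - x) / 2.0)
    return float(rng.choice(domain, p=weights / weights.sum()))

# Example: each user reports a perturbed age; an aggregator would then debias
# per-range counts from the known perturbation probabilities (omitted here).
ages = np.arange(101)
print([metric_ldp_randomizer(a, ages, epsilon=0.5) for a in (23, 24, 80)])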