Time Distortion Anonymization for the Publication of Mobility Data with High Utility
An increasing amount of mobility data is being collected every day by
different means, such as mobile applications or crowd-sensing campaigns. This
data is sometimes published after applying only simple anonymization
techniques (e.g., replacing users' names with identifiers), which can still
expose the participating users to severe privacy threats. The literature
contains more sophisticated anonymization techniques, often based on adding
noise to the spatial data. However, these techniques compromise either
privacy, if the added noise is too weak, or the utility of the data, if it is
too strong. In this paper, we investigate an alternative solution that builds
on time distortion instead of spatial distortion.
Specifically, our contribution lies in (1) the introduction of the concept of
time distortion to anonymize mobility datasets; (2) Promesse, a protection
mechanism implementing this concept; and (3) a practical study of Promesse
compared to two representative spatial distortion mechanisms, namely Wait For
Me, which enforces k-anonymity, and Geo-Indistinguishability, which enforces
differential privacy. We evaluate our mechanism on three real-life datasets.
Our results show that time distortion reduces the number of points of interest
an adversary can retrieve to under 3%, while the introduced spatial error is
almost null and the distortion introduced on the results of range queries is
kept under 13% on average.
Comment: in 14th IEEE International Conference on Trust, Security and Privacy in Computing and Communications, Aug 2015, Helsinki, Finland
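To make the idea concrete, here is a minimal sketch of a time-distortion transformation in the spirit of this approach (it is not the Promesse algorithm itself; the spacing parameter and the metric coordinate convention are illustrative assumptions). The trace is resampled to points evenly spaced along its path, and the original time span is then spread uniformly over those points, so dwell times at points of interest are erased while the path itself is preserved:

    import math

    def time_distort(trace, spacing=100.0):
        # trace: list of (x, y, t) with coordinates in metres, t in seconds.
        # Step 1: walk the polyline, emitting a point every `spacing` metres.
        points = [(trace[0][0], trace[0][1])]
        dist_to_next = spacing
        for (x0, y0, _), (x1, y1, _) in zip(trace, trace[1:]):
            seg = math.hypot(x1 - x0, y1 - y0)
            pos = 0.0
            while seg - pos >= dist_to_next:
                pos += dist_to_next
                f = pos / seg
                points.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
                dist_to_next = spacing
            dist_to_next -= seg - pos
        # Step 2: spread the total duration uniformly over the new points,
        # hiding how long the user actually dwelt at any location.
        t0, t1 = trace[0][2], trace[-1][2]
        n = max(len(points) - 1, 1)
        return [(x, y, t0 + i * (t1 - t0) / n)
                for i, (x, y) in enumerate(points)]

Because every output point lies on the original path, the spatial error of such a transformation is essentially null; only the timestamps are distorted, which is consistent with the trade-off reported in the evaluation above.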
Location Privacy in Spatial Crowdsourcing
Spatial crowdsourcing (SC) is a new platform that engages individuals in
collecting and analyzing environmental, social and other spatiotemporal
information. With SC, requesters outsource their spatiotemporal tasks to a set
of workers, who will perform the tasks by physically traveling to the tasks'
locations. This chapter identifies privacy threats toward both workers and
requesters during the two main phases of spatial crowdsourcing, tasking and
reporting. Tasking is the process of identifying which tasks should be assigned
to which workers. This process is handled by a spatial crowdsourcing server
(SC-server). The latter phase is reporting, in which workers travel to the
tasks' locations, complete the tasks and upload their reports to the SC-server.
The challenge is to enable effective and efficient tasking as well as reporting
in SC without disclosing the actual locations of workers (at least until they
agree to perform a task) and the tasks themselves (at least to workers who are
not assigned to those tasks). This chapter aims to provide an overview of the
state-of-the-art in protecting users' location privacy in spatial
crowdsourcing. We provide a comparative study of a diverse set of solutions in
terms of task publishing modes (push vs. pull), problem focuses (tasking and
reporting), threats (server, requester and worker), and underlying technical
approaches (from pseudonymity, cloaking, and perturbation to exchange-based and
encryption-based techniques). The strengths and drawbacks of the techniques are
highlighted, leading to a discussion of open problems and future work.
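Among the technical approaches surveyed above, cloaking is the easiest to illustrate: before tasking, a worker coarsens their location to a grid cell, so the SC-server can shortlist nearby tasks without learning exact coordinates. A minimal sketch (the cell size and the corner-snapping convention are illustrative assumptions, not taken from any specific system):

    import math

    def cloak(lat, lon, cell_deg=0.01):
        # Snap an exact location to the lower-left corner of its grid cell
        # (roughly 1 km on a side near the equator for cell_deg = 0.01).
        return (math.floor(lat / cell_deg) * cell_deg,
                math.floor(lon / cell_deg) * cell_deg)

    # A worker at (34.0224, -118.2851) reports only the cell corner,
    # approximately (34.02, -118.29), for task matching at cell granularity.
    print(cloak(34.0224, -118.2851))

The coarser the cell, the stronger the privacy but the less precise the task assignment, which is exactly the privacy/utility tension the chapter examines.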
Privacy-Preserving Trajectory Data Publishing via Differential Privacy
Over the past decade, the collection of data by individuals, businesses and government agencies has increased tremendously. Due to the widespread adoption of mobile computing and the advances in location-acquisition techniques, an immense amount of data concerning the mobility of moving objects has been generated. The movement data of an object (e.g., an individual) might include specific information about the locations it visited, the times those locations were visited, or both. While it is beneficial to share data for the purpose of mining and analysis, data sharing might risk the privacy of the individuals involved in the data. Privacy-Preserving Data Publishing (PPDP) provides techniques that utilize several privacy models for the purpose of publishing useful information while preserving data privacy.
The objective of this thesis is to answer the following question: How can a data owner publish trajectory data while simultaneously safeguarding the privacy of the data and maintaining its usefulness? We propose an algorithm for anonymizing and publishing trajectory data that ensures the output is differentially private while maintaining high utility and scalability. Our solution is twofold. First, we generalize trajectories: at each location, the timestamps are generalized and then partitioned in a differentially private manner. Next, we add noise to the true count of each generalized trajectory, calibrated to the given privacy budget, to enforce differential privacy. As a result, our approach achieves overall epsilon-differential privacy on the output trajectory data. We perform an experimental evaluation on real-life data and demonstrate that our proposed approach can effectively answer count and range queries, as well as support the mining of frequent sequential patterns. We also show that our algorithm is efficient w.r.t. the privacy budget and the number of partitions, and scalable with increasing data size.
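The noise-addition step is an instance of the standard Laplace mechanism. A minimal sketch, assuming each individual contributes to at most one generalized trajectory so that the count vector has sensitivity 1 (function and parameter names are illustrative):

    import numpy as np

    def noisy_counts(counts, epsilon, seed=None):
        # Laplace mechanism: with sensitivity 1, adding Laplace(1/epsilon)
        # noise to each count satisfies epsilon-differential privacy.
        rng = np.random.default_rng(seed)
        noise = rng.laplace(loc=0.0, scale=1.0 / epsilon, size=len(counts))
        # Rounding and clamping at zero are post-processing steps, so they
        # keep the counts plausible without weakening the guarantee.
        return [max(0, int(round(c + z))) for c, z in zip(counts, noise)]

    print(noisy_counts([120, 45, 7], epsilon=0.5, seed=42))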
Quantifying Differential Privacy in Continuous Data Release under Temporal Correlations
Differential Privacy (DP) has received increasing attention as a rigorous
privacy framework. Many existing studies employ traditional DP mechanisms
(e.g., the Laplace mechanism) as primitives to continuously release private
data while protecting privacy at each time point (i.e., event-level privacy).
These studies assume that the data at different time points are independent,
or that adversaries have no knowledge of the correlations between data. However,
continuously generated data tend to be temporally correlated, and such
correlations can be acquired by adversaries. In this paper, we investigate the
potential privacy loss of a traditional DP mechanism under temporal
correlations. First, we analyze the privacy leakage of a DP mechanism under
temporal correlations that can be modeled by a Markov chain. Our analysis
reveals that the event-level privacy loss of a DP mechanism may increase over
time. We call this unexpected privacy loss temporal privacy leakage (TPL).
Although TPL may increase over time, we find that in some cases its supremum
exists, i.e., the leakage converges to a finite bound. Second, we design efficient
algorithms for calculating TPL. Third, we propose data releasing mechanisms
that convert any existing DP mechanism into one against TPL. Experiments
confirm that our approach is efficient and effective.
Comment: accepted in TKDE special issue "Best of ICDE 2017". arXiv admin note: substantial text overlap with arXiv:1610.0754
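The effect is easy to reproduce numerically. The toy computation below illustrates the setting (it is not the paper's TPL algorithm): two consecutive binary values of a Markov chain are released through a Laplace mechanism calibrated to a per-release budget epsilon, and we measure the worst-case log-likelihood ratio about the first value available to an adversary who knows the transition matrix. With strong correlation the leakage approaches 2*epsilon rather than epsilon:

    import numpy as np

    eps = 1.0                    # nominal event-level budget per release
    scale = 1.0 / eps            # Laplace scale for sensitivity-1 values
    P = np.array([[0.95, 0.05],  # P[i, j] = Pr(x_{t+1} = j | x_t = i):
                  [0.05, 0.95]]) # a strongly self-correlated binary chain

    def lap(y, mu):
        # Density of Laplace(mu, scale) at y.
        return np.exp(-np.abs(y - mu) / scale) / (2 * scale)

    # The adversary observes noisy releases y1 of x1 and y2 of x2, knows P,
    # and gains log Pr(y1, y2 | x1=0) / Pr(y1, y2 | x1=1) about x1.
    ys = np.linspace(-8.0, 9.0, 801)
    y1, y2 = np.meshgrid(ys, ys)
    num = lap(y1, 0) * (P[0, 0] * lap(y2, 0) + P[0, 1] * lap(y2, 1))
    den = lap(y1, 1) * (P[1, 0] * lap(y2, 0) + P[1, 1] * lap(y2, 1))
    print(f"per-release budget: {eps:.2f}")
    print(f"worst-case leakage about x1: {np.log(num / den).max():.2f}")

With P close to the identity, the second release is almost a second independent observation of the same secret, so the leakage nearly doubles; with an uncorrelated chain it stays at epsilon, matching the intuition above.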