
    Analysing Fairness of Privacy-Utility Mobility Models

    Preserving individuals' privacy when sharing spatial-temporal datasets is critical to preventing re-identification attacks based on unique trajectories. Existing privacy techniques tend to propose ideal privacy-utility tradeoffs; however, they largely ignore the fairness implications of mobility models and whether such techniques perform equally well for different groups of users. The relationship between fairness and privacy-aware models has not yet been quantified, and there are barely any established metrics for measuring fairness in the spatial-temporal context. In this work, we define a set of fairness metrics designed explicitly for human mobility, based on the structural similarity and entropy of the trajectories. Under these definitions, we examine the fairness of two state-of-the-art privacy-preserving models that rely on GANs and representation learning to reduce the re-identification rate of users for data sharing. Our results show that while both models guarantee group fairness in terms of demographic parity, they violate individual fairness criteria: users with highly similar trajectories receive disparate privacy gains. We conclude that the tension between the re-identification task and individual fairness needs to be considered in future spatial-temporal data analysis and modelling to achieve a privacy-preserving, fairness-aware setting.
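    The abstract names the ingredients (entropy of trajectories, structural similarity, demographic parity, per-user privacy gain) but gives no formulas, so the sketch below is only one plausible Python reading of them; every function name and toy value is a hypothetical illustration, not the paper's definition.

    # Hypothetical sketch: entropy as a trajectory descriptor, demographic
    # parity as the group fairness criterion, and a pairwise check for
    # individual fairness over similar trajectories.
    from collections import Counter
    from math import log2

    def trajectory_entropy(trace):
        # Shannon entropy (bits) of a user's location-visit distribution.
        n = len(trace)
        return -sum((c / n) * log2(c / n) for c in Counter(trace).values())

    def demographic_parity_gap(gains, groups):
        # Group fairness: largest difference in mean privacy gain across groups.
        by_group = {}
        for g, gain in zip(groups, gains):
            by_group.setdefault(g, []).append(gain)
        means = [sum(v) / len(v) for v in by_group.values()]
        return max(means) - min(means)

    def individual_fairness_gap(gains, traces, similarity, threshold=0.5):
        # Individual fairness: largest privacy-gain disparity among pairs of
        # users whose trajectories are at least `threshold`-similar.
        gaps = [abs(gains[i] - gains[j])
                for i in range(len(traces)) for j in range(i + 1, len(traces))
                if similarity(traces[i], traces[j]) >= threshold]
        return max(gaps, default=0.0)

    def jaccard(a, b):
        # Stand-in for the paper's structural similarity: visited-location overlap.
        sa, sb = set(a), set(b)
        return len(sa & sb) / len(sa | sb)

    # Toy data: users 0/1 and 2/3 have near-identical traces but unequal gains,
    # while the two demographic groups end up with equal mean gain.
    traces = [["home", "work", "home", "gym"], ["home", "work", "home", "cafe"],
              ["park", "mall", "park"], ["park", "mall", "school"]]
    gains = [0.9, 0.2, 0.2, 0.9]      # illustrative per-user privacy gain
    groups = ["A", "A", "B", "B"]

    print([round(trajectory_entropy(t), 2) for t in traces])  # per-user entropy (bits)
    print(demographic_parity_gap(gains, groups))              # 0.0 -> group-fair
    print(individual_fairness_gap(gains, traces, jaccard))    # 0.7 -> individually unfair

    On this toy data the demographic-parity gap is zero while two near-identical trajectories differ in privacy gain by 0.7, mirroring the group-fair-but-individually-unfair outcome the abstract reports.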

    Privacy-Aware Location Sharing with Deep Reinforcement Learning

    Location-based services (LBSs) have become widely popular. Despite their utility, these services raise privacy concerns because they require sharing location information with untrusted third parties. In this work, we study the privacy-utility trade-off in location sharing mechanisms. Existing approaches mainly focus on the privacy of sharing a single location or on myopic location-trace privacy; neither takes into account the temporal correlations between past and current locations. Although these methods preserve privacy at the current time, they may leak a significant amount of information at the trace level, as the adversary can exploit temporal correlations within a trace. We propose an information-theoretically optimal privacy-preserving location release mechanism that takes temporal correlations into account. We measure the privacy leakage by the mutual information between the user's true and released location traces. To tackle the history-dependent mutual information minimization, we reformulate the problem as a Markov decision process (MDP) and solve it using asynchronous actor-critic deep reinforcement learning (RL).
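    As an illustrative assumption (not the paper's exact mechanism), the sketch below uses a per-step pointwise-mutual-information reward, one plausible decomposition of the trace-level mutual information objective, plus a weighted distortion penalty for the utility side of the trade-off; an asynchronous actor-critic agent would then maximize the discounted sum of these rewards over an MDP whose state summarizes the past true and released locations. The function name, the distortion term, and the lam weight are hypothetical.

    from math import log

    def release_reward(p_cond, p_marg, distortion, lam=1.0):
        # Pointwise mutual information between the true and released location,
        # conditioned on the history carried in the MDP state; its expectation
        # over a trace relates to the mutual-information leakage being minimized.
        leakage = log(p_cond / p_marg)
        # Negative reward = leakage plus lam-weighted utility loss (distortion).
        return -(leakage + lam * distortion)

    # Toy steps: releasing the true cell with high conditional probability
    # leaks information even at zero distortion ...
    print(release_reward(p_cond=0.8, p_marg=0.2, distortion=0.0))   # ~ -1.39
    # ... while a perturbed release leaks less but pays a utility cost.
    print(release_reward(p_cond=0.3, p_marg=0.25, distortion=0.4))  # ~ -0.58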