3 research outputs found

    A uniformity-based approach to location privacy

    As location-based services emerge, many people feel exposed to high privacy threats. Privacy protection is a major challenge for such services and related applications. A simple approach is perturbation, which adds artificial noise to positions and returns an obfuscated measurement to the requester. Our main finding is that, unless the noise is chosen properly, these methods do not withstand attacks based on statistical analysis. In this paper, we propose UniLO, an obfuscation operator that offers strong guarantees of obfuscation uniformity, even in the case of imprecise location measurements. We also address service differentiation by proposing three UniLO-based obfuscation algorithms that offer multiple contemporaneous levels of privacy. Finally, we experimentally demonstrate the superiority of the proposed algorithms over state-of-the-art solutions, both in terms of utility and of resistance against inference attacks.
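To illustrate the contrast the abstract draws, here is a minimal Python sketch (not the actual UniLO operator; the function names and parameters are illustrative assumptions) of naive Gaussian perturbation versus a uniform-in-disk obfuscator, where every point within the obfuscation radius is equally likely a priori:

```python
import math
import random

def gaussian_perturb(lat, lon, sigma):
    # Naive perturbation: add zero-mean Gaussian noise to each coordinate.
    # Reported points concentrate around the true location, which
    # statistical analysis of repeated reports can exploit.
    return lat + random.gauss(0, sigma), lon + random.gauss(0, sigma)

def uniform_disk_obfuscate(lat, lon, radius):
    # Uniform obfuscation: draw a point uniformly from a disk of the given
    # radius centred on the true location.  The sqrt makes the radial
    # density uniform over the disk's area rather than over the radius.
    r = radius * math.sqrt(random.random())
    theta = random.uniform(0, 2 * math.pi)
    return lat + r * math.cos(theta), lon + r * math.sin(theta)
```

With the uniform operator, no point inside the disk is more probable than any other, so repeated observations reveal only the disk itself, not a concentration peak.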

    Location obfuscation and distance-based attacks on private trajectories: an experimental evaluation on real trajectory data sets

    With the wide availability of GPS-enabled mobile devices, spatio-temporal data is being collected and stored for providing location-based services or for data analytics. Location-based advertisement, market research, and data mining are just some of the motivations for collecting spatio-temporal data. However, location data is also very sensitive, since it may reveal information about the data subject such as his/her political views, religion, state of health, and various personal preferences, which is considered private information. One approach to protecting sensitive location data is obfuscation. In this thesis, we implemented two location obfuscation techniques and performed an analytical and experimental study to investigate how effective they are against a state-of-the-art attack algorithm designed for spatio-temporal data. In the attack scenario, given a set of known trajectories and a distance matrix composed of known pairwise distances between trajectories, the adversary tries to approximate the target trajectory and then extract information about the absence or presence of the trajectory in a given area. We used obfuscation techniques to hide information around predefined sensitive places such as hospitals and medical centers, and then ran the attack on the obfuscated data. Experimental results show that the applied obfuscation methods do not help protect the privacy of users in sensitive areas when spatio-temporal trajectories follow a regular pattern. We observed that the attack succeeds because the obfuscation techniques do not scatter the sensitive points far enough from the sensitive places and the linearity of the trajectory is preserved.
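The attack setting described above can be sketched roughly as follows. This is a hypothetical, simplified stand-in for the thesis's actual algorithm: it approximates each point of the target trajectory as a distance-weighted average of the corresponding points of the known trajectories, using the known target-to-trajectory distances as weights.

```python
def approximate_target(known, dists):
    # known: list of trajectories, each a list of (x, y) points of equal length
    # dists: distance from the (unknown) target to each known trajectory
    # Weight each known trajectory inversely by its distance to the target
    # and average point-by-point to estimate the target trajectory.
    eps = 1e-9  # avoid division by zero for identical trajectories
    weights = [1.0 / (d + eps) for d in dists]
    total = sum(weights)
    n = len(known[0])
    approx = []
    for i in range(n):
        x = sum(w * t[i][0] for w, t in zip(weights, known)) / total
        y = sum(w * t[i][1] for w, t in zip(weights, known)) / total
        approx.append((x, y))
    return approx
```

Once an approximation is recovered, the adversary can test whether it passes through a given sensitive area, which is exactly the presence/absence inference the obfuscation was meant to prevent.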

    Uniform Obfuscation for Location Privacy

    No full text
    As location-based services emerge, many people feel exposed to high privacy threats. Privacy protection is a major challenge for such applications. A broadly used approach is perturbation, which adds artificial noise to positions and returns an obfuscated measurement to the requester. Our main finding is that, unless the noise is chosen properly, these methods do not withstand attacks based on probabilistic analysis. In this paper, we define a strong adversary model that uses probability calculus to de-obfuscate the location measurements. Such a model has general applicability and can evaluate the resistance of a generic location-obfuscation technique. We then propose UniLO, an obfuscation operator that resists such an adversary. We prove the resistance through formal analysis. We finally compare the resistance of UniLO with that of other noise-based obfuscation operators.
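A minimal example of the kind of probabilistic analysis such an adversary could apply, assuming many obfuscated reports of the same position under zero-mean noise (the averaging attack here is an illustrative assumption, not the paper's adversary model):

```python
import random

def average_attack(reports):
    # Given repeated obfuscated reports of one location under zero-mean
    # noise, the sample mean converges to the true position by the law
    # of large numbers, defeating naive perturbation.
    n = len(reports)
    return (sum(p[0] for p in reports) / n,
            sum(p[1] for p in reports) / n)
```

This is why the choice of noise matters: with independent Gaussian perturbation the estimate tightens as more reports arrive, whereas a properly uniform operator leaves the adversary with the whole obfuscation region.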