
    Location Privacy in Spatial Crowdsourcing

    Spatial crowdsourcing (SC) is a new platform that engages individuals in collecting and analyzing environmental, social, and other spatiotemporal information. With SC, requesters outsource their spatiotemporal tasks to a set of workers, who perform the tasks by physically traveling to the tasks' locations. This chapter identifies privacy threats toward both workers and requesters during the two main phases of spatial crowdsourcing: tasking and reporting. Tasking is the process of identifying which tasks should be assigned to which workers; this process is handled by a spatial crowdsourcing server (SC-server). In the reporting phase, workers travel to the tasks' locations, complete the tasks, and upload their reports to the SC-server. The challenge is to enable effective and efficient tasking and reporting in SC without disclosing the actual locations of workers (at least until they agree to perform a task) or of the tasks themselves (at least to workers who are not assigned to those tasks). This chapter provides an overview of the state of the art in protecting users' location privacy in spatial crowdsourcing. We provide a comparative study of a diverse set of solutions in terms of task publishing modes (push vs. pull), problem focus (tasking vs. reporting), threats (server, requester, and worker), and underlying technical approaches (from pseudonymity, cloaking, and perturbation to exchange-based and encryption-based techniques). The strengths and drawbacks of the techniques are highlighted, leading to a discussion of open problems and future work.
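
    Of the technical approaches surveyed above, perturbation is the easiest to make concrete. The sketch below shows a planar-Laplace perturbation in the style of geo-indistinguishability, which a worker could apply before reporting a location to the SC-server. It illustrates the general family only, not any specific scheme from the chapter, and the epsilon value in the usage line is a made-up example.

```python
import math
import random

def perturb_location(lat, lon, epsilon, meters_per_degree=111_320):
    """Planar-Laplace location perturbation (geo-indistinguishability style).

    Illustrative sketch of the 'perturbation' family of techniques only.
    """
    # The radial distance of a planar Laplace sample follows Gamma(shape=2, scale=1/epsilon).
    r = random.gammavariate(2, 1 / epsilon)        # noise distance in meters
    theta = random.uniform(0, 2 * math.pi)         # uniformly random direction
    dlat = (r * math.sin(theta)) / meters_per_degree
    dlon = (r * math.cos(theta)) / (meters_per_degree * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# A worker reports the noisy location instead of the true one.
noisy = perturb_location(34.0522, -118.2437, epsilon=0.01)   # epsilon per meter (example)
```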

    ABAKA: a novel attribute-based k-anonymous collaborative solution for LBSs

    The increasing use of mobile devices, along with advances in telecommunication systems, has increased the popularity of Location-Based Services (LBSs). In LBSs, users share their exact location with a potentially untrusted Location-Based Service Provider (LBSP). In such a scenario, user privacy becomes a major concern: knowledge of a user's location may lead to her identification as well as continuous tracing of her position. Researchers have proposed several approaches to preserve users' location privacy. They have also shown that hiding the location of an LBS user is not enough to guarantee her privacy, since the user's profile attributes or an attacker's background knowledge may reveal her identity. In this paper we propose ABAKA, a novel collaborative approach that provides identity privacy for LBS users while taking users' profile attributes into account. In particular, our solution guarantees p-sensitive k-anonymity for the user that sends an LBS request to the LBSP. ABAKA computes a cloaked area through collaborative multi-hop forwarding of the LBS query, using Ciphertext-Policy Attribute-Based Encryption (CP-ABE). We ran a thorough set of experiments to evaluate our solution: the results confirm the feasibility and efficiency of our proposal.
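
    The cloaking step at the heart of such collaborative schemes reduces to covering the querying user with at least k peers. The sketch below shows only that generic geometric step, with plain coordinate tuples standing in for positions that ABAKA actually gathers through encrypted multi-hop forwarding; the function and its inputs are illustrative assumptions.

```python
def cloaked_area(positions, k):
    """Minimum bounding box over at least k collaborating users' positions.

    Generic k-anonymous cloaking sketch; ABAKA's real protocol collects
    these positions via CP-ABE-protected multi-hop forwarding.
    """
    if len(positions) < k:
        raise ValueError("need at least k positions for k-anonymity")
    lats = [p[0] for p in positions]
    lons = [p[1] for p in positions]
    # Any of the k users inside the box could have issued the query.
    return (min(lats), min(lons)), (max(lats), max(lons))

box = cloaked_area([(34.05, -118.24), (34.06, -118.25), (34.04, -118.23)], k=3)
```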

    An Approach for Ensuring Robust Support for Location Privacy and Identity Inference Protection

    The challenge of preserving a user's location privacy is more important now than ever before, given the proliferation of handheld devices and the pervasive use of location-based services. To protect location privacy, we must ensure k-anonymity, so that the user remains indistinguishable among k-1 other users. The most practical way to achieve k-anonymity is to use a location anonymizer (LA); however, its knowledge of each user's current location makes it a single point of failure. In this thesis, we propose a formal location privacy framework, termed SafeGrid, that can work with or without an LA. In SafeGrid, the LA is designed in such a way that it is no longer a single point of failure. In addition, it is resistant to known attacks, and, most significantly, the cloaking algorithm it employs satisfies the reciprocity condition. Simulation results show better performance in query processing and cloaking-region calculation than existing solutions. In this thesis, we also show that satisfying k-anonymity is not enough to preserve privacy. Especially in an environment where a group of colluding service providers collaborate with each other, a user's privacy can be compromised through identity inference attacks. We present a detailed analysis of such attacks on privacy and propose a novel and powerful privacy definition called s-proximity. In addition to building a formal definition for s-proximity, we show that it is practical and can be incorporated efficiently into existing systems to make them secure.
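
    The reciprocity condition mentioned above requires that every user in a cloaking region's anonymity set would be answered with that same region, which defeats inversion attacks on the cloaking algorithm. The sketch below shows the classic bucketing way to obtain this property, with a plain coordinate sort standing in for the space-filling-curve ordering real systems use; SafeGrid's own algorithm differs in its details, so treat this as a generic illustration.

```python
def reciprocal_cloak(users, uid, k):
    """Bucket-based cloaking that satisfies the reciprocity condition:
    every user in the same bucket (anonymity set) receives the same region.

    `users` maps user id -> (x, y). Generic sketch, not SafeGrid's algorithm.
    """
    order = sorted(users, key=lambda u: users[u])     # 1-D ordering of user ids
    n = len(order)
    if n < k:
        raise ValueError("need at least k users for k-anonymity")
    b = min(order.index(uid) // k, n // k - 1)        # last bucket absorbs any remainder
    bucket = order[b * k:] if b == n // k - 1 else order[b * k:(b + 1) * k]
    xs = [users[u][0] for u in bucket]
    ys = [users[u][1] for u in bucket]
    return (min(xs), min(ys)), (max(xs), max(ys))     # identical for all bucket members
```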

    Effective mix-zone anonymization techniques for mobile travelers

    Mix-zones are recognized as an alternative and complementary approach to spatial-cloaking-based location privacy protection. Unlike spatial cloaking techniques, which perturb the location resolution through location k-anonymization, mix-zones break the continuity of location exposure by ensuring that users' movements cannot be traced while they are inside a mix-zone. In this paper we provide an overview of known attacks that make mix-zones on road networks vulnerable and discuss a set of countermeasures to make road network mix-zones attack-resilient. Concretely, we categorize the vulnerabilities of road network mix-zones into two classes: one due to road network characteristics and user mobility, and the other due to the temporal, spatial, and semantic correlations of location queries. We propose efficient road network mix-zone construction techniques that are resilient to attacks based on road network characteristics. Furthermore, we enhance the road network mix-zone framework with the concept of delay-tolerant mix-zones, which introduce a combination of spatial and temporal shifts in the location exposure of users to achieve higher anonymity. We study the factors that impact the effectiveness of each of these attacks and evaluate the efficiency of the countermeasures through extensive experiments on traces produced by GTMobiSim at different scales of geographic maps.
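
    The base mechanism the paper hardens is simple: inside a mix-zone, users go silent and re-emerge under fresh pseudonyms, so an observer can only link an exit to an entry up to the size of the anonymity set. The toy sketch below models just that mechanism; the paper's contribution lies in placing zones on road networks so that timing and road constraints do not shrink the anonymity set, which this sketch deliberately does not model.

```python
import secrets

class MixZone:
    """Toy mix-zone: a pseudonym is retired on entry and replaced on exit.

    Base mechanism only; road-network geometry, timing attacks, and
    query correlations (the paper's focus) are not modeled here.
    """
    def __init__(self):
        self.inside = set()

    def enter(self, pseudonym):
        self.inside.add(pseudonym)           # old pseudonym is last observed here

    def exit(self, pseudonym):
        anonymity_set = len(self.inside)     # with 1 user inside, linking is trivial
        self.inside.remove(pseudonym)
        return secrets.token_hex(8), anonymity_set   # fresh, unlinkable pseudonym

zone = MixZone()
zone.enter("car-17"); zone.enter("car-42")
new_pseudonym, k = zone.exit("car-17")       # an observer can only guess among k users
```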

    On the privacy risks of machine learning models

    Machine learning (ML) has made huge progress in the last decade and has been applied to a wide range of critical applications. However, with the increasing adoption of machine learning models, privacy risks have become more significant than ever. These risks fall into two categories, depending on the role played by the ML models: one in which the models themselves are vulnerable to leaking sensitive information, and another in which the models are abused to violate privacy. In this dissertation, we investigate the privacy risks of machine learning models from these two perspectives, i.e., the vulnerability of ML models and the abuse of ML models. To study the vulnerability of ML models to privacy risks, we conduct two studies on one of the most severe privacy attacks against ML models, namely the membership inference attack (MIA). First, we explore membership leakage in label-only exposure of ML models. We present the first label-only membership inference attack and reveal that membership leakage is more severe than previously shown. Second, we perform the first privacy analysis of multi-exit networks through the lens of membership leakage. We leverage existing attack methodologies to quantify the vulnerability of multi-exit networks to membership inference attacks and propose a hybrid attack that exploits exit information to improve attack performance. From the perspective of abusing ML models to violate privacy, we focus on deepfake face manipulation, which can create visual misinformation. We propose the first defense system against GAN-based face manipulation; it works by jeopardizing GAN inversion, an essential step for subsequent face manipulation. All findings contribute to the community's insight into the privacy risks of machine learning models, and we call on the community to pursue in-depth investigations of privacy risks, like ours, against rapidly evolving machine learning techniques.
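
    The label-only setting above rules out the confidence scores that classic membership inference attacks threshold on, so the attacker must fall back on signals recoverable from hard labels alone. The sketch below illustrates one such signal: how much input noise a prediction survives, as a rough proxy for distance to the decision boundary, on the intuition that training members are classified more robustly. `predict_label`, `sigmas`, and `threshold` are placeholder assumptions, and this is a generic illustration rather than the dissertation's exact attack.

```python
import numpy as np

def label_only_mia(predict_label, x, y, sigmas, threshold):
    """Guess membership of sample (x, y) using hard labels only.

    Proxy signal: the smallest Gaussian noise level at which the model's
    predicted label flips away from y. Larger = more robust = more likely
    a training member. All parameters are illustrative assumptions.
    """
    rng = np.random.default_rng(0)
    for sigma in sorted(sigmas):                      # increasing noise levels
        noisy = x + rng.normal(0.0, sigma, size=(32,) + x.shape)
        flips = np.mean([predict_label(n) != y for n in noisy])
        if flips > 0.5:                               # label no longer survives this noise
            return sigma >= threshold                 # robust enough => call it a member
    return True                                       # never flipped: maximally robust
```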

    Location privacy policy management system

    Advances in wireless communication and positioning systems have permitted the development of a large variety of location-based services that, for example, can help people easily locate family members or find the nearest gas station or restaurant. As location-based services become more and more popular, concerns are growing about the misuse of location information by malicious parties. To preserve location privacy, many efforts have been devoted to preventing service providers from determining users' exact locations, but few works have sought to help users manage their privacy preferences, even though privacy management is an important issue in real applications. This work developed an easy-to-use location privacy management system. Specifically, it defines succinct yet expressive location privacy policy constructs that can be easily understood by ordinary users. The system provides various policy management functions, including policy composition, policy conflict detection, and policy recommendation. Policy composition allows users to insert and delete policies. Policy conflict detection automatically checks for conflicts among policies whenever there is any change. The policy recommendation system generates recommended policies based on users' basic requirements in order to reduce users' burden. A system prototype has been implemented and evaluated in terms of both efficiency and effectiveness.
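
    To make the conflict-detection function concrete, the sketch below checks one plausible conflict pattern: two rules that target the same viewer, overlap in time, and reach opposite decisions. The `Policy` fields are hypothetical stand-ins, not the thesis's actual policy constructs.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    # Hypothetical constructs for illustration: who may see the user's
    # location and during which hours of the day.
    viewer: str      # e.g. "family", "coworkers"
    hours: range     # hours of day the rule covers
    allow: bool      # grant or deny location access

def conflicts(p: Policy, q: Policy) -> bool:
    """Same viewer, overlapping hours, opposite decisions => conflict.

    The kind of check a composition step would run on every policy insert.
    """
    return bool(p.viewer == q.viewer
                and set(p.hours) & set(q.hours)
                and p.allow != q.allow)

print(conflicts(Policy("family", range(9, 17), True),
                Policy("family", range(12, 22), False)))   # True: conflicting rules
```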