
    Context and Semantic Aware Location Privacy

    With ever-increasing computational power, and improved sensing and communication capabilities, smart devices have altered and enhanced the way we process, perceive and interact with information. Personal and contextual data are tracked and stored extensively on these devices and oftentimes sent ubiquitously to online service providers. This routine is proving to be quite privacy-invasive, since these service providers mine the data they collect in order to infer more and more personal information about users. Protecting privacy amid the rise of mobile applications is a critical challenge. The continuous tracking of users with location- and time-stamps exposes their private lives at an alarming level. Location traces can be used to infer intimate aspects of users' lives, such as interests, political orientation, religious beliefs, and more. Traditional approaches to protecting privacy fail to meet users' expectations due to simplistic adversary models and a lack of multi-dimensional awareness. In this thesis, the development of privacy-protection approaches is pushed further by (i) adapting to concrete adversary capabilities and (ii) investigating the threat of strong adversaries that exploit location semantics. We first study user mobility and spatio-temporal correlations in continuous disclosure scenarios (e.g., sensing applications), where the more frequently a user discloses her location, the more difficult it becomes to protect her privacy. To counter this threat, we develop adversary- and mobility-aware privacy protection mechanisms that aim to minimize an adversary's exploitation of user mobility. We demonstrate that a privacy protection mechanism must actively evaluate privacy risks in order to adapt its protection parameters. We further develop an Android library that provides on-device location privacy evaluation and enables any location-based application to support privacy-preserving services. We also implement an adversary-aware protection mechanism in this library with semantic-based privacy settings. Furthermore, we study the effects of an adversary that exploits location semantics in order to strengthen his attacks on user traces. Such extensive information is available to an adversary via maps of points of interest, but also from users themselves. Typically, users of online social networks want to announce their whereabouts to their circles. They do so mostly, if not always, by sharing the type of their location along with its geographical coordinates. We formalize this setting and, using Bayesian inference, show that if the location semantics of traces are disclosed, users' privacy levels drop considerably. Moreover, we study time-of-day information and its relation to location semantics. We reveal that an adversary can breach privacy further by exploiting the time-dependency of semantics. We implement and evaluate a sensitivity-aware protection mechanism in this setting as well. The battle for privacy requires social awareness and the will to win. However, slow progress on the front of law and regulation pushes the need for technological solutions. This thesis concludes that we have a long way to go in order to establish privacy-enhancing technologies in our age of information. Our findings open up new avenues for a more expeditious understanding of privacy risks and thus their prevention.
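    The Bayesian attack on disclosed location semantics can be illustrated with a toy posterior update; this is only a sketch, not the thesis implementation, and the candidate locations, prior, and likelihoods below are invented for illustration.

```python
# Toy Bayesian update over candidate locations once the semantic tag of a
# check-in (here, "cafe") is disclosed. All numbers are illustrative.

# Prior over candidate locations (e.g., derived from a mobility profile).
prior = {"cafe_A": 0.25, "cafe_B": 0.25, "gym_C": 0.25, "office_D": 0.25}

# Likelihood of announcing the tag "cafe" from each candidate location.
likelihood = {"cafe_A": 0.9, "cafe_B": 0.9, "gym_C": 0.05, "office_D": 0.05}

def posterior(prior, likelihood):
    """P(location | tag) ∝ P(tag | location) * P(location)."""
    unnorm = {loc: prior[loc] * likelihood[loc] for loc in prior}
    z = sum(unnorm.values())
    return {loc: p / z for loc, p in unnorm.items()}

post = posterior(prior, likelihood)
# The two cafes now absorb most of the probability mass: the adversary's
# uncertainty shrinks sharply compared to the uniform prior.
```

The same update, repeated along a trace and combined with time-of-day priors, is the kind of semantic exploitation the thesis quantifies.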

    ABAKA: a novel attribute-based k-anonymous collaborative solution for LBSs

    The increasing use of mobile devices, along with advances in telecommunication systems, has increased the popularity of Location-Based Services (LBSs). In LBSs, users share their exact location with a potentially untrusted Location-Based Service Provider (LBSP). In such a scenario, user privacy becomes a major concern: knowledge of a user's location may lead to her identification as well as continuous tracing of her position. Researchers have proposed several approaches to preserve users' location privacy. They have also shown that hiding the location of an LBS user is not enough to guarantee her privacy, i.e., the user's profile attributes or the background knowledge of an attacker may reveal the user's identity. In this paper we propose ABAKA, a novel collaborative approach that provides identity privacy for LBS users by considering users' profile attributes. In particular, our solution guarantees p-sensitive k-anonymity for the user that sends an LBS request to the LBSP. ABAKA computes a cloaked area through collaborative multi-hop forwarding of the LBS query, using Ciphertext-Policy Attribute-Based Encryption (CP-ABE). We ran a thorough set of experiments to evaluate our solution: the results confirm the feasibility and efficiency of our proposal.
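    The p-sensitive k-anonymity condition that ABAKA targets can be stated compactly; the check below is a minimal sketch with invented group data and attribute names, and it omits how ABAKA actually builds the group (multi-hop forwarding under CP-ABE).

```python
# Sketch of the p-sensitive k-anonymity condition: a cloaked group must
# contain at least k users AND at least p distinct values of the sensitive
# attribute, so that group membership alone reveals neither identity nor
# the sensitive value. Group contents here are illustrative.

def is_p_sensitive_k_anonymous(group, k, p, sensitive_attr):
    if len(group) < k:
        return False
    distinct = {user[sensitive_attr] for user in group}
    return len(distinct) >= p

group = [
    {"id": 1, "illness": "flu"},
    {"id": 2, "illness": "flu"},
    {"id": 3, "illness": "asthma"},
    {"id": 4, "illness": "diabetes"},
]
print(is_p_sensitive_k_anonymous(group, k=4, p=3, sensitive_attr="illness"))  # True
print(is_p_sensitive_k_anonymous(group, k=4, p=4, sensitive_attr="illness"))  # False
```

Plain k-anonymity would accept any group of size k; the extra p-distinct-values requirement is what blocks attribute-disclosure attacks on homogeneous groups.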

    Constructing elastic distinguishability metrics for location privacy

    With the increasing popularity of hand-held devices, location-based applications and services have access to accurate and real-time location information, raising serious privacy concerns for their users. The recently introduced notion of geo-indistinguishability tries to address this problem by adapting the well-known concept of differential privacy to the area of location-based systems. Although geo-indistinguishability presents various appealing aspects, it has the problem of treating space in a uniform way, imposing the addition of the same amount of noise everywhere on the map. In this paper we propose a novel elastic distinguishability metric that warps the geometrical distance, capturing the different degrees of density of each area. As a consequence, the obtained mechanism adapts the level of noise while achieving the same degree of privacy everywhere. We also show how such an elastic metric can easily incorporate the concept of a "geographic fence" that is commonly employed to protect the highly recurrent locations of a user, such as his home or work. We perform an extensive evaluation of our technique by building an elastic metric for Paris' wide metropolitan area, using semantic information from the OpenStreetMap database. We compare the resulting mechanism against the Planar Laplace mechanism satisfying standard geo-indistinguishability, using two real-world datasets from the Gowalla and Brightkite location-based social networks. The results show that the elastic mechanism adapts well to the semantics of each area, adjusting the noise as we move outside the city center, hence offering better overall privacy.
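    The baseline the paper compares against, the standard Planar Laplace mechanism for geo-indistinguishability, can be sketched in a few lines; this is not the paper's elastic variant (which warps the metric by semantic density), and it treats coordinates as planar for simplicity.

```python
import math
import random

# Sketch of the Planar Laplace mechanism: sample a uniform angle and a
# radius with density eps^2 * r * exp(-eps * r), i.e. Gamma(shape=2,
# rate=eps), which is the sum of two Exp(eps) draws.

def planar_laplace(x, y, epsilon, rng=random):
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = -(math.log(rng.random()) + math.log(rng.random())) / epsilon
    return x + r * math.cos(theta), y + r * math.sin(theta)

rng = random.Random(42)
noisy_x, noisy_y = planar_laplace(48.8566, 2.3522, epsilon=0.1, rng=rng)
# Smaller epsilon means larger expected displacement (mean radius 2/epsilon),
# i.e. stronger privacy; the elastic mechanism effectively varies this scale
# with the local semantic density instead of keeping it uniform.
```

Because the noise is the same everywhere, this baseline over-protects dense city centers and under-protects sparse suburbs, which is exactly the non-uniformity the elastic metric addresses.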

    Exploring personalized life cycle policies

    Ambient Intelligence imposes many challenges in protecting people's privacy. Storing privacy-sensitive data permanently will inevitably result in privacy violations. Limited retention techniques might prove useful in order to limit the risks of unwanted and irreversible disclosure of privacy-sensitive data. To overcome the rigidness of simple limited retention policies, life-cycle policies more precisely describe when and how data can first be degraded and finally destroyed. This allows users themselves to determine an adequate compromise between privacy and data retention. However, implementing and enforcing these policies is a difficult problem, since traditional databases are not designed or optimized for deleting data. In this report, we recall the formerly introduced life-cycle policy model and the already developed techniques for handling a single collective policy for all data in a relational database management system. We identify the problems raised by loosening this single-policy constraint and propose preliminary techniques for concurrently handling multiple policies in one data store. The main technical consequence for the storage structure is that, when multiple policies are allowed, the degradation order of tuples no longer always matches the insertion order. Apart from the technical aspects, we show that personalizing the policies introduces some inference breaches which have to be further investigated. To make such an investigation possible, we introduce a metric for privacy, which makes it possible to compare the amount of privacy provided with the amount of privacy required by the policy.
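    A personalized life-cycle policy can be pictured as a per-user schedule of degradation steps applied to a record as it ages; the sketch below is illustrative only, with invented stage boundaries and generalization functions, whereas the report's techniques operate inside a relational DBMS.

```python
# Toy life-cycle policy: a list of (age_threshold_days, transform) stages,
# sorted by ascending age. Each stage that has been reached is applied in
# order; a stage whose transform is None destroys the record entirely.

def degrade(record, age_days, policy):
    current = record
    for threshold, transform in policy:
        if age_days >= threshold:
            current = transform(current) if transform else None
    return current

# Hypothetical policy for a stored location record.
policy = [
    (0,   lambda r: r),                                 # keep exact data
    (7,   lambda r: {"city": r["city"],
                     "coords": None}),                  # drop precise coords
    (30,  lambda r: {"city": r["city"]}),               # keep coarse context only
    (365, None),                                        # destroy
]

rec = {"coords": (48.8566, 2.3522), "city": "Paris"}
print(degrade(rec, 40, policy))   # {'city': 'Paris'}
print(degrade(rec, 400, policy))  # None
```

With a single collective policy, tuples degrade in insertion order; once each user picks her own schedule, the store must degrade tuples out of order, which is the storage-structure consequence the report analyzes.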

    PRIVED: a Privacy Model for Heterogeneous Mobile Networks

    We propose a network-oriented privacy model (PRIVED) composed of a well-defined information model, using events, information sets and relationships to define the conceptual privacy relationships that can occur in the network. We propose formal rules and a network instantiation, using linkability and correlation as the main tools for network applicability. We also use the model to determine the best approaches to privacy protection in the network, resulting in a vertical/horizontal network privacy dichotomy.

    Policy-agnostic programming on the client-side

    Browser security has become a major concern, especially as web pages grow more complex. These web applications handle a lot of information, including sensitive data that may be vulnerable to attacks like data exfiltration, cross-site scripting (XSS), etc. Most modern browsers have security mechanisms in place to prevent such attacks, but they still fall short in preventing more advanced attacks like evolved variants of data exfiltration. Moreover, no standard is followed when implementing security in the browser. A lot of research has been done in the field of information flow security that could prove helpful in securing the client-side. Policy-agnostic programming is a programming paradigm that aims to make the implementation of information flow security in real-world systems more flexible. In this paper, we explore the use of policy-agnostic programming on the client-side and how it helps prevent common client-side attacks. We verify our results through a client-side salary management application. We show a possible attack and how our solution would prevent it.
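    The core idea of policy-agnostic programming, attaching the policy to the value rather than scattering checks through the code, is often realized with faceted values (in the spirit of systems like Jeeves). The sketch below is not the paper's implementation; the viewer names and policy API are invented for illustration.

```python
# Minimal faceted-value sketch: a secret facet and a public facet travel
# together with a policy, and the runtime decides which facet a given
# viewer observes. Application code never writes explicit access checks.

class Faceted:
    def __init__(self, secret, public, policy):
        self.secret = secret      # real value
        self.public = public      # default shown to unauthorized viewers
        self.policy = policy      # predicate over the observing principal

    def observe(self, viewer):
        return self.secret if self.policy(viewer) else self.public

# Hypothetical salary record: visible only to its owner or HR.
salary = Faceted(85000, "<redacted>", lambda v: v in ("alice", "hr"))

print(salary.observe("alice"))    # 85000
print(salary.observe("mallory"))  # <redacted>
```

Because the policy is enforced at every observation point, even code paths the programmer did not anticipate (e.g. an exfiltration attempt) can only ever see the public facet.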