
    Location Privacy and Inference in Online Social Networks

    Data protection is about protecting information about persons, which currently flows without much control: individuals cannot easily exercise the rights granted by the EU General Data Protection Regulation (GDPR). Individuals benefit from “free” services offered by companies in exchange for their data, but these companies keep their users’ data in “silos” that impede transparency about its use and prevent easy interaction. The GDPR grants individuals control rights and the free portability of personal data from one entity to another. However, it is still beyond an individual’s capability to perceive whether their data is managed in compliance with the GDPR. In this regard, the approach proposed in this work uses decentralized mechanisms: distributed ledgers to provide transparency, smart contracts for data-flow governance, and semantic web technologies for interoperability.

    PrivCheck: Privacy-Preserving Check-in Data Publishing for Personalized Location Based Services

    With the widespread adoption of smartphones, we have observed an increasing popularity of Location-Based Services (LBSs) in the past decade. To improve user experience, LBSs often provide personalized recommendations to users by mining their activity (i.e., check-in) data from location-based social networks. However, releasing user check-in data makes users vulnerable to inference attacks, as private data (e.g., gender) can often be inferred from the users' check-in data. In this paper, we propose PrivCheck, a customizable and continuous privacy-preserving check-in data publishing framework providing users with continuous privacy protection against inference attacks. The key idea of PrivCheck is to obfuscate user check-in data such that the privacy leakage of user-specified private data is minimized under a given data distortion budget, which ensures the utility of the obfuscated data to empower personalized LBSs. Since users often give LBS providers access to both their historical check-in data and future check-in streams, we develop two data obfuscation methods, for historical and online check-in publishing respectively. An empirical evaluation on two real-world datasets shows that our framework can efficiently provide effective and continuous protection of user-specified private data, while still preserving the utility of the obfuscated data for personalized LBSs.
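    The abstract's key idea — perturbing check-in data up to a distortion budget before publishing — can be illustrated with a much-simplified sketch. This is not the paper's actual optimization (PrivCheck minimizes inference-attack leakage); here the budget is simply interpreted as the expected fraction of check-ins allowed to change, and `obfuscate_checkins`, its parameters, and the venue names are all hypothetical.

```python
import random

def obfuscate_checkins(checkins, venues, budget, seed=None):
    """Illustrative sketch only: replace each check-in with a uniformly
    random venue with probability `budget`, so the expected fraction of
    distorted check-ins stays within the distortion budget."""
    rng = random.Random(seed)
    p = min(max(budget, 0.0), 1.0)  # clamp budget to a probability
    published = []
    for venue in checkins:
        if rng.random() < p:
            published.append(rng.choice(venues))  # obfuscated check-in
        else:
            published.append(venue)               # published unchanged
    return published
```

    With budget 0.0 the data is published unchanged (maximal utility, no protection); with budget 1.0 every check-in is randomized (maximal protection, no utility) — the interesting operating points lie in between.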

    Protecting Privacy on Social Media: Is Consumer Privacy Self-Management Sufficient?

    Among the existing solutions for protecting privacy on social media, a popular doctrine is privacy self-management, which asks users to directly control the sharing of their information through privacy settings. While most existing research focuses on whether a user makes informed and rational decisions on privacy settings, we address a novel yet important question: whether these settings are indeed effective in practice. Specifically, we conduct an observational study on the effect of the most prominent privacy setting on Twitter, the protected mode. Our results show that, even after setting an account to protected, real-world account owners still have private information continuously disclosed, mostly through tweets posted by the owner’s connections. This illustrates a fundamental limit of privacy self-management: its inability to control the peer-disclosure of private information by an individual’s friends. Our results also point to a potential remedy: a comparative study before vs. after an account became protected shows a substantial decrease of peer-disclosure in posts where other users proactively mention the protected user, but no significant change when other users are reacting to the protected user’s posts. In addition, peer-disclosure through explicit specification, such as the direct mentioning of a user’s location, decreases sharply, but no significant change occurs for implicit inference, such as the disclosure of a birthday through the date of a “happy birthday” message. The design implication is that online social networks should provide support alerting users to potential peer-disclosure through implicit inference, especially when a user is reacting to the activities of a user in the protected mode.

    Enabling Social Applications via Decentralized Social Data Management

    An unprecedented information wealth produced by online social networks, further augmented by location/collocation data, is currently fragmented across different proprietary services. Combined, it can accurately represent the social world and enable novel socially-aware applications. We present Prometheus, a socially-aware peer-to-peer service that collects social information from multiple sources into a multigraph managed in a decentralized fashion on user-contributed nodes, and exposes it through an interface implementing non-trivial social inferences while complying with user-defined access policies. Simulations and experiments on PlanetLab with emulated application workloads show the system exhibits good end-to-end response time, low communication overhead and resilience to malicious attacks. Comment: 27 pages, single ACM column, 9 figures, accepted in Special Issue on Foundations of Social Computing, ACM Transactions on Internet Technology.
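    The core data model the abstract describes — a labeled social multigraph whose queries are gated by user-defined access policies — can be sketched in a few lines. This is an illustrative toy, not Prometheus's API; the class `SocialMultigraph`, its methods, and the per-owner allow-list policy model are all assumptions for the sake of the example.

```python
from collections import defaultdict

class SocialMultigraph:
    """Toy sketch of a labeled social graph with per-user access policies."""

    def __init__(self):
        self.edges = defaultdict(set)  # (src, label) -> set of dst nodes
        self.policies = {}             # owner -> set of allowed requesters

    def add_edge(self, src, label, dst):
        self.edges[(src, label)].add(dst)

    def set_policy(self, owner, allowed):
        self.policies[owner] = set(allowed)

    def neighbors(self, requester, user, label):
        # Enforce the owner's policy before exposing any social data.
        if requester != user and requester not in self.policies.get(user, set()):
            raise PermissionError("access denied by user-defined policy")
        return self.edges[(user, label)]
```

    In the real system this graph is partitioned across user-contributed peers and inferences (e.g., social strength between two users) are computed over multiple edge labels; the sketch only shows the policy gate that every such query would pass through.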