
    Differential Privacy Techniques for Cyber Physical Systems: A Survey

    Modern cyber physical systems (CPSs) are widely used in our daily lives thanks to developments in information and communication technologies (ICT). With the proliferation of CPSs, the security and privacy threats associated with these systems are also increasing. Passive attacks are being used by intruders to gain access to private information in CPSs. In order to make CPS data more secure, certain privacy preservation strategies such as encryption and k-anonymity have been presented in the past. However, with the advances in CPS architecture, these techniques also need certain modifications. Meanwhile, differential privacy has emerged as an efficient technique to protect CPS data privacy. In this paper, we present a comprehensive survey of differential privacy techniques for CPSs. In particular, we survey the application and implementation of differential privacy in four major application domains of CPSs: energy systems, transportation systems, healthcare and medical systems, and the industrial Internet of things (IIoT). Furthermore, we present open issues, challenges, and future research directions for differential privacy techniques for CPSs. This survey can serve as a basis for the development of modern differential privacy techniques to address various problems and data privacy scenarios of CPSs. Comment: 46 pages, 12 figures
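
    To make the survey's central tool concrete: most differentially private data releases for CPSs build on the Laplace mechanism, which adds noise calibrated to a query's sensitivity. The minimal Python sketch below is illustrative only; the smart-meter scenario, the sensitivity bound of 5 kWh, and the epsilon value are assumptions, not taken from the survey:

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
        # Release true_value with epsilon-differential privacy: the noise
        # scale grows with sensitivity (the max effect of one record) and
        # shrinks as the privacy budget epsilon grows.
        rng = rng or np.random.default_rng()
        return true_value + rng.laplace(0.0, sensitivity / epsilon)

    # Hypothetical example: total consumption over 1,000 smart meters, each
    # reading at most 5 kWh, so one household changes the sum by at most 5.
    readings = np.random.default_rng(0).uniform(0, 5, size=1000)
    noisy_total = laplace_mechanism(readings.sum(), sensitivity=5.0, epsilon=0.5)
    print(f"true = {readings.sum():.1f} kWh, private release = {noisy_total:.1f} kWh")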

    A Random Matrix Approach to Differential Privacy and Structure Preserved Social Network Graph Publishing

    Online social networks are increasingly used for analyzing various societal phenomena such as epidemiology, information dissemination, marketing, and sentiment flow. Popular analysis techniques such as clustering and influential node analysis require the computation of eigenvectors of the real graph's adjacency matrix. Recent de-anonymization attacks on the Netflix and AOL datasets show that open access to such graphs poses privacy threats. Among the various privacy preserving models, differential privacy provides the strongest privacy guarantees. In this paper we propose a privacy preserving mechanism for publishing social network graph data that satisfies differential privacy guarantees by combining random matrix theory with differential privacy. The key idea is to project each row of an adjacency matrix to a low-dimensional space using the random projection approach and then perturb the projected matrix with random noise. We show that, compared to existing approaches for differentially private approximation of eigenvectors, our approach is computationally efficient, preserves utility, and satisfies differential privacy. We evaluate our approach on social network graphs of Facebook, LiveJournal, and Pokec. The results show that even for high values of noise variance (sigma = 1) the clustering quality given by normalized mutual information gain is as low as 0.74. For influential node discovery, the proposed approach is able to correctly recover 80% of the most influential nodes. We also compare our results with an approach presented in [43], which directly perturbs the eigenvector of the original data with Laplacian noise. The results show that this approach requires a large random perturbation in order to preserve differential privacy, which leads to a poor estimation of eigenvectors for large social networks.
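
    A minimal sketch of the mechanism as summarized above: project each adjacency-matrix row to a low-dimensional space with a random matrix, then perturb the projection with Gaussian noise. The projection dimension k, the noise level sigma, and the noise calibration here are illustrative assumptions, not the paper's exact calibration:

    import numpy as np

    def private_projection(adj, k, sigma, seed=0):
        # adj: (n, n) symmetric 0/1 adjacency matrix; returns a noisy (n, k) projection.
        rng = np.random.default_rng(seed)
        n = adj.shape[0]
        # Johnson-Lindenstrauss style random projection matrix.
        P = rng.normal(0.0, 1.0 / np.sqrt(k), size=(n, k))
        projected = adj @ P                      # each row now lives in R^k
        return projected + rng.normal(0.0, sigma, size=projected.shape)

    # Toy usage on a small random graph; the spectral structure of the noisy
    # projection approximates that of the original graph.
    rng = np.random.default_rng(1)
    A = (rng.random((100, 100)) < 0.1).astype(float)
    A = np.triu(A, 1); A = A + A.T               # symmetric, zero diagonal
    Y = private_projection(A, k=20, sigma=1.0)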

    Release Connection Fingerprints in Social Networks Using Personalized Differential Privacy

    In social networks, different users may have different privacy preferences, and there are many users with public identities. Most work on differentially private social network data publication neglects this fact. We aim to release the number of public users that a private user connects to within n hops, called n-range connection fingerprints (CFPs), under user-level personalized privacy preferences. We propose two schemes, Distance-based exponential budget absorption (DEBA) and Distance-based uniformly budget absorption using Ladder function (DUBA-LF), for privacy-preserving publication of the CFPs based on personalized differential privacy (PDP), and we conduct a theoretical analysis of the privacy guarantees provided by the proposed schemes. Experiments show that the proposed schemes achieve lower publication error on real datasets. Comment: A short version of this paper is accepted for publication in Chinese Journal of Electronics
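
    The personalized-DP ingredient can be sketched as follows: each private user u releases their n-range CFP count under their own budget epsilon_u via the Laplace mechanism. The DEBA/DUBA-LF budget-absorption strategies are the paper's contribution and are not reproduced here; the function name, the unit sensitivity, and the per-user budgets are hypothetical:

    import numpy as np

    def release_cfps(hop_counts, epsilons, sensitivity=1.0, seed=0):
        # hop_counts[u]: true n-range CFP of user u; epsilons[u]: u's own budget.
        # A smaller epsilon_u means stronger protection and a noisier output for u.
        rng = np.random.default_rng(seed)
        return {u: c + rng.laplace(0.0, sensitivity / epsilons[u])
                for u, c in hop_counts.items()}

    print(release_cfps({"alice": 12, "bob": 40}, {"alice": 0.1, "bob": 1.0}))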

    Securing Social Media User Data - An Adversarial Approach

    Social media users generate tremendous amounts of data. To better serve users, user-related data often needs to be shared among researchers, advertisers, and application developers. Publishing such data, however, raises concerns about user privacy. To encourage data sharing and mitigate user privacy concerns, a number of anonymization and de-anonymization algorithms have been developed to help protect the privacy of social media users. In this work, we propose a new adversarial attack specialized for social media data. We further provide a principled way to assess the effectiveness of anonymizing different aspects of social media data. Our work sheds light on new privacy risks in social media data arising from the innate heterogeneity of user-generated data, which requires striking a balance between sharing user data and protecting user privacy. Comment: Published in the 29th ACM Conference on Hypertext and Social Media, Baltimore, MD, USA (HT-18)

    Differentially Private Continual Release of Graph Statistics

    Motivated by understanding the dynamics of sensitive social networks over time, we consider the problem of continual release of statistics in a network that arrives online, while preserving the privacy of its participants. For our privacy notion, we use differential privacy -- the gold standard in privacy for statistical data analysis. The main challenge in this problem is maintaining a good privacy-utility tradeoff: naive solutions that compose across time, as well as solutions suited to tabular data, either lead to poor utility or do not directly apply. In this work, we show that if there is a publicly known upper bound on the maximum degree of any node in the entire network sequence, then we can release many common graph statistics, such as degree distributions and subgraph counts, continually with a better privacy-accuracy tradeoff. Code available at https://bitbucket.org/shs037/graphprivacycod
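
    To see why a public degree bound helps (a sketch of the idea, not the paper's continual-release mechanism): under node-level privacy with a known maximum degree D, adding one node of degree at most D changes the degree histogram by at most 2D + 1 in L1 distance (one change for its own bucket, plus two bucket moves per neighbor), so a single Laplace release can be calibrated as below. The toy data and the one-shot, non-continual setting are assumptions:

    import numpy as np

    def dp_degree_histogram(degrees, max_degree, epsilon, seed=0):
        rng = np.random.default_rng(seed)
        hist = np.bincount(degrees, minlength=max_degree + 1).astype(float)
        sensitivity = 2 * max_degree + 1      # crude node-level L1 bound
        return hist + rng.laplace(0.0, sensitivity / epsilon, size=hist.shape)

    degrees = np.array([1, 2, 2, 3, 1, 4, 2])  # toy network with D = 4
    print(dp_degree_histogram(degrees, max_degree=4, epsilon=1.0))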

    Privacy-Preserving Collaborative Deep Learning with Unreliable Participants

    With powerful parallel computing GPUs and massive user data, neural-network-based deep learning can fully exert its strong power in problem modeling and solving, and has achieved great success in many applications such as image classification, speech recognition, and machine translation. While deep learning has become increasingly popular, the problem of privacy leakage has become more and more urgent. Given that the training data may contain highly sensitive information, e.g., personal medical records, directly sharing it among the users (i.e., participants) or centrally storing it in one single location may pose a considerable threat to user privacy. In this paper, we present a practical privacy-preserving collaborative deep learning system that allows users to cooperatively build a collective deep learning model with the data of all participants, without direct data sharing or central data storage. In our system, each participant trains a local model with their own data and shares only model parameters with the others. To further avoid potential privacy leakage from sharing model parameters, we use the functional mechanism to perturb the objective function of the neural network in the training process to achieve ε-differential privacy. In particular, for the first time, we consider the existence of unreliable participants, i.e., participants with low-quality data, and propose a solution to reduce the impact of these participants while protecting their privacy. We evaluate the performance of our system on two well-known real-world datasets for regression and classification tasks. The results demonstrate that the proposed system is robust against unreliable participants, and achieves accuracy close to that of a model trained in the traditional centralized manner while ensuring rigorous privacy protection.
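
    The paper's functional mechanism perturbs the coefficients of the training objective itself; as a simpler, clearly-labeled stand-in for the overall pattern (share noisy model parameters, never raw data), the sketch below averages locally trained weight vectors after clipping and Laplace noise. The clipping bound, budget, and aggregation rule are illustrative assumptions, not the paper's method:

    import numpy as np

    def private_average(local_weights, clip=1.0, epsilon=1.0, seed=0):
        # local_weights: one trained parameter vector per participant.
        rng = np.random.default_rng(seed)
        noisy = []
        for w in local_weights:
            w = w * min(1.0, clip / (np.linalg.norm(w, 1) + 1e-12))  # bound L1 sensitivity
            noisy.append(w + rng.laplace(0.0, clip / epsilon, size=w.shape))
        return np.mean(noisy, axis=0)  # the server only ever sees noisy parameters

    participants = [np.random.default_rng(i).normal(size=8) for i in range(5)]
    print(private_average(participants))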

    Social Networks Research Aspects: A Vast and Fast Survey Focused on the Issue of Privacy in Social Network Sites

    The increasing participation of people in online activities in recent years, such as content publishing and having different kinds of relationships and interactions, along with the emergence of online social networks and people's extensive tendency toward them, has resulted in the generation and availability of a huge amount of valuable information that has never been available before, and has introduced some new, attractive, varied, and useful research areas to researchers. In this paper we review some of the existing research on the information of SNSs (Social Network Sites) and introduce some of the attractive applications of analyzing this information, which leads to the introduction of new research areas. By reviewing the research in this area, we present a categorization of research topics about online social networks. This categorization includes seventeen research subtopics or subareas, which are introduced along with some of the accomplished research in these subareas. Given the consequences, ranging from slight to significant and sometimes catastrophic, that the revelation of personal and private information can have, privacy in online social networks is a research area that has been investigated extensively. After an overview of the different research subareas of SNSs, we focus on the subarea of privacy protection in social networks and introduce its different aspects along with a categorization of these aspects.

    On the Computational Complexities of Three Privacy Measures for Large Networks Under Active Attack

    With the arrival of the modern internet era, large public networks of various types have come into existence to benefit society as a whole and several research areas such as sociology, economics, and geography in particular. However, the societal and research benefits of these networks have also given rise to potentially significant privacy issues, in the sense that malicious entities may violate the privacy of the users of such a network by analyzing it and deliberately using such privacy violations for deleterious purposes. Such considerations have given rise to a new active research area that deals with the quantification of the privacy of users in large networks and the corresponding investigation of the computational complexity of computing such quantified privacy measures. In this paper, we formalize three such privacy measures for large networks and provide non-trivial theoretical computational complexity results for computing these measures. Our results show that the first two measures can be computed efficiently, whereas the third measure is provably hard to compute within a logarithmic approximation factor. Furthermore, we also provide computational complexity results for the case when the privacy requirement of the network is severely restricted, including an efficient logarithmic approximation. Comment: 21 pages, 3 figures

    Privacy in Social Media: Identification, Mitigation and Applications

    The increasing popularity of social media has attracted a huge number of people who participate in numerous activities on a daily basis, resulting in tremendous amounts of rich user-generated data. This data provides opportunities for researchers and service providers to study and better understand users' behaviors and further improve the quality of personalized services. Publishing user-generated data, however, risks exposing individuals' privacy. User privacy in social media is an emerging research area and has attracted increasing attention in recent years. Existing works study privacy issues in social media from two different points of view: identification of vulnerabilities, and mitigation of privacy risks. Recent research has shown the vulnerability of user-generated data to two general types of attacks, identity disclosure and attribute disclosure. These privacy issues mandate that social media data publishers protect users' privacy by sanitizing user-generated data before publishing it. Consequently, various protection techniques have been proposed to anonymize user-generated social media data. There is a vast literature on the privacy of users in social media from many perspectives. In this survey, we review the key achievements of user privacy in social media. In particular, we review and compare the state-of-the-art algorithms in terms of privacy leakage attacks and anonymization algorithms. We overview the privacy risks from different aspects of social media and categorize the relevant works into five groups: 1) graph data anonymization and de-anonymization, 2) author identification, 3) profile attribute disclosure, 4) user location and privacy, and 5) recommender systems and privacy issues. We also discuss open problems and future research directions for user privacy issues in social media. Comment: This survey is currently under review

    Protecting User Privacy: An Approach for Untraceable Web Browsing History and Unambiguous User Profiles

    The overturning of the Internet Privacy Rules by the Federal Communications Commission (FCC) in late March 2017 allows Internet Service Providers (ISPs) to collect, share, and sell their customers' Web browsing data without their consent. With third-party trackers embedded on Web pages, this new rule has put user privacy at greater risk. The need arises for users to protect their Web browsing history from potential adversaries on their own. Although some available solutions such as Tor, VPNs, and HTTPS can help users conceal their online activities, their use can also significantly hamper personalized online services, i.e., degrade utility. In this paper, we design an effective Web browsing history anonymization scheme, PBooster, that aims to protect users' privacy while retaining the utility of their Web browsing history. The proposed model pollutes users' Web browsing history by automatically inferring how many and which links should be added to the history, while addressing the utility-privacy trade-off challenge. We conduct experiments to validate the quality of the manipulated Web browsing history and examine the robustness of the proposed approach for user privacy protection. Comment: This paper is accepted at the 12th ACM International Conference on Web Search and Data Mining (WSDM-2019)
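
    PBooster's actual link-selection strategy is the paper's contribution and is not reproduced here; the toy sketch below only illustrates the general history-polluting idea of injecting decoy URLs among genuine visits. All URLs, the topic pools, and the one-decoy-per-visit ratio are hypothetical:

    import random

    TOPIC_POOLS = {
        "news":   ["https://example-news.com/a", "https://example-news.com/b"],
        "sports": ["https://example-sports.com/x", "https://example-sports.com/y"],
    }

    def pollute_history(history, topics, decoys_per_visit=1, seed=0):
        # Insert decoy URLs at random positions so injected links are not
        # distinguishable from genuine visits by position alone.
        rng = random.Random(seed)
        polluted = list(history)
        for _ in range(decoys_per_visit * len(history)):
            pool = TOPIC_POOLS[rng.choice(topics)]
            polluted.insert(rng.randrange(len(polluted) + 1), rng.choice(pool))
        return polluted

    print(pollute_history(["https://real-site.example/page"], ["news", "sports"]))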