
    Contextual Linear Types for Differential Privacy

    Language support for differentially private programming is both crucial and delicate. While elaborate program logics can be very expressive, type-system-based approaches using linear types tend to be more lightweight and amenable to automatic checking and inference, particularly in the presence of higher-order programming. Since the seminal design of Fuzz, which is restricted to ε-differential privacy, much effort has gone into supporting more advanced variants of differential privacy, such as (ε, δ)-differential privacy. However, supporting these advanced privacy variants while also fully supporting higher-order programming has proven challenging. We present Jazz, a language and type system that uses linear types and latent contextual effects to support both advanced variants of differential privacy and higher-order programming. Even when advanced variants and higher-order programming are avoided, our system achieves higher precision than prior work for a large class of programming patterns. We formalize the core of the Jazz language, prove it sound for privacy via a logical relation for metric preservation, and illustrate its expressive power through a number of case studies drawn from the recent differential privacy literature.
    Comment: Journal revision
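    The ε-differential privacy that Fuzz tracks (and Jazz generalizes) is typically realized by calibrating noise to a query's sensitivity, which is exactly the quantity such linear type systems bound statically. A minimal Python sketch of the classic Laplace mechanism (illustrative only; this is not Jazz code, and the query and parameters are assumptions):

    ```python
    import math
    import random

    def laplace_mechanism(true_value, sensitivity, epsilon):
        """Release true_value with Laplace noise of scale sensitivity/epsilon,
        which satisfies epsilon-differential privacy for the query."""
        scale = sensitivity / epsilon
        # Sample Laplace(0, scale) by inverting the CDF of a uniform draw.
        u = random.random() - 0.5
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        return true_value + noise

    # A counting query changes by at most 1 when a single record is
    # added or removed, so its sensitivity is 1.
    dataset = [3, 7, 1, 9, 4]
    noisy_count = laplace_mechanism(len(dataset), sensitivity=1.0, epsilon=0.5)
    ```

    A type system in the Fuzz tradition would verify the 1-sensitivity claim at compile time, before any noise is added, rather than trusting the annotation.
    
    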

    Adversarial Analysis of the Differentially-Private Federated Learning in Cyber-Physical Critical Infrastructures

    Differential privacy (DP) is considered an effective privacy-preservation method for securing the promising distributed machine learning (ML) paradigm of federated learning (FL) against privacy attacks (e.g., membership inference attacks). Nevertheless, while the DP mechanism greatly alleviates privacy concerns, recent studies have shown that it can be exploited to conduct security attacks (e.g., false data injection attacks). To address such attacks on FL-based applications in critical infrastructures, in this paper we perform the first systematic study of DP-exploited poisoning attacks from an adversarial point of view. We demonstrate that the DP method, despite providing a level of privacy guarantee, can effectively open a new poisoning attack vector for the adversary. Our theoretical analysis and empirical evaluation on a smart grid dataset show FL performance degradation (sub-optimal model generation) due to differential-noise-exploited selective model poisoning attacks. As a countermeasure, we propose a reinforcement learning-based differential privacy level selection (rDP) process. The rDP process uses the differential privacy parameters (privacy loss, information leakage probability, etc.) and the observed losses to intelligently generate an optimal privacy level for the nodes. The evaluation shows that the accumulated reward and errors of the proposed technique converge to an optimal privacy policy.
    Comment: 11 pages, 5 figures, 4 tables. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
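    The rDP idea of learning a privacy level can be illustrated with a toy bandit-style sketch: an agent picks a discrete ε, observes a noisy utility signal, and updates value estimates until the reward settles. The reward shape, privacy levels, and error model below are all illustrative assumptions, not the paper's formulation:

    ```python
    import random

    # Hypothetical discrete privacy levels (epsilon values) the agent may pick.
    PRIVACY_LEVELS = [0.1, 0.5, 1.0, 2.0]

    def reward(epsilon, model_error):
        # Toy reward: penalize model error (utility loss) and, mildly,
        # large epsilon (weaker privacy). Coefficients are illustrative.
        return -model_error - 0.1 * epsilon

    def select(q, explore=0.1):
        # Epsilon-greedy action selection over the privacy levels.
        if random.random() < explore:
            return random.randrange(len(PRIVACY_LEVELS))
        return max(range(len(PRIVACY_LEVELS)), key=q.__getitem__)

    def train(rounds=300, alpha=0.2):
        q = [0.0] * len(PRIVACY_LEVELS)
        for _ in range(rounds):
            a = select(q)
            eps = PRIVACY_LEVELS[a]
            # Stand-in for FL model error under DP noise at this level:
            # smaller epsilon (more noise) means higher error.
            model_error = 1.0 / eps + random.gauss(0.0, 0.05)
            # Running-average update of the value estimate for this level.
            q[a] += alpha * (reward(eps, model_error) - q[a])
        return q
    ```

    Under this toy reward, the estimates converge on the level that best balances the two penalties; the actual rDP process replaces the stand-ins with real DP parameters and observed FL losses.
    
    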

    User's Privacy in Recommendation Systems Applying Online Social Network Data, A Survey and Taxonomy

    Recommender systems have become an integral part of many social networks and extract knowledge from a user's personal and sensitive data, both explicitly, with the user's knowledge, and implicitly. This trend has created major privacy concerns, as users are mostly unaware of what data is being used, how much of it, and how securely. In this context, several studies have addressed privacy concerns in online social network data and recommender systems. This paper surveys the main privacy concerns, measurements, and privacy-preserving techniques used in large-scale online social networks and recommender systems. It draws on prior work on security, privacy preservation, statistical modeling, and datasets to provide an overview of the technical difficulties and problems associated with privacy preservation in online social networks.
    Comment: 26 pages; IET book chapter on big data recommender systems

    Privacy-preserving and Privacy-attacking Approaches for Speech and Audio -- A Survey

    In contemporary society, voice-controlled devices such as smartphones and home assistants have become pervasive due to their advanced capabilities and functionality. The always-on nature of their microphones offers users the convenience of readily accessing these devices. However, recent research and events have revealed that such voice-controlled devices are prone to various forms of malicious attacks, making safeguarding against them a growing concern for both users and researchers. Despite the numerous studies that have investigated adversarial attacks and privacy preservation for images, no comparably conclusive study has been conducted for the audio domain. Therefore, this paper examines existing approaches for privacy-preserving and privacy-attacking strategies for audio and speech. To achieve this goal, we classify the attack and defense scenarios into several categories and provide a detailed analysis of each approach. We also interpret the dissimilarities between the various approaches, highlight their contributions, and examine their limitations. Our investigation reveals that voice-controlled devices based on neural networks are inherently susceptible to specific types of attacks. Although it is possible to enhance the robustness of such models to certain forms of attack, more sophisticated approaches are required to comprehensively safeguard user privacy.

    A Review of Blockchain Technology Based Techniques to Preserve Privacy and to Secure for Electronic Health Records

    Since Bitcoin introduced the blockchain, research has sought to broaden its use cases beyond finance. Healthcare is one sector where blockchain is anticipated to have a major influence. Researchers and practitioners in health informatics constantly struggle to keep up with this field's new but quickly expanding body of research. This paper provides a thorough analysis of recent studies on the application of blockchain-based technology within the healthcare sector. Electronic health records (EHRs) are becoming a crucial tool for healthcare practitioners in achieving these objectives and providing high-quality treatment. Technological and regulatory barriers, such as concerns about outcomes and privacy issues, make it difficult to use these technologies. Although a variety of efforts have addressed the specific privacy and security needs of future applications with functional parameters, there is still a need for research into the application of blockchain-based healthcare systems, their security and privacy complexities and requirements, and possible security threats and countermeasures. The primary objective of this article is to determine how to safeguard electronic health records (EHRs) using blockchain technology in healthcare applications. It discusses contemporary Hyperledger Fabric techniques, InterPlanetary File System (IPFS) storage with blockchain capabilities, privacy-preservation techniques for EHRs, and recommender systems.

    Preserving Co-Location Privacy in Geo-Social Networks

    The number of people on social networks has grown exponentially. Users share very large volumes of personal information and content every day. This content may be tagged with geo-spatial and temporal coordinates that some users consider sensitive. While there is clearly a demand for users to share this information with each other, there is also substantial demand for greater control over the conditions under which it is shared. Content published in a geo-aware social network (GeoSN) often involves, and is accessible to, multiple users, without the publisher being aware of those users' privacy preferences. This makes it difficult for GeoSN users to control which information about them is available and to whom. Thus, the lack of means to protect their privacy deters people concerned about privacy issues. This paper addresses a particular privacy threat that occurs in GeoSNs: the co-location privacy threat. It concerns the availability of information about the presence of multiple users in the same location at given times, against their will. The challenge addressed is that of supporting privacy while still enabling useful services.
    Comment: 10 pages, 5 figures

    SLIS Student Research Journal, Vol. 4, Iss. 1


    Big Data Privacy Context: Literature Effects On Secure Informational Assets

    This article's objective is to identify research opportunities in the current big data privacy domain by evaluating the literature's effects on secure informational assets. Until now, no study has analyzed this relation. Its results can foster science, technology, and business. To achieve these objectives, a Systematic Literature Review (SLR) of big data privacy is performed on the main peer-reviewed scientific journals in the Scopus database. Bibliometric and text-mining analyses complement the SLR. This study supports big data privacy researchers with the most and least researched themes, research novelty, the most cited works and authors, the evolution of themes over time, and more. In addition, TOPSIS and VIKOR rankings were developed to evaluate literature effects against informational asset indicators. Secure Internet Servers (SIS) was chosen as the decision criterion. Results show that the big data privacy literature is strongly focused on computational aspects. However, individuals, societies, organizations, and governments face a technological change that has only just begun to be investigated, with growing concerns about law and regulation. The TOPSIS and VIKOR rankings differed in several positions, and the only country consistent between the literature and SIS adoption is the United States. Countries in the lowest ranking positions represent future research opportunities.
    Comment: 21 pages, 9 figures

    Task-Agnostic Privacy-Preserving Representation Learning for Federated Learning Against Attribute Inference Attacks

    Federated learning (FL) has been widely studied recently due to its ability to collaboratively train on data from different devices without sharing the raw data. Nevertheless, recent studies show that an adversary may still be able to infer private information about devices' data, e.g., sensitive attributes such as income, race, and sexual orientation. To mitigate attribute inference attacks, various existing privacy-preserving FL methods can be adopted or adapted. However, all of these methods have key limitations: they need to know the FL task in advance, incur intolerable computational overheads or utility losses, or lack provable privacy guarantees. We address these issues and design a task-agnostic privacy-preserving representation learning method for FL ({\bf TAPPFL}) against attribute inference attacks. TAPPFL is formulated via information theory. Specifically, TAPPFL has two mutual information goals: one learns task-agnostic data representations that contain the least information about the private attribute in each device's data, and the other ensures that the learnt representations include as much information as possible about the device data, to maintain FL utility. We also derive privacy guarantees of TAPPFL against worst-case attribute inference attacks, as well as the inherent tradeoff between utility preservation and privacy protection. Extensive results on multiple datasets and applications validate the effectiveness of TAPPFL in protecting data privacy, maintaining FL utility, and remaining efficient. Experimental results also show that TAPPFL outperforms existing defenses\footnote{Source code and full version: \url{https://github.com/TAPPFL}}.
    Comment: Accepted by AAAI 2024; full version
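    The two-goal structure can be sketched with simple proxies: an adversary's cross-entropy stands in for the representation-attribute mutual information term, and a reconstruction error stands in for the representation-data term. This is an illustrative stand-in, not TAPPFL's actual estimator; the function names and the λ weight are assumptions:

    ```python
    import math

    def attribute_ce(attr_probs, private_attr):
        """Cross-entropy of an adversary predicting the private attribute from
        the representation; a common proxy tied to I(representation; attribute)."""
        return -math.log(attr_probs[private_attr] + 1e-12)

    def reconstruction_mse(x, x_hat):
        """Error reconstructing the raw data from the representation; a proxy
        for how much device-data information the representation retains."""
        return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

    def two_goal_loss(attr_probs, private_attr, x, x_hat, lam=1.0):
        # Goal 1: make the adversary's job hard -- SUBTRACT its loss, so the
        # encoder is rewarded when the attribute is unpredictable.
        # Goal 2: keep the representation informative about x -- ADD the
        # reconstruction error, so the encoder is penalized for losing it.
        return reconstruction_mse(x, x_hat) - lam * attribute_ce(attr_probs, private_attr)

    # A representation that reveals the attribute (confident adversary)
    # incurs a higher loss than one that hides it (adversary at chance).
    revealing = two_goal_loss([0.05, 0.95], 1, [1.0, 2.0], [1.0, 2.0])
    hiding = two_goal_loss([0.5, 0.5], 1, [1.0, 2.0], [1.0, 2.0])
    ```

    Minimizing such a loss pushes the encoder toward representations that keep the utility term small while driving the adversary's accuracy toward chance.
    
    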