3,164 research outputs found

    Individual Privacy vs Population Privacy: Learning to Attack Anonymization

    Over the last decade there have been great strides made in developing techniques to compute functions privately. In particular, Differential Privacy gives strong guarantees about the conclusions that can be drawn about an individual. In contrast, various syntactic methods for providing privacy (criteria such as k-anonymity and l-diversity) have been criticized for still allowing private information about an individual to be inferred. In this report, we consider the ability of an attacker to use data meeting privacy definitions to build an accurate classifier. We demonstrate that even under Differential Privacy, such classifiers can be used to accurately infer "private" attributes in realistic data. We compare this to similar inference-based attacks on other forms of anonymized data. We place these attacks on the same scale, and observe that the accuracy of inference of private attributes can be quite similar for Differentially Private data and l-diverse data.
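
    As a rough illustration of this style of attack (my own sketch, not the authors' experimental setup; the data, the epsilon value, and the per-cell majority classifier are all hypothetical), the snippet below releases Laplace-noised contingency counts under epsilon-differential privacy and then uses them to infer a "private" binary attribute from two public attributes.

        # Illustrative sketch: build an inference attack from epsilon-DP noisy counts.
        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical toy data: two public binary attributes and one private attribute
        # that is strongly (but not perfectly) correlated with them.
        n = 5000
        public = rng.integers(0, 2, size=(n, 2))
        private = ((public[:, 0] ^ public[:, 1]) ^ (rng.random(n) < 0.1)).astype(int)

        # The curator releases the full contingency table with Laplace noise
        # (epsilon-DP, since counting queries have sensitivity 1).
        epsilon = 1.0
        counts = np.zeros((2, 2, 2))
        for a, b, s in zip(public[:, 0], public[:, 1], private):
            counts[a, b, s] += 1
        noisy = np.clip(counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape), 0, None)

        # The attacker predicts the private attribute as the majority value in each
        # noisy cell and measures how often that guess is correct.
        pred_table = noisy.argmax(axis=2)
        preds = pred_table[public[:, 0], public[:, 1]]
        print("inference accuracy:", (preds == private).mean())

    Even with the DP noise, the released table preserves the population-level correlation, which is exactly what such a classifier exploits.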

    A Formal Privacy Framework for Partially Private Data

    Despite its many useful theoretical properties, differential privacy (DP) has one substantial blind spot: any release that non-trivially depends on confidential data without additional privacy-preserving randomization fails to satisfy DP. Such a restriction is rarely met in practice, as most data releases under DP are actually "partially private" data (PPD). This poses a significant barrier to accounting for privacy risk and utility under logistical constraints imposed on data curators, especially those working with official statistics. In this paper, we propose a privacy definition which accommodates PPD and prove it maintains similar properties to standard DP. We derive optimal transport-based mechanisms for releasing PPD that satisfy our definition and algorithms for valid statistical inference using PPD, demonstrating their improved performance over post-processing methods. Finally, we apply these methods to a case study on US Census and CDC PPD to investigate private COVID-19 infection rates. In doing so, we show how data curators can use our framework to overcome barriers to operationalizing formal privacy while providing more transparency and accountability to users.
    Comment: 31 pages, 7 figures
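
    To make the blind spot concrete (a minimal sketch of my own, not the paper's optimal transport mechanism; the counts and parameters are made up), the snippet below contrasts a deterministic "partially private" release, which cannot satisfy DP because it adds no randomness, with a Laplace-noised release that does.

        # Minimal sketch: deterministic PPD-style release vs. an epsilon-DP release.
        import numpy as np

        rng = np.random.default_rng(1)

        def release_ppd(confidential_counts):
            # Deterministic rounding to the nearest 10 depends non-trivially on the
            # confidential data but adds no randomness, so it fails the DP definition.
            return np.round(confidential_counts, -1)

        def release_dp(confidential_counts, epsilon=0.5):
            # Counting queries have sensitivity 1, so Laplace(1/epsilon) noise
            # yields an epsilon-DP release.
            noise = rng.laplace(scale=1.0 / epsilon, size=confidential_counts.shape)
            return confidential_counts + noise

        counts = np.array([123.0, 47.0, 980.0])  # hypothetical county-level counts
        print("PPD release:", release_ppd(counts))
        print("DP release: ", release_dp(counts))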

    A Prototype For Learning Privacy-Preserving Data Publishing

    Governments, organizations, companies, and individuals collect data that can later be processed with data mining methods to extract new knowledge. The party processing the data is often not the one who collected it, and its trustworthiness is frequently unknown, so it is important to ensure that private personal information can no longer be identified from released data sets. To prevent such re-identification, privacy-preserving methods must be applied before the data is handed over to processors. This thesis describes different threats to privacy, presents and compares methods for mitigating those threats, and gives an overview of different ways to anonymize data. The second output of the thesis is educational software that allows students to learn and practice privacy-preserving methods. The final part of the thesis validates the designed software.
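
    As an example of the kind of exercise such teaching software could cover (my own toy sketch; the table, the quasi-identifiers, and k are invented), the snippet below checks whether a small table satisfies k-anonymity by counting records per quasi-identifier group.

        # Toy k-anonymity check over chosen quasi-identifier columns.
        from collections import Counter

        def is_k_anonymous(records, quasi_identifiers, k):
            groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
            return all(count >= k for count in groups.values())

        table = [
            {"age": "30-39", "zip": "123**", "disease": "flu"},
            {"age": "30-39", "zip": "123**", "disease": "asthma"},
            {"age": "40-49", "zip": "456**", "disease": "flu"},
        ]
        print(is_k_anonymous(table, ["age", "zip"], k=2))  # False: the 40-49 group has only one record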

    Patient Controlled, Privacy Preserving IoT Healthcare Data Sharing Framework

    Healthcare data personally collected by individuals with wearable devices have become important sources of information for healthcare professionals and medical research worldwide. User-Generated Data (UGD) offers unique and sometimes fine-grained insight into the lived experiences and medical conditions of patients. Because of its sensitive subject matter, medical data can be used to exploit or control the people it describes, so data collection in medical research typically restricts access control over participant data to the researchers. Cultivating trust with prospective participants who are concerned about the security of their medical data therefore presents formidable challenges. Anonymization can allay such concerns, but at the cost of information loss, and such techniques cannot necessarily be applied to real-time streaming health data. In this paper, we analyze the technical requirements for enabling individuals to share their real-time wearable healthcare data with researchers without compromising privacy. An extension of delay-free anonymization techniques for real-time streaming health data is also proposed.
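
    One way to picture delay-free anonymization of a wearable-data stream (a hedged sketch of my own, not the framework proposed in the paper; the record fields and generalization rules are hypothetical) is to generalize each record's quasi-identifiers as it arrives, without buffering, before forwarding it to researchers.

        # Sketch: per-record generalization of a health-data stream with no buffering delay.
        import time

        def anonymize_record(record):
            # Drop the direct identifier, coarsen the timestamp to the hour, and bucket
            # the heart rate into ranges of 10 so individual readings are less identifying.
            low = (record["heart_rate"] // 10) * 10
            return {
                "hour": int(record["timestamp"] // 3600) * 3600,
                "heart_rate_range": f"{low}-{low + 9}",
                "activity": record["activity"],
            }

        def stream_anonymizer(stream):
            for record in stream:
                yield anonymize_record(record)  # one record in, one record out: no added delay

        sample_stream = [
            {"user_id": "u42", "timestamp": time.time(), "heart_rate": 78, "activity": "walking"},
        ]
        for released in stream_anonymizer(sample_stream):
            print(released)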