
    Supporting Regularized Logistic Regression Privately and Efficiently

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, the social sciences, information technology, and beyond. These domains often involve data about human subjects that are subject to strict privacy regulations. Increasing concerns over data privacy make it more and more difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work focuses on safeguarding regularized logistic regression, a machine learning model that is widely used across disciplines yet has not been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as the research consortia and networks widely seen in genetics, epidemiology, the social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method that leverages distributed computing and strong cryptography to provide comprehensive protection over both individual-level and summary data. Extensive empirical evaluation on several studies validated the privacy guarantees, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications in various disciplines, including genetic and biomedical studies, smart grids, and network analysis.
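    The model being protected is standard. As a point of reference only (the paper's cryptographic protocol is not reproduced in this abstract), a minimal NumPy sketch of the plaintext L2-regularized logistic regression objective and its gradient might look as follows; the names (regularized_logistic_loss, lam, etc.) are illustrative and not taken from the paper.

        import numpy as np

        def regularized_logistic_loss(w, X, y, lam):
            # Average logistic loss over n examples plus an L2 penalty:
            #   (1/n) * sum_i log(1 + exp(-y_i * x_i.w)) + lam * ||w||^2
            # X: (n, d) features, y: (n,) labels in {-1, +1}, w: (d,) weights.
            margins = y * (X @ w)
            data_loss = np.mean(np.logaddexp(0.0, -margins))  # numerically stable log(1+exp(-m))
            return data_loss + lam * np.dot(w, w)

        def regularized_logistic_grad(w, X, y, lam):
            # Gradient of the objective above, usable in plain gradient descent.
            margins = y * (X @ w)
            probs = 1.0 / (1.0 + np.exp(margins))  # sigmoid(-margin)
            return -(X.T @ (y * probs)) / len(y) + 2.0 * lam * w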

    Privacy in the Genomic Era

    Genome sequencing technology has advanced at a rapid pace, and it is now possible to generate highly detailed genotypes inexpensively. The collection and analysis of such data have the potential to support various applications, including personalized medical services. While the benefits of the genomics revolution are trumpeted by the biomedical community, the increased availability of such data has major implications for personal privacy, notably because the genome has certain essential features, which include (but are not limited to): (i) an association with traits and certain diseases, (ii) identification capability (e.g., forensics), and (iii) revelation of family relationships. Moreover, direct-to-consumer DNA testing increases the likelihood that genome data will be made available in less regulated environments, such as the Internet and for-profit companies. The problem of genome data privacy thus resides at the crossroads of computer science, medicine, and public policy. While computer scientists have addressed data privacy for various data types, less attention has been dedicated to genomic data. Thus, the goal of this paper is to provide a systematization of knowledge for the computer science community. In doing so, we address some of the (sometimes erroneous) beliefs in this field and report on a survey we conducted about genome data privacy with biomedical specialists. Then, after characterizing the genome privacy problem, we review the state of the art regarding privacy attacks on genomic data and strategies for mitigating such attacks, and we contextualize these attacks from the perspective of medicine and public policy. The paper concludes with an enumeration of the challenges for genome data privacy and presents a framework to systematize the analysis of threats and the design of countermeasures as the field moves forward.

    Systematizing Genome Privacy Research: A Privacy-Enhancing Technologies Perspective

    Rapid advances in human genomics are enabling researchers to gain a better understanding of the role of the genome in our health and well-being, stimulating hope for more effective and cost-efficient healthcare. However, this also prompts a number of security and privacy concerns stemming from the distinctive characteristics of genomic data. To address them, a new research community has emerged and produced a large number of publications and initiatives. In this paper, we rely on a structured methodology to contextualize and provide a critical analysis of the current knowledge on privacy-enhancing technologies used for testing, storing, and sharing genomic data, using a representative sample of the work published in the past decade. We identify and discuss limitations, technical challenges, and issues faced by the community, focusing in particular on those that are inherently tied to the nature of the problem and are harder for the community alone to address. Finally, we report on the importance and difficulty of the identified challenges based on an online survey of genome data privacy experts.
    Comment: To appear in the Proceedings on Privacy Enhancing Technologies (PoPETs), Vol. 2019, Issue

    Privacy-Preserving Statistical Analysis Based on Secure Multi-Party Computation

    The electronic version of this dissertation does not contain the publications. In a modern society, a digital record is created about a person from the moment he or she is born. From there on, the person's behaviour is constantly tracked and data are collected about the different aspects of his or her life. Whether one is swiping a customer loyalty card in a store, going to the doctor, filing taxes or simply moving around with a mobile phone in one's pocket, sensitive data are being gathered and stored by governments and companies. Sometimes we give our permission for this kind of surveillance in exchange for some benefit; for instance, we may get a discount for using a customer loyalty card. Other times we face a difficult choice: either we cannot make phone calls or our movements are tracked based on cellular data. The government keeps information about our health, education and income in order to treat us, educate us and collect taxes. We hope that the data are used in a meaningful way; at the same time, we expect our privacy to be preserved. This work studies how to perform statistical analyses in a way that preserves the privacy of the individual. To achieve this goal, we use secure multi-party computation. This cryptographic technique allows data to be analysed without the individual values ever being visible. Even though secure multi-party computation is a time-consuming process, we show that it is fast enough to be usable even for very large databases. We have developed ways of using the most popular statistical analysis methods with secure multi-party computation. We introduce a privacy-preserving statistical analysis tool called Rmind that contains all of our resulting implementations. Rmind is similar to the tools statistical analysts are used to, which allows them to carry out studies without having to know the details of the underlying cryptographic protocols. The methods described in the thesis are used in practice to prepare a statistical study that links two Estonian national databases, in order to find out whether Estonian students who work during their university studies are less likely to graduate in nominal time than their peers who concentrate on their studies.
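    As a rough illustration of the idea the thesis builds on (computing on data without ever seeing individual values), the following sketch shows additive secret sharing of records among three parties and a sum computed share-wise. It is a toy example under assumed names (PRIME, share, reconstruct) and is not the Sharemind or Rmind protocol itself.

        import secrets

        PRIME = 2**61 - 1  # modulus for the share arithmetic (illustrative choice)

        def share(value, n_parties=3):
            # Split a value into n additive shares; any n-1 shares look uniformly random.
            shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % PRIME)
            return shares

        def reconstruct(shares):
            return sum(shares) % PRIME

        # Each party holds one share of every record; sums (and hence means and other
        # aggregate statistics) can be computed locally on the shares without ever
        # reconstructing any individual value.
        incomes = [1200, 800, 1500]
        per_record = [share(v) for v in incomes]
        per_party_sums = [sum(col) % PRIME for col in zip(*per_record)]
        assert reconstruct(per_party_sums) == sum(incomes)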