    Privacy in the Genomic Era

    Genome sequencing technology has advanced at a rapid pace, and it is now possible to generate highly detailed genotypes inexpensively. The collection and analysis of such data has the potential to support various applications, including personalized medical services. While the benefits of the genomics revolution are trumpeted by the biomedical community, the increased availability of such data has major implications for personal privacy, notably because the genome has certain essential features, which include (but are not limited to): (i) an association with traits and certain diseases, (ii) identification capability (e.g., forensics), and (iii) revelation of family relationships. Moreover, direct-to-consumer DNA testing increases the likelihood that genome data will be made available in less regulated environments, such as the Internet and for-profit companies. The problem of genome data privacy thus resides at the crossroads of computer science, medicine, and public policy. While computer scientists have addressed data privacy for various data types, less attention has been dedicated to genomic data. The goal of this paper is therefore to provide a systematization of knowledge for the computer science community. In doing so, we address some of the (sometimes erroneous) beliefs of this field and report on a survey we conducted about genome data privacy with biomedical specialists. Then, after characterizing the genome privacy problem, we review the state of the art regarding privacy attacks on genomic data and strategies for mitigating such attacks, and we contextualize these attacks from the perspective of medicine and public policy. The paper concludes with an enumeration of the challenges for genome data privacy and presents a framework to systematize the analysis of threats and the design of countermeasures as the field moves forward.

    Investigating the tension between cloud-related actors and individual privacy rights

    Historically, little more than lip service has been paid to the right of individuals to act to preserve their own privacy. Personal information is frequently exploited for commercial gain, often without the person’s knowledge or permission. New legislation, such as the EU General Data Protection Regulation (GDPR), has acknowledged the need for legislative protection. The GDPR places the onus on service providers to preserve the confidentiality of their users’ and customers’ personal information, on pain of punitive fines for lapses, and it accords special privileges to users, such as the right to be forgotten. The regulation has global reach, covering the rights of any EU resident worldwide. Assuring this legislated privacy protection presents a serious challenge, which is exacerbated in the cloud environment. A considerable number of actors are stakeholders in cloud ecosystems. Each has its own agenda, and these are not necessarily well aligned. Cloud service providers, especially those offering social media services, are interested in growing their businesses and maximising revenue. There is a strong incentive for them to capitalise on their users’ personal information and usage information; privacy is often the first victim. Here, we examine the tensions between the various cloud actors and propose a framework that could be used to ensure that privacy is preserved and respected in cloud systems.

    Design and Development of Key Representation Auditing Scheme for Secure Online and Dynamic Statistical Databases

    A statistical database (SDB) answers statistical queries (such as sum, average, count, etc.) over subsets of records. By combining the answers to several such queries, a malicious user (snooper) may be able to deduce confidential information about individuals. When a user submits a query to a statistical database, the difficult problem is deciding whether the query is answerable or not; to make that decision, past queries must be taken into account, a process known as SDB auditing. One of the major drawbacks of auditing, however, is the excessive CPU time and storage required to find and retrieve the relevant records from the SDB. The key representation auditing scheme (KRAS) is proposed to guarantee the security of online and dynamic SDBs. The core idea is to convert the original database into a key representation database (KRDB); the scheme also converts each new user query from a string representation into a key representation query (KRQ) and stores it in the Audit Query (AQ) table. Three audit stages are proposed to repel attacks by the snooper on the confidentiality of individuals, and efficient algorithms for these stages are presented, namely the First Stage Algorithm (FSA), the Second Stage Algorithm (SSA), and the Third Stage Algorithm (TSA). These algorithms enable the key representation auditor (KRA) to conveniently identify the illegal queries that could lead to disclosure from the SDB. A comparative study between the new scheme and existing methods, comprising a cost estimation and a statistical analysis, illustrates the savings in block accesses (CPU time) and storage space attainable when a KRDB is used. Finally, an implementation of the new scheme is performed and all components of the proposed system are discussed.
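    The inference threat and the auditing idea described in this abstract can be illustrated with a minimal sketch. This is not the paper's KRAS algorithm (which works on key representations of the database and queries); it is a simplified auditor for SUM queries that refuses any query whose record set differs from a previously answered query (or from a singleton) by exactly one record, since the snooper could then subtract the two answers and recover one individual's value. The class and variable names are hypothetical, chosen for illustration.

    ```python
    # Illustrative sketch of statistical-database auditing (assumed simplified
    # model, not the paper's KRAS): track the record-id sets of answered SUM
    # queries and refuse a new query when combining it with an old one would
    # isolate a single individual's value.

    class SumQueryAuditor:
        def __init__(self):
            self.answered = []  # record-id sets of queries already answered

        def is_answerable(self, query_ids):
            q = frozenset(query_ids)
            if len(q) <= 1:              # a singleton directly reveals one record
                return False
            for prev in self.answered:
                if len(q ^ prev) == 1:   # differs by exactly one record:
                    return False         # subtraction would expose that record
            return True

        def answer(self, db, query_ids):
            """Return the sum over the requested records, or None if refused."""
            q = frozenset(query_ids)
            if not self.is_answerable(q):
                return None
            self.answered.append(q)
            return sum(db[i] for i in q)

    # Hypothetical salary table keyed by record id.
    db = {1: 50_000, 2: 62_000, 3: 48_000, 4: 75_000}
    auditor = SumQueryAuditor()
    print(auditor.answer(db, {1, 2, 3}))     # answered: 160000
    print(auditor.answer(db, {1, 2, 3, 4}))  # refused: None (would expose record 4)
    ```

    The second query is refused because its answer minus the first answer equals record 4's value exactly. A real auditor must also consider combinations of more than two queries and, as the abstract notes, do so efficiently over the query history, which is the cost KRAS targets.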

    Exploring the relationships between privacy by design schemes and privacy laws: a comparative analysis

    Internet of Things (IoT) applications have the potential to derive sensitive information about individuals. Therefore, developers must exercise due diligence to make sure that data are managed in accordance with privacy regulations and data protection laws. However, doing so can be a difficult and challenging task: recent research has revealed that developers typically face difficulties when complying with regulations, one key reason being that regulations are at times vague, which makes it challenging to extract and enact the legal requirements. In this work, we conduct a systematic analysis of data protection laws used across different continents, namely: (i) the General Data Protection Regulation (GDPR), (ii) the Personal Information Protection and Electronic Documents Act (PIPEDA), (iii) the California Consumer Privacy Act (CCPA), (iv) the Australian Privacy Principles (APPs), and (v) New Zealand’s Privacy Act 1993. In this technical report, we present the detailed results of the framework analysis method we applied to attain a comprehensive view of the different data protection laws, and we highlight their disparities in order to assist developers in adhering to the regulations across different regions, creating a Combined Privacy Law Framework (CPLF) in the process. We then give an overview of various Privacy by Design (PbD) schemes developed previously by different researchers, and map the key principles and individuals’ rights of the CPLF to the privacy principles, strategies, guidelines, and patterns of those PbD schemes in order to investigate the gaps in existing schemes.