
    Practices and challenges in clinical data sharing

    The debate on data access and privacy is an ongoing one. It is kept alive by never-ending changes and upgrades in (i) the shape of the data collected (in terms of size, diversity, sensitivity and quality), (ii) the laws governing data sharing, (iii) the amount of free public data available on individuals (social media, blogs, population-based databases, etc.), as well as (iv) the available privacy-enhancing technologies. This paper identifies current directions, challenges and best practices in constructing a clinical data-sharing framework for research purposes. Specifically, we create a taxonomy for the framework, identify the design choices available within each taxon, and demonstrate these choices using current legal frameworks. The purpose is to devise best practices for the implementation of an effective, safe and transparent research access framework.

    Sociotechnical Safeguards for Genomic Data Privacy

    Recent developments in a variety of sectors, including health care, research and the direct-to-consumer industry, have led to a dramatic increase in the amount of genomic data that are collected, used and shared. This state of affairs raises new and challenging concerns for personal privacy, both legally and technically. This Review appraises existing and emerging threats to genomic data privacy and discusses how well current legal frameworks and technical safeguards mitigate these concerns. It concludes with a discussion of remaining and emerging challenges and illustrates possible solutions that can balance protecting privacy and realizing the benefits that result from the sharing of genetic information.

    Not So Private

    Federal and state laws have long attempted to strike a balance between protecting patient privacy and health information confidentiality on the one hand and supporting important uses and disclosures of health information on the other. To this end, many health laws restrict the use and disclosure of identifiable health data but support the use and disclosure of de-identified data. The goal of health data de-identification is to prevent or minimize informational injuries to identifiable data subjects while allowing the production of aggregate statistics that can be used for biomedical and behavioral research, public health initiatives, informed health care decision making, and other important activities. Many federal and state laws assume that data are de-identified when direct and indirect identifiers such as names, user names, email addresses, street addresses, and telephone numbers have been removed. An emerging reidentification literature shows, however, that purportedly de-identified data can, and increasingly will, be reidentified. This Article responds to this concern by presenting an original synthesis of illustrative federal and state identification and de-identification laws that expressly or potentially apply to health data; identifying significant weaknesses in these laws in light of the developing reidentification literature; proposing theoretical alternatives to outdated identification and de-identification standards, including alternatives based on the theories of evolving law, nonreidentification, non-collection, non-use, non-disclosure, and nondiscrimination; and offering specific, textual amendments to federal and state data protection laws that incorporate these theoretical alternatives.

    Why We Fear Genetic Informants: Using Genetic Genealogy to Catch Serial Killers

    Consumer genetics has exploded, driven by the second-most popular hobby in the United States: genealogy. This hobby has been co-opted by law enforcement to solve cold cases by linking crime-scene DNA with the DNA of a suspect's relative, which is contained in a direct-to-consumer (DTC) genetic database. The relative's genetic data acts as a silent witness, or genetic informant, wordlessly guiding law enforcement to a handful of potential suspects. At least thirty murderers and rapists have been arrested in this way, a process which I describe in careful detail in this article. Legal scholars have sounded many alarms and have called for immediate bans on this methodology, which is referred to as long-range familial searching (LRFS) or forensic genetic genealogy (FGG). The opponents' concerns are many, but generally boil down to fears that FGG will invade the privacy and autonomy of presumptively innocent individuals. These concerns, I argue, are considerably overblown. Indeed, many aspects of the methodology implicate nothing new, legally or ethically, and might even better protect privacy while exonerating the innocent. Law enforcement's use of FGG to solve cold cases is a bogeyman. The real threat to genetic privacy comes from shoddy consumer consent procedures, poor data security standards, and user agreements that permit rampant secondary uses of data. So why do so many legal scholars fear a world where law enforcement uses this methodology? I submit that our fear of so-called genetic informants stems from the sticky and long-standing traps of genetic essentialism and genetic determinism, where we incorrectly attribute intentional action to our genes and fear a world where humans are controlled by our biology. Rather than banning the use of genetic genealogy to catch serial killers and rapists, I call for improved DTC consent processes and more transparent privacy and security measures. This will better protect genetic privacy in line with consumer expectations, while still permitting the use of LRFS to deliver justice to victims and punish those who commit society's most heinous acts.