    Privacy and Accountability in Black-Box Medicine

    Black-box medicine—the use of big data and sophisticated machine learning techniques for health-care applications—could be the future of personalized medicine. Black-box medicine promises to make it easier to diagnose rare diseases and conditions, identify the most promising treatments, and allocate scarce resources among different patients. But to succeed, it must overcome two separate, but related, problems: patient privacy and algorithmic accountability. Privacy is a problem because researchers need access to huge amounts of patient health information to generate useful medical predictions. And accountability is a problem because black-box algorithms must be verified by outsiders to ensure they are accurate and unbiased, but this means giving outsiders access to this health information. This article examines the tension between the twin goals of privacy and accountability and develops a framework for balancing that tension. It proposes three pillars for an effective system of privacy-preserving accountability: substantive limitations on the collection, use, and disclosure of patient information; independent gatekeepers regulating information sharing between those developing and verifying black-box algorithms; and information-security requirements to prevent unintentional disclosures of patient information. The article examines and draws on a similar debate in the field of clinical trials, where disclosing information from past trials can lead to new treatments but also threatens patient privacy.
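
    The second pillar, independent gatekeepers, can be pictured concretely: the gatekeeper holds the patient data, runs a candidate algorithm against it, and releases only aggregate accuracy and bias metrics to outside verifiers, never the underlying records. The Python sketch below is a minimal illustration of that flow, not the article's proposal; the Gatekeeper class, audit_model method, subgroup labels, and records are all hypothetical.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical gatekeeper mediating verification of a black-box medical
# algorithm: outside auditors never see patient records, only aggregates.
class Gatekeeper:
    def __init__(self, records: List[Tuple[Dict, int, str]]):
        # Each record: (features, true_outcome, subgroup_label); held privately.
        self._records = records

    def audit_model(self, predict: Callable[[Dict], int]) -> Dict[str, float]:
        """Run the submitted model on the private data; release only aggregates."""
        per_group: Dict[str, List[int]] = {}
        correct_total = 0
        for features, outcome, group in self._records:
            hit = int(predict(features) == outcome)
            correct_total += hit
            per_group.setdefault(group, []).append(hit)
        report = {"overall_accuracy": correct_total / len(self._records)}
        # Per-subgroup accuracy exposes bias without exposing any single patient.
        for group, hits in per_group.items():
            report[f"accuracy_{group}"] = sum(hits) / len(hits)
        return report

# Illustrative use with made-up records and a stand-in "black-box" model.
if __name__ == "__main__":
    data = [({"hba1c": 8.1}, 1, "over_65"), ({"hba1c": 6.0}, 0, "under_65"),
            ({"hba1c": 7.4}, 1, "under_65"), ({"hba1c": 5.6}, 0, "over_65")]
    naive_model = lambda f: int(f["hba1c"] > 7.0)
    print(Gatekeeper(data).audit_model(naive_model))
```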

    Recovering complete and draft population genomes from metagenome datasets.

    Assembly of metagenomic sequence data into microbial genomes is of fundamental value to improving our understanding of microbial ecology and metabolism by elucidating the functional potential of hard-to-culture microorganisms. Here, we provide a synthesis of available methods to bin metagenomic contigs into species-level groups and highlight how genetic diversity, sequencing depth, and coverage influence binning success. Despite the computational cost of applying it to deeply sequenced, complex metagenomes (e.g., soil), exploiting covarying patterns of contig coverage across multiple datasets significantly improves the binning process. We also discuss and compare current genome validation methods and reveal how these methods tackle the problem of chimeric genome bins (i.e., bins containing sequences from multiple species). Finally, we explore how population genome assembly can be used to uncover biogeographic trends and to characterize the effect of in situ functional constraints on genome-wide evolution.
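
    One common way the coverage-covariance signal described above is exploited is to cluster contigs by the shape of their coverage profiles across several datasets. The sketch below is a toy illustration of that idea, assuming numpy and scikit-learn are available; the contig names, coverage values, and use of k-means are assumptions for illustration and do not correspond to any specific binning tool reviewed in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

# Toy coverage table: rows are contigs, columns are mean read coverage of each
# contig in four separate metagenome datasets (all values are made up).
contigs = ["ctg_a", "ctg_b", "ctg_c", "ctg_d", "ctg_e", "ctg_f"]
coverage = np.array([
    [40.0, 38.0,  2.0,  1.0],   # ctg_a and ctg_b covary -> likely one genome
    [42.0, 35.0,  1.5,  0.8],
    [ 3.0,  2.0, 55.0, 60.0],   # ctg_c and ctg_d covary -> a second genome
    [ 2.5,  1.8, 58.0, 57.0],
    [10.0, 11.0, 12.0,  9.0],   # ctg_e and ctg_f -> a third, evenly spread genome
    [ 9.5, 10.5, 11.0, 10.0],
])

# Normalise each contig's profile so clustering follows the *shape* of the
# coverage pattern across datasets rather than absolute sequencing depth.
profiles = normalize(coverage, norm="l1", axis=1)

# Group contigs into putative species-level bins by coverage covariation.
bins = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
for name, b in zip(contigs, bins):
    print(f"{name} -> bin {b}")
```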

    Big Data technology

    Big Data must be processed with advanced collection and analysis tools, based on predetermined algorithms, in order to obtain relevant information. Algorithms must also take into account aspects that are invisible to direct perception. The Big Data problem is multi-layered. A distributed parallel architecture distributes data across multiple servers (parallel execution environments), dramatically improving data-processing speeds. Big Data provides an infrastructure that highlights the uncertainties, performance, and availability of its components. DOI: 10.13140/RG.2.2.12784.0000
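
    The gain from a distributed parallel architecture can be illustrated even on a single machine with a map-reduce style split of the data across worker processes. The sketch below uses only the Python standard library; the word-count task, chunking scheme, and worker count are assumptions for illustration rather than anything specified in the text.

```python
from collections import Counter
from multiprocessing import Pool

def count_words(chunk_of_lines):
    """Map step: each worker counts words in its own slice of the data."""
    counts = Counter()
    for line in chunk_of_lines:
        counts.update(line.split())
    return counts

def parallel_word_count(lines, workers=4):
    """Split the data, process the chunks in parallel, then reduce the results."""
    chunk_size = max(1, len(lines) // workers)
    chunks = [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]
    with Pool(processes=workers) as pool:
        partial_counts = pool.map(count_words, chunks)  # parallel map
    total = Counter()
    for partial in partial_counts:                      # sequential reduce
        total.update(partial)
    return total

if __name__ == "__main__":
    sample = ["big data needs parallel processing",
              "parallel processing needs big data"] * 1000
    print(parallel_word_count(sample).most_common(3))
```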

    Bioethics in the era of 'big data': health and beyond

    ‘Big data’ and data-intensive research approaches are rapidly gaining momentum in health and biomedical research, with potential to transform health at all levels from personal to public. The use of ‘big data’ for health research, however, raises a number of ethical challenges. In this paper I discuss ethical aspects of the advent of big data in health. I argue that although public discourse has focused on immediate concerns relating to the use of individuals’ information, ‘big health data’ requires us to explore alternative conceptual approaches to research ethics, including the ‘social contract’ model. Further, we need to think beyond health research uses of data to the social consequences of big data epistemology and practice, and the moral implications of ‘datafying’ the human.

    Regulating Black-Box Medicine

    Data drive modern medicine. And our tools to analyze those data are growing ever more powerful. As health data are collected in greater and greater amounts, sophisticated algorithms based on those data can drive medical innovation, improve the process of care, and increase efficiency. Those algorithms, however, vary widely in quality. Some are accurate and powerful, while others may be riddled with errors or based on faulty science. When an opaque algorithm recommends an insulin dose to a diabetic patient, how do we know that dose is correct? Patients, providers, and insurers face substantial difficulties in identifying high-quality algorithms; they lack both expertise and proprietary information. How should we ensure that medical algorithms are safe and effective? Medical algorithms need regulatory oversight, but that oversight must be appropriately tailored. Unfortunately, the Food and Drug Administration (FDA) has suggested that it will regulate algorithms under its traditional framework, a relatively rigid system that is likely to stifle innovation and to block the development of more flexible, current algorithms. This Article draws upon ideas from the new governance movement to suggest a different path. FDA should pursue a more adaptive regulatory approach with requirements that developers disclose information underlying their algorithms. Disclosure would allow FDA oversight to be supplemented with evaluation by providers, hospitals, and insurers. This collaborative approach would supplement the agency’s review with ongoing real-world feedback from sophisticated market actors. Medical algorithms have tremendous potential, but ensuring that such potential is developed in high-quality ways demands a careful balancing between public and private oversight, and a role for FDA that mediates—but does not dominate—the rapidly developing industry.
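
    The ongoing real-world feedback from providers, hospitals, and insurers that the Article envisions can be pictured as routine local validation of a disclosed algorithm against locally verified outcomes. The sketch below is a hypothetical illustration of that idea only; the dosing function, error tolerance, and patient records are invented and are not drawn from the Article.

```python
from statistics import mean
from typing import Callable, Dict, List, Tuple

def validate_dosing_algorithm(
    predict_dose: Callable[[Dict], float],
    local_cases: List[Tuple[Dict, float]],
    max_abs_error: float = 2.0,   # hypothetical tolerance, in insulin units
) -> Dict[str, float]:
    """Compare a vendor's dosing algorithm against clinician-verified doses
    recorded locally, reporting aggregate error and the number of flagged cases."""
    errors, flagged = [], 0
    for patient, verified_dose in local_cases:
        err = abs(predict_dose(patient) - verified_dose)
        errors.append(err)
        if err > max_abs_error:
            flagged += 1          # candidate for reporting back to the developer/FDA
    return {"cases": len(local_cases),
            "mean_abs_error": mean(errors),
            "flagged_cases": flagged}

# Illustrative use with made-up records and a stand-in black-box algorithm.
cases = [({"weight_kg": 70, "glucose": 180}, 6.0),
         ({"weight_kg": 55, "glucose": 250}, 8.0),
         ({"weight_kg": 90, "glucose": 140}, 4.0)]
black_box = lambda p: 0.05 * p["weight_kg"] + 0.01 * (p["glucose"] - 100)
print(validate_dosing_algorithm(black_box, cases))
```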

    Infrastructuring educational genomics: Associations, architectures and apparatuses

    Technoscientific transformations in molecular genomics have begun to influence knowledge production in education. Interdisciplinary scientific consortia are seeking to identify ‘genetic influences’ on ‘educationally relevant’ traits, behaviors, and outcomes. This article examines the emerging ‘knowledge infrastructure’ of educational genomics, attending to the assembly and choreography of organizational associations, epistemic architecture, and technoscientific apparatuses implicated in the generation of genomic understandings from masses of bioinformation. As an infrastructure of datafied knowledge production, educational genomics is embedded in data-centered epistemologies and practices which recast educational problems in terms of molecular genetic associations—insights about which are deemed discoverable from digital bioinformation and potentially open to genetically informed interventions in policy and practice. While scientists claim to be ‘opening the black box of the genome’ and its association with educational outcomes, we open the black box of educational genomics itself as a source of emerging scientific authority. Data-intensive educational genomics does not straightforwardly ‘discover’ the biological bases of educationally relevant behaviors and outcomes. Rather, this knowledge infrastructure is also an experimental ‘ontological infrastructure’ supporting particular ways of knowing, understanding, explaining, and intervening in education, and recasting the human subjects of education as being surveyable and predictable through the algorithmic processing of bioinformation.

    Artificial intelligence: opportunities and implications for the future of decision making

    Artificial intelligence has arrived. In the online world it is already a part of everyday life, sitting invisibly behind a wide range of search engines and online commerce sites. It offers huge potential to enable more efficient and effective business and government, but the use of artificial intelligence brings with it important questions about governance, accountability and ethics. Realising the full potential of artificial intelligence and avoiding possible adverse consequences requires societies to find satisfactory answers to these questions. This report sets out some possible approaches, and describes some of the ways government is already engaging with these issues.

    Big Data Ethics in Research

    This paper examines the main problems faced by scientists working with Big Data sets, highlighting the main ethical issues and taking into account European Union legislation. After a brief Introduction to Big Data, the Technology section presents specific research applications. The Philosophical Aspects section addresses the main philosophical issues, and the Legal Aspects section covers specific ethical issues raised by the EU Regulation on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, repealing Directive 95/46/EC (the Data Protection Directive), i.e. the General Data Protection Regulation ("GDPR"). The Ethics Issues section details the aspects specific to Big Data. After a brief Big Data Research section, I close with conclusions on research ethics in working with Big Data. CONTENTS: Abstract; 1. Introduction (1.1 Definitions; 1.2 Big Data dimensions); 2. Technology (2.1 Applications; 2.1.1 In research); 3. Philosophical aspects; 4. Legal aspects (4.1 GDPR: stages of processing of personal data; principles of data processing; privacy policy and transparency; purposes of data processing; data protection by design and by default; the (legal) paradox of Big Data); 5. Ethical issues (ethics in research; awareness; consent; control; transparency; trust; ownership; surveillance and security; digital identity; tailored reality; de-identification; digital inequality; privacy); 6. Big Data research; Conclusions; Bibliography. DOI: 10.13140/RG.2.2.11054.4640
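
    Of the ethics issues listed, de-identification is the one most directly expressible in code. The sketch below shows salted-hash pseudonymization of direct identifiers as one minimal illustration; the field names, key handling, and sample record are assumptions, and genuine GDPR-grade anonymization requires considerably more (treatment of quasi-identifiers, re-identification risk assessment, and so on).

```python
import hashlib
import hmac

# Hypothetical secret held by the data controller; never shared with researchers.
PSEUDONYMIZATION_KEY = b"replace-with-a-securely-generated-secret"

DIRECT_IDENTIFIERS = {"name", "email", "national_id"}

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with keyed hashes so records can still be
    linked across datasets without revealing who the data subject is."""
    safe = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            digest = hmac.new(PSEUDONYMIZATION_KEY, str(value).encode(),
                              hashlib.sha256).hexdigest()
            safe[field] = digest[:16]  # stable pseudonymous token per person
        else:
            safe[field] = value        # quasi-identifiers still need separate care
    return safe

# Illustrative record only.
print(pseudonymize({"name": "Ana Pop", "email": "ana@example.org",
                    "national_id": "1234567890", "age": 34, "diagnosis": "T2D"}))
```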