Intellectual Property, Open Science and Research Biobanks
In biomedical research and translational medicine, the long-standing conflict between exclusivity (private control over information) and access to information is re-emerging on a new battlefield: research biobanks. Biobanks are becoming increasingly important (Time magazine listed them among the ten ideas changing the world) because they make it possible to collect, store and distribute a critical mass of human biological samples for research purposes in a secure and professional way. Tissues and related data are fundamental to the development of biomedical research and the emerging field of translational medicine: they represent the “raw material” for every kind of biomedical study. For this reason, it is crucial to understand the boundaries of Intellectual Property (IP) in this thorny context. Both data sharing and collaborative research have become imperatives in contemporary open science, whose development depends inextricably on the opportunity to access and use data, the possibility of sharing practices across communities, the cross-checking of information and results and, above all, interaction with experts in different fields of knowledge. Data sharing both spreads the costs of analytical results that researchers could not achieve working individually and, if properly managed, avoids the duplication of research. These advantages are crucial: access to a common pool of pre-competitive data and the possibility of pursuing follow-on research projects are fundamental for the progress of biomedicine. This is why the “open” movement is also spreading to the field of biobanks. After an overview of the complex interactions among the different stakeholders involved in the production of information and data, as well as of the main obstacles to the promotion of data sharing (i.e., the appropriability of biological samples and information, the privacy of participants, and the lack of interoperability), we will first clarify some terminological confusion, in particular between concepts that are often conflated, such as “open source” and “open access”. The aim is to understand whether and to what extent these concepts can be applied to the biomedical field. We will then adopt a comparative perspective and analyze the main features of the open models – in particular, the Open Research Data model – that have been proposed in the literature to promote data sharing in the field of research biobanks.
Following this analysis, we will offer some recommendations for rebalancing the clash between exclusivity – the paradigm that has characterized the evolution of intellectual property over the last three centuries – and the actual need for access to knowledge. We argue that the key to this balance may lie in the right interaction between IP, social norms and contracts. In particular, the incentive and reward mechanisms that characterize scientific communities need to be combined with the imperative of data sharing.
Systematic Analysis of Challenge-Driven Improvements in Molecular Prognostic Models for Breast Cancer
Although molecular prognostics in breast cancer are among the most successful examples of translating genomic analysis to clinical applications, optimal approaches to breast cancer clinical risk prediction remain controversial. The Sage Bionetworks-DREAM Breast Cancer Prognosis Challenge (BCC) is a crowdsourced research study for breast cancer prognostic modeling using genome-scale data. The BCC provided a community of data analysts with a common platform for data access and blinded evaluation of model accuracy in predicting breast cancer survival on the basis of gene expression data, copy number data, and clinical covariates. This approach offered the opportunity to assess whether a crowdsourced community Challenge would generate models of breast cancer prognosis commensurate with or exceeding current best-in-class approaches. The BCC comprised multiple rounds of blinded evaluations on held-out portions of data on 1981 patients, resulting in more than 1400 models submitted as open source code. Participants then retrained their models on the full data set of 1981 samples and submitted up to five models for validation in a newly generated data set of 184 breast cancer patients. Analysis of the BCC results suggests that the best-performing modeling strategy outperformed previously reported methods in blinded evaluations; model performance was consistent across several independent evaluations; and aggregating community-developed models achieved performance on par with the best-performing individual models.
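To make the evaluation and aggregation ideas in this abstract concrete, the sketch below is a minimal, hypothetical illustration (not the Challenge's actual pipeline, code or data): it scores survival-risk predictions on a held-out cohort with a simple concordance index, the kind of blinded metric such challenges typically use, and combines several submitted models by averaging their rank-normalized risk scores. All variable names and the simulated cohort are assumptions introduced only for illustration.

import numpy as np

def concordance_index(time, event, risk):
    # Fraction of comparable patient pairs ordered correctly by predicted risk:
    # a pair (i, j) is comparable when patient i had an observed event before
    # patient j's follow-up time; it is concordant when i got the higher risk score.
    concordant, comparable = 0.0, 0.0
    for i in range(len(time)):
        if not event[i]:
            continue
        for j in range(len(time)):
            if time[j] > time[i]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

def aggregate(risk_score_lists):
    # Combine several models by averaging per-patient ranks, so that models
    # reporting risk on different scales contribute equally to the ensemble.
    ranks = [np.argsort(np.argsort(r)) for r in risk_score_lists]
    return np.mean(ranks, axis=0)

# Simulated "held-out" cohort: follow-up times (months), event indicators, and
# risk scores from three hypothetical submitted models of decreasing quality.
rng = np.random.default_rng(0)
time = rng.exponential(scale=60.0, size=200)
event = rng.random(200) < 0.7
models = [-time + rng.normal(0.0, noise, 200) for noise in (10.0, 30.0, 60.0)]

for k, r in enumerate(models, start=1):
    print(f"model {k}: C-index = {concordance_index(time, event, r):.3f}")
print(f"aggregate: C-index = {concordance_index(time, event, aggregate(models)):.3f}")

Under this reading, the Challenge's design corresponds to fitting models on one cohort (the 1981-sample training set), computing a blinded score of this kind on held-out data, and finally validating on the newly generated 184-patient cohort; the aggregation step mirrors the finding that combining community-developed models performs on par with the best individual ones.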