12,877 research outputs found
Clinical trial metadata: Defining and extracting metadata on the design, conduct, results and costs of 125 randomised clinical trials funded by the National Institute for Health Research Health Technology Assessment programme
Background: By 2011, the Health Technology Assessment (HTA) programme had published the results of over 100 trials with another 220 in progress. The aim of the project was to develop and pilot ‘metadata’ on clinical trials funded by the HTA programme. Objectives: The aim of the project was to develop and pilot questions describing clinical trials funded by the HTA programme in terms of how they met the needs of the NHS with scientifically robust studies. The objectives were to develop relevant classification systems and definitions for use in answering relevant questions and to assess their utility. Data sources: Published monographs and internal HTA documents. Review methods: A database was developed, ‘populated’ using retrospective data and used to answer questions under six prespecified themes. Questions were screened for feasibility in terms of data availability and/or ease of extraction. Answers were assessed by the authors in terms of completeness, success of the classification system used and resources required. Each question was scored to be retained, amended or dropped. Results: One hundred and twenty-five randomised trials were included in the database from 109 monographs. Neither the International Standard Randomised Controlled Trial Number nor the term ‘randomised trial’ in the title proved a reliable way of identifying randomised trials. Only limited data were available on how the trials aimed to meet the needs of the NHS. Most trials were shown to follow their protocols, but updates were often necessary as hardly any trials recruited as planned. Details were often lacking on planned statistical analyses, but we did not have access to the relevant statistical plans. Almost all the trials reported on cost-effectiveness, often in terms of both the primary outcome and quality-adjusted life-years. The cost of trials was shown to depend on the number of centres and the duration of the trial.
Of the 78 questions explored, 61 were well answered: 33 fully, and 28 would require amendment were the analysis updated. The other 17 could not be answered with readily available data. Limitations: The study was limited by being confined to 125 randomised trials by one funder. Conclusions: Metadata on randomised controlled trials can be expanded to include aspects of design, performance, results and costs. The HTA programme should continue and extend the work reported here
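The kind of trial-metadata record and question scoring described above can be illustrated with a minimal sketch; the field names, the example ISRCTN value and the scoring categories below are hypothetical placeholders, not the HTA programme's actual classification system.

```python
from dataclasses import dataclass

# Hypothetical illustration of a trial-metadata record and question scoring;
# field names are invented, not the HTA programme's own schema.

@dataclass
class TrialMetadata:
    isrctn: str                     # trial registration number (placeholder value below)
    title: str
    n_centres: int
    planned_recruitment: int
    actual_recruitment: int
    reports_cost_effectiveness: bool

def score_question(answered: bool, needs_amendment: bool) -> str:
    """Score a metadata question as retained, amended or dropped."""
    if not answered:
        return "dropped"
    return "amended" if needs_amendment else "retained"

trial = TrialMetadata("ISRCTN00000000", "Example trial", 12, 500, 310, True)
recruited_as_planned = trial.actual_recruitment >= trial.planned_recruitment
print(score_question(answered=True, needs_amendment=not recruited_as_planned))  # amended
```

The three-way scoring mirrors the retained/amended/dropped assessment reported in the abstract; the recruitment comparison stands in for whichever completeness checks the authors actually applied.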
ImmPort, toward repurposing of open access immunological assay data for translational and clinical research
Immunology researchers are beginning to explore the possibilities of reproducibility, reuse and secondary analyses of immunology data. Open-access datasets are being applied in the validation of the methods used in the original studies, leveraging studies for meta-analysis, or generating new hypotheses. To promote these goals, the ImmPort data repository was created for the broader research community to explore the wide spectrum of clinical and basic research data and associated findings. The ImmPort ecosystem consists of four components (Private Data, Shared Data, Data Analysis, and Resources) for data archiving, dissemination, analyses, and reuse. To date, more than 300 studies have been made freely available through the ImmPort Shared Data portal, which allows research data to be repurposed to accelerate the translation of new insights into discoveries
Joining up health and bioinformatics: e-science meets e-health
CLEF (Co-operative Clinical e-Science Framework) is an MRC-sponsored project in the e-Science programme that aims to establish methodologies and a technical infrastructure for the next generation of integrated clinical and bioscience research. It is developing methods for managing and using pseudonymised repositories of the long-term patient histories which can be linked to genetic, genomic information or used to support patient care. CLEF concentrates on removing key barriers to managing such repositories: ethical issues, information capture, integration of disparate sources into coherent ‘chronicles’ of events, user-oriented mechanisms for querying and displaying the information, and compiling the required knowledge resources. This paper describes the overall information flow and technical approach designed to meet these aims within a Grid framework
The Global academic research organization network: Data sharing to cure diseases and enable learning health systems.
Introduction: Global data sharing is essential. This is the premise of the Academic Research Organization (ARO) Council, which was initiated in Japan in 2013 and has since been expanding throughout Asia and into Europe and the United States. The volume of data is growing exponentially, providing not only challenges but also the clear opportunity to understand and treat diseases in ways not previously considered. Harnessing the knowledge within the data in a successful way can provide researchers and clinicians with new ideas for therapies while avoiding repeats of failed experiments. This knowledge transfer from research into clinical care is at the heart of a learning health system. Methods: The ARO Council wishes to form a worldwide complementary system for the benefit of all patients and investigators, catalyzing more efficient and innovative medical research processes. Thus, they have organized Global ARO Network Workshops to bring interested parties together, focusing on the aspects necessary to make such a global effort successful. One such workshop was held in Austin, Texas, in November 2017. Representatives from Japan, Taiwan, Singapore, Europe, and the United States reported on their efforts to encourage data sharing and to use research to inform care through learning health systems. Results: This experience report summarizes presentations and discussions at the Global ARO Network Workshop held in November 2017 in Austin, TX, with representatives from Japan, Korea, Singapore, Taiwan, Europe, and the United States. Themes and recommendations to progress their efforts are explored. Standardization and harmonization are at the heart of these discussions to enable data sharing. In addition, the transformation of clinical research processes through disruptive innovation, while ensuring integrity and ethics, will be key to achieving the ARO Council goal to overcome diseases such that people not only live longer but also are healthier and happier as they age.
Conclusions: The achievement of global learning health systems will require further exploration, consensus-building, funding aligned with incentives for data sharing, standardization, harmonization, and actions that support global interests for the benefit of patients
Enabling quantitative data analysis through e-infrastructures
This paper discusses how quantitative data analysis in the social sciences can engage with and exploit an e-Infrastructure. We highlight how a number of activities which are central to quantitative data analysis, referred to as ‘data management’, can benefit from e-infrastructure support. We conclude by discussing how these issues are relevant to the DAMES (Data Management through e-Social Science) research Node, an ongoing project that aims to develop e-Infrastructural resources for quantitative data analysis in the social sciences
TumorML: Concept and requirements of an in silico cancer modelling markup language
This paper describes the initial groundwork carried out as part of the European Commission funded Transatlantic Tumor Model Repositories project, to develop a new markup language for computational cancer modelling, TumorML. In this paper we describe the motivations for such a language, arguing that current state-of-the-art biomodelling languages are not suited to the cancer modelling domain. We go on to describe the work that needs to be done to develop TumorML, the conceptual design, and a description of what existing markup languages will be used to compose the language specification
An ontology to standardize research output of nutritional epidemiology : from paper-based standards to linked content
Background: The use of linked data in the Semantic Web is a promising approach to add value to nutrition research. An ontology, which defines the logical relationships between well-defined taxonomic terms, enables linking and harmonizing research output. To enable the description of domain-specific output in nutritional epidemiology, we propose the Ontology for Nutritional Epidemiology (ONE) according to authoritative guidance for nutritional epidemiology.
Methods: Firstly, a scoping review was conducted to identify existing ontology terms for reuse in ONE. Secondly, existing data standards and reporting guidelines for nutritional epidemiology were converted into an ontology. The terms used in the standards were summarized and listed separately in a taxonomic hierarchy. Thirdly, the ontologies of the nutritional epidemiologic standards, reporting guidelines, and the core concepts were gathered in ONE. Three case studies were included to illustrate potential applications: (i) annotation of existing manuscripts and data, (ii) ontology-based inference, and (iii) estimation of reporting completeness in a sample of nine manuscripts.
Results: Ontologies for food and nutrition (n = 37), disease and specific population (n = 100), data description (n = 21), research description (n = 35), and supplementary (meta) data description (n = 44) were reviewed and listed. ONE consists of 339 classes: 79 new classes to describe data and 24 new classes to describe the content of manuscripts.
Conclusion: ONE is a resource to automate data integration, searching, and browsing, and can be used to assess reporting completeness in nutritional epidemiology
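The two applications highlighted in the abstract, ontology-based inference and estimation of reporting completeness, can be sketched with a toy taxonomy; the term names below are invented placeholders, not actual ONE classes.

```python
# Illustrative sketch of ontology-based inference and reporting-completeness
# estimation; the terms are invented placeholders, not actual ONE classes.

# A toy taxonomic hierarchy: each term maps to its parent class.
taxonomy = {
    "food frequency questionnaire": "dietary assessment method",
    "24-hour recall": "dietary assessment method",
    "dietary assessment method": "research description",
}

def is_a(term: str, ancestor: str) -> bool:
    """Ontology-style inference: walk the parent chain of a term."""
    while term in taxonomy:
        term = taxonomy[term]
        if term == ancestor:
            return True
    return False

def completeness(annotations: set[str], required: set[str]) -> float:
    """Fraction of required reporting items covered by a manuscript's annotations."""
    covered = {r for r in required
               if any(a == r or is_a(a, r) for a in annotations)}
    return len(covered) / len(required)

paper_annotations = {"food frequency questionnaire"}
required_items = {"dietary assessment method", "study population"}
print(completeness(paper_annotations, required_items))  # 0.5
```

Because the manuscript's annotation is a subclass of a required item, inference over the hierarchy counts it as covered; this is the mechanism, if not the implementation, behind annotation-based completeness scores.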
Initial experiences in developing e-health solutions across Scotland
The MRC-funded Virtual Organisations for Trials and Epidemiological Studies (VOTES) project is a collaborative effort between e-Science, clinical and ethical research centres across the UK, including the universities of Oxford, Glasgow, Nottingham and Leicester, and Imperial College London. The project started in September 2005 and is due to run for 3 years. The primary goal of VOTES is to develop a reusable Grid framework through which a multitude of clinical trials and epidemiological studies can be supported. The National e-Science Centre (NeSC) at the University of Glasgow is developing the Scottish components of this framework. This paper presents the initial experiences in developing this framework and in accessing and using existing data sets, services and software across the NHS in Scotland
Supporting security-oriented, inter-disciplinary research: crossing the social, clinical and geospatial domains
How many people have had a chronic disease for longer than 5 years in Scotland? How has this impacted upon their choices of employment? Are there any geographical clusters in Scotland where a high incidence of patients with such long-term illness can be found? How does the life expectancy of such individuals compare with the national averages? Such questions are important to understand the health of nations and the best ways in which health care should be delivered and measured for their impact and success. In tackling such research questions, e-Infrastructures need to provide tailored, secure access to an extensible range of distributed resources including primary and secondary e-Health clinical data, social science data, and geospatial data sets amongst numerous others. In this paper we describe the security models underlying these e-Infrastructures and demonstrate their implementation in supporting secure, federated access to a variety of distributed and heterogeneous data sets, exploiting the results of a variety of projects at the National e-Science Centre (NeSC) at the University of Glasgow
Informatics: the fuel for pharmacometric analysis
The current informal practice of pharmacometrics as a combination art and science makes it hard to appreciate the role that informatics can and should play in the future of the discipline and to comprehend the gaps that exist because of its absence. The development of pharmacometric informatics has important implications for expediting decision making and for improving the reliability of decisions made in model-based development. We argue that well-defined informatics for pharmacometrics can lead to much needed improvements in the efficiency, effectiveness, and reliability of the pharmacometrics process.
The purpose of this paper is to provide a description of the pervasive yet often poorly appreciated role of informatics in improving the process of data assembly, a critical task in the delivery of pharmacometric analysis results. First, we provide a brief description of the pharmacometric analysis process. Second, we describe the business processes required to create analysis-ready data sets for the pharmacometrician.
Third, we describe selected informatic elements required to support the pharmacometrics and data assembly processes. Finally, we offer specific suggestions for performing a systematic analysis of existing challenges as an approach to defining the next generation of pharmacometric informatics
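The data-assembly step described above can be sketched in miniature; this is an illustrative toy, not the authors' actual workflow, using the widely known NONMEM-style convention of EVID=1 for dose events and EVID=0 for observations.

```python
# Illustrative sketch (not the authors' workflow) of assembling an
# analysis-ready pharmacometric data set: dosing and concentration records
# are merged per subject and sorted by time, NONMEM-style.

doses = [
    {"id": 1, "time": 0.0, "amt": 100.0},
    {"id": 1, "time": 12.0, "amt": 100.0},
]
observations = [
    {"id": 1, "time": 1.0, "dv": 4.2},
    {"id": 1, "time": 6.0, "dv": 2.1},
]

def assemble(doses, observations):
    """Combine dose and observation rows into one time-ordered event list."""
    rows = [{**d, "dv": None, "evid": 1} for d in doses]          # evid=1: dose
    rows += [{**o, "amt": 0.0, "evid": 0} for o in observations]  # evid=0: observation
    return sorted(rows, key=lambda r: (r["id"], r["time"], -r["evid"]))

dataset = assemble(doses, observations)
print([r["time"] for r in dataset])  # [0.0, 1.0, 6.0, 12.0]
```

Even this toy shows why the paper treats data assembly as a business process in its own right: the merge rules (tie-breaking doses before observations at the same time, filling columns that only apply to one record type) are decisions that must be made consistently before any model sees the data.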