
    Development of the Lymphoma Enterprise Architecture Database: A caBIG™ Silver level compliant System

    Lymphomas are the fifth most common cancer in the United States, with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids the development of novel therapies. We developed a cancer Biomedical Informatics Grid™ (caBIG™) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system™ (LEAD™), which integrates pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK), provided by the National Cancer Institute's Center for Bioinformatics, to establish the LEAD™ platform for data management. The caCORE SDK-generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD™ can be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance, Epidemiology, and End Results database. This work provides a clear example of how semantic technologies from caBIG™ can be applied to support a wide range of clinical and research tasks, and to integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG™ to the management of clinical and biological data.
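    The semantic-integration approach described above depends on registering each data element against a controlled vocabulary so that records from disparate source systems can be queried through one shared structure. The sketch below is a minimal, hypothetical illustration of that idea in Python; the class names, registry identifiers, concept codes, and mapping function are assumptions for illustration and are not part of the caCORE SDK or the LEAD™ system itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CommonDataElement:
    """A registered data element tied to a controlled-vocabulary concept (hypothetical)."""
    public_id: str      # identifier in a metadata registry
    name: str           # human-readable name
    concept_code: str   # controlled-vocabulary concept code

# Hypothetical registry entry for "histological subtype"
HISTOLOGY = CommonDataElement("CDE-0001", "Lymphoma Histologic Subtype", "C-HISTO")

def harmonize(source_record: dict, field_map: dict[str, CommonDataElement]) -> dict:
    """Map a source-specific record onto registered data elements so that
    pathology, pharmacy, and registry rows share one semantic structure."""
    return {
        cde.concept_code: source_record[source_field]
        for source_field, cde in field_map.items()
        if source_field in source_record
    }

# Two source systems naming the same concept differently
pathology_row = {"histo_subtype": "Follicular lymphoma"}
registry_row = {"histology": "Follicular lymphoma"}

print(harmonize(pathology_row, {"histo_subtype": HISTOLOGY}))
print(harmonize(registry_row, {"histology": HISTOLOGY}))
# Both yield {'C-HISTO': 'Follicular lymphoma'}, i.e. one queryable representation.
```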

    Clinical trial metadata: Defining and extracting metadata on the design, conduct, results and costs of 125 randomised clinical trials funded by the National Institute for Health Research Health Technology Assessment programme

    Background: By 2011, the Health Technology Assessment (HTA) programme had published the results of over 100 trials, with another 220 in progress. The aim of the project was to develop and pilot 'metadata' on clinical trials funded by the HTA programme.
    Objectives: The aim of the project was to develop and pilot questions describing clinical trials funded by the HTA programme in terms of meeting the needs of the NHS with scientifically robust studies. The objectives were to develop relevant classification systems and definitions for use in answering these questions and to assess their utility.
    Data sources: Published monographs and internal HTA documents.
    Review methods: A database was developed, 'populated' using retrospective data and used to answer questions under six prespecified themes. Questions were screened for feasibility in terms of data availability and/or ease of extraction. Answers were assessed by the authors in terms of completeness, success of the classification system used and resources required. Each question was scored to be retained, amended or dropped.
    Results: One hundred and twenty-five randomised trials were included in the database from 109 monographs. Neither the International Standard Randomised Controlled Trial Number nor the term 'randomised trial' in the title proved a reliable way of identifying randomised trials. Only limited data were available on how the trials aimed to meet the needs of the NHS. Most trials were shown to follow their protocols, but updates were often necessary as hardly any trials recruited as planned. Details were often lacking on planned statistical analyses, but we did not have access to the relevant statistical plans. Almost all the trials reported on cost-effectiveness, often in terms of both the primary outcome and quality-adjusted life-years. The cost of trials was shown to depend on the number of centres and the duration of the trial. Of the 78 questions explored, 61 were well answered: 33 fully, and 28 requiring amendment were the analysis to be updated. The other 17 could not be answered with readily available data.
    Limitations: The study was limited by being confined to 125 randomised trials by one funder.
    Conclusions: Metadata on randomised controlled trials can be expanded to include aspects of design, performance, results and costs. The HTA programme should continue and extend the work reported here.
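    As a rough illustration of what such trial metadata might look like when held in a database, the snippet below sketches one possible record structure and the retain/amend/drop scoring of piloted questions described above. All field names, questions, and values are hypothetical and are not taken from the HTA database itself.

```python
from dataclasses import dataclass
from typing import Optional
from collections import Counter

@dataclass
class TrialMetadata:
    """One hypothetical metadata record describing a funded randomised trial."""
    monograph_id: str
    isrctn: Optional[str]        # the ISRCTN was not always a reliable identifier
    randomised: bool
    n_centres: int
    planned_recruitment: int
    actual_recruitment: int
    reported_cost_effectiveness: bool

    def recruited_as_planned(self) -> bool:
        return self.actual_recruitment >= self.planned_recruitment

trial = TrialMetadata("HTA-042", "ISRCTN00000000", True, 12, 500, 430, True)
print(trial.recruited_as_planned())   # False: recruitment fell short of target

# Scoring each piloted question as retained, amended, or dropped
question_scores = {
    "Did the trial recruit to target?": "retain",
    "What was the cost per centre?": "amend",
    "How were NHS needs identified?": "drop",   # data not readily available
}
print(Counter(question_scores.values()))
# Counter({'retain': 1, 'amend': 1, 'drop': 1})
```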

    A model-driven method for the systematic literature review of qualitative empirical research

    This paper explores a model-driven method for systematic literature reviews (SLRs), for use where the empirical studies found in the literature search are based on qualitative research. SLRs are an important component of the evidence-based practice (EBP) paradigm, which is receiving increasing attention in information systems (IS) but has not yet been widely adopted. We illustrate the model-driven approach to SLRs via an example focused on the use of BPMN (Business Process Modelling Notation) in organizations. We discuss in detail the process followed in using the model-driven SLR method, and show how it is based on a hermeneutic cycle of reading and interpreting, in order to develop and refine a model which synthesizes the research findings of previous qualitative studies. This study can serve as an exemplar for other researchers wishing to carry out model-driven SLRs. We conclude with our reflections on the method and some suggestions for further research.
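    The hermeneutic cycle described here is essentially an iterative loop in which each newly read study can revise the synthesised model. The toy Python sketch below illustrates only that loop structure under assumed helper names; the interpretation and refinement steps are deliberately simplistic stand-ins and are not the authors' method or tooling.

```python
def interpret(study_text: str, model: set[str]) -> set[str]:
    """Toy interpretation step: treat capitalised terms as candidate themes
    (a real reading would be steered by the current model; ignored here)."""
    return {word.strip(".,") for word in study_text.split() if word[:1].isupper()}

def refine_model(model: set[str], findings: set[str]) -> set[str]:
    """Toy refinement step: merge new findings into the evolving model."""
    return model | findings

def model_driven_slr(studies: list[str], initial_model: set[str]) -> set[str]:
    """Hermeneutic cycle: read each study, interpret it against the current
    model, and refine the model before moving on to the next study."""
    model = initial_model
    for study in studies:
        model = refine_model(model, interpret(study, model))
    return model

print(model_driven_slr(
    ["Analysts adopt BPMN for Communication",
     "BPMN supports Process Documentation"],
    set(),
))
```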

    Lifecycle information for e-literature: full report from the LIFE project

    This Report is a record of the LIFE Project. The Project has run for one year and its aim is to deliver crucial information about the cost and management of digital material. This information should in turn be applicable to any institution that has an interest in preserving and providing access to electronic collections. The Project is a joint venture between The British Library and UCL Library Services. The Project is funded by JISC under programme area (i), Institutional Management Support and Collaboration, as listed in paragraph 16 of the JISC 4/04 circular, and as such has set requirements and outcomes which must be met; the Project has done its best to do so. Where the Project has been unable to answer specific questions, strong recommendations have been made for future Project work to do so. The outcomes of this Project are expected to be a practical set of guidelines and a framework within which costs can be applied to digital collections in order to answer the following questions:
    • What is the long-term cost of preserving digital material?
    • Who is going to do it?
    • What are the long-term costs for a library in HE/FE to partner with another institution to carry out long-term archiving?
    • What are the comparative long-term costs of a paper and a digital copy of the same publication?
    • At what point will there be sufficient confidence in the stability and maturity of digital preservation to switch from paper for publications available in parallel formats?
    • What are the relative risks of digital versus paper archiving?
    The Project has attempted to answer these questions by using a developing lifecycle methodology and three diverse collections of digital content. The LIFE Project team chose UCL e-journals, BL Web Archiving and the BL VDEP digital collections to provide a strong challenge to the methodology as well as to help reach the key Project aim of attributing long-term cost to digital collections. The results from the Case Studies and the Project findings are both surprising and illuminating.
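    The costing questions above lend themselves to a simple additive view: the lifecycle cost of a collection is the sum of per-stage costs, some incurred once and some recurring for as long as the material is held. The sketch below shows that idea only in outline; the stage names and figures are illustrative assumptions, not the Project's published methodology or data.

```python
# Illustrative lifecycle cost model: total cost = one-off stage costs plus
# recurring stage costs accumulated over the retention period.
# Stage names and numbers are made up for illustration.
annual_stage_costs = {          # cost per year, arbitrary currency units
    "acquisition": 1200,        # assumed one-off
    "ingest": 800,              # assumed one-off
    "metadata": 300,
    "access": 500,
    "storage": 450,
    "preservation": 600,
}
one_off_stages = {"acquisition", "ingest"}

def lifecycle_cost(years: int) -> float:
    """Total cost of keeping the collection for the given number of years."""
    one_off = sum(c for s, c in annual_stage_costs.items() if s in one_off_stages)
    recurring = sum(c for s, c in annual_stage_costs.items() if s not in one_off_stages)
    return one_off + recurring * years

for horizon in (1, 5, 10):
    print(f"{horizon} years: {lifecycle_cost(horizon)}")
```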

    RDA COVID-19 Guidelines and Recommendations


    Systematizing FAIR research data management in biomedical research projects: a data life cycle approach

    Biomedical researchers are facing data management challenges brought about by a new generation of data driven by the advent of translational medicine research. These challenges are further complicated by the recent calls for data re-use and long-term stewardship spearheaded by the FAIR principles initiative. As a result, there is an increasingly widespread recognition that advancing biomedical science is becoming dependent on the application of data science to manage and utilize highly diverse and complex data in ways that give it context, meaning, and longevity beyond its initial purpose. However, current methods and practices in biomedical informatics continue to adopt a traditional linear view of the informatics process (collect, store and analyse), focusing primarily on the challenges in data integration and analysis, which pertain to only part of the overall life cycle of research data. The aim of this research is to facilitate the adoption and integration of data management practices into the research life cycle of biomedical projects, thus improving their capability to solve the data management challenges that they face throughout the course of their research work. To achieve this aim, this thesis takes a data life cycle approach to define and develop a systematic methodology and framework for the systematization of FAIR data management in biomedical research projects. The overarching contribution of this research is the provision of a data-state life cycle model for research data management in biomedical translational research projects. This model provides insight into the dynamics between 1) the purpose of a research-driven data use case, 2) the data requirements that render data in a state fit for purpose, 3) the data management functions that prepare and act upon data, and 4) the resulting state of data that is fit to serve the use case. This insight led to the development of a FAIR data management framework, which is another contribution of this thesis. This framework provides data managers with the groundwork, including the data models, resources and capabilities, needed to build a FAIR data management environment to manage data during the operational stages of a biomedical research project. An exemplary implementation of this architecture (PlatformTM) was developed and validated using real-world research datasets produced by collaborative research programs funded by the Innovative Medicines Initiative (IMI): BioVacSafe, eTRIKS and FAIRplus.
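    To make the four elements of the data-state model above concrete, the sketch below encodes a use-case purpose, the requirements it imposes, and a chain of data management functions that move data toward a state fit for that purpose. The class and function names are hypothetical illustrations, not the thesis's own framework code or the PlatformTM implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DataState:
    """Properties describing what state a dataset is currently in."""
    harmonised: bool = False
    annotated: bool = False
    deposited: bool = False

@dataclass
class UseCase:
    """A research-driven purpose plus the data requirements it imposes."""
    purpose: str
    requirements: Callable[[DataState], bool]

def harmonise(state: DataState) -> DataState:
    """Data management function: align the data to a common model."""
    return DataState(True, state.annotated, state.deposited)

def annotate(state: DataState) -> DataState:
    """Data management function: attach ontology-based annotations."""
    return DataState(state.harmonised, True, state.deposited)

use_case = UseCase(
    purpose="cross-study biomarker analysis",
    requirements=lambda s: s.harmonised and s.annotated,
)

state = DataState()
for manage in (harmonise, annotate):     # management functions act on the data
    state = manage(state)

print(use_case.requirements(state))      # True: data is now in a state fit for purpose
```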

    Enhancing Traceability in Clinical Research Data Through an Information Product Framework
