
    From Inception to ConcePTION: Genesis of a Network to Support Better Monitoring and Communication of Medication Safety During Pregnancy and Breastfeeding

    In 2019, the Innovative Medicines Initiative (IMI) funded the ConcePTION project—Building an ecosystem for better monitoring and communicating safety of medicines use in pregnancy and breastfeeding: validated and regulatory endorsed workflows for fast, optimised evidence generation—with the vision that there is a societal obligation to rapidly reduce uncertainty about the safety of medication use in pregnancy and breastfeeding. The present paper introduces the set of concepts used to describe the European data sources involved in the ConcePTION project and illustrates the ConcePTION Common Data Model (CDM), which serves as the keystone of the federated ConcePTION network. Based on data availability and content analysis of 21 European data sources, the ConcePTION CDM has been structured with six tables designed to capture data from routine healthcare, three tables for data from public health surveillance activities, three curated tables for derived data on population (e.g., observation time and mother-child linkage), plus four metadata tables. By its first anniversary, the ConcePTION CDM has enabled 13 data sources to run common scripts to contribute to major European projects, demonstrating its capacity to facilitate effective and transparent deployment of distributed analytics, and its potential to address questions about utilization, effectiveness, and safety of medicines in special populations, including during pregnancy and breastfeeding, and, more broadly, in the general population.
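The federated pattern the abstract describes can be sketched as follows. The table and column names below are illustrative stand-ins, not the actual ConcePTION CDM specification; the point is only that each data source runs the same script against its local CDM instance and shares nothing but aggregate results.

```python
# Minimal sketch of a federated common-script pattern over a local CDM
# instance. Table and column names are invented for illustration, not
# taken from the ConcePTION CDM specification.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE persons (person_id TEXT, birth_date TEXT);
CREATE TABLE person_relationships (
    person_id TEXT, related_person_id TEXT, relationship TEXT);
INSERT INTO persons VALUES ('m1', '1990-04-02'), ('c1', '2020-11-15');
INSERT INTO person_relationships VALUES ('c1', 'm1', 'mother');
""")

def count_mother_child_links(conn):
    """A 'common script' each data source runs locally; only this
    aggregate count leaves the site, which is what keeps the network
    federated rather than centralised."""
    row = conn.execute("""
        SELECT COUNT(*) FROM person_relationships
        WHERE relationship = 'mother'
    """).fetchone()
    return row[0]

print(count_mother_child_links(conn))  # → 1
```

Because every site holds the same table layout, the identical query is valid everywhere, and only the aggregated result needs to be pooled centrally.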

    Integration of i2b2 into the Greifswald University Hospital Research Platform

    The Greifswald University Hospital in Germany conducts a research project called "Greifswald Approach to Individualized Medicine (GANI_MED)", which aims to improve patient care through personalized medicine. As part of this project, multiple regional patient cohorts have been set up for different common diseases. The data collected from these cohorts will serve as a resource for epidemiological research, and researchers will be able to use these data for their studies via a variety of descriptive metadata attributes. The actual medical datasets of the patients are integrated from multiple clinical information systems and medical devices. Yet, at the point of defining a research query, researchers lack proper tools to query for existing patient data. No available tool offers a metadata catalogue linked to observational data, which would allow convenient research. Instead, researchers have to submit an application for selected variables that fit the conditions of their study and wait for the results. This leaves researchers not knowing in advance whether there are enough (or any) patients fitting the specified inclusion and exclusion criteria. The "Informatics for Integrating Biology and the Bedside (i2b2)" framework has been assessed and implemented as a prototypical evaluation instance to solve this issue. i2b2 will be set up at the Institute for Community Medicine (ICM) in Greifswald to act as a preliminary query tool for researchers. As a result, a research data import routine and customizations of the i2b2 webclient were successfully developed. An important part of the solution is that the metadata import can adapt to changes in the metadata: new metadata items can be added without changing the import program. The results of this work are discussed and a further outlook is given in this thesis.
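The metadata-adaptive import described above can be illustrated with a minimal sketch. The metadata structure and target fields here are simplified inventions, not the real i2b2 ontology schema; the sketch only shows why adding a new metadata item requires no code change.

```python
# Sketch of a metadata-adaptive import: the loop is driven entirely by
# the metadata list, so new items are picked up without modifying the
# import code. Fields are simplified illustrations, not the actual i2b2
# ontology table layout.

metadata = [
    {"code": "LOINC:2345-7", "name": "Glucose", "path": "/Labs/Chemistry/"},
    {"code": "ICD10:I10",    "name": "Hypertension", "path": "/Diagnoses/"},
]

def build_ontology_rows(metadata):
    """Turn each metadata item into a hierarchical ontology row.

    Adding a new item to `metadata` extends the output without any
    change to this function -- the adaptivity the thesis aims for.
    """
    return [
        {"c_fullname": item["path"] + item["name"],
         "c_basecode": item["code"]}
        for item in metadata
    ]

rows = build_ontology_rows(metadata)
print(rows[0]["c_fullname"])  # → /Labs/Chemistry/Glucose
```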

    From Data to Knowledge in Secondary Health Care Databases

    The advent of big data in health care is a topic receiving increasing attention worldwide. In the UK, over the last decade, the National Health Service (NHS) programme for Information Technology has boosted big data by introducing electronic infrastructures in hospitals and GP practices across the country. This ever-growing amount of data promises to expand our understanding of services, processes and research. Potential benefits include reducing costs, optimisation of services, knowledge discovery, and patient-centred predictive modelling. This thesis explores the above by studying over ten years' worth of electronic data and systems in a hospital treating over 750 thousand patients a year. The hospital's information systems store routinely collected data, used primarily by health practitioners to support and improve patient care. These raw data are recorded on several different systems but rarely linked or analysed. This thesis explores the secondary uses of such data by undertaking two case studies, one on prostate cancer and another on stroke. The journey from data to knowledge is made in each of the studies by traversing critical steps: data retrieval, linkage, integration, preparation, mining and analysis. Throughout, novel methods and computational techniques are introduced and the value of routinely collected data is assessed. In particular, this thesis discusses in detail the methodological aspects of developing clinical data warehouses from routine heterogeneous data and it introduces methods to model, visualise and analyse the journeys that patients take through care. This work has provided lessons in hospital IT provision, integration, visualisation and analytics of complex electronic patient records and databases and has enabled the use of raw routine data for management decision making and clinical research in both case studies.
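One core step of the data-to-knowledge journey, linking events from separate routine systems and ordering them into a patient "journey", can be sketched as follows. The system names, fields and events are invented for illustration, not drawn from the thesis.

```python
# Illustrative sketch of linkage and integration: events held in
# separate routine source systems are linked on a shared patient
# identifier and ordered chronologically into a care journey.
from datetime import date

# Two invented source systems, each holding part of the patient record.
admissions = [{"patient": "p1", "date": date(2015, 3, 1), "event": "admission"}]
labs       = [{"patient": "p1", "date": date(2015, 3, 2), "event": "PSA test"}]

def build_journey(patient_id, *sources):
    """Merge events for one patient from several source systems,
    sorted chronologically -- the raw material for journey modelling
    and visualisation."""
    events = [e for src in sources for e in src if e["patient"] == patient_id]
    return sorted(events, key=lambda e: e["date"])

journey = build_journey("p1", admissions, labs)
print([e["event"] for e in journey])  # → ['admission', 'PSA test']
```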

    A Metadata-Driven Approach to Panel Data Management and its Application in DDI on Rails

    This dissertation designs a metadata-driven infrastructure for panel data that aims to increase both the quality and the usability of the resulting research data. Data quality determines whether the data appropriately represent a particular aspect of our reality. Usability originates notably from comprehensible documentation, accessibility of the data, and interoperability with tools and other data sources. In a metadata-driven infrastructure, metadata are prepared before the digital objects and process steps that they describe. This enables data providers to utilize metadata for many purposes, including process control and data validation. Furthermore, a metadata-driven design reduces the overall costs of data production and facilitates the reuse of both data and metadata. The main use case is the German Socio-Economic Panel (SOEP), but the results are intended to be reusable for other panel studies. The introduction of the Generic Longitudinal Business Process Model (GLBPM) and a general discussion of digital objects managed by panel studies provide a generic framework for the development of a metadata-driven infrastructure for panel studies. A first theoretical application presents two designs for variable linkage to support record linkage and statistical matching with structured metadata: concepts for omnidirectional relations and process models for unidirectional relations. Furthermore, a reference architecture for a metadata-driven infrastructure is designed and implemented. This provides a proof of concept for the previous discussion and an environment for the development of DDI on Rails. DDI on Rails is a data portal, optimized for the documentation and dissemination of panel data. The design considers the process model of the GLBPM, the generic discussion of digital objects, the design of a metadata-driven infrastructure, and the proposed solutions for variable linkage.
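Concept-based variable linkage of the omnidirectional kind mentioned above can be sketched as follows. The study, variable and concept names are invented, not taken from DDI on Rails' actual data model; the sketch only shows how a shared concept links every variable measuring it.

```python
# Sketch of omnidirectional variable linkage via shared concepts:
# variables from different studies or waves declare which concept they
# measure, so any two variables on the same concept are implicitly
# linked, with no per-pair relation needed. All names are invented.
from collections import defaultdict

variables = [
    {"study": "study_a", "name": "var_ls_a", "concept": "life_satisfaction"},
    {"study": "study_b", "name": "var_ls_b", "concept": "life_satisfaction"},
    {"study": "study_a", "name": "var_edu",  "concept": "education_years"},
]

def linked_variables(variables):
    """Group variables by concept; every pair within a group is linked.
    Concepts measured by only one variable yield no links."""
    groups = defaultdict(list)
    for v in variables:
        groups[v["concept"]].append((v["study"], v["name"]))
    return {c: vs for c, vs in groups.items() if len(vs) > 1}

print(linked_variables(variables))
# → {'life_satisfaction': [('study_a', 'var_ls_a'), ('study_b', 'var_ls_b')]}
```

The design choice here is that the metadata (the concept assignment) is authored once per variable, and the n-to-n linkage falls out of the grouping rather than being stored pairwise.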

    Metadata-Driven Creation of Data Marts From an EAV-Modeled Clinical Research Database

    Generic clinical study data management systems can record data on an arbitrary number of parameters in an arbitrary number of clinical studies without requiring modification of the database schema. They achieve this by using an Entity-Attribute-Value (EAV) model for clinical data. While very flexible for creating transaction-oriented systems for data entry and browsing of individual forms, EAV-modeled data is unsuitable for direct analytical processing, which is the focus of data marts. For this purpose, such data must be extracted and restructured appropriately. This paper describes how such a process, which is non-trivial and highly error-prone if performed using non-systematic approaches, can be automated by judicious use of the study metadata—the descriptions of measured parameters and their higher-level grouping. The metadata, in addition to driving the process, is exported along with the data, in order to facilitate its human interpretation.
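A metadata-driven pivot of the kind the paper describes can be sketched minimally. The schema and attribute names below are illustrative assumptions, not the paper's actual system; the essential point is that the restructuring code never hard-codes attribute names and therefore works for any study described by the metadata.

```python
# Minimal sketch of a metadata-driven EAV-to-data-mart pivot: the study
# metadata lists the measured attributes, and the pivot turns one-fact-
# per-row EAV triples into one wide row per entity. Names are invented.

metadata = {"study": "demo", "attributes": ["sbp", "dbp"]}  # measured parameters

eav_rows = [  # (entity, attribute, value) triples, one fact per row
    ("patient1", "sbp", 120),
    ("patient1", "dbp", 80),
    ("patient2", "sbp", 135),
]

def pivot_to_mart(eav_rows, metadata):
    """Restructure EAV triples into one wide row per entity, with a
    column for every attribute declared in the study metadata; facts
    absent from the EAV data become None rather than being lost."""
    mart = {}
    for entity, attr, value in eav_rows:
        if attr in metadata["attributes"]:
            mart.setdefault(entity, {a: None for a in metadata["attributes"]})
            mart[entity][attr] = value
    return mart

print(pivot_to_mart(eav_rows, metadata))
# → {'patient1': {'sbp': 120, 'dbp': 80}, 'patient2': {'sbp': 135, 'dbp': None}}
```

Because the column set comes from the metadata, adding a parameter to a study changes the mart's shape without any change to the pivot code, which is what makes a systematic, automated approach possible.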