4,053 research outputs found

    Validation of a common data model for active safety surveillance research

    Systematic analysis of observational medical databases for active safety surveillance is hindered by variation in data models and coding systems. Data analysts often find robust clinical data models difficult to understand and ill-suited to their analytic approaches. Further, some models do not support the computations required for systematic analysis across many interventions and outcomes in large datasets. Translating the data from these idiosyncratic data models into a common data model (CDM) could improve both the analysts' understanding and the suitability of the data for large-scale systematic analysis. In addition to facilitating analysis, a suitable CDM must faithfully represent the source observational database. Before adopting the Observational Medical Outcomes Partnership (OMOP) CDM and its dictionary of standardized terminologies for a study of large-scale systematic active safety surveillance, the authors validated the model's suitability for this use by example.
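
    The kind of translation step such a validation exercises can be made concrete. The sketch below maps one source-coded diagnosis into an OMOP-style condition record via a vocabulary lookup; the source row, the lookup table, and the concept ID are illustrative stand-ins, not the paper's actual ETL.

```python
# Minimal sketch of a CDM translation step: one source-coded diagnosis
# becomes an OMOP-style condition record via a vocabulary lookup.
# The source row and concept ID are illustrative, not real mappings.

from datetime import date

# Hypothetical source-to-standard map; real mappings come from the
# OMOP standardized vocabularies (e.g., ICD-9-CM -> SNOMED).
ICD9_TO_STANDARD = {
    "250.00": 201826,  # illustrative: type 2 diabetes mellitus
}

def to_condition_occurrence(person_id, icd9_code, start_date):
    """Map one source diagnosis to an OMOP-style condition record."""
    concept_id = ICD9_TO_STANDARD.get(icd9_code, 0)  # 0 = no match found
    return {
        "person_id": person_id,
        "condition_concept_id": concept_id,
        "condition_start_date": start_date,
        "condition_source_value": icd9_code,  # original code is preserved
    }

print(to_condition_occurrence(42, "250.00", date(2009, 3, 1)))
```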

    Common Problems, Common Data Model Solutions: Evidence Generation for Health Technology Assessment

    There is growing interest in using observational data to assess the safety, effectiveness, and cost-effectiveness of medical technologies, but operational, technical, and methodological challenges limit its more widespread use. Common data models and federated data networks offer a potential solution to many of these problems. The open-source Observational Medical Outcomes Partnership (OMOP) common data model standardises the structure, format, and terminologies of otherwise disparate datasets, enabling the execution of common analytical code across a federated data network in which only code and aggregate results are shared. While common data models are increasingly used in regulatory decision making, relatively little attention has been given to their use in health technology assessment (HTA). We show that the common data model has the potential to facilitate access to relevant data, enable multidatabase studies that enhance statistical power, transfer results across populations and settings to meet the needs of local HTA decision makers, and validate findings. The use of open-source and standardised analytics improves transparency and reduces coding errors, thereby increasing confidence in the results. Further engagement from the HTA community is required to inform the appropriate standards for mapping data to the common data model and to design tools that can support evidence generation and decision making.
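
    The federated pattern described here, in which only analysis code and aggregate results cross institutional boundaries, can be sketched in a few lines. Everything below is hypothetical: the site data, the outcome definition, and the pooling are a minimal illustration of the principle, not OHDSI tooling.

```python
# Minimal sketch of a federated analysis: common code runs at each
# site; only aggregates travel back. Site data are hypothetical.

def local_analysis(patients):
    """Runs inside each site's firewall; returns aggregates only."""
    exposed = [p for p in patients if p["exposed"]]
    events = sum(p["event"] for p in exposed)
    return {"n_exposed": len(exposed), "n_events": events}

# Each list stands in for one database already mapped to the CDM.
site_a = [{"exposed": True, "event": 1}, {"exposed": True, "event": 0}]
site_b = [{"exposed": True, "event": 0}, {"exposed": False, "event": 1}]

aggregates = [local_analysis(site) for site in (site_a, site_b)]
n = sum(a["n_exposed"] for a in aggregates)
events = sum(a["n_events"] for a in aggregates)
print(f"pooled event rate among exposed: {events}/{n} = {events/n:.2f}")
```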

    Databases in the Asia-Pacific Region: The Potential for a Distributed Network Approach

    Background: This study describes the availability and characteristics of databases in Asia-Pacific countries and assesses the feasibility of a distributed network approach in the region. Methods: A web-based survey was conducted among investigators using healthcare databases in Asia-Pacific countries. Potential survey participants were identified through the Asian Pharmacoepidemiology Network. Results: Investigators from a total of 11 databases participated in the survey. Database sources included four nationwide claims databases from Japan, South Korea, and Taiwan; two nationwide electronic health records from Hong Kong and Singapore; a regional electronic health record from western China; two electronic health records from Thailand; and cancer and stroke registries from Taiwan. Conclusions: We identified 11 databases with capabilities for distributed network approaches. Many country-specific coding systems and terminologies have already been converted to international coding systems. The harmonization of health expenditure data is a major obstacle for future investigations attempting to evaluate issues related to medical costs.

    Enhancing drug safety through active surveillance of observational healthcare data

    Drug safety continues to be a major public health concern in the United States, with adverse drug reactions ranking as the 4th to 6th leading cause of death and resulting in health care costs of $3.6 billion annually. Recent media attention and public scrutiny of high-profile drug safety issues have increased visibility and skepticism of the effectiveness of the current post-approval safety surveillance processes. Current proposals suggest establishing a national active drug safety surveillance system that leverages observational data, including administrative claims and electronic health records, to monitor and evaluate potential safety issues of medicines. However, the development and evaluation of appropriate strategies for systematic analysis of observational data have not yet been studied. This study introduces a novel exploratory analysis approach (Comparator-Adjusted Safety Surveillance, or COMPASS) to identify drug-related adverse events in automated healthcare data. The aims of the study were: 1) to characterize the performance of COMPASS in identifying known safety issues associated with ACE inhibitor exposure within an administrative claims database; 2) to evaluate the consistency of COMPASS estimates across a network of disparate databases; and 3) to explore differential effects across ingredients within the ACE inhibitor class. COMPASS was observed to have improved accuracy compared with three other methods under consideration for an active surveillance system: observational screening, disproportionality analysis, and self-controlled case series. COMPASS performance was consistently strong within 5 different databases, though important differences in outcome estimates across the sources highlighted the substantial heterogeneity that makes pooling estimates challenging. The comparative safety analysis of products within the ACE inhibitor class provided evidence of similar risk profiles across an array of different outcomes, and raised questions about product labeling differences and about how observational studies should complement existing evidence as part of a broader safety assessment strategy. The results of this study should inform decisions about the appropriateness and utility of analyzing observational data as part of an active drug safety surveillance process. An improved surveillance system would enable a more comprehensive and timelier understanding of the safety of medicines. Such information supports patients and providers in therapeutic decision-making to minimize risks and improve the quality of care.
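
    Of the baseline methods named, disproportionality analysis is the easiest to illustrate. A common statistic is the reporting odds ratio (ROR) computed from a 2x2 table of reports; the counts below are invented for illustration, and the confidence interval uses the standard Woolf (log-scale) approximation.

```python
# Disproportionality analysis sketch: reporting odds ratio (ROR)
# from a 2x2 table of spontaneous reports. Counts are made up.

import math

def reporting_odds_ratio(a, b, c, d):
    """
    a: reports with drug and event    b: drug, other events
    c: other drugs, event             d: other drugs, other events
    """
    ror = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(ROR), Woolf method
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, lo, hi

ror, lo, hi = reporting_odds_ratio(20, 480, 100, 9400)
print(f"ROR = {ror:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ROR = 3.92 (2.40-6.39)
```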

    Annotation analysis for testing drug safety signals using unstructured clinical notes

    Background: Electronic surveillance for adverse drug events is largely based upon the analysis of coded data from reporting systems. Yet the vast majority of electronic health data lies embedded within the free text of clinical notes and is not gathered into centralized repositories. With increasing access to large volumes of electronic medical data, in particular the clinical notes, it may be possible to computationally encode and test drug safety signals in an active manner. Results: We describe the application of simple annotation tools to clinical text and the mining of the resulting annotations to compute the risk of myocardial infarction for patients with rheumatoid arthritis who take Vioxx. Our analysis clearly reveals elevated risks of myocardial infarction in rheumatoid arthritis patients taking Vioxx (odds ratio 2.06) before 2005. Conclusions: Our results show that it is possible to apply annotation analysis methods to test hypotheses about drug safety using electronic medical records.
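
    The reported risk is an odds ratio, which falls directly out of a 2x2 table of patient counts derived from the annotations. A minimal sketch with hypothetical counts (not the paper's data) follows.

```python
# Odds ratio from annotation-derived patient counts. The counts
# below are hypothetical, chosen only to show the arithmetic.

def odds_ratio(exposed_event, exposed_no_event,
               unexposed_event, unexposed_no_event):
    """2x2 table odds ratio: (a/b) / (c/d)."""
    return ((exposed_event / exposed_no_event)
            / (unexposed_event / unexposed_no_event))

# Rows: RA patients on Vioxx vs. RA patients not on Vioxx (illustrative).
print(round(odds_ratio(30, 970, 15, 985), 2))  # ~2.03 with these counts
```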

    Towards evidence-based, GIS-driven national spatial health information infrastructure and surveillance services in the United Kingdom

    The term "Geographic Information Systems" (GIS) has been added to MeSH in 2003, a step reflecting the importance and growing use of GIS in health and healthcare research and practices. GIS have much more to offer than the obvious digital cartography (map) functions. From a community health perspective, GIS could potentially act as powerful evidence-based practice tools for early problem detection and solving. When properly used, GIS can: inform and educate (professionals and the public); empower decision-making at all levels; help in planning and tweaking clinically and cost-effective actions, in predicting outcomes before making any financial commitments and ascribing priorities in a climate of finite resources; change practices; and continually monitor and analyse changes, as well as sentinel events. Yet despite all these potentials for GIS, they remain under-utilised in the UK National Health Service (NHS). This paper has the following objectives: (1) to illustrate with practical, real-world scenarios and examples from the literature the different GIS methods and uses to improve community health and healthcare practices, e.g., for improving hospital bed availability, in community health and bioterrorism surveillance services, and in the latest SARS outbreak; (2) to discuss challenges and problems currently hindering the wide-scale adoption of GIS across the NHS; and (3) to identify the most important requirements and ingredients for addressing these challenges, and realising GIS potential within the NHS, guided by related initiatives worldwide. The ultimate goal is to illuminate the road towards implementing a comprehensive national, multi-agency spatio-temporal health information infrastructure functioning proactively in real time. The concepts and principles presented in this paper can be also applied in other countries, and on regional (e.g., European Union) and global levels

    Data harmonization and federated learning for multi-cohort dementia research using the OMOP common data model:A Netherlands consortium of dementia cohorts case study

    Background: Establishing collaborations between cohort studies has been fundamental for progress in health research. However, such collaborations are hampered by heterogeneous data representations across cohorts and by legal constraints on data sharing. The first arises from a lack of consensus on standards of data collection and representation across cohort studies and is usually tackled by applying data harmonization processes. The second is increasingly important due to raised awareness of privacy protection and stricter regulations, such as the GDPR. Federated learning, in which data are analyzed in a decentralized manner, has emerged as a privacy-preserving alternative to transferring data between institutions. Methods: In this study, we set up a federated learning infrastructure for a consortium of nine Dutch cohorts with data relevant to the etiology of dementia, including an extract, transform, and load (ETL) pipeline for data harmonization. Additionally, we assessed the challenges of transforming and standardizing cohort data using the Observational Medical Outcomes Partnership (OMOP) common data model (CDM) and evaluated our tool in one of the cohorts employing federated algorithms. Results: We successfully applied our ETL tool and observed complete coverage of the cohorts' data by the OMOP CDM. The OMOP CDM facilitated data representation and standardization, but we identified limitations for cohort-specific data fields and in the scope of the vocabularies available. Specific challenges arise in a multi-cohort federated collaboration due to technical constraints in local environments, data heterogeneity, and lack of direct access to the data. Conclusion: In this article, we describe the solutions to the challenges and limitations encountered in our study. Our study shows the potential of federated learning as a privacy-preserving solution for multi-cohort studies, one that enhances reproducibility and the reuse of both data and analyses.
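
    The federated principle used here can be illustrated with a toy model fit: each cohort computes an update on its own data, and only parameters, never records, leave the site. The data, model, learning rate, and sample-size weighting below are illustrative, not the consortium's actual pipeline.

```python
# Toy federated averaging: sites share model parameters, not records.
# Data and model (least squares through the origin) are illustrative.

def local_update(xs, ys, w, lr=0.1):
    """One gradient step of least squares on a site's own data."""
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    return w - lr * grad, len(xs)

def federated_round(sites, w):
    """Average the sites' updated weights, weighted by sample size."""
    updates = [local_update(xs, ys, w) for xs, ys in sites]
    total = sum(n for _, n in updates)
    return sum(wi * n for wi, n in updates) / total

sites = [([1.0, 2.0], [2.1, 3.9]), ([3.0], [6.2])]  # (x, y) per cohort
w = 0.0
for _ in range(50):
    w = federated_round(sites, w)
print(round(w, 2))  # converges to the pooled least-squares slope (~2.04)
```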

    Harnessing Openness to Transform American Health Care

    The Digital Connections Council (DCC) of the Committee for Economic Development (CED) has been developing the concept of openness in a series of reports. It has analyzed information and processes to determine their openness based on the qualities of "accessibility" and "responsiveness." If information is not available, or available only under restrictive conditions, it is less accessible and therefore less "open." If information can be modified, repurposed, and redistributed freely, it is more responsive and therefore more "open." This report looks at how "openness" is being, or might usefully be, employed in the healthcare arena. This area, which now constitutes approximately 16-17 percent of GDP, has long frustrated policymakers, practitioners, and patients. Bringing greater openness to different parts of the healthcare production chain can lead to substantial benefits by stimulating innovation, lowering costs, reducing errors, and closing the gap between discovery and treatment delivery. The report is not exhaustive; it focuses on biomedical research and the disclosure of research findings, processes of evaluating drugs and devices, the emergence of electronic health records, the development and implementation of treatment regimes by caregivers and patients, and the dependence of the global public health system on data sharing and worldwide collaboration.