20 research outputs found

    An Evaluation of the Use of a Clinical Research Data Warehouse and I2b2 Infrastructure to Facilitate Replication of Research

    Replication of clinical research is requisite for forming effective clinical decisions and guidelines. While rerunning a clinical trial may be unethical and prohibitively expensive, the adoption of EHRs and the infrastructure of distributed research networks provide access to clinical data for observational and retrospective studies. Herein I demonstrate a means of using these tools to validate existing results and extend the findings to novel populations. I describe the process of evaluating published risk models, as well as local data and infrastructure, to assess the replicability of a study. I present one example of a risk model that could not be replicated and one study of in-hospital mortality risk that I replicated using UNMC’s clinical research data warehouse. In these examples, and in other studies we have participated in, certain elements are commonly missing or under-developed. One such missing element is a consistent, computable phenotype for pregnancy status based on data recorded in the EHR. I survey local clinical data, identify a number of variables correlated with pregnancy, and demonstrate the data required to identify the temporal bounds of a pregnancy episode. Another common obstacle to replicating risk models is the need to link to alternative data sources while maintaining the data in a de-identified database. I demonstrate a pipeline for linking clinical data to socioeconomic variables and indices obtained from the American Community Survey (ACS). While these data are location-based, I provide a method for storing them in a HIPAA-compliant fashion that does not identify a patient’s location.
While full and efficient replication of all clinical studies remains a future goal, the replication demonstrated here, together with the initial development of a computable phenotype for pregnancy and the incorporation of location-based data into a de-identified data warehouse, shows how EHR data and a research infrastructure can be used to facilitate this effort.
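The de-identified linkage the abstract describes can be sketched roughly as follows: attach an area-level ACS index during staging, then drop the geographic identifier before the record enters the warehouse. This is an illustrative sketch only; all names (`acs_index_by_tract`, `census_tract`, `ses_index`) and values are hypothetical, not the actual pipeline.

```python
# ACS-derived socioeconomic index keyed by census tract. This lookup
# table lives outside the de-identified warehouse.
acs_index_by_tract = {
    "31055000100": 0.82,
    "31055000200": 0.35,
}

# Patient geocodes exist only in the identified staging area.
patients_identified = [
    {"patient_id": "P001", "census_tract": "31055000100"},
    {"patient_id": "P002", "census_tract": "31055000200"},
]

def deidentify_with_acs(patients, index_by_tract):
    """Attach the ACS index, then drop the location identifier so the
    de-identified warehouse stores only the index value."""
    out = []
    for p in patients:
        out.append({
            "patient_id": p["patient_id"],
            "ses_index": index_by_tract.get(p["census_tract"]),
            # census_tract is intentionally NOT carried forward
        })
    return out

deidentified = deidentify_with_acs(patients_identified, acs_index_by_tract)
```

The point of the design is that the stored record carries only the derived index, so the warehouse never holds a field that locates the patient.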

    Systematizing FAIR research data management in biomedical research projects: a data life cycle approach

    Biomedical researchers face data management challenges brought on by a new generation of data accompanying the advent of translational medicine research. These challenges are further complicated by recent calls for data re-use and long-term stewardship, spearheaded by the FAIR principles initiative. As a result, there is increasingly widespread recognition that advancing biomedical science depends on applying data science to manage and utilize highly diverse and complex data in ways that give it context, meaning, and longevity beyond its initial purpose. However, current methods and practices in biomedical informatics continue to adopt a traditional linear view of the informatics process (collect, store, and analyse), focusing primarily on the challenges of data integration and analysis, which pertain to only part of the overall life cycle of research data. The aim of this research is to facilitate the adoption and integration of data management practices into the research life cycle of biomedical projects, thus improving their capability to solve the data management challenges they face throughout the course of their research work. To achieve this aim, this thesis takes a data life cycle approach to define and develop a systematic methodology and framework for the systematization of FAIR data management in biomedical research projects. The overarching contribution of this research is a data-state life cycle model for research data management in biomedical translational research projects. This model provides insight into the dynamics between 1) the purpose of a research-driven data use case, 2) the data requirements that render data in a state fit for purpose, 3) the data management functions that prepare and act upon data, and 4) the resulting state of data that is fit to serve the use case.
This insight led to the development of a FAIR data management framework, another contribution of this thesis. The framework provides data managers with the groundwork, including the data models, resources, and capabilities, needed to build a FAIR data management environment for managing data during the operational stages of a biomedical research project. An exemplary implementation of this architecture (PlatformTM) was developed and validated on real-world research datasets produced by collaborative research programs funded by the Innovative Medicines Initiative (IMI): BioVacSafe, eTRIKS, and FAIRplus.
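The four FAIR pillars the framework builds on can be made concrete with a minimal metadata record. This is a generic sketch under assumed field names, not the PlatformTM schema; the identifier, URL, and license values are invented for illustration.

```python
import json

def fair_metadata(identifier, title, fmt, license_, provenance):
    """Bundle the minimal elements the FAIR principles call for:
    a persistent identifier (Findable), an access route (Accessible),
    a standard format (Interoperable), and license plus provenance
    (Reusable)."""
    return {
        "identifier": identifier,   # F: globally unique, persistent
        "title": title,             # F: rich, searchable metadata
        "access_url": f"https://example.org/datasets/{identifier}",  # A
        "format": fmt,              # I: standard, open format
        "license": license_,        # R: clear terms of use
        "provenance": provenance,   # R: origin and processing history
    }

record = fair_metadata(
    "doi:10.1234/demo",
    "Cohort vaccination responses",
    "text/csv",
    "CC-BY-4.0",
    {"derived_from": "raw assay export", "version": 2},
)
print(json.dumps(record, indent=2))
```

A record like this is what a data manager would attach to each data state in the life cycle so that the dataset remains findable and reusable after the project ends.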

    Facilitating and Enhancing Biomedical Knowledge Translation: An in Silico Approach to Patient-centered Pharmacogenomic Outcomes Research

    Current research paradigms such as traditional randomized controlled trials mostly rely on relatively narrow efficacy data, which results in high internal validity but low external validity. Given this, and the need to address many complex real-world healthcare questions in short periods of time, alternative research designs and approaches should be considered in translational research. In silico modeling studies, along with longitudinal observational studies, are considered appropriate and feasible means to address the slow pace of translational research. Accordingly, there is a need for an approach that tests newly discovered genetic tests via an in silico enhanced translational research model (iS-TR) to conduct patient-centered outcomes research and comparative effectiveness research (PCOR CER) studies. In this dissertation, it was hypothesized that retrospective EMR analysis, followed by mathematical modeling and simulation-based prediction, could facilitate and accelerate the process of generating and translating pharmacogenomic knowledge on the comparative effectiveness of anticoagulation treatment plans tailored to well-defined target populations, eventually decreasing overall adverse risk and improving individual and population outcomes. To test this hypothesis, a simulation modeling framework (iS-TR) was proposed that takes advantage of longitudinal electronic medical records (EMRs) to provide an effective approach for translating pharmacogenomic anticoagulation knowledge and conducting PCOR CER studies. The accuracy of the model was demonstrated by reproducing the outcomes of two major randomized clinical trials for individualizing warfarin dosing. A substantial hospital healthcare use case demonstrating the value of iS-TR when addressing real-world anticoagulation PCOR CER challenges was also presented.
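The general shape of such an in silico comparison can be sketched as a virtual-cohort simulation: draw outcomes for the same cohort under two treatment strategies and compare event rates. The event probabilities below are invented for illustration and carry no clinical meaning; this is not the iS-TR model itself.

```python
import random

def simulate_cohort(n_patients, event_prob, seed):
    """Count adverse events in a virtual cohort where each patient
    experiences an event with a fixed probability."""
    rng = random.Random(seed)  # seeded for reproducibility
    return sum(rng.random() < event_prob for _ in range(n_patients))

n = 10_000
# Same seed for both arms: each virtual patient gets the same draw,
# so the arms differ only in the assumed event probability.
events_standard = simulate_cohort(n, event_prob=0.12, seed=1)  # hypothetical standard dosing
events_tailored = simulate_cohort(n, event_prob=0.08, seed=1)  # hypothetical genotype-guided dosing
risk_difference = (events_standard - events_tailored) / n
```

A real framework would replace the fixed probabilities with patient-level models fitted to longitudinal EMR data, which is what allows it to reproduce trial outcomes.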

    Secondary use of Structured Electronic Health Records Data: From Observational Studies to Deep Learning-based Predictive Modeling

    With the wide adoption of electronic health records (EHRs), researchers, as well as large healthcare organizations, governmental institutions, and insurance and pharmaceutical companies, have been interested in leveraging this rich clinical data source to extract clinical evidence and develop predictive algorithms. Large vendors have been able to compile structured EHR data from sites all over the United States, de-identify these data, and make them available to data science researchers in a more usable format. For this dissertation, we leveraged one of the earliest and largest secondary EHR data sources and conducted three studies of increasing scope. In the first study, which was of limited scope, we conducted a retrospective observational study to compare the effect of three drugs in a specific population of approximately 3,000 patients. Using a novel statistical method, we found evidence that selecting phenylephrine as the primary vasopressor to induce hypertension for the management of nontraumatic subarachnoid hemorrhage is associated with better outcomes than selecting norepinephrine or dopamine. In the second study, we widened our scope, using a cohort of more than 100,000 patients to train generalizable models for risk prediction of specific clinical events, such as heart failure in diabetes patients or pancreatic cancer. In this study, we found that recurrent neural network-based predictive models trained on expressive terminologies, which preserve a high level of granularity, achieve better prediction performance than baseline methods such as logistic regression. Finally, we widened our scope again to train Med-BERT, a foundation model, on the diagnosis data of more than 20 million patients. Med-BERT was found to improve prediction performance on downstream tasks with small sample sizes, which would otherwise limit the model's ability to learn good representations.
In conclusion, we found that useful information can be extracted from secondary EHR data and that helpful deep learning-based predictive models can be trained on it. Given the limitations of secondary EHR data, and considering that the data were originally collected for administrative rather than research purposes, the findings need clinical validation. Clinical trials are therefore warranted to further validate any new evidence extracted from such data sources before clinical practice guidelines are updated. The implementability of the developed predictive models, which are in an early development phase, also warrants further evaluation.
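The input side of a BERT-style model over structured EHR data can be illustrated with plain Python: build a vocabulary over diagnosis codes, add special tokens, and flatten a patient's visits into a token-id sequence. The codes and helper names below are illustrative assumptions, not the actual Med-BERT preprocessing.

```python
SPECIALS = ["[PAD]", "[CLS]", "[SEP]", "[MASK]"]

def build_vocab(patient_visits):
    """Vocabulary over all diagnosis codes seen in the corpus,
    with BERT-style special tokens occupying the first ids."""
    codes = sorted({c for visits in patient_visits for v in visits for c in v})
    return {tok: i for i, tok in enumerate(SPECIALS + codes)}

def encode(visits, vocab):
    """[CLS] code ... [SEP] per visit, mirroring BERT-style inputs
    where [SEP] marks visit boundaries."""
    ids = [vocab["[CLS]"]]
    for visit in visits:
        ids += [vocab[c] for c in visit] + [vocab["[SEP]"]]
    return ids

# One hypothetical patient: two visits with ICD-10 codes.
patients = [[["E11.9", "I10"], ["N18.3"]]]
vocab = build_vocab(patients)
token_ids = encode(patients[0], vocab)
```

Sequences like `token_ids` are what a transformer would consume during masked-code pretraining; fine-tuning on a small downstream cohort then starts from the pretrained representations rather than from scratch.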

    Addendum to Informatics for Health 2017: Advancing both science and practice

    This article presents presentation and poster abstracts that were mistakenly omitted from the original publication.

    Comparative study of healthcare messaging standards for interoperability in ehealth systems

    Advances in information and communication technology have created the field of "health informatics," which amalgamates healthcare, information technology, and business. The use of information systems in healthcare organisations dates back to the 1960s; however, the use of technology to manage healthcare records, referred to as Electronic Medical Records (EMR), has surged since the 1990s (Net-Health, 2017) due to advancements in internet and web technologies. An Electronic Medical Record (EMR), sometimes referred to as a Personal Health Record (PHR), contains the patient’s medical history, allergy information, immunisation status, medication, radiology images, and other relevant medical and billing information. There are a number of benefits for the healthcare industry in sharing the data recorded in EMR and PHR systems between medical institutions (AbuKhousa et al., 2012). These benefits include convenience for patients and clinicians, cost-effective healthcare solutions, higher quality of care, relief of resource shortages, and the collection of large volumes of data for research and educational needs. My Health Record (MyHR) is a major project funded by the Australian government, which aims to have all health data of the Australian population stored in digital format, allowing clinicians access to patient data at the point of care. Prior to 2015, MyHR was known as the Personally Controlled Electronic Health Record (PCEHR). Although the Australian government has taken consistent initiatives, there has been a significant delay (Pearce and Haikerwal, 2010) in implementing eHealth projects and related services. While this delay is caused by many factors, interoperability has been identified as the main problem (Benson and Grieve, 2016c) impeding delivery of the project.
To discover the current interoperability challenges in the Australian healthcare industry, this comparative study examines Health Level Seven (HL7) messaging models: HL7 V2, V3, and FHIR (Fast Healthcare Interoperability Resources). Interoperability, security, and privacy are the main elements compared. In addition, a case study conducted in NSW hospitals was used to understand the extent to which these messaging standards are used in the healthcare sector. Predominantly, the project applied the comparative study method to the different HL7 messages and identified the messaging standard best suited to cover the interoperability, security, and privacy requirements of electronic health records. Issues related to practical implementation, changeover, and training requirements for healthcare professionals are also discussed.
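The contrast the study draws can be seen in the message shapes themselves: HL7 V2 uses positional, pipe-delimited segments, while FHIR uses self-describing JSON resources. The sketch below parses a V2 PID segment and builds the corresponding FHIR Patient resource; the sample values are invented, and the mapping covers only a few PID fields for illustration.

```python
import json

# A pipe-delimited HL7 V2 patient-identification (PID) segment.
hl7_v2_pid = "PID|1||12345^^^Hospital^MR||Smith^John||19800101|M"

def pid_to_fhir_patient(segment):
    """Map a few PID fields to their FHIR Patient equivalents."""
    fields = segment.split("|")
    family, given = fields[5].split("^")[:2]            # PID-5: patient name
    return {
        "resourceType": "Patient",
        "identifier": [{"value": fields[3].split("^")[0]}],  # PID-3: id
        "name": [{"family": family, "given": [given]}],
        # PID-7: date of birth, YYYYMMDD -> ISO 8601
        "birthDate": f"{fields[7][:4]}-{fields[7][4:6]}-{fields[7][6:]}",
        # PID-8: administrative sex -> FHIR gender code
        "gender": {"M": "male", "F": "female"}.get(fields[8], "unknown"),
    }

patient = pid_to_fhir_patient(hl7_v2_pid)
print(json.dumps(patient, indent=2))
```

The V2 form is compact but depends on both parties agreeing on field positions; the FHIR form is larger but carries its own field names, which is central to the interoperability argument the study makes.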

    Intégration de ressources en recherche translationnelle : une approche unificatrice en support des systèmes de santé "apprenants"

    Learning health systems (LHS) are gradually emerging and propose a complementary approach to the challenges of translational research by closely coupling health care delivery, research, and knowledge translation. To support coherent knowledge sharing, the system needs to rely on an integrated and efficient data integration platform. The framework and its theoretical foundations presented here aim to address this challenge. Data integration approaches are analysed in light of the requirements derived from LHS activities, and data mediation emerges as the one most suited to an LHS. The semantics of clinical data found in biomedical sources can only be fully derived by taking into account not only information from the structural models (field X of table Y) but also the terminological information (e.g. the International Classification of Diseases, 10th revision) used to encode facts. The unified framework proposed here takes this into account. The platform has been implemented and tested in the context of TRANSFoRm, a European project funded by the European Commission that aims at developing an LHS encompassing clinical activities in primary care. The mediation model developed for the TRANSFoRm project, the Clinical Data Integration Model (CDIM), is presented and discussed, along with results from TRANSFoRm use cases. These illustrate how a unified data sharing platform can support and enhance prospective research activities in the context of an LHS. In the end, the unified mediation framework presented here provides sufficient expressiveness for the needs of TRANSFoRm; it is flexible and modular, and the CDIM mediation model supports the requirements of a primary care LHS.

    Front-Line Physicians' Satisfaction with Information Systems in Hospitals

    Day-to-day operations management in hospital units is difficult due to continuously varying situations, the several actors involved, and the vast number of information systems in use. The aim of this study was to describe front-line physicians' satisfaction with the existing information systems needed to support day-to-day operations management in hospitals. A cross-sectional survey was used, and data chosen by stratified random sampling were collected in nine hospitals. Data were analyzed with descriptive and inferential statistical methods. The response rate was 65% (n = 111). The physicians reported that information systems support their decision making to some extent, but that the systems neither improve access to information nor are tailored for physicians. The respondents also reported that they need to use several information systems to support decision making and would prefer a single system for accessing important information. Improved information access would better support physicians' decision making and has the potential to improve the quality of decisions and speed up the decision-making process.
