59 research outputs found

    Transforming and evaluating electronic health record disease phenotyping algorithms using the OMOP common data model: a case study in heart failure

    Objective: The aim of the study was to transform a resource of linked electronic health records (EHR) to the OMOP common data model (CDM) and to evaluate the process in terms of syntactic and semantic consistency and quality when implementing disease and risk factor phenotyping algorithms. Materials and Methods: Using heart failure (HF) as an exemplar, we mapped three national EHR sources (Clinical Practice Research Datalink, Hospital Episode Statistics Admitted Patient Care, Office for National Statistics) to the OMOP CDM 5.2. We compared the original and CDM HF patient populations by calculating and presenting descriptive statistics of demographics, related comorbidities, and relevant clinical biomarkers. Results: We identified a cohort of 502 536 patients with incident and prevalent HF and converted 1 099 195 384 rows of data from 216 581 914 encounters across the three EHR sources to the OMOP CDM. The largest share (65%) of unmapped events related to medication prescriptions in primary care. The average coverage of source vocabularies was >98%, with the exception of laboratory tests recorded in primary care. The raw and transformed data were similar in terms of demographics and comorbidities, with the largest difference observed being 3.78% in the prevalence of chronic obstructive pulmonary disease (COPD). Conclusion: Our study demonstrated that the OMOP CDM can successfully be applied to convert EHRs linked across multiple healthcare settings and to represent phenotyping algorithms spanning multiple sources. As in previous research, challenges in mapping primary care prescriptions and laboratory measurements persist and require further work. The use of the OMOP CDM with national UK EHR is a valuable research tool that can enable large-scale reproducible observational research.
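    The source-vs-CDM consistency check described above boils down to comparing condition prevalence between the two cohort representations. A minimal sketch of that comparison follows; the mini-cohorts and their values are entirely hypothetical, and only the comparison logic reflects the abstract:

    ```python
    def prevalence(cohort, condition):
        """Fraction of patients in a cohort carrying the given condition flag."""
        return sum(1 for p in cohort if condition in p["conditions"]) / len(cohort)

    # Hypothetical mini-cohorts standing in for the source EHR and the OMOP CDM views.
    source = [{"conditions": {"COPD"}}, {"conditions": set()}, {"conditions": {"COPD"}}]
    cdm    = [{"conditions": {"COPD"}}, {"conditions": set()}, {"conditions": set()}]

    # Absolute difference in prevalence, in percentage points (the abstract's 3.78%
    # for COPD is this quantity computed on the real cohorts).
    diff = abs(prevalence(source, "COPD") - prevalence(cdm, "COPD")) * 100
    print(f"COPD prevalence difference: {diff:.2f} percentage points")
    ```

    In practice the same comparison would be repeated per comorbidity and biomarker, as the abstract describes.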

    Satisfaction of patients on chronic haemodialysis and peritoneal dialysis

    BACKGROUND: In contrast to quality of life, patient satisfaction on chronic haemodialysis (HD) and peritoneal dialysis (PD) has only rarely been studied. PATIENTS AND METHODS: All chronic HD and PD patients of the 19 centres located in western Switzerland were asked to complete a specific questionnaire assessing dialysis centre characteristics, treatment modalities, and information received before and during dialysis treatment. Comparison of satisfaction between PD and HD was carried out on the patients in the nine centres offering both treatment modalities. RESULTS: Of the 558 questionnaires distributed to chronic HD patients, 455 were returned (response rate 82%). Fifty of 64 PD patients (78%) returned the questionnaire. The two groups were similar in age, gender, and duration of dialysis treatment. Completion rates were >90% for a majority of questions, with the lowest rate for information on sexuality (49% in HD and 54% in PD, respectively). The lowest scores were recorded for information received about complications and costs of dialysis, and about the impact of end-stage kidney disease on sexuality. Satisfaction was lower in anonymous questionnaires. Satisfaction of PD patients was significantly better for 50% of the questions, particularly session tolerance (p<0.001), information about dialysis sessions (p=0.007), and complications (p=0.006). CONCLUSIONS: PD patients were on average more satisfied with their treatment than HD patients. Satisfaction could be improved with more information about potential adverse treatment effects.

    Global model simulations of air pollution during the 2003 European heat wave

    Three global Chemistry Transport Models – MOZART, MOCAGE, and TM5 – as well as MOZART coupled to the IFS meteorological model, including assimilation of ozone (O3) and carbon monoxide (CO) satellite column retrievals, have been compared to surface measurements and MOZAIC vertical profiles in the troposphere over Western/Central Europe for summer 2003. The models reproduce the meteorological features and the enhancement of pollution during the period 2–14 August, but not fully the ozone and CO mixing ratios measured during that episode. Modified normalised mean biases are around −25% (except ~5% for MOCAGE) in the case of ozone and from −80% to −30% for CO in the boundary layer above Frankfurt. The coupling and assimilation of CO columns from MOPITT overcome some of the deficiencies in the treatment of transport, chemistry, and emissions in MOZART, reducing the negative biases to around 20%. The high reactivity and small dry deposition velocities in MOCAGE seem to be responsible for the overestimation of O3 in this model. Results from sensitivity simulations indicate that an increase of the horizontal resolution to around 1°×1° and potential uncertainties in European anthropogenic emissions or in long-range transport of pollution cannot completely account for the underestimation of CO and O3 found for most models. A process-oriented TM5 sensitivity simulation in which soil wetness was reduced results in a decrease in dry deposition fluxes and a subsequent ozone increase larger than the ozone changes due to the previous sensitivity runs. However, this latest simulation still underestimates ozone during the heat wave and overestimates it outside that period. Most probably, a combination of the mentioned factors, together with underrepresented biogenic emissions in the models, uncertainties in the modelling of vertical/horizontal transport processes in the proximity of the boundary layer, and limitations of the chemistry schemes, is responsible for the underestimation of ozone (overestimation in the case of MOCAGE) and CO found in the models during this extreme pollution event.
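    The "modified normalised mean bias" quoted above is, in the usual convention for air-quality model evaluation (an assumption here; the abstract does not spell out the formula), MNMB = (2/N) Σᵢ (mᵢ − oᵢ)/(mᵢ + oᵢ), with model values mᵢ and observations oᵢ, reported as a percentage. A minimal sketch with hypothetical ozone values:

    ```python
    def mnmb(model, obs):
        """Modified normalised mean bias: symmetric in model/obs and bounded in [-2, 2]."""
        pairs = list(zip(model, obs))
        return 2.0 / len(pairs) * sum((m - o) / (m + o) for m, o in pairs)

    # Hypothetical ozone mixing ratios (ppb): the model underestimates observations,
    # so the MNMB comes out negative, as in the biases reported above.
    model_o3 = [40.0, 55.0, 60.0]
    obs_o3   = [60.0, 80.0, 90.0]
    print(f"MNMB: {mnmb(model_o3, obs_o3) * 100:.1f}%")
    ```

    Unlike a plain mean bias, this metric does not let a few large observed values dominate, which is why it is a common choice for comparing models with different concentration ranges.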

    Collaborative annotation of genes and proteins between UniProtKB/Swiss-Prot and dictyBase

    UniProtKB/Swiss-Prot, a curated protein database, and dictyBase, the Model Organism Database for Dictyostelium discoideum, have established a collaboration to improve data sharing. One of the major steps in this effort was the ‘Dicty annotation marathon’, a week-long exercise with 30 annotators aimed at achieving a major increase in the number of D. discoideum proteins represented in UniProtKB/Swiss-Prot. The marathon led to the annotation of over 1000 D. discoideum proteins in UniProtKB/Swiss-Prot. Concomitantly, a large number of updates were made in dictyBase concerning gene symbols, protein names, and gene models. This exercise demonstrates how UniProtKB/Swiss-Prot can work in very close cooperation with model organism databases and how the annotation of proteins can be accelerated through such collaborations.

    The UniProt-GO Annotation database in 2011

    The GO annotation dataset provided by the UniProt Consortium (GOA: http://www.ebi.ac.uk/GOA) is a comprehensive set of evidence-based associations between terms from the Gene Ontology resource and UniProtKB proteins. Currently supplying over 100 million annotations to 11 million proteins in more than 360 000 taxa, this resource has increased 2-fold over the last 2 years, has benefited from a wealth of checks to improve annotation correctness and consistency, and now supplies greater information content enabled by GO Consortium annotation format developments. Detailed, manual GO annotations obtained from the curation of peer-reviewed papers are directly contributed by all UniProt curators and supplemented with manual and electronic annotations from 36 model organism and domain-focused scientific resources. The inclusion of high-quality, automatic annotation predictions ensures the UniProt GO annotation dataset supplies functional information to a wide range of proteins, including those from poorly characterized, non-model organism species. UniProt GO annotations are freely available in a range of formats accessible by both file downloads and web-based views. In addition, the introduction of a new, normalized file format in 2010 has made for easier handling of the complete UniProt-GOA data set.
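    The normalized file format referred to above is the tab-separated GAF 2.0 annotation format. As a minimal sketch of reading its core fields (the field names and column order below follow the published GAF 2.0 layout of 17 columns, and the example line is invented for illustration):

    ```python
    # GAF 2.0 column order (17 tab-separated fields per annotation line).
    GAF_FIELDS = ["db", "db_object_id", "symbol", "qualifier", "go_id",
                  "reference", "evidence", "with_from", "aspect",
                  "name", "synonym", "type", "taxon", "date",
                  "assigned_by", "extension", "gene_product_form"]

    def parse_gaf_line(line):
        """Split one non-comment GAF 2.0 line into a field dict."""
        values = line.rstrip("\n").split("\t")
        return dict(zip(GAF_FIELDS, values))

    # A made-up annotation line: protein P12345 annotated to GO:0005524
    # with electronic (IEA) evidence.
    row = parse_gaf_line(
        "UniProtKB\tP12345\tEXMPL\t\tGO:0005524\tPMID:123\tIEA\t\tF\t\t\t"
        "protein\ttaxon:9606\t20110101\tUniProt\t\t"
    )
    print(row["go_id"], row["evidence"])
    ```

    A real reader would also skip `!`-prefixed comment lines and validate the column count before trusting the fields.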

    Systemizing Virtual Learning and Technologies by Managing Organizational Competency and Talents

    The article presents promising components and practices of virtual learning and technologies and discusses how systemization can be achieved through managing organizational competency and talents. The main goal is to suggest how technologies should be incorporated within an organization to improve the effectiveness of employees’ learning, performance, and development. For technology implementation and adoption, we also introduce models for examining organizational maturity levels and integrating technologies. We argue that virtual learning and technologies are fundamentally pressing HRD roles to change from experts of learning and development to work-solution partners leading and supporting the creation of a smart organization.

    Low incidence of SARS-CoV-2, risk factors of mortality and the course of illness in the French national cohort of dialysis patients


    Introducing PIONEER: a project to harness big data in prostate cancer research

    Prostate Cancer Diagnosis and Treatment Enhancement Through the Power of Big Data in Europe (PIONEER) is a European network of excellence for big data in prostate cancer, consisting of 32 private and public stakeholders from 9 countries across Europe. Launched by the Innovative Medicines Initiative 2 and part of the Big Data for Better Outcomes Programme (BD4BO), the overarching goal of PIONEER is to provide high-quality evidence on prostate cancer management by unlocking the potential of big data. The project has identified critical evidence gaps in prostate cancer care via a detailed prioritization exercise including all key stakeholders. By standardizing and integrating existing high-quality and multidisciplinary data sources from patients with prostate cancer across different stages of the disease, the resulting big data will be assembled into a single innovative data platform for research. Based on a unique set of methodologies, PIONEER aims to advance the field of prostate cancer care with a particular focus on improving prostate-cancer-related outcomes, health system efficiency by streamlining patient management, and the quality of health and social care delivered to all men with prostate cancer and their families worldwide. In this Perspectives article, the authors introduce the PIONEER project and describe its aims and plans for ultimately improving prostate cancer care through the use of big data.

    The Gene Ontology: enhancements for 2011

    The Gene Ontology (GO) (http://www.geneontology.org) is a community bioinformatics resource that represents gene product function through the use of structured, controlled vocabularies. The number of GO annotations of gene products has increased due to curation efforts among GO Consortium (GOC) groups, including focused literature-based annotation and ortholog-based functional inference. The GO ontologies continue to expand and improve as a result of targeted ontology development, including the introduction of computable logical definitions and the development of new tools for the streamlined addition of terms to the ontology. The GOC continues to support its user community through the use of e-mail lists, social media, and web-based resources.