103 research outputs found

    TreatmentPatterns: An R package to facilitate the standardized development and analysis of treatment patterns across disease domains

    Background and objectives: There is increasing interest in using real-world data to illustrate how patients with specific medical conditions are treated in practice. Insight into current treatment practices helps to improve and tailor patient care, but is often held back by a lack of data interoperability and the high level of resources required. We aimed to provide an easy-to-use tool that overcomes these barriers to support the standardized development and analysis of treatment patterns for a wide variety of medical conditions. Methods: We formally defined the process of constructing treatment pathways and implemented this in an open-source R package, TreatmentPatterns (https://github.com/mi-erasmusmc/TreatmentPatterns), to enable reproducible and timely analyses of treatment patterns. Results: The developed package supports the analysis of treatment patterns in a study population of interest. We demonstrate the functionality of the package by analyzing the treatment patterns of three common chronic diseases (type II diabetes mellitus, hypertension, and depression) in the Dutch Integrated Primary Care Information (IPCI) database. Conclusion: TreatmentPatterns is a tool that makes the analysis of treatment patterns more accessible, more standardized, and easier to interpret. We hope it thereby contributes to the accumulation of knowledge on real-world treatment patterns across disease domains. We encourage researchers to further adjust the R package and add custom analyses based on their research needs.
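
    To illustrate the pathway-construction step this abstract describes, here is a minimal Python sketch that orders each patient's treatment events chronologically and counts how often each ordered pathway occurs. The event table is a hypothetical simplification; the actual TreatmentPatterns package operates on OMOP-CDM cohorts and is far more configurable.

    ```python
    from collections import Counter

    # Hypothetical event table: (patient_id, drug, start_date).
    events = [
        (1, "metformin", "2015-01-10"),
        (1, "sulfonylurea", "2016-03-02"),
        (2, "metformin", "2014-07-21"),
        (2, "insulin", "2017-11-05"),
        (3, "metformin", "2015-09-14"),
    ]

    # Group each patient's treatments in chronological order.
    pathways = {}
    for patient, drug, start in sorted(events, key=lambda e: (e[0], e[2])):
        pathways.setdefault(patient, []).append(drug)

    # Count how often each ordered pathway occurs in the study population.
    counts = Counter(" -> ".join(seq) for seq in pathways.values())
    for pathway, n in counts.most_common():
        print(f"{pathway}: {n} patient(s)")
    ```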

    90-day all-cause mortality can be predicted following a total knee replacement: an international network study to develop and validate a prediction model

    Purpose: The purpose of this study was to develop and validate a prediction model for 90-day mortality following a total knee replacement (TKR). TKR is a safe and cost-effective surgical procedure for treating severe knee osteoarthritis (OA). Although complications following surgery are rare, prediction tools could help identify high-risk patients who could be targeted with preventative interventions. The aim was to develop and validate a simple model to help inform treatment choices. Methods: A mortality prediction model for knee OA patients following TKR was developed and externally validated using a US claims database and a UK general practice database. The target population consisted of patients undergoing a primary TKR for knee OA, aged ≥ 40 years and registered for ≥ 1 year before surgery. LASSO logistic regression models were developed for post-operative (90-day) mortality. A second mortality model was developed with a reduced feature set to increase interpretability and usability. Results: A total of 193,615 patients were included, with 40,950 in The Health Improvement Network (THIN) database and 152,665 in Optum. The full model predicting 90-day mortality yielded an AUROC of 0.78 when trained in Optum and 0.70 when externally validated on THIN. The 12-variable model achieved an internal AUROC of 0.77 and an external AUROC of 0.71 in THIN. Conclusions: A simple prediction model based on sex, age, and 10 comorbidities was developed that can identify patients at high risk of short-term mortality following TKR, demonstrating good, robust performance. The 12-feature mortality model is easily implemented, and its performance suggests it could be used to inform evidence-based shared decision-making prior to surgery and to target prophylaxis for those at high risk. Level of evidence: III.
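
    The modeling approach named here (LASSO logistic regression evaluated by AUROC) can be sketched in a few lines of Python. The data below are synthetic stand-ins; the study's real features were age, sex, and comorbidities drawn from claims and GP records.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic cohort: 12 candidate predictors, only two truly predictive.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 12))
    logit = -4 + 0.8 * X[:, 0] + 0.5 * X[:, 1]
    y = rng.random(5000) < 1 / (1 + np.exp(-logit))  # rare outcome, like 90-day death

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The L1 (LASSO) penalty shrinks uninformative coefficients to exactly
    # zero, yielding the kind of compact, interpretable model the paper reports.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"AUROC: {auc:.2f}, non-zero coefficients: {np.sum(model.coef_ != 0)}")
    ```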

    Using the Data Quality Dashboard to improve the EHDEN network

    Federated networks of observational health databases have the potential to be a rich resource to inform clinical practice and regulatory decision making. However, the lack of standard data quality processes makes it difficult to know whether these data are research ready. The EHDEN COVID-19 Rapid Collaboration Call presented the opportunity to assess how the newly developed open-source tool Data Quality Dashboard (DQD) informs the quality of data in a federated network. Fifteen Data Partners (DPs) from 10 different countries worked with the EHDEN taskforce to map their data to the OMOP CDM. Throughout the process, at least two DQD results were collected and compared for each DP. All DPs showed an improvement in their data quality between the first and last run of the DQD. The DQD excelled at helping DPs identify and fix conformance issues but showed less of an impact on completeness and plausibility checks. This is the first study to apply the DQD to multiple, disparate databases across a network. While study-specific checks should still be run, we recommend that all data holders converting their data to the OMOP CDM use the DQD, as it ensures conformance to the model specifications and that a database meets a baseline level of completeness and plausibility for use in research.
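
    A toy Python illustration of the three check categories the abstract mentions (conformance, completeness, plausibility) may help; the rows and thresholds below are invented, and the actual DQD package executes thousands of such checks as SQL against an OMOP CDM database.

    ```python
    # Hypothetical OMOP-style person rows.
    rows = [
        {"person_id": 1, "year_of_birth": 1950, "gender_concept_id": 8507},
        {"person_id": 2, "year_of_birth": None, "gender_concept_id": 8532},
        {"person_id": 3, "year_of_birth": 2190, "gender_concept_id": 8507},
    ]

    def conformance(row):   # value has the required type
        return row["year_of_birth"] is None or isinstance(row["year_of_birth"], int)

    def completeness(row):  # required value is present
        return row["year_of_birth"] is not None

    def plausibility(row):  # value is biologically plausible
        return row["year_of_birth"] is None or 1880 <= row["year_of_birth"] <= 2025

    for check in (conformance, completeness, plausibility):
        failed = sum(not check(r) for r in rows)
        print(f"{check.__name__}: {failed}/{len(rows)} rows failed")
    ```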

    Blood pressure measurements for diagnosing hypertension in primary care: room for improvement

    Background: In the adult population, about 50% have hypertension, a risk factor for cardiovascular disease and subsequent premature death. Little is known about the quality of the methods used to diagnose hypertension in primary care. Objectives: The objective was to assess the frequency of use of recognized methods to establish a diagnosis of hypertension and, specifically for office blood pressure measurement (OBPM), whether three distinct measurements were taken and whether the blood pressure levels were interpreted correctly. Methods: A retrospective population-based cohort study using electronic medical records of patients aged between 40 and 70 years who visited their general practitioner (GP) with new-onset hypertension in the years 2012, 2016, 2019, and 2020. A visual chart review of the electronic medical records was used to assess the methods employed to diagnose hypertension in a random sample of 500 patients. The blood pressure measurement method was considered complete if three or more valid office blood pressure measurements (OBPM) were performed, or if home-based blood pressure measurements (HBPM), the office-based 30-minute method (OBP30), or 24-hour ambulatory blood pressure measurements (24H-ABPM) were used. Results: In all study years, OBPM was the most frequently used method to diagnose new-onset hypertension. The OBP30 method was used in 0.4% (2012), 4.2% (2016), 10.6% (2019), and 9.8% (2020) of patients; 24H-ABPM in 16.0%, 22.2%, 17.2%, and 19.0% of patients; and HBPM in 5.4%, 8.4%, 7.6%, and 7.8% of patients, respectively. A diagnosis of hypertension based on only one or two office measurements occurred in 85.2% (2012), 87.9% (2016), 94.4% (2019), and 96.8% (2020) of all patients with OBPM. In cases of incomplete measurement and incorrect interpretation, medication was still started in 64% of cases in 2012, 56% (2016), 60% (2019), and 73% (2020). Conclusion: OBPM is still the most frequently used method to diagnose hypertension in primary care. The diagnosis was often incomplete or misinterpreted using incorrect cut-off levels. A small improvement occurred between 2012 and 2016, but no further progress was seen in 2019 or 2020. If hypertension is inappropriately diagnosed, it may result in undertreatment or in prolonged, unnecessary treatment. There is room for improvement in the general practice setting.
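
    The study's completeness rule is simple enough to encode directly; this Python sketch does so, with a hypothetical record layout (the paper works from chart review, not structured flags).

    ```python
    # A workup counts as complete with >= 3 valid office measurements (OBPM),
    # or with any use of HBPM, OBP30, or 24H-ABPM.
    def measurement_complete(n_obpm: int, used_hbpm: bool,
                             used_obp30: bool, used_abpm: bool) -> bool:
        return n_obpm >= 3 or used_hbpm or used_obp30 or used_abpm

    # Diagnosed after only two office readings -> incomplete.
    print(measurement_complete(2, False, False, False))  # False
    # Assessed with 24-hour ambulatory monitoring -> complete.
    print(measurement_complete(1, False, False, True))   # True
    ```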

    Electrocardiographic Criteria for Left Ventricular Hypertrophy in Children

    Previous studies to determine the sensitivity of the electrocardiogram (ECG) for left ventricular hypertrophy (LVH) in children had their imperfections: they were not done on an unselected hospital population, several criteria used in adults were not applied to children, and obsolete limits of normal for the ECG parameters were used. Furthermore, left ventricular mass (LVM) was taken as the reference standard for LVH, with no regard for other clinical evidence. The study population consisted of 832 children from whom a 12-lead ECG and an M-mode echocardiogram were taken on the same day. The validity of the ECG criteria was judged on the basis of an abnormal LVM index, either alone or in combination with other clinical evidence. The ECG criteria were based on recently established age-dependent normal limits. At 95% specificity, the ECG criteria had low sensitivities (<25%) when an elevated LVM index was taken as the reference for LVH. When clinical evidence was also taken into account, sensitivities improved considerably (<43%). Sensitivities could be further improved when ECG parameters were combined. The sensitivity of the pediatric ECG in detecting LVH is low but depends strongly on the definition of the reference used for validation.
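
    The notion of "sensitivity at 95% specificity" used here can be made concrete with a short Python sketch: fix the criterion's cut-off on the non-LVH group, then measure detection in the LVH group. The parameter distributions below are synthetic; the paper's real criteria use age-dependent normal limits.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    normal = rng.normal(1.0, 0.3, 1000)   # ECG parameter, children without LVH
    lvh = rng.normal(1.4, 0.4, 200)       # same parameter, children with LVH

    # The 95th percentile of the normal group yields ~95% specificity
    # by construction; sensitivity is then the LVH detection rate.
    threshold = np.percentile(normal, 95)
    sensitivity = np.mean(lvh > threshold)
    print(f"threshold={threshold:.2f}, sensitivity={sensitivity:.1%}")
    ```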

    TASKA: A modular task management system to support health research studies

    Background: Many healthcare databases have been routinely collected over the past decades to support clinical practice and administrative services. However, their secondary use for research is often hindered by restrictive governance rules. Furthermore, health research studies typically involve many participants with complementary roles and responsibilities, which require proper process management. Results: From a wide set of requirements collected from European clinical studies, we developed TASKA, a task/workflow management system that helps to cope with the socio-technical issues arising in multidisciplinary and multi-setting clinical studies. The system is based on a two-layered architecture: 1) the backend engine, which follows a micro-kernel pattern for extensibility and exposes RESTful web services for decoupling from the web clients; and 2) the client, entirely developed in ReactJS, allowing the construction and management of studies through a graphical interface. TASKA is a GNU GPL open-source project, accessible at https://github.com/bioinformatics-ua/taska. A demo version is also available at https://bioinformatics.ua.pt/taska. Conclusions: The system is currently used to support feasibility studies across several institutions and countries, in the context of the European Medical Information Framework (EMIF) project. The tool was shown to simplify the set-up of health studies, the management of participants and their roles, and the overall governance process.
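
    The decoupling that the two-layer architecture enables means any HTTP client can drive the backend through its RESTful services. The Python sketch below shows the idea only; the endpoint path and payload fields are hypothetical, not TASKA's documented API.

    ```python
    import requests

    # Hypothetical base URL and endpoint, for illustration of REST decoupling.
    BASE = "https://bioinformatics.ua.pt/taska/api"

    # Create a task via a plain HTTP POST, exactly as the ReactJS client would.
    resp = requests.post(f"{BASE}/tasks", json={
        "title": "Extract cohort counts",
        "assignee": "data-partner-3",
        "deadline": "2020-06-30",
    })
    resp.raise_for_status()
    print(resp.json())
    ```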

    Trends in the conduct and reporting of clinical prediction model development and validation: a systematic review

    OBJECTIVES: This systematic review aims to provide further insights into the conduct and reporting of clinical prediction model development and validation over time. We focus on assessing the reporting of information necessary to enable external validation by other investigators. MATERIALS AND METHODS: We searched Embase, Medline, Web of Science, the Cochrane Library, and Google Scholar to identify studies that developed 1 or more multivariable prognostic prediction models using electronic health record (EHR) data published in the period 2009-2019. RESULTS: We identified 422 studies that developed a total of 579 clinical prediction models using EHR data. We observed a steep increase over the years in the number of developed models. The percentage of models externally validated in the same paper remained at around 10%. Throughout 2009-2019, for both the target population and the outcome definitions, code lists were provided for less than 20% of the models. For about half of the models that were developed using regression analysis, the final model was not completely presented. DISCUSSION: Overall, we observed limited improvement over time in the conduct and reporting of clinical prediction model development and validation. In particular, the prediction problem definition was often not clearly reported, and the final model was often not completely presented. CONCLUSION: Improvement in the reporting of information necessary to enable external validation by other investigators is still urgently needed to increase clinical adoption of developed models.

    Finding a short and accurate decision rule in disjunctive normal form by exhaustive search

    Greedy approaches suffer from a restricted search space, which can lead to suboptimal classifiers in terms of performance and classifier size. This study discusses exhaustive search as an alternative to greedy search for learning short and accurate decision rules. The Exhaustive Procedure for LOgic-Rule Extraction (EXPLORE) algorithm is presented, which induces decision rules in disjunctive normal form (DNF) in a systematic and efficient manner. We propose a method based on subsumption to reduce the number of values considered for instantiation in the literals, taking the relational operator into account without loss of performance. Furthermore, we describe a branch-and-bound approach that makes optimal use of user-defined performance constraints. To improve generalizability, we use a validation set to determine the optimal length of the DNF rule. The performance and size of the DNF rules induced by EXPLORE are compared to those of eight well-known rule learners. Our results show that an exhaustive approach to rule learning in DNF results in significantly smaller classifiers than those of the other rule learners, while securing comparable or even better performance. Clearly, exhaustive search is computationally intensive and may not always be feasible. Nevertheless, based on this study, we believe that exhaustive search should be considered an alternative to greedy search in many problems.
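
    A toy Python sketch of exhaustive DNF rule search follows: it enumerates every disjunction of up to two single-literal terms over a small set of threshold literals and keeps the most accurate rule. EXPLORE itself adds the subsumption-based pruning and branch-and-bound the abstract describes, which this sketch omits; the data and thresholds are invented.

    ```python
    from itertools import combinations

    data = [  # (feature vector, label)
        ((1.0, 3.0), 1), ((2.0, 1.0), 1), ((0.5, 0.5), 0),
        ((1.5, 2.5), 1), ((0.2, 1.0), 0), ((0.8, 0.4), 0),
    ]

    # Candidate literals: feature i compared against a threshold t.
    literals = [(i, op, t) for i in (0, 1)
                for op in ("<=", ">")
                for t in (0.5, 1.0, 2.0)]

    def holds(lit, x):
        i, op, t = lit
        return x[i] <= t if op == "<=" else x[i] > t

    def accuracy(rule):  # rule = tuple of literals OR-ed together (a DNF)
        return sum(any(holds(l, x) for l in rule) == y
                   for x, y in data) / len(data)

    # Exhaustive search over all rules of length 1 or 2.
    best = max((rule for k in (1, 2) for rule in combinations(literals, k)),
               key=accuracy)
    print("best rule:", " OR ".join(f"x{i} {op} {t}" for i, op, t in best),
          f"(accuracy {accuracy(best):.2f})")
    ```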