Een Nieuwe Koekelt: the beating green heart of Ede
The gardeners of VAT-Ede (Vereniging Amateurtuinders) wanted to explore the possibilities of transforming the existing allotment-garden complex De Koekelt into a multifunctional garden park. Thanks to its location and size, De Koekelt offers unprecedented opportunities for realizing such a park. Through restructuring, the allotment site can be turned into a green zone where ecology and the environment are given room, where multiple forms of recreation are possible, and which connects more closely with its surroundings. A multifunctional garden park could well prove to be the green engine behind the restructuring of the entire Peppelensteeg area.
Electrocardiographic Criteria for Left Ventricular Hypertrophy in Children
Previous studies to determine the sensitivity of the electrocardiogram (ECG) for left ventricular hypertrophy (LVH) in children had important limitations: they were not done on an unselected hospital population, several criteria used in adults were not applied to children, and obsolete limits of normal for the ECG parameters were used. Furthermore, left ventricular mass (LVM) was taken as the reference standard for LVH, with no regard for other clinical evidence. The study population consisted of 832 children from whom a 12-lead ECG and an M-mode echocardiogram were taken on the same day. The validity of the ECG criteria was judged on the basis of an abnormal LVM index, either alone or in combination with other clinical evidence. The ECG criteria were based on recently established age-dependent normal limits. At 95% specificity, the ECG criteria had low sensitivities (<25%) when an elevated LVM index was taken as the reference for LVH. When clinical evidence was also taken into account, sensitivities improved considerably, though they remained below 43%. Sensitivities could be further improved when ECG parameters were combined. The sensitivity of the pediatric ECG in detecting LVH is low but depends strongly on the definition of the reference used for validation.
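As a minimal sketch of the kind of evaluation described here (not the study's actual criteria or data), a single ECG parameter can be scored at a fixed 95% specificity by thresholding it at the corresponding quantile of the non-LVH distribution; the function name and inputs below are hypothetical placeholders.

```python
import numpy as np

def sensitivity_at_specificity(param_lvh, param_normal, specificity=0.95):
    """Evaluate one ECG parameter as an LVH criterion at fixed specificity.

    param_lvh:    parameter values (e.g., an R-wave amplitude) for children
                  classified as having LVH by the chosen reference standard
    param_normal: values for children without LVH
    """
    # Threshold = the specificity-quantile of the non-LVH values, so that
    # ~95% of non-LVH children fall below it (true negatives).
    threshold = np.quantile(param_normal, specificity)
    # Sensitivity = fraction of LVH children at or above the threshold.
    return np.mean(np.asarray(param_lvh) >= threshold)
```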
Validation of automatic measurement of QT interval variability
Background Increased variability of beat-to-beat QT-interval durations on the electrocardiogram (ECG) has been associated with increased risk for fatal and non-fatal cardiac events. However, techniques for the measurement of QT variability (QTV) have not been validated, since a gold standard is not available. In this study, we propose a validation method and illustrate its use for the validation of two automatic QTV measurement techniques. Methods Our method generates artificial standard 12-lead ECGs based on the averaged P-QRS-T complexes from a variety of existing ECG signals, with simulated intrinsic (QT interval) and extrinsic (noise, baseline wander, signal length) variations. We quantified QTV by a commonly used measure, short-term QT variability (STV). Using 28,800 simulated ECGs, we assessed the performance of a conventional QTV measurement algorithm, resembling a manual QTV measurement approach, and a more advanced algorithm based on fiducial segment averaging (FSA). Results The results for the conventional algorithm show considerable median absolute differences between the simulated and estimated STV. For the highest noise level, median differences were 4±6 ms in the absence of QTV. Increasing signal length generally yields more accurate STV estimates, but the difference in performance between 30 or 60 beats is small. The FSA algorithm proved to be very accurate, with most median absolute differences less than 0.5 ms, even for the highest levels of disturbance. Conclusions Artificially constructed ECGs with a variety of disturbances allow validation of QTV measurement procedures. The FSA algorithm provides highly accurate STV estimates under varying signal conditions, and performs much better than traditional beat-by-beat analysis. The fully automatic operation of the FSA algorithm enables STV measurement in large sets of ECGs.
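The abstract does not spell out the STV formula; a definition commonly used in the QT-variability literature is the mean absolute difference between consecutive QT intervals scaled by √2, which could be sketched as follows (function name and inputs are illustrative, not the paper's implementation).

```python
import numpy as np

def short_term_variability(qt_ms):
    """Short-term variability (STV) of a series of beat-to-beat QT intervals.

    A commonly used definition: the mean absolute difference between
    consecutive QT intervals divided by sqrt(2), i.e. the mean orthogonal
    distance to the identity line of the QT Poincare plot.
    """
    qt = np.asarray(qt_ms, dtype=float)
    return np.sum(np.abs(np.diff(qt))) / ((len(qt) - 1) * np.sqrt(2))

# Example over a short (invented) series of QT intervals in milliseconds:
print(short_term_variability([402, 405, 399, 403, 401, 404]))
```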
TASKA: A modular task management system to support health research studies
Background: Many healthcare databases have been routinely collected over the past decades to support clinical practice and administrative services. However, their secondary use for research is often hindered by restrictive governance rules. Furthermore, health research studies typically involve many participants with complementary roles and responsibilities, which requires proper process management. Results: From a wide set of requirements collected from European clinical studies, we developed TASKA, a task/workflow management system that helps to cope with the socio-technical issues arising when dealing with multidisciplinary and multi-setting clinical studies. The system is based on a two-layered architecture: 1) the backend engine, which follows a micro-kernel pattern for extensibility and exposes RESTful web services for decoupling from the web clients; and 2) the client, entirely developed in ReactJS, allowing the construction and management of studies through a graphical interface. TASKA is a GNU GPL open source project, accessible at https://github.com/bioinformatics-ua/taska. A demo version is also available at https://bioinformatics.ua.pt/taska. Conclusions: The system is currently used to support feasibility studies across several institutions and countries, in the context of the European Medical Information Framework (EMIF) project. The tool was shown to simplify the set-up of health studies, the management of participants and their roles, as well as the overall governance process.
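Because the backend is decoupled through RESTful web services, any HTTP client can drive it, not just the ReactJS interface. The sketch below illustrates that pattern only; the endpoint paths and fields are hypothetical and are not TASKA's actual API.

```python
import requests

# Hypothetical base URL and resource names, purely to illustrate the
# client/engine decoupling of a REST-based task management backend.
BASE = "https://example.org/taska/api"

# Create a study, then attach a task to it, exactly as a web client would.
study = requests.post(f"{BASE}/studies", json={"title": "Feasibility study"}).json()
task = requests.post(
    f"{BASE}/studies/{study['id']}/tasks",
    json={"name": "Extract cohort", "assignee": "data-manager"},
).json()
print(task["id"], task["status"])
```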
A standardized analytics pipeline for reliable and rapid development and validation of prediction models using observational health data
Background and objective: As a response to the ongoing COVID-19 pandemic, several prediction models in the existing literature were rapidly developed, with the aim of providing evidence-based guidance. However, none of these COVID-19 prediction models have been found to be reliable. Models are commonly assessed to have a risk of bias, often due to insufficient reporting, use of non-representative data, and lack of large-scale external validation. In this paper, we present the Observational Health Data Sciences and Informatics (OHDSI) analytics pipeline for patient-level prediction modeling as a standardized approach for rapid yet reliable development and validation of prediction models. We demonstrate how our analytics pipeline and open-source software tools can be used to answer important prediction questions while limiting potential causes of bias (e.g., by validating phenotypes, specifying the target population, performing large-scale external validation, and publicly providing all analytical source code). Methods: We show step-by-step how to implement the analytics pipeline for the question: ‘In patients hospitalized with COVID-19, what is the risk of death 0 to 30 days after hospitalization?’. We develop models using six different machine learning methods in a USA claims database containing over 20,000 COVID-19 hospitalizations and externally validate the models using data containing over 45,000 COVID-19 hospitalizations from South Korea, Spain, and the USA. Results: Our open-source software tools enabled us to go efficiently end-to-end from problem design to reliable model development and evaluation. When predicting death in patients hospitalized with COVID-19, AdaBoost, random forest, gradient boosting machine, and decision tree yielded similar or lower internal and external validation discrimination performance compared to L1-regularized logistic regression, whereas the MLP neural network consistently resulted in lower discrimination. The L1-regularized logistic regression models were well calibrated. Conclusion: Our results show that following the OHDSI analytics pipeline for patient-level prediction modeling can enable the rapid development of reliable prediction models. The OHDSI software tools and pipeline are open source and available to researchers from all around the world.
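The OHDSI pipeline itself is distributed as open-source R packages; purely as a schematic sketch of the modeling step it describes, the following shows internal and external validation of an L1-regularized logistic regression. The feature matrices and labels are random stand-ins for data the real pipeline would extract from OMOP-formatted databases.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Stand-ins for extracted features and 30-day death labels (hypothetical).
X_dev, y_dev = rng.random((20000, 50)), rng.integers(0, 2, 20000)
X_ext, y_ext = rng.random((45000, 50)), rng.integers(0, 2, 45000)

# Internal validation: hold out part of the development database.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_dev, y_dev, test_size=0.25, stratify=y_dev, random_state=0)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_tr, y_tr)
print("internal AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# External validation: apply the frozen model, unchanged, to another database.
print("external AUC:", roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]))
```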
Finding a short and accurate decision rule in disjunctive normal form by exhaustive search
Greedy approaches suffer from a restricted search space, which can lead to suboptimal classifiers in terms of performance and classifier size. This study discusses exhaustive search as an alternative to greedy search for learning short and accurate decision rules. The Exhaustive Procedure for LOgic-Rule Extraction (EXPLORE) algorithm is presented to induce decision rules in disjunctive normal form (DNF) in a systematic and efficient manner. We propose a method based on subsumption to reduce the number of values considered for instantiation in the literals, by taking the relational operator into account without loss of performance. Furthermore, we describe a branch-and-bound approach that makes optimal use of user-defined performance constraints. To improve generalizability, we use a validation set to determine the optimal length of the DNF rule. The performance and size of the DNF rules induced by EXPLORE are compared to those of eight well-known rule learners. Our results show that an exhaustive approach to rule learning in DNF results in significantly smaller classifiers than those of the other rule learners, while achieving comparable or even better performance. Clearly, exhaustive search is computationally intensive and may not always be feasible. Nevertheless, based on this study, we believe that exhaustive search should be considered an alternative to greedy search in many problems.
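As a toy illustration of the exhaustive idea only (without the subsumption pruning and branch-and-bound that make EXPLORE efficient in practice), one could enumerate every short DNF rule and keep the most sensitive one that meets a specificity constraint; all names and parameters below are illustrative.

```python
from itertools import combinations
import numpy as np

def exhaustive_dnf(X, y, names, thresholds, max_terms=2, max_literals=2,
                   min_specificity=0.9):
    """Exhaustively search for a short DNF rule (an OR of ANDs).

    Enumerates every disjunction of up to `max_terms` conjunctions, each with
    up to `max_literals` literals of the form (feature >= t) or (feature < t),
    and returns the most sensitive rule meeting the specificity constraint.
    """
    # All candidate literals as (description, boolean mask) pairs.
    lits = []
    for j, name in enumerate(names):
        for t in thresholds[j]:
            lits.append((f"{name}>={t}", X[:, j] >= t))
            lits.append((f"{name}<{t}", X[:, j] < t))
    # Conjunctions: AND of up to max_literals literals.
    conjs = []
    for k in range(1, max_literals + 1):
        for combo in combinations(lits, k):
            mask = np.logical_and.reduce([m for _, m in combo])
            conjs.append((" AND ".join(d for d, _ in combo), mask))
    # Disjunctions: OR of up to max_terms conjunctions; keep the best rule.
    best = None
    for k in range(1, max_terms + 1):
        for combo in combinations(conjs, k):
            pred = np.logical_or.reduce([m for _, m in combo])
            spec = np.mean(~pred[y == 0])
            sens = np.mean(pred[y == 1])
            if spec >= min_specificity and (best is None or sens > best[0]):
                best = (sens, spec, " OR ".join(f"({d})" for d, _ in combo))
    return best  # (sensitivity, specificity, rule text)
```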
Real-world treatment trajectories of adults with newly diagnosed asthma or COPD
Background There is a lack of knowledge on how patients with asthma or chronic obstructive pulmonary disease (COPD) are treated in the real world globally, especially with regard to the initial pharmacological treatment of newly diagnosed patients and the different treatment trajectories. This knowledge is important to monitor and improve clinical practice. Methods This retrospective cohort study aims to characterise treatments using data from four claims (drug dispensing) and four electronic health record (EHR; drug prescription) databases across six countries and three continents, encompassing 1.3 million patients with asthma or COPD. We analysed treatment trajectories at drug class level from first diagnosis and visualised these in sunburst plots. Results In four countries (USA, UK, Spain and the Netherlands), most adults with asthma initiate treatment with short-acting β2 agonist monotherapy (20.8%-47.4% of first-line treatments). For COPD, the most frequent first-line treatment varies by country. The largest percentages of untreated patients (for asthma and COPD) were found in the claims databases from the USA (14.5%-33.2% for asthma and 27.0%-52.2% for COPD) as compared with the EHR databases from European countries (6.9%-15.2% for asthma and 4.4%-17.5% for COPD). The treatment trajectories showed step-up as well as step-down in treatments. Conclusion Real-world data from claims and EHRs indicate that first-line treatments of asthma and COPD vary widely across countries. We found evidence of a stepwise approach in the pharmacological treatment of asthma and COPD, suggesting that treatments may be tailored to patients' needs.
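As a sketch of the visualization step only, assuming trajectories have already been aggregated into per-step drug-class counts (the data below are invented), a sunburst plot of this kind can be produced with plotly.

```python
import pandas as pd
import plotly.express as px

# Hypothetical aggregated trajectories: first- and second-line drug classes
# with patient counts (the paper derives these per database at class level).
df = pd.DataFrame({
    "first_line":  ["SABA", "SABA", "ICS",  "ICS",  "untreated"],
    "second_line": ["ICS",  "none", "LABA", "none", "none"],
    "patients":    [4200,   3100,   1500,   2600,   1800],
})

# Each ring of the sunburst is one treatment step; wedge size = patient count.
fig = px.sunburst(df, path=["first_line", "second_line"], values="patients")
fig.show()
```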
Predictive approaches to heterogeneous treatment effects: a scoping review
Background: Recent evidence suggests that there is often substantial variation in the benefits and harms across a trial population. We aimed to identify regression modeling approaches that assess heterogeneity of treatment effect within a randomized clinical trial. Methods: We performed a literature review using a broad search strategy, complemented by suggestions of a technical expert panel. Results: The approaches are classified into 3 categories: 1) risk-based methods (11 papers) use only prognostic factors to define patient subgroups, relying on the mathematical dependency of the absolute risk difference on baseline risk; 2) treatment effect modeling methods (9 papers) use both prognostic factors and treatment effect modifiers to explore characteristics that interact with the effects of therapy on a relative scale. These methods couple data-driven subgroup identification with approaches to prevent overfitting, such as penalization or use of separate data sets for subgroup identification and effect estimation; 3) optimal treatment regime methods (12 papers) focus primarily on treatment effect modifiers to classify the trial population into those who benefit from treatment and those who do not. Finally, we also identified papers which describe model evaluation methods (4 papers). Conclusions: Three classes of approaches were identified to assess heterogeneity of treatment effect. Methodological research, including both simulations and empirical evaluations, is required to compare the available methods in different settings and to derive well-informed guidance for their application in RCT analysis.
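As a minimal sketch of the first category, a risk-based analysis stratifies a trial by predicted baseline risk and reports the absolute risk difference per stratum. The DataFrame, covariate names, and simulated trial below are hypothetical placeholders, not any reviewed method's implementation.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, COVS = 2000, ["age", "sofa"]                      # hypothetical covariates
df = pd.DataFrame(rng.random((n, 2)), columns=COVS)
df["treat"] = rng.integers(0, 2, n)                  # randomized 0/1 treatment
base = 0.1 + 0.5 * df["age"]                         # outcome risk rises with 'age'
df["y"] = rng.random(n) < np.where(df["treat"] == 1, 0.8 * base, base)

# Step 1: prognostic (baseline-risk) model, here fit on the control arm only.
ctrl = df[df["treat"] == 0]
risk_model = LogisticRegression().fit(ctrl[COVS], ctrl["y"])
df["risk"] = risk_model.predict_proba(df[COVS])[:, 1]

# Step 2: risk quartiles; absolute risk difference within each stratum.
df["stratum"] = pd.qcut(df["risk"], 4, labels=False)
for s, g in df.groupby("stratum"):
    ard = g.loc[g["treat"] == 1, "y"].mean() - g.loc[g["treat"] == 0, "y"].mean()
    print(f"risk quartile {s}: absolute risk difference = {ard:+.3f}")
```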