    Minimum bandwidth requirements for recording of pediatric electrocardiograms

    BACKGROUND: Previous studies that determined the frequency content of the pediatric ECG had their limitations: the study population was small or the sampling frequency used by the recording system was low. Therefore, current bandwidth recommendations for recording pediatric ECGs are not well founded. We wanted to establish minimum bandwidth requirements using a large set of pediatric ECGs recorded at a high sampling rate. METHODS AND RESULTS: For 2169 children aged 1 day to 16 years, a 12-lead ECG was recorded at a sampling rate of 1200 Hz. The averaged beats of each ECG were passed through digital filters with different cutoff points (50 to 300 Hz in 25-Hz steps). We measured the absolute errors in maximum QRS amplitude for each simulated bandwidth and determined the percentage of records with an error >25 microV. We found that in any lead, a bandwidth of 250 Hz yields amplitude errors <25 microV in >95% of the children <1 year of age. For older children, a gradual decrease in ECG frequency content was demonstrated. CONCLUSIONS: We recommend a minimum bandwidth of 250 Hz to record pediatric ECGs. This bandwidth is considerably higher than the previous recommendation of 150 Hz from the American Heart Association.
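
    To make the simulated-bandwidth procedure concrete, here is a minimal sketch (not the authors' code) that low-pass filters a synthetic averaged beat at each cutoff and reports the error in maximum QRS amplitude; the Gaussian stand-in beat and the filter order are assumptions.

```python
# Minimal sketch of the simulated-bandwidth analysis (not the authors' code).
# A Gaussian pulse stands in for a real averaged pediatric QRS complex.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1200          # sampling rate in Hz, as in the study
ERROR_LIMIT = 25   # amplitude-error threshold in microvolts

t = np.arange(0, 1, 1 / FS)
beat = 1000 * np.exp(-((t - 0.5) ** 2) / (2 * 0.004 ** 2))  # ~1000 uV peak, ~4 ms width

for cutoff in range(50, 301, 25):  # simulated bandwidths: 50 to 300 Hz in 25-Hz steps
    b, a = butter(4, cutoff / (FS / 2), btype="low")  # 4th-order low-pass filter
    filtered = filtfilt(b, a, beat)                   # zero-phase filtering
    error = abs(beat.max() - filtered.max())          # error in maximum QRS amplitude
    verdict = "within" if error <= ERROR_LIMIT else "exceeds"
    print(f"{cutoff:3d} Hz: error = {error:6.1f} uV ({verdict} {ERROR_LIMIT} uV)")
```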

    Comparing penalization methods for linear models on large observational health data

    Objective: This study evaluates regularization variants in logistic regression (L1, L2, ElasticNet, Adaptive L1, Adaptive ElasticNet, Broken adaptive ridge [BAR], and Iterative hard thresholding [IHT]) for discrimination and calibration performance, focusing on both internal and external validation. Materials and Methods: We use data from 5 US claims and electronic health record databases and develop models for various outcomes in a major depressive disorder patient population. We externally validate all models in the other databases. We use a train-test split of 75%/25% and evaluate performance with discrimination and calibration. Differences in performance are assessed using Friedman's test and critical difference diagrams. Results: Of the 840 models we develop, L1 and ElasticNet emerge as superior in both internal and external discrimination, with a notable AUC difference. BAR and IHT show the best internal calibration, without a clear external calibration leader. ElasticNet typically has larger model sizes than L1. Methods like IHT and BAR, while slightly less discriminative, significantly reduce model complexity. Conclusion: L1 and ElasticNet offer the best discriminative performance in logistic regression for healthcare predictions, maintaining robustness across validations. For simpler, more interpretable models, L0-based methods (IHT and BAR) are advantageous, providing greater parsimony and calibration with fewer features. This study aids in selecting suitable regularization techniques for healthcare prediction models, balancing performance, complexity, and interpretability.
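
    As a concrete illustration of the comparison, the sketch below fits the three penalties that scikit-learn supports directly (L1, L2, ElasticNet) on synthetic data and reports discrimination (AUC) alongside model size; the adaptive variants, BAR, and IHT would need specialized implementations, and all data and settings here are stand-ins, not the study pipeline.

```python
# Minimal sketch comparing penalization variants on synthetic data (not the
# study pipeline). L1, L2, and ElasticNet are available in scikit-learn;
# adaptive variants, BAR, and IHT would need specialized implementations.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=50, n_informative=10,
                           weights=[0.9], random_state=0)
# 75%/25% train-test split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

penalties = {
    "L1": dict(penalty="l1", solver="saga"),
    "L2": dict(penalty="l2", solver="saga"),
    "ElasticNet": dict(penalty="elasticnet", solver="saga", l1_ratio=0.5),
}
for name, kwargs in penalties.items():
    model = LogisticRegression(max_iter=5000, **kwargs).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    size = int((model.coef_ != 0).sum())  # model size: non-zero coefficients
    print(f"{name:10s} AUC = {auc:.3f}, non-zero coefficients = {size}")
```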

    Impact of random oversampling and random undersampling on the performance of prediction models developed using observational health data

    Background: There is currently no consensus on the impact of class imbalance methods on the performance of clinical prediction models. We aimed to empirically investigate the impact of random oversampling and random undersampling, two commonly used class imbalance methods, on the internal and external validation performance of prediction models developed using observational health data. Methods: We developed and externally validated prediction models for various outcomes of interest within a target population of people with pharmaceutically treated depression across four large observational health databases. We used three different classifiers (lasso logistic regression, random forest, XGBoost) and varied the target imbalance ratio. We evaluated the impact on model performance in terms of discrimination and calibration. Discrimination was assessed using the area under the receiver operating characteristic curve (AUROC) and calibration was assessed using calibration plots. Results: We developed and externally validated a total of 1,566 prediction models. On internal and external validation, random oversampling and random undersampling generally did not result in higher AUROCs. Moreover, we found overestimated risks, although this miscalibration could largely be corrected by recalibrating the models towards the imbalance ratios in the original dataset. Conclusions: Overall, we found that random oversampling or random undersampling generally does not improve the internal and external validation performance of prediction models developed in large observational health databases. Based on our findings, we do not recommend applying random oversampling or random undersampling when developing prediction models in large observational health databases.
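
    The recalibration mentioned in the conclusions can take the form of a prior-shift correction on the log-odds scale; the sketch below (synthetic data, and the exact correction formula is our assumption, not taken from the paper) undersamples to a 50/50 ratio and then shifts predicted risks back toward the original outcome prevalence.

```python
# Minimal sketch of random undersampling followed by a prior-shift recalibration
# toward the original imbalance ratio (synthetic data; the exact correction
# formula is our assumption, not taken from the paper).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

X, y = make_classification(n_samples=20000, weights=[0.95], random_state=1)

# Random undersampling: keep all positives, draw an equal number of negatives
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
keep = np.concatenate([pos, resample(neg, replace=False, n_samples=len(pos),
                                     random_state=1)])
model = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])

p = model.predict_proba(X)[:, 1]          # risks calibrated to the 50/50 sample
prior_orig, prior_samp = y.mean(), 0.5    # original vs resampled prevalence

# Shift the log-odds back toward the original outcome prevalence
odds = p / (1 - p)
odds *= (prior_orig / (1 - prior_orig)) / (prior_samp / (1 - prior_samp))
p_recal = odds / (1 + odds)
print(f"mean risk: resampled model = {p.mean():.3f}, "
      f"recalibrated = {p_recal.mean():.3f}, observed = {y.mean():.3f}")
```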

    Using clinical text to refine unspecific condition codes in Dutch general practitioner EHR data

    Objective: Observational studies using electronic health record (EHR) databases often face challenges due to unspecific clinical codes that can obscure detailed medical information, hindering precise data analysis. In this study, we aimed to assess the feasibility of refining these unspecific condition codes into more specific codes in a Dutch general practitioner (GP) EHR database by leveraging the available clinical free text. Methods: We utilized three approaches for text classification (search queries, semi-supervised learning, and supervised learning) to improve the specificity of ten unspecific International Classification of Primary Care (ICPC-1) codes. Two text representations and three machine learning algorithms were evaluated for the (semi-)supervised models. Additionally, we measured the improvement achieved by the refinement process on all code occurrences in the database. Results: The classification models performed well for most codes. In general, no single classification approach consistently outperformed the others. However, there were variations in the relative performance of the classification approaches within each code and in the use of different text representations and machine learning algorithms. Class imbalance and limited training data affected the performance of the (semi-)supervised models, yet the simple search queries remained particularly effective. Ultimately, the developed models improved the specificity of over half of all the unspecific code occurrences in the database. Conclusions: Our findings show the feasibility of using information from clinical text to improve the specificity of unspecific condition codes in observational healthcare databases, even with a limited range of machine learning techniques and modest annotated training sets. Future work could investigate transfer learning, integration of structured data, alternative semi-supervised methods, and validation of models across healthcare settings. The improved level of detail enriches the interpretation of medical information and can benefit observational research and patient care.
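
    The supervised route in miniature: a TF-IDF text representation feeding a linear classifier that maps GP free text to a more specific code. The notes and target codes below are fabricated for illustration, not taken from the database.

```python
# Minimal sketch of the supervised text-classification approach (example
# notes and target codes are fabricated, not from the study data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient reports palpitations, irregular pulse on exam",
    "dizzy on standing, blood pressure low when measured",
    "pounding heartbeat at night, rhythm otherwise regular",
    "lightheaded after rising quickly from a chair",
]
specific_codes = ["K04", "K88", "K04", "K88"]  # hypothetical specific targets

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(notes, specific_codes)
print(clf.predict(["complains of skipped heartbeats during exertion"]))
```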

    Pharmacogenetics of Drug-Induced QT Interval Prolongation: An Update

    A prolonged QT interval is an important risk factor for ventricular arrhythmias and sudden cardiac death. QT prolongation can be caused by drugs. There are multiple risk factors for drug-induced QT prolongation, including genetic variation. QT prolongation is one of the most common reasons for withdrawal of drugs from the market.

    90-Day all-cause mortality can be predicted following a total knee replacement: an international network study to develop and validate a prediction model

    Purpose: The purpose of this study was to develop and validate a prediction model for 90-day mortality following a total knee replacement (TKR). TKR is a safe and cost-effective surgical procedure for treating severe knee osteoarthritis (OA). Although complications following surgery are rare, prediction tools could help identify high-risk patients who could be targeted with preventative interventions. The aim was to develop and validate a simple model to help inform treatment choices. Methods: A mortality prediction model for knee OA patients following TKR was developed and externally validated using a US claims database and a UK general practice database. The target population consisted of patients undergoing a primary TKR for knee OA, aged ≥ 40 years and registered for ≥ 1 year before surgery. LASSO logistic regression models were developed for post-operative (90-day) mortality. A second mortality model was developed with a reduced feature set to increase interpretability and usability. Results: A total of 193,615 patients were included, with 40,950 in The Health Improvement Network (THIN) database and 152,665 in Optum. The full model predicting 90-day mortality yielded an AUROC of 0.78 when trained in Optum and 0.70 when externally validated on THIN. The 12-variable model achieved an internal AUROC of 0.77 and an external AUROC of 0.71 in THIN. Conclusions: A simple prediction model based on sex, age, and 10 comorbidities was developed that can identify patients at high risk of short-term mortality following TKR, and it demonstrated good, robust performance. The 12-feature mortality model is easily implemented, and its performance suggests it could be used to inform evidence-based shared decision-making prior to surgery and to target prophylaxis for those at high risk. Level of evidence: III.
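
    A minimal sketch of the modelling approach: LASSO logistic regression developed in one dataset and evaluated in a held-out one. All data here are synthetic stand-ins; in the study, external validation used a genuinely separate database (Optum vs THIN).

```python
# Minimal sketch: LASSO logistic regression with internal and "external"
# AUROC (a held-out split stands in for the separate external database).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10000, n_features=30, weights=[0.98],
                           random_state=2)
X_dev, X_ext, y_dev, y_ext = train_test_split(X, y, test_size=0.5,
                                              stratify=y, random_state=2)

# Cross-validated LASSO (L1) logistic regression on the development set
lasso = LogisticRegressionCV(penalty="l1", solver="saga", Cs=10, max_iter=5000)
lasso.fit(X_dev, y_dev)

print("internal AUROC:",
      round(roc_auc_score(y_dev, lasso.predict_proba(X_dev)[:, 1]), 3))
print("external AUROC:",
      round(roc_auc_score(y_ext, lasso.predict_proba(X_ext)[:, 1]), 3))
```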

    Female Reproductive Performance and Maternal Birth Month: A Comprehensive Meta-Analysis Exploring Multiple Seasonal Mechanisms

    Globally, maternal birth season affects fertility later in life. The purpose of this systematic literature review is to comprehensively investigate the relationship between maternal birth season and female fertility. Using PubMed, we identified a set of 282 relevant fertility/birth season papers published between 1972 and 2018. We screened all 282 studies and removed 13

    Blood pressure measurements for diagnosing hypertension in primary care: room for improvement

    Background: In the adult population, about 50% have hypertension, a risk factor for cardiovascular disease and subsequent premature death. Little is known about the quality of the methods used to diagnose hypertension in primary care. Objectives: The objective was to assess the frequency of use of recognized methods to establish a diagnosis of hypertension and, specifically for office blood pressure measurements (OBPM), whether three distinct measurements were taken and whether the blood pressure levels were interpreted correctly. Methods: We conducted a retrospective population-based cohort study using the electronic medical records of patients aged between 40 and 70 years who visited their general practitioner (GP) with new-onset hypertension in the years 2012, 2016, 2019, and 2020. A visual chart review of the electronic medical records was used to assess the methods employed to diagnose hypertension in a random sample of 500 patients. The blood pressure measurement method was considered complete if three or more valid OBPMs were performed, or if home-based blood pressure measurements (HBPM), the office-based 30-minute method (OBP30), or 24-hour ambulatory blood pressure measurements (24 H-ABPM) were used. Results: In all study years, OBPM was the most frequently used method to diagnose new-onset hypertension. The OBP30 method was used in 0.4% (2012), 4.2% (2016), 10.6% (2019), and 9.8% (2020) of patients; 24 H-ABPM in 16.0%, 22.2%, 17.2%, and 19.0%; and HBPM in 5.4%, 8.4%, 7.6%, and 7.8% of patients, respectively. A diagnosis of hypertension based on only one or two office measurements occurred in 85.2% (2012), 87.9% (2016), 94.4% (2019), and 96.8% (2020) of all patients with OBPM. In cases of incomplete measurement and incorrect interpretation, medication was nevertheless started in 64% (2012), 56% (2016), 60% (2019), and 73% (2020) of cases. Conclusion: OBPM is still the most frequently used method to diagnose hypertension in primary care. The diagnosis was often incomplete or misinterpreted using incorrect cut-off levels. A small improvement occurred between 2012 and 2016, but no further progress was seen in 2019 or 2020. If hypertension is inappropriately diagnosed, it may result in undertreatment or in prolonged, unnecessary treatment of patients. There is room for improvement in the general practice setting.
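
    The completeness rule used in the chart review reduces to a simple predicate; a minimal sketch follows (function and argument names are hypothetical, not from the study).

```python
# Minimal sketch of the completeness rule used in the chart review
# (function and argument names are hypothetical, not from the study).
def measurement_complete(n_valid_obpm: int, used_hbpm: bool,
                         used_obp30: bool, used_24h_abpm: bool) -> bool:
    """Complete if three or more valid office measurements (OBPM) were taken,
    or any recognized alternative method (HBPM, OBP30, 24 H-ABPM) was used."""
    return n_valid_obpm >= 3 or used_hbpm or used_obp30 or used_24h_abpm

print(measurement_complete(2, False, False, False))  # False: only two OBPMs
print(measurement_complete(1, False, False, True))   # True: 24 H-ABPM was used
```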

    A New Mechanism for Interpreting the Motion of Auroral Arcs in the Nightside Ionosphere

    A new mechanism is proposed for predicting and interpreting the motion of auroral arcs observed in the nightside ionosphere during the expansion phase of a substorm. This mechanism is centred on the idea that such arcs act as visible manifestations of the arrival of earthward-propagating shock waves in the near-Earth magnetosphere. These shock waves are generated at a near-Earth X-line and propagate at the local Alfvén speed. Because of the non-uniform nature of the magnetised plasma in the magnetotail, dispersion results in a change in the shape of the wave fronts as the shocks propagate towards the ionosphere. Theoretical analysis shows that a variety of arc motions can occur as a result of this dispersion, depending on factors such as the reconnection rate, the location of the reconnection site, and gradients in the magnetic field strength and plasma density.
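
    For reference, the local Alfvén speed that sets the propagation rate, and the resulting field-line travel time whose variation across field lines produces the apparent arc motion (standard relations, not reproduced from the paper):

```latex
% Local Alfven speed in terms of field strength B and mass density rho
\[
  v_A(\mathbf{r}) = \frac{B(\mathbf{r})}{\sqrt{\mu_0\,\rho(\mathbf{r})}}
\]
% Travel time of a front from the near-Earth X-line to the ionosphere;
% gradients in B and rho make tau differ between field lines (dispersion)
\[
  \tau = \int_{\mathrm{X\text{-}line}}^{\mathrm{ionosphere}}
         \frac{\mathrm{d}s}{v_A(\mathbf{r})}
\]
```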