
    Automatic interpretation of pediatric electrocardiograms

    The year 1902 saw the birth of clinical electrocardiography when Willem Einthoven published the first electrocardiogram (ECG) of unprecedented quality recorded with his newly invented string galvanometer [1]. The foundations of electrocardiographic diagnosis were laid in the half century that followed. After the Second World War, electronic pen-writing recorders made their appearance and quickly pushed the bulky string galvanometers from the scene, notwithstanding a far inferior frequency response. Standards for performance were then issued that were unfortunately based on the frequency characteristics of this type of equipment. We will return to this subject in the chapter on the minimum bandwidth requirements for the recording of pediatric ECGs.

    Minimum bandwidth requirements for recording of pediatric electrocardiograms

    BACKGROUND: Previous studies that determined the frequency content of the pediatric ECG had their limitations: the study population was small or the sampling frequency used by the recording system was low. Therefore, current bandwidth recommendations for recording pediatric ECGs are not well founded. We wanted to establish minimum bandwidth requirements using a large set of pediatric ECGs recorded at a high sampling rate. METHODS AND RESULTS: For 2169 children aged 1 day to 16 years, a 12-lead ECG was recorded at a sampling rate of 1200 Hz. The averaged beats of each ECG were passed through digital filters with different cutoff points (50 to 300 Hz in 25-Hz steps). We measured the absolute errors in maximum QRS amplitude for each simulated bandwidth and determined the percentage of records with an error >25 microV. We found that in any lead, a bandwidth of 250 Hz yields amplitude errors <25 microV in at least 95% of the children <1 year of age. For older children, a gradual decrease in ECG frequency content was demonstrated. CONCLUSIONS: We recommend a minimum bandwidth of 250 Hz to record pediatric ECGs. This bandwidth is considerably higher than the previous recommendation of 150 Hz from the American Heart Association.
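
    A rough illustration of the filtering experiment described in this abstract is sketched below. The filter type and order are assumptions (the abstract only mentions "digital filters"), and averaged_beat stands for one lead of an averaged P-QRS-T complex sampled at 1200 Hz, in microvolts.

```python
# Illustrative sketch, not the study's exact pipeline: a zero-phase Butterworth
# low-pass filter stands in for the paper's digital filters.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1200                       # sampling rate used in the study (Hz)
CUTOFFS = range(50, 301, 25)    # simulated bandwidths: 50-300 Hz in 25-Hz steps
ERROR_LIMIT_UV = 25             # amplitude-error threshold from the abstract (microvolts)

def max_qrs_amplitude_errors(averaged_beat: np.ndarray) -> dict:
    """Absolute error in maximum QRS amplitude for each simulated bandwidth."""
    reference = np.max(np.abs(averaged_beat))          # full-bandwidth peak amplitude
    errors = {}
    for fc in CUTOFFS:
        b, a = butter(4, fc / (FS / 2), btype="low")   # 4th order is an assumption
        filtered = filtfilt(b, a, averaged_beat)       # zero-phase filtering
        errors[fc] = abs(reference - np.max(np.abs(filtered)))
    return errors

# Usage: flag a record whose 250-Hz amplitude error exceeds 25 microvolts
# errs = max_qrs_amplitude_errors(averaged_beat)
# exceeds_limit = errs[250] > ERROR_LIMIT_UV
```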

    Identifying the DEAD: Development and Validation of a Patient-Level Model to Predict Death Status in Population-Level Claims Data

    Introduction US claims data contain medical data on large heterogeneous populations and are excellent sources for medical research. Some claims data do not contain complete death records, limiting their use for mortality or mortality-related studies. A model to predict whether a patient died at the end of the follow-up time (referred to as the end of observation) is needed to enable mortality-related studies. Objective The objective of this study was to develop a patient-level model to predict whether the end of observation was due to death in US claims data. Methods We used a claims dataset with full death records, Optum© De-Identified Clinformatics® Data Mart Database—Date of Death, mapped to the Observational Medical Outcomes Partnership common data model, to develop a model that classifies the end of observations into death or non-death. A regularized logistic regression was trained using 88,514 predictors (recorded within the prior 365 or 30 days) and externally validated by applying the model to three US claims datasets. Results Approximately 25 in 1000 end of observations in Optum are due to death. The Discriminating End of observation into Alive and Dead (DEAD) model obtained an area under the receiver operating characteristic curve of 0.986. When defining death as a predicted risk of >0.5, only 2% of the end of observations were predicted to be due to death and the model obtained a sensitivity of 62% and a positive predictive value of 74.8%. The external validation showed the model was transportable, with areas under the receiver operating characteristic curve ranging between 0.951 and 0.995 across the US claims databases. Conclusions US claims data often lack complete death records. The DEAD model can be used to impute death at various sensitivity, specificity, or positive predictive values depending on the use of the model. The DEAD model can be readily applied to any observational healthcare database mapped to the Observational Medical Outcomes Partnership common data model and is available from https://github.com/OHDSI/StudyProtocolSandbox/tree/master/DeadModel
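
    The abstract describes training a regularized logistic regression and reporting AUC, sensitivity, and positive predictive value at a 0.5 risk threshold. The sketch below illustrates that workflow with scikit-learn; it is not the published DEAD model, and the penalty type, regularization strength, and the X/y inputs are assumptions.

```python
# Minimal sketch of the fit-and-evaluate workflow; X is a patient-by-predictor
# matrix and y marks whether the end of observation was due to death (assumed inputs).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

def fit_and_evaluate(X_train, y_train, X_test, y_test, threshold=0.5):
    # "Regularized logistic regression"; the exact penalty and strength are assumptions.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X_train, y_train)

    risk = model.predict_proba(X_test)[:, 1]
    auc = roc_auc_score(y_test, risk)

    predicted_death = (risk > threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, predicted_death).ravel()
    sensitivity = tp / (tp + fn)                           # recall among true deaths
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")    # positive predictive value
    return auc, sensitivity, ppv
```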

    Effects of fluticasone propionate on methacholine dose-response curves in nonsmoking atopic asthmatics

    Methacholine is frequently used to determine bronchial hyperresponsiveness (BHR) and to generate dose-response curves. These curves are characterized by a threshold (provocative concentration of methacholine producing a 20% fall in forced expiratory volume in one second (PC20) = sensitivity), slope (reactivity) and maximal response (plateau). We investigated the efficacy of 12 weeks of treatment with 1,000 microg fluticasone propionate in a double-blind, placebo-controlled study in 33 atopic asthmatics. The outcome measures used were the influence on BHR and the different indices of the methacholine dose-response (MDR) curve. After a 2-week run-in, baseline lung function data were obtained and an MDR curve was measured with doubling concentrations of methacholine from 0.03 to 256 mg/mL. MDR curves were repeated after 6 and 12 weeks. A recently developed, sigmoid cumulative Gaussian distribution function was fitted to the data. Whereas sensitivity was obtained by linear interpolation of two successive log2 concentrations, reactivity, plateau and the effective concentration at 50% of the plateau value (EC50) were obtained as best-fit parameters. In the fluticasone group, significant changes occurred after 6 weeks with respect to means of PC20 (an increase of 3.4 doubling doses), the plateau value of the fall in forced expiratory volume in one second (FEV1) (from 58% at randomization to 41% at 6 weeks) and baseline FEV1 (from 3.46 to 3.75 L), in contrast to the placebo group. Stabilization occurred after 12 weeks. Changes in reactivity were less marked, whereas changes in log EC50 were not significantly different between the groups. We conclude that fluticasone is very effective in decreasing the maximal airway narrowing response and in increasing PC20. However, it is likely that part of this increase is related to the decrease of the plateau of maximal response.
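
    The sigmoid cumulative Gaussian fit mentioned in the abstract can be illustrated as follows. The exact parameterisation used in the study is not given here, so this sketch expresses the percentage fall in FEV1 as plateau × Φ((log2 dose − log2 EC50) / σ), which is one plausible form; starting values and variable names are assumptions.

```python
# Hedged sketch of fitting a cumulative Gaussian dose-response curve.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(log2_dose, plateau, log2_ec50, sigma):
    """Percentage fall in FEV1 as a function of log2 methacholine concentration."""
    return plateau * norm.cdf((log2_dose - log2_ec50) / sigma)

def fit_mdr_curve(doses_mg_ml, fall_fev1_pct):
    """Return best-fit plateau (% fall), EC50 (mg/mL) and slope parameter sigma."""
    x = np.log2(doses_mg_ml)
    p0 = [50.0, np.median(x), 2.0]      # rough starting values (assumption)
    (plateau, log2_ec50, sigma), _ = curve_fit(cumulative_gaussian, x, fall_fev1_pct, p0=p0)
    return plateau, 2.0 ** log2_ec50, sigma
```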

    Pharmacogenetics of Drug-Induced QT Interval Prolongation: An Update

    A prolonged QT interval is an important risk factor for ventricular arrhythmias and sudden cardiac death. QT prolongation can be caused by drugs. There are multiple risk factors for drug-induced QT prolongation, including genetic variation. QT prolongation is one of the most common reasons for withdrawal of drugs from the market.

    The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies

    Artificial intelligence (AI) has huge potential to improve the health and well-being of people, but adoption in clinical practice is still limited. Lack of transparency is identified as one of the main barriers to implementation, as clinicians should be confident the AI system can be trusted. Explainable AI has the potential to overcome this issue and can be a step towards trustworthy AI. In this paper we review the recent literature to provide guidance to researchers.

    Female Reproductive Performance and Maternal Birth Month: A Comprehensive Meta-Analysis Exploring Multiple Seasonal Mechanisms

    Globally, maternal birth season affects fertility later in life. The purpose of this systematic literature review is to comprehensively investigate the birth season and female fertility relationship. Using PubMed, we identified a set of 282 relevant fertility/birth season papers published between 1972 and 2018. We screened all 282 studies and removed 13

    Validation of automatic measurement of QT interval variability

    Background Increased variability of beat-to-beat QT-interval durations on the electrocardiogram (ECG) has been associated with increased risk for fatal and non-fatal cardiac events. However, techniques for the measurement of QT variability (QTV) have not been validated since a gold standard is not available. In this study, we propose a validation method and illustrate its use for the validation of two automatic QTV measurement techniques. Methods Our method generates artificial standard 12-lead ECGs based on the averaged P-QRS-T complexes from a variety of existing ECG signals, with simulated intrinsic (QT interval) and extrinsic (noise, baseline wander, signal length) variations. We quantified QTV by a commonly used measure, short-term QT variability (STV). Using 28,800 simulated ECGs, we assessed the performance of a conventional QTV measurement algorithm, resembling a manual QTV measurement approach, and a more advanced algorithm based on fiducial segment averaging (FSA). Results The results for the conventional algorithm show considerable median absolute differences between the simulated and estimated STV. For the highest noise level, median differences were 4±6 ms in the absence of QTV. Increasing signal length generally yields more accurate STV estimates, but the difference in performance between 30 and 60 beats is small. The FSA algorithm proved to be very accurate, with most median absolute differences less than 0.5 ms, even for the highest levels of disturbance. Conclusions Artificially constructed ECGs with a variety of disturbances allow validation of QTV measurement procedures. The FSA algorithm provides highly accurate STV estimates under varying signal conditions, and performs much better than traditional beat-by-beat analysis. The fully automatic operation of the FSA algorithm enables STV measurement in large sets of ECGs.
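
    For reference, short-term variability (STV) of the QT interval is commonly computed as the summed absolute difference between successive QT intervals, normalized by the number of beats and √2. The abstract does not spell out the exact normalization used in the study, so the sketch below follows that common definition.

```python
# Sketch of a commonly used STV definition:
#   STV = sum(|QT_(n+1) - QT_n|) / (N * sqrt(2)) over N consecutive beats.
# The exact normalization convention varies slightly between publications.
import numpy as np

def short_term_variability(qt_intervals_ms) -> float:
    """STV of a series of beat-to-beat QT intervals (milliseconds)."""
    qt = np.asarray(qt_intervals_ms, dtype=float)
    successive_diffs = np.abs(np.diff(qt))          # |QT_(n+1) - QT_n|
    return successive_diffs.sum() / (len(qt) * np.sqrt(2.0))

# Example: a 30-beat window, one of the signal lengths evaluated above
# stv = short_term_variability(qt_series[:30])
```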

    Prediction of RNA-protein sequence and structure binding preferences using deep convolutional and recurrent neural networks

    Background: RNA regulation is significantly dependent on its binding protein partners, known as RNA-binding proteins (RBPs). Unfortunately, the binding preferences for most RBPs are still not well characterized. The interdependency between sequence and secondary structure specificities makes it challenging both to predict RBP binding sites and to accurately detect sequence and structure motifs. Results: In this study, we propose a deep learning-based method, iDeepS, to simultaneously identify the binding sequence and structure motifs from RNA sequences using convolutional neural networks (CNNs) and a bidirectional long short-term memory network (BLSTM). We first perform one-hot encoding for both the sequence and the predicted secondary structure, to enable subsequent convolution operations. To reveal the hidden binding knowledge from the observed sequences, the CNNs are applied to learn the abstract features. Considering the close relationship between sequences and predicted structures, we use the BLSTM to capture possible long-range dependencies between binding sequence and structure motifs identified by the CNNs. Finally, the learned weighted representations are fed into a classification layer to predict the RBP binding sites. We evaluated iDeepS on verified RBP binding sites derived from large-scale representative CLIP-seq datasets. The results demonstrate that iDeepS can reliably predict the RBP binding sites on RNAs, and outperforms state-of-the-art methods. An important advantage compared to other methods is that iDeepS can automatically extract both binding sequence and structure motifs, which will improve our understanding of the mechanisms of binding specificities of RBPs. Conclusion: Our study shows that the iDeepS method identifies the sequence and structure motifs to accurately predict RBP binding sites. iDeepS is available at https://github.com/xypan1232/iDeepS
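
    A simplified sketch of the CNN-plus-BLSTM architecture described above is given below, written with tf.keras. The window length, channel layout (one-hot sequence and structure channels stacked into a single input), layer sizes, and training settings are illustrative assumptions rather than the authors' exact iDeepS configuration.

```python
# Illustrative CNN + bidirectional LSTM binding-site classifier, not the
# published iDeepS implementation.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 101        # window length around a candidate binding site (assumption)
N_CHANNELS = 4 + 6   # 4 one-hot RNA bases + 6 predicted structure states (assumption)

def build_model() -> tf.keras.Model:
    inputs = layers.Input(shape=(SEQ_LEN, N_CHANNELS))
    # CNN learns local sequence/structure motif detectors
    x = layers.Conv1D(filters=16, kernel_size=7, activation="relu")(inputs)
    x = layers.MaxPooling1D(pool_size=3)(x)
    x = layers.Dropout(0.25)(x)
    # BLSTM captures longer-range dependencies between the detected motifs
    x = layers.Bidirectional(layers.LSTM(32))(x)
    # Classification layer: probability that the site is bound by the RBP
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model
```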

    Design and implementation of a standardized framework to generate and evaluate patient-level prediction models using observational healthcare data

    Objective: To develop a conceptual prediction model framework containing standardized steps and describe the corresponding open-source software developed to consistently implement the framework across computational environments and observational healthcare databases to enable model sharing and reproducibility. Methods: Based on existing best practices, we propose a 5-step standardized framework for: (1) transparently defining the problem; (2) selecting suitable datasets; (3) constructing variables from the observational data; (4) learning the predictive model; and (5) validating the model performance. We implemented this framework as open-source software utilizing the Observational Medical Outcomes Partnership Common Data Model to enable convenient sharing of models and reproduction of model evaluation across multiple observational datasets. The software implementation contains default covariates and classifiers, but the framework enables customization and extension. Results: As a proof-of-concept, demonstrating the transparency and ease of model dissemination using the software, we developed prediction models for 21 different outcomes within a target population of people suffering from depression across 4 observational databases. All 84 models are available in an accessible online repository to be implemented by anyone with access to an observational database in the Common Data Model format. Conclusions: The proof-of-concept study illustrates the framework's ability to develop reproducible models that can be readily shared, offers the potential to perform extensive external validation of models, and improves their likelihood of clinical uptake. In future work the framework will be applied to perform an "all-by-all" prediction analysis to assess the observational data prediction domain across numerous target populations, outcomes and time-at-risk settings.
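
    The five framework steps can be outlined generically as below. This is only an illustrative Python sketch with scikit-learn, not the OHDSI software described in the abstract (which is distributed as an R package); the column names and toy problem definition are assumptions.

```python
# Generic outline of the five standardized steps, mapped onto a toy example.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def run_prediction_example(cohort: pd.DataFrame):
    # (1) Define the problem: predict the assumed `outcome` column within the target cohort.
    y = cohort["outcome"]

    # (2) Select a suitable dataset: here, the rows of `cohort` extracted from one database.
    # (3) Construct candidate covariates from the observational data (assumed columns).
    X = cohort.drop(columns=["outcome"])

    # (4) Learn the predictive model on a training split.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # (5) Validate model performance (internal validation; external validation
    #     would repeat this step on a different database).
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    return model, auc
```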