
    Detection of atrial fibrillation episodes in long-term heart rhythm signals using a support vector machine

    Atrial fibrillation (AF) is a serious heart arrhythmia that significantly increases the risk of ischemic stroke. Clinically, an AF episode is recognized in an electrocardiogram. However, detection of asymptomatic AF, which requires long-term monitoring, is more efficient when based on the irregularity of beat-to-beat intervals estimated from heart rate (HR) features. Automated classification of heartbeats into AF and non-AF classes by means of a Lagrangian Support Vector Machine (LSVM) is proposed. The classifier input vector consists of sixteen features, including four coefficients that are very sensitive to beat-to-beat heart changes, adopted from fetal heart rate analysis in perinatal medicine. The effectiveness of the proposed classifier has been verified on the MIT-BIH Atrial Fibrillation Database. Designing the LSVM classifier with a very large number of feature vectors requires extreme computational effort; therefore, an original approach is proposed to determine a training set of the smallest possible size that still guarantees high-quality AF detection, yielding satisfactory results using only 1.39% of all heartbeats as training data. A post-processing stage based on aggregating classified heartbeats into AF episodes provides more reliable information on patient risk. Results obtained during the testing phase showed a sensitivity of 98.94%, a positive predictive value of 98.39%, and a classification accuracy of 98.86%.
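    The beat-wise classification idea lends itself to a compact illustration. The Python sketch below trains a generic RBF-kernel SVM (scikit-learn's SVC, standing in for the paper's Lagrangian SVM) on a few hypothetical RR-interval irregularity features computed over a sliding window; the three window statistics are illustrative stand-ins for the study's sixteen features, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

def rr_features(rr, window=8):
    """Per-beat irregularity features from a sliding window of RR intervals (seconds)."""
    feats = []
    for i in range(window, len(rr)):
        w = rr[i - window:i]
        diffs = np.abs(np.diff(w))
        feats.append([
            np.std(w),              # overall RR spread in the window
            np.mean(diffs),         # mean absolute successive difference
            np.mean(diffs > 0.05),  # fraction of successive changes > 50 ms
        ])
    return np.array(feats)

# Synthetic stand-in data: steady sinus rhythm vs. irregular, AF-like intervals.
rng = np.random.default_rng(0)
rr_sinus = 0.80 + 0.02 * rng.standard_normal(500)
rr_af = 0.80 + 0.15 * rng.standard_normal(500)

X = np.vstack([rr_features(rr_sinus), rr_features(rr_af)])
y = np.repeat([0, 1], len(X) // 2)  # 0 = non-AF beat, 1 = AF beat
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```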

    Computer Aided ECG Analysis - State of the Art and Upcoming Challenges

    In this paper we present current achievements in computer-aided ECG analysis and their applicability in the real-world medical diagnosis process. Most current work covers noise removal, heartbeat detection, and rhythm-based analysis. There are some advancements in the detection of particular ECG segments and in beat classification, but with limited evaluation and without clinical approval. This paper presents state-of-the-art advancements in those areas up to the present day. Besides this short computer science and signal processing literature review, the paper covers future challenges regarding ECG signal morphology analysis that derive from a review of the medical literature. The paper concludes with identified gaps in current advancements and testing, upcoming challenges for future research, and a suggested bullseye test for evaluating morphology analysis.
    Comment: 7 pages, 3 figures, IEEE EUROCON 2013 International Conference on Computer as a Tool, 1-4 July 2013, Zagreb, Croatia

    Review and classification of variability analysis techniques with clinical applications

    Analysis of patterns of variation of time-series, termed variability analysis, represents a rapidly evolving discipline with increasing applications in different fields of science. In medicine, and in particular critical care, efforts have focussed on evaluating the clinical utility of variability. However, the growth and complexity of techniques applicable to this field have made the interpretation and understanding of variability more challenging. Our objective is to provide an updated review of variability analysis techniques suitable for clinical applications. We review more than 70 variability techniques, providing for each a brief description of the underlying theory and assumptions, together with a summary of clinical applications. We propose a revised classification of the domains of variability techniques: statistical, geometric, energetic, informational, and invariant. We discuss the process of calculation, which often necessitates a mathematical transform of the time-series. Our aims are to summarize a broad literature and to promote a shared vocabulary that would improve the exchange of ideas and the comparison of results between different studies. We conclude with challenges for the evolving science of variability analysis.
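    To make the domain classification concrete, here is a minimal Python sketch of one representative measure from three of the proposed domains, computed on an RR-interval series: SDNN (statistical), Poincaré SD1/SD2 (geometric), and sample entropy (informational). The parameter choices (m = 2, r = 0.2·SD) are common defaults assumed here, not prescriptions from the review.

```python
import numpy as np

def sdnn(rr):
    """Statistical domain: standard deviation of the RR intervals."""
    return np.std(rr, ddof=1)

def poincare_sd1_sd2(rr):
    """Geometric domain: dispersion of the Poincare plot (RR[i] vs RR[i+1])
    perpendicular to (SD1) and along (SD2) the line of identity."""
    x, y = rr[:-1], rr[1:]
    sd1 = np.std((y - x) / np.sqrt(2), ddof=1)
    sd2 = np.std((y + x) / np.sqrt(2), ddof=1)
    return sd1, sd2

def sample_entropy(x, m=2, r=None):
    """Informational domain: -log of the conditional probability that runs
    matching within tolerance r for m points also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    def matches(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length)])
        n = 0
        for i in range(len(t)):
            n += np.sum(np.max(np.abs(t - t[i]), axis=1) <= r) - 1  # no self-match
        return n
    return -np.log(matches(m + 1) / matches(m))

rr = 0.8 + 0.05 * np.random.default_rng(0).standard_normal(300)
print(sdnn(rr), poincare_sd1_sd2(rr), sample_entropy(rr))
```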

    Design, Evaluation, and Application of Heart Rate Variability Analysis Software (HRVAS)

    The analysis of heart rate variability (HRV) has become an increasingly popular and important tool for studying many disease pathologies over the past twenty years. HRV analyses are methods used to non-invasively quantify variability within heart rate. The purposes of this study were to design, evaluate, and apply an easy-to-use, open-source HRV analysis software package (HRVAS). HRVAS implements four major categories of HRV techniques: statistical and time-domain analysis, frequency-domain analysis, nonlinear analysis, and time-frequency analysis. Software evaluation was accomplished by performing HRV analysis on simulated and public congestive heart failure (CHF) data. Application of HRVAS included studying the effects of hyperaldosteronism on HRV in rats. Simulation and CHF results demonstrated that HRVAS is a dependable HRV analysis tool. Results from the rat hyperaldosteronism model showed that 5 of 26 HRV measures were statistically significant (p < 0.05). HRVAS provides a useful HRV analysis tool for researchers.
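    As a taste of what the frequency-domain category involves, here is a minimal Python sketch that resamples an RR tachogram onto an even grid and integrates Welch spectral power over the conventional LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands. The 4 Hz resampling rate and band edges are common conventions assumed here, not details taken from the HRVAS paper.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf(rr, fs=4.0):
    """LF and HF spectral power of an RR-interval series (seconds)."""
    t = np.cumsum(rr)                                # beat times
    t_even = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = interp1d(t, rr, kind="cubic")(t_even)  # evenly resampled tachogram
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, len(rr_even)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(pxx[lf_band], f[lf_band])
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return lf, hf, lf / hf

rr = 0.8 + 0.05 * np.random.default_rng(1).standard_normal(600)
print(lf_hf(rr))
```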

    Antepartum fetal heart rate feature extraction and classification using empirical mode decomposition and support vector machine

    Background: Cardiotocography (CTG) is the most widely used tool for fetal surveillance. The visual analysis of fetal heart rate (FHR) traces largely depends on the expertise and experience of the clinician involved. Several approaches have been proposed for the effective interpretation of FHR. In this paper, a new approach to FHR feature extraction based on empirical mode decomposition (EMD) is proposed and used along with a support vector machine (SVM) to classify FHR recordings as 'normal' or 'at risk'.
    Methods: FHR signals were recorded from 15 subjects at a sampling rate of 4 Hz, and a dataset of 90 randomly selected records of 20 minutes' duration was formed from these. All records were labelled as 'normal' or 'at risk' by two experienced obstetricians. A training set was formed from 60 records, with the remaining 30 left as the testing set. The standard deviations of the EMD components are input as features to the SVM to classify FHR samples.
    Results: For the training set, a five-fold cross-validation test resulted in an accuracy of 86%, whereas the overall geometric mean of sensitivity and specificity was 94.8%. The Kappa value for the training set was 0.923. Application of the proposed method to the testing set (30 records) resulted in a geometric mean of 81.5%. The Kappa value for the testing set was 0.684.
    Conclusions: Based on the overall performance of the system, the proposed methodology is a promising new approach to the feature extraction and classification of FHR signals.
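    The feature-extraction step described above is simple enough to sketch. The Python fragment below, assuming the PyEMD package (installed as EMD-signal) for the decomposition, takes the standard deviation of each intrinsic mode function as the feature vector fed to an SVM; the fixed IMF count, synthetic signal, and RBF kernel are illustrative assumptions, not choices confirmed by the paper.

```python
import numpy as np
from PyEMD import EMD          # pip install EMD-signal (assumed library)
from sklearn.svm import SVC

def emd_std_features(fhr, n_imfs=5):
    """Standard deviation of the first n_imfs intrinsic mode functions."""
    imfs = EMD()(np.asarray(fhr, dtype=float))
    feats = np.std(imfs[:n_imfs], axis=1)
    # Zero-pad if the decomposition produced fewer than n_imfs IMFs.
    return np.pad(feats, (0, max(0, n_imfs - len(feats))))

# Synthetic 20-minute FHR record at 4 Hz (4800 samples), in beats per minute.
rng = np.random.default_rng(2)
t = np.arange(4800) / 4.0
fhr = 140 + 5 * np.sin(2 * np.pi * t / 120) + 2 * rng.standard_normal(4800)
print(emd_std_features(fhr))

# With real data: X = [emd_std_features(rec) for rec in records], then
# SVC(kernel="rbf").fit(X, y) with y in {'normal', 'at risk'}.
```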

    Artificial intelligence based ECG signal classification of sedentary, smokers and athletes

    The current study deals with the design of a computer-aided diagnosis procedure to classify three groups of people with different lifestyles: sedentary, smoker, and athlete. The ECG classification is based on statistical analysis of HRV and ECG features. Heart rate variability (HRV) parameters and statistical ECG features were used for pattern recognition in artificial intelligence classifiers. The ECG was recorded for a fixed duration using an EKG sensor. The HRV, time-domain, and wavelet parameters were calculated using the NI Biomedical Startup Kit 3.0 and LabVIEW 2010. The important HRV, time-domain, and wavelet features were selected by statistical non-linear classifiers (CART and BT), and these parameters were fed as input to artificial intelligence classifiers, namely an artificial neural network (ANN) and a support vector machine (SVM), which were used to classify 60 ECG signals. It was observed that the multilayer perceptron (MLP) based ANN classifier gives an accuracy of 95%, the highest among the classifiers. The HRV study implies that the time-domain parameters (RMSSD and pNN50), frequency-domain parameters (HF power and LF/HF peak), the Poincaré parameter (SD1), and the geometric parameters (RR triangular index and TINN) are higher in the athlete class and lower in the smoker class. Higher values of these HRV parameters indicate increased parasympathetic activity and decreased sympathetic activity of the ANS. This indicates that the athlete class has better health and a lower chance of cardiovascular disease, whereas the smoker class has a high chance of cardiovascular disease. The HRV parameters of the sedentary class were higher than those of the smoker class but lower than those of the athlete class, indicating a lower chance of cardiovascular disease in the sedentary class compared with the smoker class.
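    The final classification stage maps a per-subject HRV feature vector to one of the three lifestyle classes. The Python sketch below, assuming the feature vectors have already been extracted, uses scikit-learn's MLPClassifier as a stand-in for the paper's MLP ANN, on synthetic stand-in data shaped like the study's 60 subjects.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 60 subjects x 7 HRV features (e.g. RMSSD, pNN50, HF power,
# LF/HF, SD1, RR triangular index, TINN); 0 = smoker, 1 = sedentary, 2 = athlete,
# shifted so class means mimic the reported smoker < sedentary < athlete trend.
rng = np.random.default_rng(3)
y = np.repeat([0, 1, 2], 20)
X = rng.standard_normal((60, 7)) + y[:, None]

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,),
                                  max_iter=2000, random_state=0))
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```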

    Machine learning on cardiotocography data to classify fetal outcomes: A scoping review

    Introduction: Uterine contractions during labour constrict maternal blood flow and oxygen delivery to the developing baby, causing transient hypoxia. While most babies are physiologically adapted to withstand such intrapartum hypoxia, those exposed to severe hypoxia, or with poor physiological reserves, may experience neurological injury or death during labour. Cardiotocography (CTG) monitoring was developed to identify babies at risk of hypoxia by detecting changes in fetal heart rate (FHR) patterns. CTG monitoring is in widespread use in intrapartum care for the detection of fetal hypoxia, but its clinical utility is limited by the relatively poor positive predictive value (PPV) of an abnormal CTG and by significant inter- and intra-observer variability in CTG interpretation. Clinical risk and human factors may also affect the quality of CTG interpretation. Misclassification of CTG traces may lead to under-treatment (with the risk of fetal injury or death) or over-treatment (which may include unnecessary operative interventions that put both mother and baby at risk of complications). Machine learning (ML) has been applied to this problem since the early 2000s and has shown potential to predict fetal hypoxia more accurately than visual interpretation of CTG alone. To consider how these tools might be translated into clinical practice, we conducted a review of ML techniques already applied to CTG classification and identified research gaps requiring investigation in order to progress towards clinical implementation.
    Materials and methods: We used identified keywords to search PubMed, EMBASE and IEEE Xplore for relevant publications. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). Title, abstract and full text were screened according to the inclusion criteria.
    Results: We included 36 studies that used signal processing and ML techniques to classify CTG. Most studies used an open-access CTG database and predominantly used fetal metabolic acidosis, at varying pH thresholds, as the benchmark for hypoxia. Various methods were used to process CTG signals and extract features, and several ML algorithms were used to classify CTG. We identified significant concerns over the practicality of using varying pH levels as the CTG classification benchmark. Furthermore, the findings are difficult to generalise, as most studies drew on the same database, with a number of subjects that is low for an ML study.
    Conclusion: ML studies demonstrate potential in predicting fetal hypoxia from CTG. However, more diverse datasets, standardisation of hypoxia benchmarks, and enhancement of algorithms and features are needed before future clinical implementation.
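    One of the review's central concerns, the sensitivity of the ground truth to the chosen pH cut-off, is easy to demonstrate. This hypothetical Python fragment labels the same set of umbilical cord pH values as hypoxic or not under three thresholds in the range commonly seen across studies; the values and thresholds are illustrative, not drawn from the reviewed papers.

```python
import numpy as np

# Hypothetical umbilical cord pH measurements for six delivery records.
ph = np.array([7.32, 7.18, 7.09, 7.25, 7.02, 7.15])

# The same records receive different 'hypoxia' labels under each threshold,
# so classifiers trained against different cut-offs learn different targets.
for threshold in (7.05, 7.15, 7.20):
    labels = (ph < threshold).astype(int)  # 1 = labelled hypoxic
    print(f"threshold {threshold}: {labels.sum()} of {len(ph)} records positive")
```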

    Doctor of Philosophy in Computing

    Statistical shape analysis has emerged as an important tool for the quantitative analysis of anatomy in many medical imaging applications. The correspondence-based approach to evaluating shape variability is a popular method, based on comparing configurations of carefully placed landmarks on each shape. In recent years, methods for the automatic placement of landmarks have enhanced the ability of this approach to capture statistical properties of shape populations. However, biomedical shapes continue to present considerable difficulties for automatic correspondence optimization due to their inherent geometric complexity and the need to correlate shape change with underlying biological parameters. This dissertation addresses these technical difficulties and presents improved shape correspondence models.

    In particular, this dissertation builds on the particle-based modeling (PBM) framework described in Joshua Cates' 2010 Ph.D. dissertation. In the PBM framework, correspondences are modeled as a set of dynamic points, or a particle system, positioned automatically on shape surfaces by optimizing the entropy contained in the model, with the idea of balancing model simplicity against the accuracy of the particle-system representation of the shapes. This dissertation is a collection of four papers that extend the PBM framework to include shape regression and longitudinal analysis, and that add new methods to improve the modeling of complex shapes. It also includes a summary of two applications from the field of orthopaedics.

    Technical details of the PBM framework are provided in Chapter 2, after which the first topic, the study of shape change over time, is addressed (Chapters 3 and 4). In analyses of normative growth or disease progression, shape regression models allow characterization of the underlying biological process while also facilitating comparison of a sample against a normative model. The first paper introduces a shape regression model into the PBM framework to characterize shape variability due to an underlying biological parameter, and confirms the statistical significance of this relationship via systematic permutation testing. Simple regression models are, however, not sufficient to leverage the information provided by longitudinal studies, which collect data at multiple time points for each participant and have the potential to provide a rich picture of the anatomical changes occurring during development, disease progression, or recovery. The second paper presents a linear mixed-effects (LME) shape model in order to fully leverage the high-dimensional, complex features provided by longitudinal data; the parameters of the LME shape model are estimated in a hierarchical manner within the PBM framework.

    The topic of geometric complexity in certain biological shapes is addressed next (Chapters 5 and 6). Certain biological shapes are inherently complex and highly variable, inhibiting correspondence-based methods from producing a faithful representation of the average shape. In the PBM framework, the use of Euclidean distances leads to incorrect particle-system interactions, while a position-only representation leads to incorrect correspondences around sharp features across shapes. The third paper extends the PBM framework to use efficiently computed geodesic distances and adds an entropy term based on the surface normal. The fourth paper further replaces the position-only representation with a more robust distance-from-landmark feature to obtain isometry-invariant correspondences.

    Finally, the above methods are applied to two applications from the field of orthopaedics. The first uses correspondences across an ensemble of human femurs to characterize morphological shape differences due to femoroacetabular impingement. The second investigates the short-bone phenotype apparent in mouse models of multiple osteochondromas: metaphyseal volume deviations are correlated with deviations in length to quantify the effect of cancer on the apparent shortening of long bones (femur, tibia-fibula) in mouse models.
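    For readers new to correspondence-based shape statistics, the following Python sketch shows the kind of analysis a PBM-style correspondence model enables downstream: Procrustes-align sets of corresponding points, then run PCA to find the dominant modes of shape variation. It illustrates only the consumption of correspondences; the entropy-based particle optimization at the heart of the PBM framework is not reproduced here, and the data are synthetic.

```python
import numpy as np

def procrustes_align(shapes):
    """Center and scale each (n_points, dim) shape, then rotate it onto the first."""
    aligned, ref = [], None
    for s in shapes:
        s = s - s.mean(axis=0)            # remove translation
        s = s / np.linalg.norm(s)         # remove scale
        if ref is None:
            ref = s
        else:
            u, _, vt = np.linalg.svd(s.T @ ref)
            s = s @ (u @ vt)              # optimal rotation onto the reference
        aligned.append(s)
    return np.stack(aligned)

# Synthetic ensemble: 20 shapes, each with 64 corresponding 3-D points.
rng = np.random.default_rng(4)
base = rng.standard_normal((64, 3))
shapes = [base + 0.05 * rng.standard_normal((64, 3)) for _ in range(20)]

flat = procrustes_align(shapes).reshape(20, -1)
flat -= flat.mean(axis=0)                 # center in shape space
_, svals, _ = np.linalg.svd(flat, full_matrices=False)
explained = svals**2 / np.sum(svals**2)
print("variance explained by the first 3 modes:", explained[:3])
```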