
    Computer versus cardiologist: Is a machine learning algorithm able to outperform an expert in diagnosing a phospholamban p.Arg14del mutation on the electrocardiogram?

    Background: Phospholamban (PLN) p.Arg14del mutation carriers are known to develop dilated and/or arrhythmogenic cardiomyopathy, and typical electrocardiographic (ECG) features have been identified for diagnosis. Machine learning is a powerful tool in ECG analysis and has been shown to outperform cardiologists.
    Objectives: We aimed to develop machine learning and deep learning models to diagnose PLN p.Arg14del cardiomyopathy using ECGs and to evaluate their accuracy against that of an expert cardiologist.
    Methods: We included 155 adult PLN mutation carriers and 155 age- and sex-matched control subjects. Twenty-one PLN mutation carriers (13.4%) were classified as symptomatic (symptoms of heart failure or malignant ventricular arrhythmias). The data set was split into training and testing sets using 4-fold cross-validation. Multiple models were developed to discriminate between PLN mutation carriers and control subjects. For comparison, expert cardiologists classified the same data set. The best-performing models were validated on an external PLN p.Arg14del mutation carrier data set from Murcia, Spain (n = 50). We applied occlusion maps to visualize the ECG regions that contributed most to the classification.
    Results: In terms of specificity, expert cardiologists (0.99) outperformed all models (range 0.53–0.81). In terms of sensitivity and accuracy, the experts (0.28 and 0.64, respectively) were outperformed by all models (sensitivity range 0.65–0.81). T-wave morphology was most important for the classification of PLN p.Arg14del carriers. External validation showed comparable results, with the best model outperforming the experts.
    Conclusion: This study shows that machine learning can outperform experienced cardiologists in the diagnosis of PLN p.Arg14del cardiomyopathy and suggests that the shape of the T wave is of added importance to this diagnosis.
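As an illustration of the occlusion-map idea described above, here is a minimal sketch, assuming a trained binary classifier `model` with a scikit-learn-style `predict_proba` and fixed-length single-lead inputs; the window size and zero-masking are illustrative choices, not the paper's actual configuration:

```python
# Hedged sketch of an occlusion map for a 1-D ECG input.
import numpy as np

def occlusion_map(model, ecg, window=20):
    """Score each region of `ecg` by the drop in the predicted probability
    of the positive class when that region is masked out."""
    base = model.predict_proba(ecg[None, :])[0, 1]
    importance = np.zeros_like(ecg, dtype=float)
    for start in range(0, len(ecg) - window + 1, window):
        masked = ecg.copy()
        masked[start:start + window] = 0.0               # occlude one region
        prob = model.predict_proba(masked[None, :])[0, 1]
        importance[start:start + window] = base - prob   # larger drop = more important
    return importance
```

Regions with the highest scores (per the abstract, plausibly the T wave) are the ones the model relies on most.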

    Anomaly Detection from Low-dimensional Latent Manifolds with Home Environmental Sensors

    Human Activity Recognition poses a significant challenge within Active and Assisted Living (AAL) systems, which rely extensively on ubiquitous environmental sensor-based acquisition devices to detect user situations in daily living. Environmental measurement systems deployed indoors yield multiparametric data in heterogeneous formats, which presents a challenge for developing Machine Learning-based AAL models. We hypothesized that anomaly detection algorithms could be effectively employed to create data-driven models for monitoring home environments, and that the complex multiparametric indoor measurements can often be represented by a relatively small number of latent variables generated through Manifold Learning (MnL) techniques. We examined both linear (Principal Component Analysis) and non-linear (AutoEncoders) techniques for generating these latent spaces, as well as the utility of core domain detection techniques for identifying anomalies within the resulting low-dimensional manifolds. We benchmarked this approach using three publicly available datasets (hh105, Aruba, and Tulum) and one proprietary dataset (Elioth) for home environmental monitoring. Our results demonstrated the following key findings: (a) nonlinear manifold estimation techniques offer significant advantages in retrieving latent variables when compared to linear techniques; (b) the quality of the reconstruction of the original multidimensional recordings serves as an acceptable indicator of the quality of the generated latent spaces; (c) domain detection identifies regions of normality consistent with typical individual activities in these spaces; and (d) the system effectively detects deviations from typical activity patterns and labels anomalies. This study lays the groundwork for further exploration of enhanced methods for extracting information from MnL data models and their application within AAL and possibly other sectors.
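The autoencoder branch of such a pipeline can be sketched compactly. The following is a minimal illustration, not the paper's implementation: a small bottlenecked MLP is trained to reproduce stand-in sensor vectors, and deviations are flagged by reconstruction error; the channel count, layer sizes, and threshold are all assumptions.

```python
# Hedged sketch: latent-manifold anomaly detection via reconstruction error.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))        # stand-in for 12 environmental channels

Xs = StandardScaler().fit_transform(X)

# An MLP trained to reproduce its input; the 3-unit bottleneck plays the
# role of the low-dimensional latent manifold (PCA is the linear analogue).
ae = MLPRegressor(hidden_layer_sizes=(8, 3, 8), max_iter=2000, random_state=0)
ae.fit(Xs, Xs)

recon_err = np.mean((Xs - ae.predict(Xs)) ** 2, axis=1)
threshold = np.quantile(recon_err, 0.99)   # illustrative cutoff
anomalies = np.where(recon_err > threshold)[0]
print(f"{len(anomalies)} samples flagged as anomalous")
```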

    Generalization and Regularization for Inverse Cardiac Estimators

    Electrocardiographic Imaging (ECGI) aims to estimate the intracardiac potentials noninvasively, allowing clinicians to better visualize and understand many arrhythmia mechanisms. Most estimators of epicardial potentials use a signal model based on an estimated spatial transfer matrix together with Tikhonov regularization techniques, which works especially well in simulations but can yield limited accuracy on some real data. Based on the quasi-electrostatic potential superposition principle, we propose a simple signal model that supports the implementation of principled out-of-sample algorithms for several of the most widely used regularization criteria in ECGI problems, hence improving the generalization capabilities of several current estimation methods. Experiments on simple cases (cylindrical and Gaussian shapes, scrutinizing fast and slow changes, respectively) and on real data (torso tank measurements from Utah University, and animal torso and epicardium measurements from Maastricht University, both in the EDGAR public repository) show that the superposition-based out-of-sample tuning of regularization parameters promotes stabilized estimation errors of the unknown source potentials, while slightly increasing the re-estimation error on the measured data, as is natural in non-overfitted solutions. The superposition signal model can be used for designing adequate out-of-sample tuning of Tikhonov regularization techniques, and it can be taken into account when using other regularization techniques in current commercial systems and research toolboxes on ECGI.
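For context, the zero-order Tikhonov estimator referenced here solves x_hat = argmin_x ||Ax - b||^2 + lambda^2 ||x||^2, where A is the spatial transfer matrix and b the torso measurements. Below is a minimal sketch of out-of-sample tuning of lambda by holding out a subset of electrodes; it is only in the spirit of the abstract, not the paper's superposition-based scheme:

```python
# Hedged sketch: zero-order Tikhonov with held-out-electrode lambda selection.
import numpy as np

def tikhonov(A, b, lam):
    """x = argmin ||A x - b||^2 + lam^2 ||x||^2 (zero-order Tikhonov)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def pick_lambda(A, b, lambdas, held_out):
    """Fit on the remaining electrodes, score the residual on held-out ones."""
    mask = np.ones(A.shape[0], dtype=bool)
    mask[held_out] = False
    errs = [np.linalg.norm(A[held_out] @ tikhonov(A[mask], b[mask], lam)
                           - b[held_out])
            for lam in lambdas]
    return lambdas[int(np.argmin(errs))]
```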

    Manifold analysis of the P-wave changes induced by pulmonary vein isolation during cryoballoon procedure

    Background/Aim: In atrial fibrillation (AF) ablation procedures, it is desirable to know whether a proper disconnection of the pulmonary veins (PVs) was achieved. We hypothesize that information about their isolation could be provided by analyzing changes in the P-wave after ablation. Thus, we present a method to detect PV disconnection using P-wave signal analysis.
    Methods: Conventional P-wave feature extraction was compared to an automatic feature extraction procedure based on creating low-dimensional latent spaces for cardiac signals with the Uniform Manifold Approximation and Projection (UMAP) method. A database of patients (19 controls and 16 AF individuals who underwent a PV ablation procedure) was collected. The standard 12-lead ECG was recorded, and P-waves were segmented and averaged to extract conventional features (duration, amplitude, and area) and their manifold representations provided by UMAP in a 3-dimensional latent space. A virtual patient was used to further validate these results and to study the spatial distribution of the extracted characteristics over the whole torso surface.
    Results: Both methods showed differences between P-waves before and after ablation. Conventional methods were more prone to noise, P-wave delineation errors, and inter-patient variability. P-wave differences were observed in the standard lead recordings; however, larger differences appeared in the torso region over the precordial leads. Recordings near the left scapula also yielded noticeable differences.
    Conclusions: P-wave analysis based on UMAP parameters detects PV disconnection after ablation in AF patients and is more robust than heuristic parameterization. Moreover, additional leads beyond the standard 12-lead ECG should be used to better detect PV isolation and possible future reconnections.
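A minimal sketch of the UMAP step is given below, assuming the umap-learn package and already segmented and averaged P-waves; the data here are synthetic stand-ins, and the centroid-shift readout is only an illustrative way to compare the pre- and post-ablation clouds:

```python
# Hedged sketch: embed averaged P-waves in a 3-D latent space with UMAP.
import numpy as np
import umap  # pip install umap-learn

rng = np.random.default_rng(1)
pwaves_pre  = rng.normal(size=(200, 120))            # stand-in averaged P-waves
pwaves_post = rng.normal(loc=0.3, size=(200, 120))   # stand-in post-ablation waves

reducer = umap.UMAP(n_components=3, random_state=1)
Z = reducer.fit_transform(np.vstack([pwaves_pre, pwaves_post]))

# Simple separability proxy: distance between the two centroids in latent space.
z_pre, z_post = Z[:200], Z[200:]
shift = np.linalg.norm(z_pre.mean(axis=0) - z_post.mean(axis=0))
print(f"Latent-space centroid shift after ablation: {shift:.2f}")
```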

    Symbolic Recurrence Analysis of RR Interval to Detect Atrial Fibrillation

    Atrial fibrillation (AF) is a sustained cardiac arrhythmia associated with stroke, heart failure, and related health conditions. Though easily diagnosed upon presentation in a clinical setting, the transient and/or intermittent emergence of AF episodes presents diagnostic and clinical monitoring challenges that would ideally be met with automated ambulatory monitoring and detection. Current approaches to address these needs, commonly available in both smartphone applications and dedicated technologies, combine electrocardiogram (ECG) sensors with predictive algorithms to detect AF. These methods typically require extensive preprocessing, preliminary signal analysis, and the integration of a wide and complex array of features for the detection of AF events, and are consequently vulnerable to overfitting. In this paper, we introduce the application of symbolic recurrence quantification analysis (SRQA) for the study of ECG signals and the detection of AF events, which requires minimal preprocessing and allows the construction of highly accurate predictive algorithms from relatively few features. In addition, this approach is robust against commonly encountered signal processing challenges that are expected in ambulatory monitoring contexts, including noisy and non-stationary data. We demonstrate the application of this method to yield a highly accurate predictive algorithm, which at optimal threshold values is 97.9% sensitive, 97.6% specific, and 97.7% accurate in classifying AF signals. To confirm the robust generalizability of this approach, we further evaluated its performance in a 10-fold cross-validation paradigm, yielding 97.4% accuracy. In sum, these findings emphasize the robust utility of SRQA for the analysis of ECG signals and the detection of AF. To the best of our knowledge, the proposed model is the first to incorporate symbolic analysis for AF beat detection.
    This research was funded by projects AIM, ref. TEC2016-76465-C2-1-R (AEI/FEDER, UE), and e-DIVITA, ref. 20509/PDC/18 (Proof of Concept, 2018), and it is the result of activity performed under the program Groups of Excellence of the Region of Murcia (Spain), the Fundación Séneca, Science and Technology Agency of the Region of Murcia, project under grant 19884/GERM/15, and ATENTO, ref. 20889/PI/18. All remaining errors are our responsibility.
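To make the SRQA idea concrete, here is a minimal sketch using ordinal-pattern symbolization of an RR-interval series and the recurrence rate as a single feature; the paper's exact symbolization and feature set are not reproduced, and the RR series below is a synthetic stand-in:

```python
# Hedged sketch: symbolic recurrence on an RR-interval series.
import numpy as np
from itertools import permutations

def symbolize(rr, m=3):
    """Map each window of m consecutive RR intervals to its ordinal pattern."""
    patterns = {p: i for i, p in enumerate(permutations(range(m)))}
    return np.array([patterns[tuple(np.argsort(rr[i:i + m]))]
                     for i in range(len(rr) - m + 1)])

def recurrence_rate(symbols):
    """Fraction of off-diagonal symbol pairs that recur (are equal)."""
    R = symbols[:, None] == symbols[None, :]
    n = len(symbols)
    return (R.sum() - n) / (n * (n - 1))

rr = np.random.default_rng(2).normal(0.8, 0.05, size=300)  # stand-in RR series (s)
print(f"Recurrence rate: {recurrence_rate(symbolize(rr)):.3f}")
```

The irregular RR dynamics of AF would be expected to change such recurrence statistics relative to sinus rhythm, which is what a downstream classifier can exploit.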

    Changes in f-wave characteristics during cryoballoon catheter ablation

    OBJECTIVE: Changes in ECG-derived parameters are studied in atrial fibrillation (AF) patients undergoing cryoballoon catheter ablation.
    APPROACH: Parameters characterizing f-wave frequency, morphology (by phase dispersion), and amplitude are estimated using a model-based statistical approach. These parameters are studied before, during, and after ablation, as well as by AF type (paroxysmal/persistent). Seventy-seven (49/28 paroxysmal/persistent) AF patients undergoing de novo catheter ablation are included in the study, of which 31 (16/15 paroxysmal/persistent) were in AF during the whole procedure. A signal quality index (SQI) is used to identify analyzable segments.
    MAIN RESULTS: The f-wave frequency decreased significantly during ablation (p = 0.001), in particular after ablation of the right inferior pulmonary vein (p < 0.05). Frequency and phase dispersion differed significantly between paroxysmal and persistent AF (p = 0.001 and p < 0.05, respectively).
    SIGNIFICANCE: This study demonstrates that a decrease in f-wave frequency can be distinguished during catheter ablation. The use of an SQI ensures reliable analysis and produces results significantly different from those obtained without an SQI.
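The f-wave frequency tracked here can be illustrated with a common surrogate: taking the Welch-PSD peak of an atrial residual signal in the physiological 4-12 Hz AF band. This is only a sketch; the paper uses a model-based statistical estimator, and the QRST cancellation that produces the atrial residual is not shown:

```python
# Hedged sketch: dominant f-wave frequency via Welch periodogram peak.
import numpy as np
from scipy.signal import welch

def fwave_frequency(atrial_signal, fs=1000.0):
    """Return the frequency (Hz) of the PSD peak in the 4-12 Hz AF band."""
    f, pxx = welch(atrial_signal, fs=fs, nperseg=int(4 * fs))
    band = (f >= 4.0) & (f <= 12.0)
    return f[band][np.argmax(pxx[band])]
```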

    Noise Maps for Quantitative and Clinical Severity Towards Long-Term ECG Monitoring

    Noise and artifacts are inherent contaminating components, particularly present in Holter electrocardiogram (ECG) monitoring. The presence of noise is even more significant in long-term monitoring (LTM) recordings, as these are collected over several days while patients follow their daily activities; hence, strong artifact components can temporarily impair clinical measurements from the LTM recordings. Traditionally, noise has been dealt with as a problem of removing undesirable components by means of quantitative signal metrics such as the signal-to-noise ratio (SNR), but current systems do not provide any information about the true impact of noise on the ECG clinical evaluation. As a first step towards an alternative to classical approaches, this work assesses ECG quality under the assumption that an ECG has good quality when it is clinically interpretable. Our hypotheses are therefore that it is possible (a) to create a clinical severity score for the effect of noise on the ECG, (b) to characterize its consistency in terms of its temporal and statistical distribution, and (c) to use it for signal quality evaluation in LTM scenarios. For this purpose, a database of external event recorder (EER) signals was assembled and labeled from a clinical point of view for use as the gold standard of noise severity categorization. These devices are assumed to capture the signal segments most prone to noise corruption over long-term periods. The ECG noise is then characterized by comparing these clinical severity criteria with conventional quantitative metrics taken from traditional noise-removal approaches, and noise maps are proposed as a novel representation tool to achieve this comparison. Our results showed that none of the benchmarked quantitative noise measurement criteria represents an accurate enough estimate of the clinical severity of the noise. A case study of long-term ECG is reported, showing the statistical and temporal correspondences and properties with respect to the EER signals used to create the gold standard for clinical noise. The proposed noise maps, together with the statistical consistency of the noise clinical severity characterization, pave the way towards systems that report the clinical severity of noise, allowing the user to process different ECG segments with different techniques and in terms of different measured clinical parameters.
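As a point of comparison for the conventional quantitative metrics discussed above, a per-segment noise profile over a long recording can be sketched as follows; the band-pass "signal" versus residual "noise" split and the segment length are illustrative assumptions, and the paper's clinically labeled severity score is precisely not reducible to such a formula:

```python
# Hedged sketch: a segment-wise SNR profile ("map") over a long ECG record.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def segment_snr_map(ecg, fs=250.0, seg_s=10.0):
    """Return one SNR value (dB) per non-overlapping segment of `ecg`."""
    sos = butter(4, [0.5, 40.0], btype="bandpass", fs=fs, output="sos")
    clean = sosfiltfilt(sos, ecg)      # crude in-band "signal" estimate
    noise = ecg - clean                # out-of-band residual as "noise"
    n = int(seg_s * fs)
    snrs = []
    for i in range(0, len(ecg) - n + 1, n):
        ps = np.mean(clean[i:i + n] ** 2)
        pn = np.mean(noise[i:i + n] ** 2) + 1e-12   # avoid division by zero
        snrs.append(10 * np.log10(ps / pn))
    return np.array(snrs)
```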

    On the Beat Detection Performance in Long-Term ECG Monitoring Scenarios

    Despite the wide literature on R-wave detection algorithms for ECG Holter recordings, long-term monitoring applications bring new requirements, and it is not clear that existing methods can be used straightforwardly in those scenarios. Our aim in this work was twofold: first, we scrutinized the scope and limitations of existing methods for Holter monitoring when moving to long-term monitoring; second, we proposed and benchmarked a beat detection method with adequate accuracy and usefulness in long-term scenarios. A longitudinal study was conducted with the most widely used waveform analysis algorithms, which allowed us to tune the free parameters of the required blocks, and a transversal study analyzed how these parameters change when moving to different databases. With all the above, the extension to long-term monitoring was proposed and analyzed in a database of 7-day Holter recordings, using optimized simultaneous multilead processing. We considered both our own and public databases. In this new scenario, noise-avoidance mechanisms are more important due to the amount of noise in these recordings; moreover, computational efficiency is a key parameter for exporting the algorithm to clinical practice. The method based on a Polling function outperformed the others in terms of accuracy and computational efficiency, yielding 99.48% sensitivity, 99.54% specificity, 99.69% positive predictive value, 99.46% accuracy, and 0.85% error on the MIT-BIH arrhythmia database. We conclude that the method can be used in long-term Holter monitoring systems.
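The general idea suggested by a polling approach (fusing per-lead detections by agreement) can be sketched as follows; this is not the paper's Polling function, and the tolerance window and vote threshold are assumptions:

```python
# Hedged sketch: fuse per-lead R-peak detections by majority polling.
import numpy as np

def poll_beats(per_lead_peaks, fs=250.0, tol_s=0.10, min_votes=2):
    """per_lead_peaks: list of arrays of R-peak sample indices, one per lead.
    Peaks from different leads within `tol_s` seconds are grouped; groups
    with at least `min_votes` supporting detections are kept as fused beats."""
    all_peaks = np.sort(np.concatenate(per_lead_peaks))
    tol = int(tol_s * fs)
    fused, i = [], 0
    while i < len(all_peaks):
        j = i
        while j + 1 < len(all_peaks) and all_peaks[j + 1] - all_peaks[i] <= tol:
            j += 1
        if j - i + 1 >= min_votes:                    # enough leads agree
            fused.append(int(np.median(all_peaks[i:j + 1])))
        i = j + 1
    return np.array(fused)
```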

    Electrocardiographic fragmented activity (II): a machine learning approach to detection

    Hypertrophic cardiomyopathy is, given its prevalence, a comparatively common disease related to the risk of sudden cardiac death, heart failure, and stroke. The illness is characterized by excessive deposition of collagen among healthy myocardial cells. This situation, medically known as fibrosis, creates effective conduction obstacles in the electrical path of the myocardium and, when severe enough, can appear as additional peaks or notches in the QRS, clinically termed fragmentation. Nowadays, fragmentation detection is performed by visual inspection, but a fragmented QRS can be confused with the noise present in the electrocardiogram (ECG). Fibrosis detection, on the other hand, is performed by magnetic resonance imaging with late gadolinium enhancement, the main drawback of this technique being its cost in terms of time and money. In this work, we propose two automatic algorithms, one for fragmented QRS detection and another for fibrosis detection. For this purpose, we used four different databases: the surrogate database described in the companion paper and three additional ones, one composed of more accurate surrogate ECG signals and two composed of real, affected subjects as labeled by expert clinicians. The first real-world database contains fragmented QRS records and the second contains records with fibrosis; both were recorded at Hospital Clínico Universitario Virgen de la Arrixaca (Spain). To analyze the scope of these datasets in depth, we benchmarked several classifiers, namely Neural Networks, Support Vector Machines (SVM), Decision Trees, and Gaussian Naïve Bayes (NB). For the fragmentation dataset, the best results were 0.94 sensitivity, 0.88 specificity, 0.89 positive predictive value, 0.93 negative predictive value, and 0.91 accuracy, obtained with an SVM with Gaussian kernel. For the fibrosis databases, more limited accuracy was reached, with 0.47 sensitivity, 0.91 specificity, 0.82 positive predictive value, 0.66 negative predictive value, and 0.70 accuracy, obtained with Gaussian NB. Nevertheless, this is the first time that fibrosis detection has been attempted automatically from ECG postprocessing, paving the way towards improved algorithms and methods for it. We can therefore conclude that the proposed techniques could offer clinicians a valuable tool to support both fragmentation and fibrosis diagnoses.
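The benchmarking described above can be sketched with the two classifiers the abstract singles out; the feature matrix and labels below are synthetic stand-ins for the QRS-derived features and expert labels, and the cross-validation setup is an assumption:

```python
# Hedged sketch: benchmark a Gaussian-kernel SVM and Gaussian Naive Bayes.
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 10))          # stand-in QRS-derived features
y = rng.integers(0, 2, size=300)        # stand-in fragmentation labels

for name, clf in [("SVM (Gaussian kernel)", SVC(kernel="rbf", gamma="scale")),
                  ("Gaussian NB", GaussianNB())]:
    y_hat = cross_val_predict(clf, X, y, cv=5)
    tn, fp, fn, tp = confusion_matrix(y, y_hat).ravel()
    print(f"{name}: sensitivity={tp / (tp + fn):.2f} "
          f"specificity={tn / (tn + fp):.2f}")
```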