
    Fast T wave detection calibrated by clinical knowledge with annotation of P and T waves

    There are limited studies on the automatic detection of T waves in arrhythmic electrocardiogram (ECG) signals, perhaps because no arrhythmia dataset with annotated T waves is available. There is a growing need to develop numerically efficient algorithms that can accommodate the new trend of battery-driven ECG devices, and to analyze long-term recorded signals in a reliable and time-efficient manner, thereby improving the diagnostic ability of mobile devices and point-of-care technologies. Here, a T wave annotation of the well-known MIT-BIH Arrhythmia Database is discussed and provided. Moreover, a simple, fast method for detecting T waves is introduced: a typical T wave detection method is reduced to a basic approach consisting of two moving averages and dynamic thresholds. The dynamic thresholds were calibrated using four clinically known types of sinus node response to atrial premature depolarization (compensation, reset, interpolation and reentry). T wave peaks were determined and the proposed algorithm evaluated on two well-known databases, the QT and MIT-BIH Arrhythmia databases. The detector obtained a sensitivity of 97.14% and a positive predictivity of 99.29% over the first lead of the validation databases (221,186 beats in total). We present a simple yet very reliable T wave detection algorithm that can potentially be implemented on mobile battery-driven devices; in contrast to complex methods, it can be easily implemented in a digital filter design. Mohamed Elgendi, Bjoern Eskofier and Derek Abbott
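    The two-moving-average scheme described above can be sketched as follows. Window lengths, the squaring step and the `beta` offset are illustrative assumptions of this sketch, not the paper's clinically calibrated values:

```python
import numpy as np

def moving_average(x, w):
    """Centered moving average with window length w (samples)."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def detect_t_waves(ecg, fs, w_peak=0.070, w_wave=0.140, beta=0.08):
    """Flag candidate T-wave regions by comparing a short moving average
    (event scale) against a longer one (wave scale) plus a dynamic offset
    derived from the overall signal energy; return peak sample indices."""
    squared = ecg ** 2                                  # emphasize wave energy
    ma_peak = moving_average(squared, int(w_peak * fs))
    ma_wave = moving_average(squared, int(w_wave * fs))
    threshold = ma_wave + beta * np.mean(squared)       # dynamic threshold
    blocks = ma_peak > threshold                        # blocks of interest
    peaks, start = [], None
    for i, flag in enumerate(blocks):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            # take the maximum inside each block as the T-wave peak
            peaks.append(start + int(np.argmax(ecg[start:i])))
            start = None
    return peaks
```

Only two convolutions and a threshold comparison per sample are needed, which is what makes this style of detector attractive for battery-driven devices.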

    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of the fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase the survival rate and reduce morbidity while allowing parents to make informed decisions. To study cardiac function, a variety of signals must be collected. In practice, several heart monitoring methods, such as electrocardiography (ECG) and photoplethysmography (PPG), are commonly performed. Although there are several methods for monitoring fetal and maternal health, research is currently underway to enhance the mobility, accuracy, automation, and noise resistance of these methods so they can be used extensively, even at home. Artificial Intelligence (AI) can help to design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research: The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) of signals and extract the desired signal from a noisy one with negative SNR (i.e., the power of the noise is greater than that of the signal). It is worth mentioning that ECG and PPG signals are sensitive to noise from a variety of sources, increasing the risk of misinterpretation and interfering with the diagnostic process. The noise typically arises from power line interference, white noise, electrode contact noise, muscle contraction, baseline wander, instrument noise, motion artifacts, and electrosurgical noise. Even a slight variation in the obtained ECG waveform can impair the understanding of the patient's heart condition and affect the treatment procedure. 
Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, such as the need for a noise or desired-signal model, tuning and calibration, and inefficiency when dealing with excessively noisy signals. Therefore, the final goal of this step is to develop a robust algorithm that can estimate noise, even when the SNR is negative, using the BSS method and remove it with an adaptive filter. The second objective concerns monitoring maternal and fetal ECG. Previous non-invasive methods used the maternal abdominal ECG (MECG) to extract the fetal ECG (FECG). These methods need to be calibrated to generalize well: for each new subject, calibration with a trusted device is required, which is difficult and time-consuming, and the calibration itself is susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks, to map MECG to FECG and vice versa. The advantage of the proposed DL method over state-of-the-art approaches, such as adaptive filters or blind source separation, is that it generalizes well to unseen subjects. Moreover, it does not need calibration, is not sensitive to the heart rate variability of the mother and fetus, and can handle low signal-to-noise ratio (SNR) conditions. Thirdly, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements is explored. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use. Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes those issues and, unlike other solutions, uses only the PPG signal. 
Using only PPG for blood pressure is more convenient since it requires only a single sensor on the finger, where acquisition is more resilient to movement-induced error. The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for FECG, where few publicly available datasets are annotated for each FECG beat. Therefore, we utilize active learning and transfer learning to train an FECG anomaly detection system with the fewest training samples and high accuracy. In this part, a model is first trained to detect ECG anomalies in adults; this model is then trained to detect anomalies in FECG. Only the more influential samples from the training set are selected for training, which minimizes the training effort. Because of physician shortages and rural geography, access to prenatal care can be limited, and remote monitoring might improve pregnant women's ability to receive it. Increased compliance with prenatal treatment and linked care amongst various providers are two possible benefits of remote monitoring. Maternal and fetal remote monitoring can be effective only if recorded signals are transmitted correctly. Therefore, the last objective is to design a compression algorithm that can compress signals (like ECG) with a higher ratio than the state of the art and perform decompression quickly without distortion. The proposed compression is fast thanks to the time-domain B-spline approach, and compressed data can be used for visualization and monitoring without decompression owing to the B-spline properties. Moreover, the stochastic optimization is designed to retain signal quality, achieving a high compression ratio without distorting the signal for diagnostic purposes. 
In summary, the components for creating an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a combination of all the tasks listed above. PPG and ECG recorded from the mother can be denoised using the deconvolution strategy. Compression can then be employed for transmitting the signals. The trained CycleGAN model can be used to extract FECG from MECG, and the model trained with active transfer learning can detect anomalies in both MECG and FECG. Simultaneously, maternal BP is retrieved from the PPG signal. This information can be used for monitoring the cardiac status of mother and fetus, and also for filling in reports such as the partogram
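The B-spline compression objective can be roughly illustrated as follows: fit a cubic spline with far fewer knots than samples, and store only knots and coefficients. Uniform knot spacing and `scipy` fitting are stand-in assumptions here; the thesis's stochastic optimization of knot placement is not reproduced:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def bspline_compress(signal, fs, knots_per_second=50):
    """Fit a cubic B-spline with far fewer knots than samples.
    The knot vector and spline coefficients form the compressed
    representation; evaluating the spline reconstructs the signal, so
    compressed data can be visualized without a decompression step."""
    x = np.arange(len(signal)) / fs
    n_interior = max(1, int(knots_per_second * x[-1]))
    # uniformly spaced interior knots (a simplification of the thesis,
    # which optimizes knot placement to preserve diagnostic quality)
    t = np.linspace(x[0], x[-1], n_interior + 2)[1:-1]
    spline = LSQUnivariateSpline(x, signal, t, k=3)
    return spline, x
```

The stored size is roughly `n_interior + 4` coefficients plus the knots for a cubic spline, against `len(signal)` raw samples, which is where the compression ratio comes from.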

    The hidden waves in the ECG uncovered: a sound automated interpretation method

    A novel approach for analysing cardiac rhythm data is presented in this paper. Heartbeats are decomposed into the five fundamental P, Q, R, S and T waves plus an error term to account for artefacts in the data, which provides a meaningful, physical interpretation of the heart's electrical system. The morphology of each wave is concisely described using four parameters that allow all the different patterns in heartbeats to be characterized and thus differentiated. This multi-purpose approach addresses such questions as the extraction of interpretable features, the detection of the fiducial marks of the fundamental waves, the generation of synthetic data and the denoising of signals. Yet the greatest benefit of this new approach may come from the automatic diagnosis of heart anomalies, as well as other clinical uses, with great advantages compared to the rigid, vulnerable and black-box machine learning procedures widely used in medical devices. The paper shows the enormous potential of the method in practice; specifically, the capability to discriminate subjects, characterize morphologies and detect the fiducial marks (reference points) is validated numerically using simulated and real data, proving that it outperforms its competitors
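    The decomposition idea can be sketched minimally: model each fundamental wave as a small parametric bump and fit the sum to a beat. Plain three-parameter Gaussians are used below for simplicity; the paper's actual four-parameter wave family is not reproduced:

```python
import numpy as np
from scipy.optimize import curve_fit

def five_wave_model(t, *p):
    """Sum of five Gaussian bumps, one per fundamental wave (P, Q, R, S, T).
    Each wave carries (amplitude, center, width); the residual after the
    fit plays the role of the error/artefact term."""
    y = np.zeros_like(t)
    for i in range(5):
        a, mu, s = p[3 * i:3 * i + 3]
        y += a * np.exp(-((t - mu) ** 2) / (2 * s ** 2))
    return y

def fit_beat(t, beat, p0):
    """Least-squares fit of the five-wave model. The fitted parameters
    serve as interpretable features and the fitted centers as fiducial
    marks of the individual waves."""
    popt, _ = curve_fit(five_wave_model, t, beat, p0=p0)
    return popt
```

Sampling new parameter vectors from plausible ranges and evaluating the model is also how such a decomposition supports synthetic-data generation.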

    Directed networks as a novel way to describe and analyze cardiac excitation : directed graph mapping

    Networks provide a powerful methodology with applications in a variety of biological, technological and social systems, such as the analysis of brain data, social networks, internet search engine algorithms, etc. To date, directed networks have not been applied to characterize the excitation of the human heart. In clinical practice, cardiac excitation is recorded by multiple discrete electrodes. During (normal) sinus rhythm or during cardiac arrhythmias, successive excitation connects neighboring electrodes, resulting in a unique directed network, which in theory makes this a perfect fit for directed network analysis. In this study, we applied directed networks to the heart in order to describe and characterize cardiac arrhythmias. Proof of principle was established using in-silico and clinical data. We demonstrated that tools used in network theory analysis allow determination of the mechanism and location of certain cardiac arrhythmias. We show that the robustness of this approach can potentially exceed the existing state-of-the-art methodology used in clinics. Furthermore, implementation of these techniques in daily practice can improve the accuracy and speed of cardiac arrhythmia analysis. It may also provide novel insights into arrhythmias that are still incompletely understood
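    The construction can be sketched in a few lines: electrodes become nodes, a directed edge is drawn when a neighboring electrode activates within a plausible conduction delay, and a cycle in the resulting graph then flags a re-entrant circuit. The electrode layout, the delay window and the pure-Python cycle check below are illustrative assumptions, not the published directed-graph-mapping pipeline:

```python
def build_excitation_graph(activation, neighbor_pairs, window=(5, 80)):
    """activation: dict electrode -> list of activation times (ms).
    neighbor_pairs: ordered pairs (u, v) of adjacent electrodes to test.
    A directed edge u -> v means v activated within the conduction-delay
    window after u, i.e. excitation plausibly traveled from u to v."""
    lo, hi = window
    edges = set()
    for u, v in neighbor_pairs:
        if any(lo <= tv - tu <= hi
               for tu in activation[u] for tv in activation[v]):
            edges.add((u, v))
    return edges

def has_cycle(edges):
    """Depth-first search for a directed cycle; in this setting a cycle
    corresponds to a re-entrant excitation loop."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    WHITE, GREY, BLACK = 0, 1, 2
    color = {}

    def dfs(node):
        color[node] = GREY
        for nxt in graph.get(node, []):
            c = color.get(nxt, WHITE)
            if c == GREY or (c == WHITE and dfs(nxt)):
                return True
        color[node] = BLACK
        return False

    return any(dfs(n) for n in list(graph) if color.get(n, WHITE) == WHITE)
```

Other standard graph tools (sources, hubs, shortest paths) map naturally onto mechanisms such as focal activity or macro-reentry, which is what makes the network framing attractive.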

    Precision medicine and artificial intelligence : a pilot study on deep learning for hypoglycemic events detection based on ECG

    Tracking the fluctuations in blood glucose levels is important for healthy subjects and crucial for diabetic patients. Tight glucose monitoring reduces the risk of hypoglycemia, which can result in a series of complications, especially in diabetic patients, such as confusion, irritability and seizures, and can even be fatal in specific conditions. Hypoglycemia affects the electrophysiology of the heart. However, due to strong inter-subject heterogeneity, previous studies based on a cohort of subjects failed to deploy electrocardiogram (ECG)-based hypoglycemia detection systems reliably. The current study used a personalised medicine approach and Artificial Intelligence (AI) to automatically detect nocturnal hypoglycemia using a few heartbeats of raw ECG signal recorded with non-invasive, wearable devices in healthy individuals monitored 24 hours a day for 14 consecutive days. Additionally, we present a visualisation method enabling clinicians to visualise which part of the ECG signal (e.g., T-wave, ST-interval) is significantly associated with the hypoglycemic event in each subject, overcoming the intelligibility problem of deep-learning methods. These results advance the feasibility of a real-time, non-invasive hypoglycemia alarming system using short excerpts of ECG signal

    Hemodynamic monitor for rapid, cost-effective assessment of peripheral vascular function

    Worldwide, at least 200 million people are affected by peripheral vascular diseases (PVDs), including peripheral arterial disease (PAD), chronic venous insufficiency (CVI) and deep vein thrombosis (DVT). These diseases have considerable socioeconomic impacts due to their high prevalence, the cost of investigation and treatment, and their effects on quality of life. PVDs often go undiagnosed, with up to 60% of patients with PVD remaining asymptomatic. Early diagnosis is essential for effective treatment and for reducing socioeconomic costs, particularly in patients with diabetes, where early endovascular treatment can prevent lower extremity amputation. However, available diagnostic methods simply do not meet the needs of clinicians. For example, duplex ultrasound and plethysmography are time-consuming, costly, and require access to highly trained clinicians. Due to these cost and time requirements, such methods are often reserved for symptomatic patients. On the other hand, the Ankle Brachial Index (ABI) test is cheap but has poor sensitivity in patients with diabetes and the elderly, both growing high-risk populations. There is an urgent need for new diagnostic tools to enable earlier intervention. Researchers at the MARCS Institute have developed a novel hemodynamic monitor platform named HeMo, specifically for the assessment of peripheral blood flow in the leg. This development aimed to provide fast and low-cost diagnosis of both peripheral arterial disease and chronic venous insufficiency. This work first provides a comprehensive literature review of the non-invasive diagnostic devices developed since 1677 to highlight the need for a new blood-flow monitoring tool. Second, it presents the simplified circuit of the HeMo device and provides a series of pilot experiments with HeMo demonstrating its potential for the diagnosis of both peripheral arterial disease and chronic venous insufficiency. 
Third, it presents a quantitative characterisation of the electrical behaviour of the electro-resistive band sensors, using an expansion/contraction simulator rig developed for this purpose and spectral analysis. The characterisation of the electro-resistive band was essential to understand the nonlinear electrical behaviour of such sensors and should be of interest for other users and uses of electro-resistive band sensors. The sinusoidal linear stretching movement and the presented method also serve as an example application of the rig, highlighting that the same technique could be used to characterise similar stretchable sensors. Fourth, it shows data from a healthy population assessing the performance of HeMo compared to light reflection rheography (LRR sensor, VasoScreen 5000) for the assessment of venous function. Fifth, it presents human study data in which the performance of HeMo is compared to photoplethysmography (PPG sensor, VasoScreen 5000) for the evaluation of arterial function. Overall, the work presented here steps toward the development of the final version of a novel hemodynamic monitoring device, and its validation
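The spectral-analysis step can be illustrated with a total-harmonic-distortion measure: drive the band with a pure sinusoidal stretch and compare the energy the sensor returns at harmonics of the drive frequency against the fundamental. The windowing and bin-summing details below are assumptions of this sketch, not the rig's published protocol:

```python
import numpy as np

def harmonic_distortion(response, fs, f0, n_harmonics=5):
    """THD of a sensor response to a sinusoidal stretch at f0 (Hz).
    A perfectly linear sensor returns energy only at f0; nonlinearity
    shows up as energy at 2*f0, 3*f0, ..."""
    n = len(response)
    spec = np.abs(np.fft.rfft(response * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)

    def amplitude_at(f):
        i = int(np.argmin(np.abs(freqs - f)))
        return spec[max(i - 1, 0):i + 2].sum()  # Hann main lobe ~3 bins

    fundamental = amplitude_at(f0)
    harmonics = [amplitude_at(k * f0) for k in range(2, 2 + n_harmonics)]
    return np.sqrt(sum(a ** 2 for a in harmonics)) / fundamental
```

Repeating the measurement across stretch amplitudes and frequencies gives a compact nonlinearity profile of the band, and the same routine applies to any similar stretchable sensor.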

    Multiscale Cohort Modeling of Atrial Electrophysiology : Risk Stratification for Atrial Fibrillation through Machine Learning on Electrocardiograms

    Patients with atrial fibrillation face a five-fold increased risk of ischemic stroke. Early detection and diagnosis of the arrhythmia would allow timely intervention to prevent comorbidities that may arise. Left atrial enlargement and fibrotic atrial tissue are risk markers for atrial fibrillation, as they fulfill the necessary conditions for sustaining the chaotic electrical depolarization in the atria. Using machine learning techniques, fibrosis and left atrial enlargement could be identified automatically based on P waves of the 12-lead electrocardiogram in sinus rhythm. This could form the basis for a non-invasive risk stratification of new-onset atrial fibrillation episodes, to select susceptible patients for preventive screening. To this end, it was investigated whether simulated atrial electrocardiogram data added to the clinical training set of a machine learning model could contribute to improved classification of the above-mentioned conditions on clinical data. Two virtual cohorts characterized by anatomical and functional variability were generated and served as the basis for simulating large P wave datasets with precisely determinable annotations of the underlying pathology. In this way, the simulated data fulfill the necessary prerequisites for developing a machine learning algorithm, which distinguishes them from clinical data, which are usually not available in large numbers and evenly distributed classes, and whose labels may be compromised by insufficient expert annotation. For the estimation of the volume fraction of left atrial fibrotic tissue, a feature-based neural network was developed. 
Compared to training the model on clinical data only, training on a hybrid dataset reduced the error from an average of 17.5 % fibrotic volume to 16.5 %, evaluated on a purely clinical test set. A long short-term memory network developed to distinguish healthy P waves from those of enlarged left atria achieved an accuracy of 0.95 when trained on a hybrid dataset, 0.91 when trained only on clinical data that were all annotated with 100 % certainty, and 0.83 when trained on a clinical dataset containing all signals regardless of the certainty of the expert annotation. Considering the results of this work, electrocardiogram data resulting from electrophysiological modeling and simulation of virtual patient cohorts, covering relevant aspects of variability consistent with real-world observations, can be a valuable data source for improving the automated risk stratification of atrial fibrillation. In this way, the drawbacks of clinical datasets for the development of machine learning models can be counteracted. This ultimately contributes to early detection of the arrhythmia, enabling timely selection of suitable treatment strategies and thus reducing the stroke risk of affected patients
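The hybrid-training idea can be sketched with toy data: a small "clinical" training set is augmented with a large "simulated" set drawn to mimic the same P-wave feature distributions, and evaluation is done on clinical data only. The feature model and nearest-centroid classifier below are stand-ins; the thesis uses simulated P-wave signals with a feature-based neural network and an LSTM:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_wave_features(n, enlarged):
    """Toy P-wave feature vectors (e.g. duration, amplitude, ...):
    an enlarged left atrium shifts the feature means."""
    shift = 2.0 if enlarged else 0.0
    return rng.normal(shift, 1.0, size=(n, 4))

def dataset(n_per_class):
    X = np.vstack([p_wave_features(n_per_class, False),
                   p_wave_features(n_per_class, True)])
    y = np.repeat([0, 1], n_per_class)
    return X, y

def fit_centroids(X, y):
    """Per-class mean feature vector (nearest-centroid classifier)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.asarray(classes)[d.argmin(axis=0)]
```

The key property of the simulated set is that its labels are exact by construction, which is what lets it compensate for a scarce or uncertainly annotated clinical training set.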

    C-Trend parameters and possibilities of federated learning

    In this observational study, federated learning, a cutting-edge approach to machine learning, was applied to one of the parameters provided by the C-Trend Technology developed by Cerenion Oy. The aim was to compare the performance of federated learning to that of conventional machine learning, and to explore the potential of federated learning for resolving the privacy concerns that prevent machine learning from realizing its full potential in the medical field. Federated learning was applied to machine learning of the burst-suppression ratio and compared to conventional machine learning of the burst-suppression ratio calculated on the same dataset. A suitable aggregation method was developed and used to update the global model. The performance metrics were compared and a descriptive analysis including box plots and histograms was conducted. As anticipated, towards the end of training, the performance of federated learning was able to approach that of conventional machine learning. The strategy can be regarded as valid because the performance metric values remained below the set test criterion levels. With this strategy, we will potentially be able to make use of data that would normally be kept confidential and, as we gain access to more data, eventually develop machine learning models that perform better. Federated learning has some great advantages, and utilizing it in the context of qEEG machine learning could potentially lead to models that reach better performance by receiving data from multiple institutions without the difficulties of privacy restrictions. Possible future directions include implementation on heterogeneous data and on larger data volumes.
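The aggregation step at the heart of federated learning can be sketched as federated averaging (FedAvg): each site trains locally, only model parameters are shared, and the server combines them as a sample-size-weighted mean. This is the generic textbook scheme, not the specific aggregation method developed in the study:

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Combine per-client parameter lists into a global model as the
    sample-size-weighted average. Raw (confidential) data never leave
    the clients -- only model parameters are transmitted."""
    total = float(sum(client_sizes))
    n_layers = len(client_params[0])
    return [
        sum(p[i] * (n / total) for p, n in zip(client_params, client_sizes))
        for i in range(n_layers)
    ]
```

In a full training loop the server broadcasts the averaged parameters back to the clients and the cycle repeats, which is why federated performance can approach that of training on the pooled data.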