
    Personalized reduced 3-lead system formation methodology for Remote Health Monitoring applications and reconstruction of standard 12-lead system

    Remote Health Monitoring (RHM) applications face limitations on two fronts: technological (bandwidth, storage, and transmission time) and medical (use of 2-3 lead systems instead of the standard 12-lead (S12) system). Technological limitations constrain the number of leads to 2-3, while cardiologists accustomed to the 12-lead ECG may find these 2-3 lead systems insufficient for diagnosis. The two limitations thus pose contradictory requirements for RHM. What is needed is a personalized reduced 2/3-lead system that carries information equivalent to the S12 system, so that the S12 system can be accurately reconstructed from the reduced system for diagnosis. In this paper, we propose a personalized reduced 3-lead (R3L) system formation methodology that employs principal component analysis, thereby reducing redundancy and increasing the SNR, making the signal suitable for wireless transmission. An accurate S12 system is recovered using a personalized lead reconstruction methodology, thus addressing the medical constraints. Mean R2 statistics obtained for reconstruction of the S12 system from the proposed R3L system using PhysioNet's PTB and TWA databases were 95.63% and 96.37%, respectively. To substantiate the diagnostic quality of the reconstructed leads, root mean square error (RMSE) metrics obtained by comparing the ECG features extracted from the original and reconstructed leads, using our recently proposed Time Domain Morphology and Gradient (TDMG) algorithm, are analyzed and discussed. The proposed system requires no extra electrodes or changes in placement positions and can therefore readily find application in computerized ECG machines.
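The PCA reduction and reconstruction described above can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's personalized R3L formation procedure; the rank-3 synthetic record and all dimensions are assumptions for demonstration.

```python
import numpy as np

# Sketch: reduce a 12-lead record to 3 principal components and
# reconstruct all 12 leads, scoring the result with per-lead R^2.
# Synthetic data stands in for a real ECG record.
rng = np.random.default_rng(0)
n_samples, n_leads = 1000, 12
latent = rng.standard_normal((n_samples, 3))       # 3 latent sources
mixing = rng.standard_normal((3, n_leads))
X = latent @ mixing + 0.01 * rng.standard_normal((n_samples, n_leads))

# PCA via SVD of the mean-centred record.
mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)

# The reduced 3-component signal (the "R3L"-like representation).
R3 = (X - mu) @ Vt[:3].T                           # shape (n_samples, 3)

# Personalized reconstruction of all 12 leads from the 3 components.
X_hat = R3 @ Vt[:3] + mu

# Per-lead R^2, the quality statistic the abstract reports.
ss_res = ((X - X_hat) ** 2).sum(axis=0)
ss_tot = ((X - mu) ** 2).sum(axis=0)
r2 = 1 - ss_res / ss_tot
print(f"mean R^2 over 12 leads: {r2.mean():.4f}")
```

Because the leads are highly redundant, a few principal components capture almost all of the variance, which is what makes accurate reconstruction from a reduced set plausible.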

    Identification of cardiac signals in ambulatory ECG data

    The electrocardiogram (ECG) is the primary tool for monitoring heart function. ECG signals contain vital information about the heart which informs the diagnosis and treatment of cardiac conditions. The diagnosis of many cardiac arrhythmias requires long-term, continuous ECG data, often while the participant engages in activity. Wearable ambulatory ECG (AECG) systems, such as the common Holter system, allow heart monitoring for hours or days. The technological trajectory of AECG systems aims towards continuous monitoring during a wide range of activities, with data processed locally in real time and transmitted to a monitoring centre for further analysis. Furthermore, hierarchical decision systems will allow wearable systems to produce alerts or even interventions. These functions could be integrated into smartphones. A fundamental limitation of this technology is the ability to identify heart signal characteristics in ECG signals contaminated with high-amplitude, non-stationary noise. Noise becomes more severe as activity levels increase, which is also when many heart problems present themselves. This thesis focuses on the identification of heart signals in AECG data recorded during participant activity. In particular, it explores ECG filters to identify major heart conditions in noisy AECG data. Gold-standard methods use extended Kalman filters with extrapolation based on sum-of-Gaussian models. New methods are developed using linear Kalman filtering and extrapolation based on a sum of principal component basis signals. Unlike the gold-standard methods, extrapolation proceeds heartcycle by heartcycle. Several variants are explored in which basis signals span one or two heartcycles, applied to single- or multi-channel ECG data. The proposed methods are extensively tested against standard databases of normal and abnormal ECG data, and their performance is compared to gold-standard methods. Two performance metrics are used: improvement in signal-to-noise ratio and the observability of clinically important features in the heart signal. In all tests the proposed method performs better, and often significantly better, than the gold-standard methods. It is demonstrated that abnormal ECG signals can be identified in noisy AECG data.
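The linear Kalman filtering idea can be illustrated with a minimal sketch. A plain random-walk state model stands in for the thesis's PCA-basis heartcycle extrapolation, which is not reproduced here; the signal, noise levels, and variances are all illustrative assumptions.

```python
import numpy as np

# Sketch: a scalar linear Kalman filter denoising a noisy 1-D signal
# under a random-walk state model, scored by SNR improvement (one of
# the two metrics the thesis uses).
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * t)                 # stand-in "heart" signal
noisy = clean + 0.5 * rng.standard_normal(t.size)

q, r = 1e-3, 0.25        # process / measurement noise variances (assumed)
x, p = 0.0, 1.0          # state estimate and its variance
est = np.empty_like(noisy)
for k, z in enumerate(noisy):
    p = p + q                        # predict (random walk: state persists)
    kgain = p / (p + r)              # Kalman gain
    x = x + kgain * (z - x)          # update with measurement z
    p = (1 - kgain) * p
    est[k] = x

def snr_db(ref, sig):
    return 10 * np.log10(np.sum(ref**2) / np.sum((ref - sig)**2))

print(f"SNR in: {snr_db(clean, noisy):.1f} dB, out: {snr_db(clean, est):.1f} dB")
```

The thesis's contribution is in the state and extrapolation model; the predict/update recursion itself is the standard linear Kalman form shown here.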

    Boosting the Battery Life of Wearables for Health Monitoring Through the Compression of Biosignals

    Modern wearable Internet of Things (IoT) devices enable the monitoring of vital parameters such as heart or respiratory (RESP) rates, electrocardiography (ECG), and photoplethysmographic (PPG) signals within e-health applications. A common issue of wearable technology is that signal transmission is power-demanding; as such, devices require frequent battery charges, which poses serious limitations to the continuous monitoring of vitals. To ameliorate this, we advocate the use of lossy signal compression as a means to decrease the data size of the gathered biosignals and, in turn, boost the battery life of wearables and allow for fine-grained and long-term monitoring. Considering 1-D biosignals such as ECG, RESP, and PPG, which are often available from commercial wearable IoT devices, we provide a thorough review of existing biosignal compression algorithms. Besides, we present novel approaches based on online dictionaries, elucidating their operating principles and providing a quantitative assessment of the compression, reconstruction, and energy consumption performance of all schemes. As we quantify, the most efficient schemes allow reductions in signal size of up to 100 times, which entail similar reductions in energy demand, while still keeping the reconstruction error within 4% of the peak-to-peak signal amplitude. Finally, avenues for future research are discussed. © 2014 IEEE
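The trade-off the survey quantifies (compression ratio versus reconstruction error as a fraction of peak-to-peak amplitude) can be sketched with simple transform coding. The orthonormal DCT, the test signal, and the coefficient budget are illustrative assumptions, not any of the surveyed schemes; a real biosignal is only approximately sparse, so its error would be nonzero.

```python
import numpy as np

# Sketch: keep only the largest DCT coefficients of a signal window,
# then measure the compression ratio and the reconstruction error as a
# percentage of the peak-to-peak amplitude.
def dct_matrix(n):
    # Orthonormal DCT-II basis as an n x n matrix.
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    C[0] *= np.sqrt(0.5)
    return C

n = 256
m = np.arange(n)
# Test signal built from two DCT basis shapes, so it is exactly sparse.
x = np.cos(np.pi * 4 * (m + 0.5) / n) + 0.3 * np.cos(np.pi * 14 * (m + 0.5) / n)

C = dct_matrix(n)
coeffs = C @ x
keep = 16                                    # retain 16 of 256 coefficients
idx = np.argsort(np.abs(coeffs))[-keep:]
sparse = np.zeros_like(coeffs)
sparse[idx] = coeffs[idx]
x_hat = C.T @ sparse                         # inverse of an orthonormal DCT

cr = n / keep                                # idealized compression ratio
p2p = x.max() - x.min()
err_pct = 100 * np.max(np.abs(x - x_hat)) / p2p
print(f"compression ratio {cr:.0f}:1, max error {err_pct:.2e}% of peak-to-peak")
```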

    A general dual-pathway network for EEG denoising

    Introduction: Scalp electroencephalogram (EEG) analysis and interpretation are crucial for tracking and analyzing brain activity. The collected scalp EEG signals, however, are weak and frequently tainted by various sorts of artifacts. Models based on deep learning provide performance comparable with that of traditional techniques. However, current deep learning networks applied to scalp EEG noise reduction are large in scale and suffer from overfitting. Methods: Here, we propose a dual-pathway autoencoder modeling framework named DPAE for scalp EEG signal denoising and demonstrate the superiority of the model on multilayer perceptron (MLP), convolutional neural network (CNN), and recurrent neural network (RNN) models, respectively. We validate the denoising performance on benchmark scalp EEG artifact datasets. Results: The experimental results show that our model architecture not only significantly reduces the computational effort but also outperforms existing deep learning denoising algorithms on root relative mean square error (RRMSE) metrics, in both the time and frequency domains. Discussion: The DPAE architecture requires no a priori knowledge of the noise distribution and is not limited by the network layer structure, making it a general model oriented toward blind source separation.
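The RRMSE metric reported above has a standard form: the RMSE between the clean and denoised signals, normalized by the RMS of the clean signal, evaluated either on time-domain samples or on magnitude spectra. A sketch, with a placeholder moving-average "denoiser" standing in for the DPAE model:

```python
import numpy as np

# Root relative mean square error, in time and frequency domains.
def rrmse(clean, denoised):
    return np.sqrt(np.mean((clean - denoised) ** 2)) / np.sqrt(np.mean(clean ** 2))

def rrmse_spectral(clean, denoised):
    S, D = np.abs(np.fft.rfft(clean)), np.abs(np.fft.rfft(denoised))
    return np.sqrt(np.mean((S - D) ** 2)) / np.sqrt(np.mean(S ** 2))

rng = np.random.default_rng(3)
t = np.linspace(0, 2, 1000)
eeg = np.sin(2*np.pi*10*t) + 0.5*np.sin(2*np.pi*20*t)   # stand-in "clean" EEG
contaminated = eeg + 0.8 * rng.standard_normal(t.size)   # artifact-laden input

# Placeholder denoiser (a moving average), purely for illustration.
kernel = np.ones(5) / 5
denoised = np.convolve(contaminated, kernel, mode="same")

print(f"time RRMSE: in={rrmse(eeg, contaminated):.3f} out={rrmse(eeg, denoised):.3f}")
print(f"freq RRMSE: in={rrmse_spectral(eeg, contaminated):.3f} out={rrmse_spectral(eeg, denoised):.3f}")
```

Lower RRMSE is better; a value of 0 means perfect recovery of the clean signal.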

    A Comparative Analysis of Methods for Evaluation of ECG Signal Quality after Compression

    The assessment of ECG signal quality after compression is an essential part of the compression process. Compression facilitates signal archiving, speeds up signal transmission, and reduces energy consumption. Conversely, lossy compression distorts the signals. Therefore, it is necessary to express compression performance through both compression efficiency and signal quality. This paper provides an overview of objective algorithms for the assessment of both ECG signal quality after compression and compression efficiency. In this area there is a lack of standardization, and no extensive review as such exists. Forty methods were tested in terms of their suitability for quality assessment. For this purpose, the whole CSE database was used. The tested signals were compressed using an algorithm based on SPIHT with varying efficiency. As a reference, compressed signals were manually assessed by two experts and classified into three quality groups. Based on the experts' classification, we determined corresponding ranges of the selected quality evaluation methods' values. The suitability of the methods for quality assessment was evaluated against five criteria. For the assessment of ECG signal quality after compression, we recommend using a combination of these methods: PSim SDNN, QS, SNR1, MSE, PRDN1, MAX, STDERR, and WEDD SWT.
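Two of the distortion measures in the recommended set have widely used definitions that can be sketched directly. Exact formulations vary between papers, so this follows the common PRDN (percentage RMS difference with mean-centred reference energy) and the SNR derived from it, not necessarily the paper's exact "PRDN1"/"SNR1" variants; the test signal is an assumed stand-in.

```python
import numpy as np

# PRDN: percentage RMS difference, normalized by the energy of the
# mean-centred original. SNR: the same ratio expressed in decibels.
def prdn(orig, recon):
    o = orig - orig.mean()
    return 100 * np.sqrt(np.sum((orig - recon) ** 2) / np.sum(o ** 2))

def snr_db(orig, recon):
    o = orig - orig.mean()
    return 10 * np.log10(np.sum(o ** 2) / np.sum((orig - recon) ** 2))

t = np.linspace(0, 1, 500)
ecg = np.sin(2 * np.pi * 4 * t) ** 3                # crude ECG-like stand-in
recon = ecg + 0.02 * np.cos(2 * np.pi * 50 * t)     # reconstruction with ripple

p = prdn(ecg, recon)
s = snr_db(ecg, recon)
print(f"PRDN = {p:.2f}%  SNR = {s:.2f} dB")
# With matching normalizations, SNR(dB) = -20 * log10(PRDN / 100).
```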

    Extraction and Detection of Fetal Electrocardiograms from Abdominal Recordings

    The non-invasive fetal ECG (NIFECG), derived from abdominal surface electrodes, offers novel diagnostic possibilities for prenatal medicine. Despite its straightforward applicability, NIFECG signals are usually corrupted by many interfering sources, most significantly the maternal ECG (MECG), whose amplitude usually exceeds that of the fetal ECG (FECG) several times over. The presence of additional noise sources (e.g. muscular/uterine noise and electrode motion) further degrades the signal-to-noise ratio (SNR) of the FECG. These interfering sources, which typically show strongly non-stationary behavior, render FECG extraction and fetal QRS (FQRS) detection demanding signal processing tasks. In this thesis, several of the challenges of NIFECG signal analysis were addressed. In order to improve NIFECG extraction, the dynamic model of a Kalman filter approach was extended, providing a more adequate representation of the mixture of FECG, MECG, and noise. In addition, novel metrics for FECG signal quality assessment were proposed and evaluated. These quality metrics were then applied to improving FQRS detection and fetal heart rate estimation, based on an innovative evolutionary algorithm and Kalman filtering signal fusion, respectively. The elaborated methods were characterized in depth using both simulated and clinical data produced throughout this thesis. To stress-test extraction algorithms under controlled circumstances, a comprehensive benchmark protocol was created and contributed to an extensively improved NIFECG simulation toolbox. The developed toolbox and a large simulated dataset were released under an open-source license, allowing researchers to compare results in a reproducible manner. Furthermore, to validate the developed approaches under more realistic and challenging situations, a clinical trial was performed in collaboration with the University Hospital of Leipzig.
Aside from serving as a test set for the developed algorithms, the clinical trial enabled exploratory research. This enabled a better understanding of the pathophysiological variables and measurement setup configurations that lead to changes in the abdominal signal's SNR. With such broad scope, this dissertation addresses many of the current aspects of NIFECG analysis and provides suggestions for establishing NIFECG in clinical settings.

    Invariant Scattering Transform for Medical Imaging

    The invariant scattering transform introduces a new area of research that merges signal processing with deep learning for computer vision. Deep learning algorithms can now solve a variety of problems in the medical sector. Medical images are used to detect diseases such as brain cancer and tumors, Alzheimer's disease, breast cancer, Parkinson's disease, and many others. During the 2020 pandemic, machine learning and deep learning played a critical role in detecting COVID-19, including mutation analysis, prediction, diagnosis, and decision making. Medical images such as X-rays, magnetic resonance imaging (MRI), and CT scans are used for detecting diseases. The scattering transform offers another deep learning approach to medical imaging: it builds useful signal representations for image classification. It is a wavelet-based technique that is effective for medical image classification problems. This research article discusses the scattering transform as an efficient system for medical image analysis, in which the scattered signal information is processed by a deep convolutional network. A step-by-step case study is presented in this work. Comment: 11 pages, 8 figures and 1 table
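The core scattering operation (wavelet filtering, modulus, then averaging) can be shown in a toy 1-D sketch. Real systems for medical images cascade this over scales and orientations in 2-D; the crude Haar-style filters and scales here are illustrative assumptions, not the article's network.

```python
import numpy as np

# Toy one-level scattering: band-pass filter, take the modulus, then
# globally average. The averaging yields (approximate) shift invariance.
def haar_wavelet(scale):
    # Crude Haar-style band-pass filter at a dyadic scale, unit L2 norm.
    w = np.concatenate([np.ones(scale), -np.ones(scale)])
    return w / np.sqrt(2 * scale)

def scattering_1d(x, scales=(2, 4, 8)):
    feats = [x.mean()]                        # zeroth order: global average
    for j in scales:
        u = np.abs(np.convolve(x, haar_wavelet(j), mode="same"))
        feats.append(u.mean())                # first order: averaged modulus
    return np.array(feats)

x = np.sin(2 * np.pi * np.arange(256) / 32)
f = scattering_1d(x)
# Shift invariance: features barely change under a circular shift.
f_shift = scattering_1d(np.roll(x, 16))
print(f, np.max(np.abs(f - f_shift)))
```

The modulus discards phase (where energy occurs) while the averaging discards position, which is why scattering features are stable to translations and small deformations.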

    Algorithms for Compression of Electrocardiogram Signals

    The study is dedicated to modern methods and algorithms for compression of electrocardiogram (ECG) signals. In its original part, two lossy compression algorithms based on a combination of linear transforms are proposed. These algorithms have relatively low computational complexity, making them suitable for implementation in low-power designs such as mobile devices or embedded systems. Since the algorithms do not provide perfect signal reconstruction, they would find application in ECG monitoring systems rather than in those intended for precision medical diagnosis. This monograph consists of an abstract, a preface, five chapters, and a conclusion. The chapters are as follows: Chapter 1 — Introduction to ECG; Chapter 2 — Overview of the existing methods and algorithms for ECG compression; Chapter 3 — ECG compression algorithm based on a combination of linear transforms; Chapter 4 — Improvement of the developed algorithm for ECG compression; Chapter 5 — Experimental investigations.
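The low-complexity linear-transform idea can be sketched with one Haar analysis/synthesis stage plus coarse quantization of the detail coefficients. This is not the monograph's actual two-transform combination; the transform choice, quantization step, and test signal are all assumptions for illustration.

```python
import numpy as np

# Sketch: one level of a Haar transform (adds and halvings only),
# coarse quantization of the detail band, and reconstruction, scored
# with the PRD distortion measure.
def haar_forward(x):
    a = (x[0::2] + x[1::2]) / 2      # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / 2      # detail (high-pass)
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = a + d
    x[1::2] = a - d
    return x

t = np.linspace(0, 1, 512)
ecg = np.sin(2 * np.pi * 3 * t) ** 5          # smooth ECG-like stand-in

a, d = haar_forward(ecg)
step = 0.05
d_q = step * np.round(d / step)               # coarsely quantized details
recon = haar_inverse(a, d_q)

prd = 100 * np.sqrt(np.sum((ecg - recon) ** 2) / np.sum(ecg ** 2))
print(f"PRD after quantizing details: {prd:.2f}%")
```

For smooth signals the detail band carries little energy, so quantizing (or dropping) it costs little fidelity while halving the data that must be stored at full precision.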

    Identification of earthquake-damaged buildings based on correlation change detection of texture features in SAR images

    The detection of building damage due to earthquakes is crucial for disaster management and disaster relief activities. Change detection methodologies using satellite images, such as synthetic aperture radar (SAR) data, have been applied to earthquake damage detection. However, the information within SAR data that relates to earthquake damage of buildings can easily be disturbed by other factors. This paper presents a multitemporal change detection approach intended to identify and evaluate information pertaining to earthquake damage by fully exploiting the abundant texture features of SAR imagery. The approach is based on two images constructed from principal components of multiple texture features. An independent principal components analysis technique is used to extract the texture feature components, and correlation analysis is then performed to detect the distribution of earthquake-damaged buildings. The performance of the technique was evaluated in the town of Jiegu (affected by the 2010 Yushu earthquake) and in the Kathmandu Valley (struck by the 2015 Nepal earthquake), for which the overall accuracy of building detection was 87.8% and 84.6%, respectively. Cross-validation results showed the proposed approach is more sensitive than existing methods to the detection of damaged buildings. Overall, the method is an effective damage detection approach that could support post-earthquake management activities in future events.
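The correlation step of the approach can be sketched on toy data: compute a local correlation coefficient between the pre- and post-event images and flag low-correlation windows as candidate damage. The texture feature extraction and independent-PCA stages of the paper are simplified away here, and the images, window size, and threshold are assumptions.

```python
import numpy as np

# Sketch: windowed correlation between pre- and post-event images;
# low correlation marks candidate earthquake damage.
rng = np.random.default_rng(6)
h = w = 32
pre = rng.random((h, w))
post = pre.copy()
post[20:28, 20:28] = rng.random((8, 8))        # simulated "damaged" block

def local_correlation(a, b, win=5):
    hh, ww = a.shape
    k = win // 2
    r = np.zeros_like(a)
    for i in range(k, hh - k):
        for j in range(k, ww - k):
            pa = a[i-k:i+k+1, j-k:j+k+1].ravel()
            pb = b[i-k:i+k+1, j-k:j+k+1].ravel()
            r[i, j] = np.corrcoef(pa, pb)[0, 1]
    return r

corr = local_correlation(pre, post)
changed = corr < 0.5                            # low correlation -> change
print("fraction flagged inside damaged block:", changed[22:26, 22:26].mean())
```

Unchanged regions correlate almost perfectly between the two dates, while the decorrelated block stands out, which mirrors how the paper localizes damaged buildings from correlation of texture-derived component images.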