
    QRS Complex Detection based on Multilevel Thresholding and Peak-to-Peak Interval Statistics

    Heart beats are an important aspect of the study of heart disease in medical science, as they provide vital information on cardiac disorders and abnormalities in the heart rhythm. Each heart beat produces a QRS complex in the electrocardiogram (ECG), centered at the R-peak. ECG analysis is hindered by low- and high-frequency noise, interference from P and T waves, and changes in QRS morphology; detecting QRS complexes with automatic algorithms is therefore a major challenge. This thesis presents three new peak detection algorithms based on a statistical analysis of the ECG signal. In the first algorithm, a novel method of segmentation and statistical false-peak elimination is proposed. The second algorithm uses different levels of adaptive thresholds to detect true peaks, while the third combines and modifies the two proposed algorithms for better efficiency and accuracy in QRS complex detection. The proposed algorithms are tested on the MIT-BIH arrhythmia database and provide better detection accuracy than several state-of-the-art methods in the field. The evaluation metrics consider the numbers of false positives and false negatives: a false positive (FP) results from a noise peak being detected, and a false negative (FN) occurs when a beat is not detected at all. The methods emphasize detection algorithms that find peaks efficiently and automatically without completely eliminating the high-frequency noise, which reduces the overall computational time.
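
    To make the multilevel-thresholding idea concrete, the following is a minimal sketch of a two-level detector in Python: a high amplitude threshold proposes candidate peaks, and a peak-to-peak (RR) interval statistic rejects false peaks. The function name, filter band, and threshold fractions are illustrative assumptions, not the thesis' actual design.

        import numpy as np
        from scipy.signal import butter, filtfilt, find_peaks

        def detect_r_peaks(ecg, fs, hi_frac=0.6, rr_frac=0.5):
            # Band-pass 5-15 Hz to emphasize QRS energy over P/T waves and drift
            b, a = butter(2, [5.0 / (fs / 2), 15.0 / (fs / 2)], btype="band")
            env = np.abs(filtfilt(b, a, ecg))

            # Level 1: candidates above a fraction of the global amplitude,
            # separated by at least a 200 ms refractory period
            peaks, _ = find_peaks(env, height=hi_frac * env.max(),
                                  distance=int(0.2 * fs))

            # Level 2: statistical false-peak elimination -- drop peaks whose
            # preceding RR interval is implausibly short versus the median
            rr = np.diff(peaks)
            if rr.size:
                keep = np.insert(rr > rr_frac * np.median(rr), 0, True)
                peaks = peaks[keep]
            return peaks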

    Detection and Processing Techniques of FECG Signal for Fetal Monitoring

    The fetal electrocardiogram (FECG) contains potentially precise information that could assist clinicians in making more appropriate and timely decisions during labor. The ultimate motivation for FECG signal analysis lies in clinical diagnosis and biomedical applications. Extracting and detecting the FECG signal from composite abdominal signals with powerful and advanced methodologies is becoming a very important requirement in fetal monitoring. The purpose of this review paper is to illustrate the various methodologies and developed algorithms for FECG signal detection and analysis, providing efficient and effective ways of understanding the FECG signal and its nature for fetal monitoring. A comparative study has been carried out to show the performance and accuracy of various methods of FECG signal analysis for fetal monitoring. Finally, the paper surveys some of the hardware implementations using electrical signals for monitoring the fetal heart rate. This paper opens up a path for researchers, physicians, and end users to gain an excellent understanding of the FECG signal and its analysis procedures for fetal heart rate monitoring systems.
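
    Since the clinical target of the reviewed methods is fetal heart rate (FHR) monitoring, a short sketch may help show the final step shared by most pipelines: converting detected fetal R-peaks into a beats-per-minute trace. The function name and example values here are hypothetical.

        import numpy as np

        def fetal_heart_rate(r_peaks, fs):
            # RR intervals in seconds between consecutive fetal R-peaks
            rr_sec = np.diff(r_peaks) / fs
            fhr_bpm = 60.0 / rr_sec          # instantaneous rate per beat
            t = r_peaks[1:] / fs             # time stamp of each estimate
            return t, fhr_bpm

        # Peaks 0.42 s apart at fs = 1000 Hz give ~143 bpm, a typical fetal rate
        t, fhr = fetal_heart_rate(np.arange(0, 10_000, 420), fs=1000)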

    Detection Of The R-wave In Ecg Signals

    This thesis provides a new approach for detecting R-waves in the ECG signal and generating the corresponding R-wave impulses, with the delay between the original R-waves and the R-wave impulses being less than 100 ms. The algorithm was implemented in Matlab and tested with good results against 90 different ECG recordings from the MIT-BIH database. The Discrete Wavelet Transform (DWT) forms the heart of the algorithm, providing a multi-resolution analysis of the ECG signal. The wavelet transform decomposes the ECG signal into frequency scales in which the ECG characteristic waveforms are indicated by zero crossings. The adaptive threshold algorithms discussed in this thesis search for valid zero crossings that characterize the R-waves and also reject premature ventricular contractions (PVCs). The adaptive thresholds adjust to changes in signal quality, eliminating the need for manual adjustment when changing from patient to patient. The delay between the R-waves in the original ECG signal and the R-wave impulses obtained from the algorithm was found to be less than 100 ms.
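
    A minimal sketch of the zero-crossing idea follows, using the PyWavelets stationary wavelet transform so that coefficients stay time-aligned with the signal. The wavelet, decomposition level, chosen scale, and fixed threshold are assumptions; the thesis uses adaptive thresholds rather than the fixed one shown here.

        import numpy as np
        import pywt

        def r_wave_zero_crossings(ecg, fs, wavelet="db2", level=4, k=0.4):
            # Pad so the length is divisible by 2**level, as pywt.swt requires
            pad = (-len(ecg)) % (2 ** level)
            x = np.pad(np.asarray(ecg, dtype=float), (0, pad))
            coeffs = pywt.swt(x, wavelet, level=level)  # [(cA4, cD4), ..., (cA1, cD1)]
            d = coeffs[0][1]              # coarsest detail band (assumed QRS scale)
            thr = k * np.abs(d).max()     # fixed threshold; the thesis adapts this

            # R-waves appear as zero crossings flanked by large
            # opposite-sign wavelet coefficients
            zc = np.where(np.diff(np.signbit(d)))[0]
            half = int(0.05 * fs)         # ~50 ms search window on each side
            peaks = [i for i in zc
                     if d[max(i - half, 0):i + half + 1].max() > thr
                     and d[max(i - half, 0):i + half + 1].min() < -thr]
            return np.array(peaks)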

    Improved filtering methods to suppress cardiovascular contamination in electrical impedance tomography recordings

    Objective. Electrical impedance tomography (EIT) produces clinically useful visualization of the distribution of ventilation inside the lungs. The accuracy of EIT-derived parameters can be compromised by the cardiovascular signal. Removal of these artefacts is challenging due to spectral overlap of the ventilatory and cardiovascular signal components and their time-varying frequencies. We designed and evaluated advanced filtering techniques and hypothesized that these would outperform traditional low-pass filters. Approach. Three filter techniques were developed and compared against traditional digital low-pass (DLP) filtering: multiple digital notch filtering (MDN), empirical mode decomposition (EMD), and the maximal overlap discrete wavelet transform (MODWT). The performance of the filtering techniques was evaluated (1) in the time domain, (2) in the frequency domain, and (3) by visual inspection, using simulated contaminated EIT data and data from 15 adult and neonatal intensive care unit patients. Main results. Each filter technique exhibited varying degrees of effectiveness and limitations. Quality measures in the time domain showed the best performance for MDN filtering. The signal-to-noise ratio (SNR) was best for DLP, but at the cost of high relative and removal errors; MDN offered the best balance, achieving a good SNR with low relative and removal errors. MDN, EMD, and MODWT performed similarly in the frequency domain and successfully removed the high-frequency components of the data. Significance. Advanced filtering techniques have benefits compared to traditional filters but are not always better. MDN filtering outperformed EMD and MODWT on time-domain quality measures. This study emphasizes the need for careful consideration when choosing a filtering approach, depending on the dataset and the clinical or research question.
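
    Of the three techniques, MDN is the most direct to sketch: cascade notch filters at the cardiac frequency and its harmonics. The sketch below uses SciPy's iirnotch; fixing the cardiac frequency in advance is a simplifying assumption, since the paper's filters must track time-varying frequencies.

        import numpy as np
        from scipy.signal import iirnotch, filtfilt

        def remove_cardiac(eit_pixel, fs, f_heart, n_harmonics=3, Q=10.0):
            # Cascade notches at the cardiac fundamental and its harmonics
            y = np.asarray(eit_pixel, dtype=float)
            for h in range(1, n_harmonics + 1):
                f0 = h * f_heart
                if f0 >= fs / 2:            # skip harmonics above Nyquist
                    break
                b, a = iirnotch(f0, Q, fs=fs)
                y = filtfilt(b, a, y)       # zero-phase, avoids shifting the waveform
            return y

        # Example: suppress a 1.2 Hz (72 bpm) cardiac component sampled at 50 Hz
        # y = remove_cardiac(pixel_signal, fs=50.0, f_heart=1.2)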

    Lossless and low-cost integer-based lifting wavelet transform

    Discrete wavelet transform (DWT) is a powerful tool for analyzing real-time signals, including aperiodic, irregular, noisy, and transient data, because of its capability to explore signals in both the frequency and time domains at different resolutions. For this reason, it is used extensively in a wide range of image and signal processing applications. Despite this wide usage, implementations of the wavelet transform are usually lossy or computationally complex, and they require expensive hardware. However, in many applications, such as medical diagnosis, reversible data hiding, and critical satellite data, a lossless implementation of the wavelet transform is desirable. Hardware-friendly implementations are also increasingly important given the recent inclusion of signal processing modules in systems-on-chip (SoCs). To address this need, this research work provides a generalized implementation of the wavelet transform using an integer-based lifting method to produce a lossless, low-cost architecture while maintaining performance close to the original wavelets. To achieve a general implementation method for all orthogonal and biorthogonal wavelets, the Daubechies wavelet family has been utilized first, since it is one of the most widely used families and is based on a systematic method for constructing compact-support orthogonal wavelets. Though the first two phases of this work address Daubechies wavelets, they can be generalized to other wavelets as well. Subsequently, some techniques from the earlier phases have been adopted, and the critical issues for achieving a general lossless implementation have been solved, to propose a general lossless method.

    The research work presented here can be divided into several phases. In the first phase, low-cost architectures of the Daubechies-4 (D4) and Daubechies-6 (D6) wavelets have been derived by applying integer-polynomial mapping (IPM). A lifting architecture has been used, which halves the cost compared to the conventional convolution-based approach. Applying IPM to the floating-point polynomial filter coefficients further decreases complexity and reduces the loss in signal reconstruction, and resource sharing between lifting steps yields a further reduction in implementation cost with near-lossless data reconstruction.

    In the second phase, a completely lossless, error-free architecture has been proposed for the Daubechies-8 (D8) wavelet. Several lifting variants have been derived for the same wavelet, integer mapping has been applied, and the best variant has been determined in terms of performance using entropy and transform coding gain. A theory has then been derived regarding the impact of scaling steps on the transform coding gain (GT). The approach results in the lowest-cost lossless architecture of the D8 in the literature, to the best of our knowledge, and may be applied to other orthogonal wavelets, including biorthogonal ones, to achieve higher performance.

    In the final phase, a general algorithm has been proposed to implement the original filter coefficients, expressed as a polyphase matrix, in a more efficient lifting structure. This is done using a modified factorization, so that the factorized polyphase matrix does not include the lossy scaling step of the conventional lifting method. This general technique has been applied to some widely used orthogonal and biorthogonal wavelets and its advantages have been discussed. Since the discrete wavelet transform is used in a vast number of applications, the proposed algorithms can be utilized in those cases to achieve lossless, low-cost, and hardware-friendly architectures.
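
    To illustrate the lossless integer-lifting principle this work builds on, here is a minimal sketch using the CDF 5/3 (LeGall) wavelet, whose reversible integer lifting is well known from JPEG 2000. It is not the proposed Daubechies-based architecture, and the circular boundary handling and even-length input are simplifying assumptions.

        import numpy as np

        def lifting_53_forward(x):
            # One level of the reversible integer 5/3 lifting transform
            x = np.asarray(x, dtype=np.int64)
            even, odd = x[0::2].copy(), x[1::2].copy()
            # Predict: detail = odd - floor average of neighbouring evens
            odd -= (even + np.roll(even, -1)) >> 1
            # Update: smooth = even + rounded quarter-sum of neighbouring details
            even += (np.roll(odd, 1) + odd + 2) >> 2
            return even, odd                       # approximation, detail

        def lifting_53_inverse(even, odd):
            # Undo the integer lifting steps in reverse order -- exact inverse
            even = even - ((np.roll(odd, 1) + odd + 2) >> 2)
            odd = odd + ((even + np.roll(even, -1)) >> 1)
            x = np.empty(even.size + odd.size, dtype=np.int64)
            x[0::2], x[1::2] = even, odd
            return x

        x = np.random.randint(-512, 512, size=64)
        s, d = lifting_53_forward(x)
        assert np.array_equal(lifting_53_inverse(s, d), x)   # perfectly reversible

    Because every step adds or subtracts an integer function of the other channel, each step can be undone exactly; this is the property that lets a lifting factorization without the lossy scaling step achieve fully lossless reconstruction.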

    Biomedical Applications of the Discrete Wavelet Transform


    Novel hybrid extraction systems for fetal heart rate variability monitoring based on non-invasive fetal electrocardiogram

    This study focuses on the design, implementation, and verification of a new type of hybrid extraction system for non-invasive fetal electrocardiogram (NI-fECG) processing. The designed system combines the advantages of individual adaptive and non-adaptive algorithms. The pilot study reviews two innovative hybrid systems, ICA-ANFIS-WT and ICA-RLS-WT, which combine independent component analysis (ICA), either an adaptive neuro-fuzzy inference system (ANFIS) or a recursive least squares (RLS) algorithm, and a wavelet transform (WT) stage. The study was conducted on clinical practice data (the extended ADFECGDB database and the Physionet Challenge 2013 database) from the perspective of non-invasive fetal heart rate variability monitoring, based on the overall probability of correct detection (ACC), sensitivity (SE), positive predictive value (PPV), and the harmonic mean of SE and PPV (F1). System functionality was verified against a reference obtained invasively with a scalp electrode (ADFECGDB database) or a reference obtained from annotations (Physionet Challenge 2013 database). The study showed that the ICA-RLS-WT hybrid system achieves better results than ICA-ANFIS-WT. In the experiment on the ADFECGDB database, the ICA-RLS-WT system reached ACC > 80 % on 9 of 12 recordings, while the ICA-ANFIS-WT system did so on only 6 of 12. In the experiment on the Physionet Challenge 2013 database, the ICA-RLS-WT system reached ACC > 80 % on 13 of 25 recordings, while the ICA-ANFIS-WT system did so on only 7 of 25. Both hybrid systems achieved demonstrably better results than the individual algorithms tested in previous studies.
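
    As an illustration of the RLS stage of such a hybrid pipeline, the following is a minimal recursive-least-squares noise canceller: the reference input approximates the maternal component and the residual approximates the fetal ECG. The filter order, forgetting factor, and initialization are illustrative defaults, not the study's settings.

        import numpy as np

        def rls_cancel(reference, primary, order=8, lam=0.999, delta=1e-2):
            # Exponentially weighted RLS adaptive noise canceller
            reference = np.asarray(reference, dtype=float)
            primary = np.asarray(primary, dtype=float)
            n = len(primary)
            w = np.zeros(order)                     # filter weights
            P = np.eye(order) / delta               # inverse correlation matrix
            error = np.zeros(n)
            for i in range(order, n):
                u = reference[i - order:i][::-1]    # regressor, most recent first
                k = P @ u / (lam + u @ P @ u)       # gain vector
                error[i] = primary[i] - w @ u       # residual = fetal estimate
                w = w + k * error[i]                # weight update
                P = (P - np.outer(k, u @ P)) / lam  # update inverse correlation
            return error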