
    The severity of stages estimation during hemorrhage using error correcting output codes method

    As a beneficial component with critical impact, computer-aided decision-making systems have infiltrated many fields, such as economics, medicine, architecture, and agriculture. Their latent capability to facilitate human work has propelled the rapid development of such systems, and the effective decisions they provide greatly reduce expenditures of labor, energy, and budget. The computer-aided decision-making system for traumatic injuries is one such system, supplying suggestive opinions when dealing with injuries resulting from accidents, battle, or illness. Its functions may involve judging the type of illness, triaging the wounded according to battle injuries, deciding the severity of symptoms of an illness or injury, and managing resources in the context of traumatic events. The proposed computer-aided decision-making system aims at estimating the severity of blood volume loss. Specifically, severe hemorrhage, a potentially life-threatening condition accompanying many traumatic injuries that requires immediate treatment, is an ongoing, significant loss of blood volume resulting in decreased blood and oxygen perfusion of vital organs. Hemorrhage and blood loss can occur at different levels: mild, moderate, or severe. The proposed system will assist physicians by estimating information such as the severity of blood volume loss and hemorrhage, so that timely measures can be taken not only to save lives but also to reduce long-term complications and the cost caused by mismatched operations and treatments. The general framework of the proposed research contains three tasks, and several novel and transformative concepts are integrated into the system. First is the preprocessing of the raw signals: adaptive filtering is adopted and customized to filter noise, and two detection algorithms (QRS-complex detection and systolic/diastolic wave detection) are designed. The second task is feature extraction.
The proposed system combines features from the time domain, the frequency domain, nonlinear analysis, and multi-model analysis to better represent the patterns that emerge when hemorrhage happens. Third, a machine learning algorithm is designed for classifying these patterns: a novel version of error-correcting output codes (ECOC) is developed and investigated for high accuracy and real-time decision making. The features and characteristics of this machine learning method are essential for the proposed computer-aided trauma decision-making system. The system is tested against the Lower Body Negative Pressure (LBNP) dataset, and the results indicate its accuracy and reliability.
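The abstract does not describe the thesis's novel ECOC variant, but the classical decoding step that any ECOC scheme builds on can be sketched as follows; the codebook and the class labels are illustrative assumptions, not the thesis's actual design:

```python
import numpy as np

# Hypothetical 3-class, 5-bit code matrix; the thesis's actual codebook
# and its novel ECOC variant are not described in the abstract.
CODEBOOK = np.array([
    [0, 1, 0, 1, 1],   # e.g. mild blood-volume loss
    [1, 0, 1, 0, 1],   # e.g. moderate
    [1, 1, 0, 0, 0],   # e.g. severe
])

def ecoc_decode(binary_votes):
    """Classical ECOC decoding: each binary classifier votes one bit;
    the predicted class is the codeword nearest in Hamming distance,
    so a few individual classifier errors can be corrected."""
    votes = np.asarray(binary_votes)
    hamming = np.sum(CODEBOOK != votes, axis=1)
    return int(np.argmin(hamming))
```

The error-correcting property is what makes the scheme attractive for noisy physiological data: a single misbehaving binary classifier does not change the decoded class.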

    Development of a Novel Dataset and Tools for Non-Invasive Fetal Electrocardiography Research

    This PhD thesis presents the development of a novel open multi-modal dataset for advanced studies on fetal cardiological assessment, along with a set of signal processing tools for its exploitation. The Non-Invasive Fetal Electrocardiography (ECG) Analysis (NInFEA) dataset features multi-channel electrophysiological recordings characterized by high sampling frequency and digital resolution, a maternal respiration signal, synchronized fetal trans-abdominal pulsed-wave Doppler (PWD) recordings, and clinical annotations provided by expert clinicians at the time of signal collection. To the best of our knowledge, no similar dataset is available. The signal processing tools target both the PWD and the non-invasive fetal ECG, exploiting the recorded dataset. Regarding the former, the work focuses on preparing the signal for the automatic measurement of relevant morphological features already adopted in clinical practice for cardiac assessment. A key step is the automatic identification of complete and measurable cardiac cycles in the PWD videos: a rigorous methodology was deployed to analyze the different processing steps involved in the automatic delineation of the PWD envelope, and different approaches were then implemented for the supervised classification of cardiac cycles, discriminating complete and measurable cycles from malformed or incomplete ones. Finally, preliminary measurement algorithms were developed to extract clinically relevant parameters from the PWD. Regarding the fetal ECG, the thesis concentrates on a systematic analysis of adaptive-filter performance for non-invasive fetal ECG extraction, identified as the reference tool throughout the thesis. Two further studies are then reported: one on wavelet-based denoising of the extracted fetal ECG and another on fetal ECG quality assessment from the analysis of the raw abdominal recordings.
Overall, the thesis represents an important milestone in the field, promoting the open-data approach and introducing automated analysis tools that could be easily integrated into future medical devices.
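Adaptive-filter fetal ECG extraction of the kind analyzed in the thesis is commonly realized as an LMS noise canceller, with a thoracic maternal lead as the reference input; a minimal sketch under that standard setup (tap count and step size are illustrative assumptions, not the thesis's configuration):

```python
import numpy as np

def lms_cancel(abdominal, maternal_ref, n_taps=4, mu=0.05):
    """LMS adaptive noise canceller: an FIR filter on the thoracic
    maternal reference learns to reproduce the maternal component of
    the abdominal lead; the prediction error that remains approximates
    the fetal ECG."""
    w = np.zeros(n_taps)
    fetal_est = np.zeros(len(abdominal))
    for n in range(n_taps - 1, len(abdominal)):
        x = maternal_ref[n - n_taps + 1:n + 1][::-1]  # newest sample first
        e = abdominal[n] - w @ x       # residual after maternal cancellation
        w += 2 * mu * e * x            # LMS weight update
        fetal_est[n] = e
    return fetal_est
```

The fetal contribution is uncorrelated with the maternal reference, so the filter converges to cancelling only the maternal component and the fetal signal survives in the residual.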

    CVAR-Seg: An Automated Signal Segmentation Pipeline for Conduction Velocity and Amplitude Restitution

    Background: Rate-varying S1S2 stimulation protocols can be used in restitution studies to characterize atrial substrate, ionic remodeling, and atrial fibrillation risk. Clinical restitution studies with numerous patients create large amounts of such data, so evaluating clinically acquired S1S2 stimulation protocol data calls for an automated pipeline delivering consistent, robust, reproducible, and precise estimates of local activation times, electrogram amplitude, and conduction velocity. Here, we present the CVAR-Seg pipeline, developed with a focus on three challenges: (i) no prior knowledge of the stimulation parameters is available, so arbitrary protocols must be supported; (ii) the pipeline must remain robust under different noise conditions; (iii) the pipeline must support segmentation of atrial activities in close temporal proximity to the stimulation artifact, which is challenging due to the larger amplitude and slope of the stimulus compared with the atrial activity. Methods and Results: The S1 basic cycle length was estimated by time-interval detection. Stimulation time windows were segmented by detecting synchronous peaks in different channels surpassing an amplitude threshold and identifying the time intervals between detected stimuli. Elimination of the stimulation artifact by a matched filter allowed detection of local activation times in temporal proximity to the stimulus. A non-linear signal energy operator was used to segment periods of atrial activity. Geodesic and Euclidean inter-electrode distances allowed approximation of conduction velocity. The automatic segmentation performance of the CVAR-Seg pipeline was evaluated on 37 synthetic datasets with decreasing signal-to-noise ratios. Noise was modeled by reconstructing the frequency spectrum of clinical noise. The pipeline retained a median local activation time error below a single sample (1 ms) for signal-to-noise ratios as low as 0 dB, representing a high clinical noise level.
As a proof of concept, the pipeline was tested on a CARTO case of a paroxysmal atrial fibrillation patient and yielded plausible restitution curves for conduction velocity and amplitude. Conclusion: The proposed, openly available CVAR-Seg pipeline promises fast, fully automated, robust, and accurate evaluation of atrial signals even at low signal-to-noise ratios. This is achieved by solving the proximity problem of stimulation and atrial activity, enabling standardized evaluation of large datasets without introducing human bias.
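One concrete step of the pipeline, segmenting periods of atrial activity with a non-linear signal energy operator, is commonly implemented as the Teager energy operator; a minimal sketch, in which the smoothing length and threshold are illustrative stand-ins for the pipeline's tuned criterion:

```python
import numpy as np

def nleo(x):
    """Non-linear (Teager) energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
    It responds to both amplitude and instantaneous frequency, so sharp
    atrial deflections stand out against the low-frequency baseline."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def activity_mask(x, threshold, smooth=5):
    """Mark samples whose smoothed NLEO exceeds a threshold; smoothing
    length and threshold here are illustrative assumptions."""
    energy = np.convolve(nleo(x), np.ones(smooth) / smooth, mode="same")
    return energy > threshold
```

Because the operator weights frequency as well as amplitude, it separates brisk atrial deflections from slower far-field components at the same voltage, which is what makes it useful near the stimulation artifact.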

    Artificial Intelligence for Noninvasive Fetal Electrocardiogram Analysis


    Multimodal Signal Processing for Diagnosis of Cardiorespiratory Disorders

    This thesis addresses the use of multimodal signal processing to develop automated algorithms for two cardiorespiratory disorders. The aim of the first application was to reduce the false alarm rate in an intensive care unit. The goal was to detect five critical arrhythmias by processing multimodal signals, including photoplethysmography, arterial blood pressure, and Lead II and augmented right arm electrocardiogram (ECG). A hierarchical approach was used to process the signals, with a custom signal processing technique for each arrhythmia type. Sleep disorders are a prevalent health issue that is currently costly and inconvenient to diagnose, as diagnosis normally requires an overnight hospital stay by the patient. In the second application, we designed automated signal processing algorithms for the diagnosis of sleep apnoea, with a main focus on ECG signal processing. We estimated the ECG-derived respiratory (EDR) signal using different methods: QRS-complex area, principal component analysis (PCA), and kernel PCA. We proposed two algorithms (segmented PCA and approximated PCA) for EDR estimation that make the PCA method applicable to overnight recordings and rectify its computational and memory requirements. We compared the EDR information against the chest respiratory effort signals. Performance was evaluated using three automated machine learning algorithms, linear discriminant analysis (LDA), extreme learning machine (ELM), and support vector machine (SVM), on two databases: the MIT PhysioNet database and the St. Vincent’s database. The results showed that the QRS-area method for EDR estimation combined with the LDA classifier performed best, and that the EDR signals contain respiratory information useful for discriminating sleep apnoea.
As a final step, heart rate variability (HRV) and cardiopulmonary coupling (CPC) features were extracted and combined with the EDR features, and temporal optimisation techniques were applied. The cross-validation results of the minute-by-minute apnoea classification achieved an accuracy of 89%, a sensitivity of 90%, a specificity of 88%, and an AUC of 0.95, which is comparable to the best results reported in the literature.
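The QRS-area idea behind the best-performing EDR method can be sketched as follows; the window half-width and the use of a rectified-sum area proxy are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def edr_qrs_area(ecg, r_peaks, half_window=5):
    """EDR by the QRS-area method: respiration modulates the electrical
    axis and thoracic impedance, so the rectified area of each QRS
    complex, one value per detected beat, traces a respiratory signal."""
    edr = []
    for r in r_peaks:
        lo = max(r - half_window, 0)
        hi = min(r + half_window + 1, len(ecg))
        edr.append(np.sum(np.abs(ecg[lo:hi])))  # rectified-area proxy
    return np.array(edr)
```

The output is sampled once per heartbeat, so it is typically resampled to a uniform rate before spectral analysis or apnoea classification.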

    Analysis of Atrial Electrograms

    This work provides methods to mathematically measure and analyze features of atrial electrograms, especially complex fractionated atrial electrograms (CFAEs). Automated classification of CFAEs into clinically meaningful classes is applied, and the newly gained electrogram information is visualized on patient-specific 3D models of the atria. Clinical applications of the presented methods showed that quantitative measures of CFAEs reveal beneficial information about the underlying arrhythmia.

    Median based method for baseline wander removal in photoplethysmogram signals

    © 2014 IEEE. Removal of baseline wander is a crucial step in the signal conditioning stage of photoplethysmography signals. Hence, a method for removing baseline wander from photoplethysmography signals based on two stages of median filtering is proposed in this paper. Recordings from the PhysioNet database are used to validate the proposed method. For comparison with our novel two-stage median filtering method, two-stage moving average filtering is also applied to remove baseline wander from photoplethysmography signals. Our experimental results show that the two-stage median filtering method is more effective in removing baseline wander from photoplethysmography signals, improving the cross-correlation while introducing minimal distortion of the signal of interest. Although the method is proposed for baseline wander in photoplethysmography signals, it can be applied to other biomedical signals as well.
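A minimal sketch of the two-stage median filtering idea: a cascade of two running medians estimates the baseline, which is then subtracted. The window lengths here are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

def running_median(x, k):
    """Running median with odd window length k; edges use edge padding."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(xp, k)
    return np.median(windows, axis=1)

def remove_baseline(ppg, fs):
    """Two-stage median baseline removal: the first median suppresses
    the pulse waves, the second smooths the remaining trend; the
    cascade output is the baseline estimate, which is subtracted.
    The ~0.6 s and ~1.2 s windows are illustrative assumptions."""
    w1 = int(0.6 * fs) | 1   # first-stage window, forced odd
    w2 = int(1.2 * fs) | 1   # second-stage window, forced odd
    baseline = running_median(running_median(ppg, w1), w2)
    return ppg - baseline
```

Medians are attractive here because, unlike moving averages, they follow slow drift without smearing the sharp systolic upstrokes into the baseline estimate.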