
    Modeling of electrocardiogram signals using predefined signature and envelope vector sets

    A novel method is proposed to model ECG signals by means of predefined signature and envelope vector sets (PSEVS). On a frame basis, an ECG signal is reconstructed by multiplying three model parameters: the predefined signature vector (PSV), the predefined envelope vector (PEV), and a frame-scaling coefficient (FSC). All PSVs and PEVs are labeled and stored in their respective sets for use in the reconstruction process. Each ECG frame is thus modeled, in the least-mean-square sense, by the set members labeled with the indices R and K together with the frame-scaling coefficient. The proposed method is assessed using the percentage root-mean-square difference (PRD) and visual inspection. The results reveal that the method achieves a significant compression ratio (CR) with low PRD values while preserving diagnostic information, which substantially reduces the communication bandwidth required in telediagnosis. Copyright (c) 2007 Hakan Gurkan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
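The multiplicative frame model and the PRD measure described above can be sketched as follows. The vector names and toy values are hypothetical; only the element-wise product FSC × PEV × PSV and the standard PRD formula are taken from the abstract.

```python
import math

def reconstruct_frame(psv, pev, fsc):
    """Reconstruct one ECG frame as the element-wise product
    FSC * PEV * PSV (an illustrative realisation of the PSEVS model)."""
    return [fsc * e * s for e, s in zip(pev, psv)]

def prd(original, reconstructed):
    """Percentage root-mean-square difference between two frames."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

# toy frame with a matching signature/envelope pair and scaling coefficient
x   = [0.0, 0.5, 1.0, 0.5, 0.0]
psv = [0.0, 1.0, 2.0, 1.0, 0.0]
pev = [1.0, 1.0, 1.0, 1.0, 1.0]
x_hat = reconstruct_frame(psv, pev, fsc=0.5)
print(round(prd(x, x_hat), 2))  # 0.0 for a perfect reconstruction
```

In the actual scheme only the indices R, K and the coefficient FSC would be transmitted, which is where the compression gain comes from.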

    Compression of ECG signals using variable-length classified vector sets and wavelet transforms

    In this article, an improved and more efficient algorithm for the compression of electrocardiogram (ECG) signals is presented, which combines modeling of the ECG signal by variable-length classified signature and envelope vector sets (VL-CSEVS) with residual error coding via the wavelet transform. In particular, we form the VL-CSEVS from ECG signals in a way that exploits the relationship between energy variation and clinical information. The VL-CSEVS are unique patterns generated from many thousands of ECG segments of two different lengths, obtained by an energy-based segmentation method, and are then provided to both the transmitter and the receiver of the proposed compression system. The algorithm is tested on the MIT-BIH Arrhythmia Database and the MIT-BIH Compression Test Database, and its performance is evaluated using metrics such as the percentage root-mean-square difference (PRD), modified PRD (MPRD), maximum error, and clinical evaluation. The experimental results show that the proposed algorithm achieves high compression ratios with low reconstruction error while preserving the diagnostic information in the reconstructed ECG signal, as supported by the clinical tests we carried out. ISIK University [06B302]. The author would like to specially thank Prof. Siddik Yarman, Board of Trustees Chairman of ISIK University, and Umit Guz, Assistant Professor at ISIK University, for their valuable contributions and continuous interest in this article. The author also thanks Prof. Osman Akdemir, cardiologist in the Department of Cardiology at T. C. Maltepe University, and Dr. Ruken Bengi Bakal, cardiologist in the Department of Cardiology at the Kartal Kosuyolu Yuksek Ihtisas Education and Research Hospital, for their valuable clinical contributions and suggestions, and the reviewers for their constructive comments, which improved the technical quality and presentation of the article. The present work was supported by the Scientific Research Fund of ISIK University, project number 06B302.
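The energy-based segmentation into two frame lengths might look like this minimal sketch; the window lengths, the threshold, and the selection rule are illustrative assumptions, not the paper's actual parameters.

```python
def segment_by_energy(signal, short_len=4, long_len=8, threshold=1.0):
    """Split a signal into variable-length frames: regions whose leading
    window carries high energy (e.g. around the QRS complex) get short
    frames, quiet regions get long ones. Lengths and threshold here are
    illustrative placeholders."""
    frames, i = [], 0
    while i < len(signal):
        window = signal[i:i + short_len]
        energy = sum(v * v for v in window)
        length = short_len if energy > threshold else long_len
        frames.append(signal[i:i + length])
        i += length
    return frames

# a spike followed by a quiet baseline: one short frame, one long frame
beat = [2.0, 2.0, 0.0, 0.0] + [0.0] * 8
print([len(f) for f in segment_by_energy(beat)])  # [4, 8]
```

Short frames around high-energy complexes keep clinically important detail, while long frames over the baseline raise the compression ratio.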

    Modeling Electrocardiogram (ECG) signals via signature and envelope functions

    In this paper, a new method to model ECG signals by means of signature and envelope functions is presented. On a frame basis, any ECG signal Xi(t) is modeled in the form Xi(t) ≈ Ci aK(t) jR(t). In this model, jR(t) is called the Signature Function, since together with the constant Ci it carries almost all of the energy of the frame vector Xi; aK(t) is referred to as the Envelope Function, since it matches the envelope of Ci jR(t) to the original frame vector Xi; and Ci is called the Frame-Scaling Coefficient. It is demonstrated that the sets F = {jr(t)} and A = {ak(t)} constitute "Signature and Envelope Function Banks" that describe any measured ECG signal. Thus, each ECG frame is described in terms of the two indices R and K of the Signature and Envelope Function Banks and the frame-scaling coefficient Ci. The new modeling method is shown to provide significant data compression with low reconstruction error while preserving the diagnostic information in the reconstructed ECG signal. Furthermore, once the Signature and Envelope Function Banks are stored at each communication node, transmission of an ECG signal reduces to the transmission of the indices R and K of the [ak(t), jr(t)] pairs and the coefficient Ci, which also results in considerable savings in transmission bandwidth. Keywords: Compression, Modeling, ECG.
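Since the fit is stated to be in the least-mean-square sense, the frame-scaling coefficient admits the usual closed form. This is a standard least-squares step written here for sampled frame vectors, with the symbol u_RK introduced for the element-wise product (it does not appear in the abstract):

```latex
x_i[n] \approx C_i\, a_K[n]\, j_R[n],
\qquad
u_{RK}[n] = a_K[n]\, j_R[n],
\qquad
C_i^{*} = \arg\min_{C}\ \lVert x_i - C\, u_{RK} \rVert^{2}
        = \frac{x_i^{\mathsf{T}} u_{RK}}{u_{RK}^{\mathsf{T}} u_{RK}} .
```

For each candidate (R, K) pair, the bank search reduces to evaluating this inner-product ratio and keeping the pair with the smallest residual.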

    A Novel Image Compression Method Based on Classified Energy and Pattern Building Blocks

    In this paper, a novel image compression method based on the generation of so-called classified energy and pattern blocks (CEPB) is introduced and evaluation results are presented. The CEPB is constructed from training images and then stored at both the transmitter and receiver sides of the communication system. The energy and pattern blocks of the input images to be reconstructed are determined in the same way as in the construction of the CEPB. This process is coupled with a matching procedure that determines the index numbers of the classified energy and pattern blocks in the CEPB which best represent (match) the energy and pattern blocks of the input images. The encoding parameters are the block-scaling coefficient and the index numbers of the energy and pattern blocks determined for each block of the input image. These parameters are sent from the transmitter to the receiver, where the classified energy and pattern blocks associated with the index numbers are retrieved from the CEPB. The input image is then reconstructed block by block at the receiver using the proposed mathematical model. Evaluation results show that the method provides considerable image compression ratios and good image quality even at low bit rates. The work described in this paper was funded by the Isik University Scientific Research Fund (project contract no. 10B301). The author would like to thank Professor B. S. Yarman (Istanbul University, College of Engineering, Department of Electrical-Electronics Engineering), Assistant Professor Hakan Gurkan (Isik University, Engineering Faculty, Department of Electrical-Electronics Engineering), the researchers in the International Computer Science Institute (ICSI) Speech Group, University of California at Berkeley, CA, USA, and the researchers in the SRI International Speech Technology and Research (STAR) Laboratory, Menlo Park, CA, USA, for many helpful discussions on this work during his postdoctoral fellowship years. The author also thanks the anonymous reviewers for their valuable comments and suggestions, which substantially improved the quality of this paper.
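The matching procedure that selects the best-representing index from the CEPB can be sketched as a nearest-neighbour search over flattened blocks. The mean-squared-error criterion and the toy codebook below are illustrative assumptions, since the abstract does not specify the distance measure.

```python
def best_match_index(block, codebook):
    """Return the index of the codebook entry closest to `block` in
    mean-squared error; a hypothetical stand-in for the CEPB matching
    step (blocks are flattened to 1-D lists for simplicity)."""
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return min(range(len(codebook)), key=lambda i: mse(block, codebook[i]))

# toy codebook of three 2x2 blocks, flattened
codebook = [[0, 0, 0, 0], [1, 1, 1, 1], [0, 1, 0, 1]]
print(best_match_index([0.9, 1.1, 1.0, 1.0], codebook))  # 1
```

Only this index (plus the block-scaling coefficient) is transmitted; the receiver pulls the block back out of its copy of the codebook.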

    Analysis of Passive Magnetic Inspection Signals Using the Haar Wavelet and Asymmetric Gaussian Chirplet Model (AGCM)

    Nowadays, Non-Destructive Testing (NDT) techniques are an essential foundation of infrastructure retrofit and rehabilitation plans, mainly because of the huge amount of existing construction and the high cost of demolition and reconstruction. Modern NDT methods are moving toward automated detection to increase the speed and probability of detection, which enlarges the size of inspection data and raises the demand for new data analysis methods. NDT methods are divided into two main groups: active and passive. In an active method, an external potential is discharged into the object and the reflected wave is recorded, whereas passive methods use the self-generated magnetic field of the object. The magnetic signal of a ferromagnetic material measured by a passive method is therefore weaker than that measured by an active method, and detecting defects and anomalies requires a wider variety of signal processing methods. The Passive Magnetic Inspection (PMI) method, a passive NDT technology, is used in this thesis for the quantitative assessment of ferromagnetic materials. The success of PMI depends on detecting anomalies in the passive magnetic signals, which differ for every test. This research aims to develop appropriate signal processing methods to enhance the quality of PMI defect detection in ferromagnetic materials. The thesis has two main parts and presents two computer-based inspection data analysis methods, based on the Haar wavelet and the Asymmetric Gaussian Chirplet Model (AGCM). PMI is used to scan ferromagnetic materials and produce the raw magnetic data analyzed by the Haar wavelet and the AGCM. The first part of this study describes the Haar wavelet method for rebar defect detection, in which the Haar wavelet is used to analyze the PMI magnetic data of embedded reinforcing steel rebar. The corrugated surface of reinforcing steel makes the detection of defects harder than in flat plates. The up-and-down shape of the Haar wavelet function can filter out the effect of the repeating corrugations of steel rebars on the PMI signal and thereby better identify the defects. As a case study, rebar defects in the Toogood Pond Dam piers were detected using the Haar wavelet analysis and verified against the Absolute Gradient (AG) method by visual comparison of the resulting signals and by the correlation coefficient. The predicted number of points with a rebar area loss higher than 4% is generally the same for the AG and Haar wavelet methods, and the mean correlation coefficient between the signals analyzed by the two methods over all rebars is 0.8. The second part of this study investigates the use of the AGCM to simulate PMI signals. Three rail samples were scanned to extract a three-dimensional magnetic field along specific PMI transit lines of each sample for the AGCM simulations. Errors, defined as the absolute value of the difference between signal and simulation, were taken as a measure of simulation accuracy in each direction. Because the samples' lengths differed, error values were normalized with respect to length to put the three samples on the same scale. The Simulation Error Factor (SEF) was used to measure the error, and sample 3 showed the lowest value. Finally, statistical properties of the samples' SEF, such as the standard deviation and covariance, were evaluated, and the best distribution was fitted to each data set using the Probability Paper Plot (PPP) method. The Log-Normal probability distribution showed the best fit to the SEF values. These distributions and statistical properties help to detect outliers in future data sets and to identify defects.
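A single level of the Haar transform shows why its up-and-down shape separates a pair-aligned corrugation from the underlying signal. This minimal sketch is illustrative only and is not the thesis's actual processing chain.

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar wavelet transform: pairwise
    sums give the smooth approximation, pairwise differences the
    detail coefficients."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

# a pair-aligned +1/-1 corrugation vanishes from the approximation and
# is concentrated in the detail band, so any slow defect-related trend
# superimposed on the corrugation survives in the approximation
approx, detail = haar_step([1.0, -1.0, 1.0, -1.0])
print(approx)  # [0.0, 0.0]
```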

    Advanced Signal Processing in Wearable Sensors for Health Monitoring

    Smart wearable devices on a miniature scale are becoming increasingly widely available, typically in the form of smart watches and other connected devices. Consequently, devices that assist in measurements such as electroencephalography (EEG), electrocardiography (ECG), electromyography (EMG), blood pressure (BP), photoplethysmography (PPG), heart rhythm, respiration rate, apnoea, and motion detection are becoming more available and play a significant role in healthcare monitoring. The industry is placing great emphasis on making these devices and technologies available on smart devices such as phones and watches. Such measurements are clinically and scientifically useful for real-time monitoring, long-term care, diagnosis, and therapeutic techniques. However, a pertinent issue is that recorded data are usually noisy, contain many artefacts, and are affected by external factors such as movement and physical condition. To obtain accurate and meaningful indicators, the signal has to be processed and conditioned so that the measurements are accurate and free from noise and disturbances. In this context, many researchers have used recent technological advances in wearable sensors and signal processing to develop smart and accurate wearable devices for clinical applications. The processing and analysis of physiological signals is a key issue for these smart wearable devices. Consequently, ongoing work in this field includes research on filtering, quality checking, signal transformation and decomposition, feature extraction and, most recently, machine learning-based methods.

    Proceedings of the 3rd International Conference on Artificial Intelligence towards Industry 4.0 (ICAII4'2020)

    Online (XIV, 67 pages)

    Advanced Biometrics with Deep Learning

    Biometrics such as fingerprint, iris, face, hand print, hand vein, speech, and gait recognition have become commonplace as a means of identity management in various applications. Biometric systems follow a typical pipeline composed of separate preprocessing, feature extraction, and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic, handcrafted preprocessing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction, and recognition based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that address challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into 4 categories according to biometric modality: face biometrics, medical electronic signals (EEG and ECG), voice print, and others.

    Epileptic Seizure Detection And Prediction From Electroencephalogram Using Neuro-Fuzzy Algorithms

    This dissertation presents innovative approaches based on fuzzy logic for epileptic seizure detection and prediction from the electroencephalogram (EEG). The fuzzy rule-based algorithms were developed with the aim of improving the quality of life of epilepsy patients through intelligent methods. An adaptive fuzzy logic system was developed to detect seizure onset in a patient-specific way. Fuzzy if-then rules were developed to mimic human reasoning and to take advantage of combining information in the spatio-temporal domain. The fuzzy c-means clustering technique was used to optimize the membership functions for varying patterns in the feature domain. In addition, the adaptive neuro-fuzzy inference system (ANFIS) is applied for the efficient classification of several commonly occurring artifacts in EEG. Finally, we present a neuro-fuzzy approach to seizure prediction based on the ANFIS. A patient-specific ANFIS classifier, followed by postprocessing methods, was constructed to forecast seizures. Three nonlinear predictive features were used to characterize changes prior to a seizure: the similarity index, phase synchronization, and nonlinear interdependence. The ANFIS classifier was constructed with these features as inputs, and fuzzy if-then rules were generated from the complex relationships in the feature space provided during training. In this dissertation, the application of the neuro-fuzzy algorithms to epilepsy diagnosis and treatment was demonstrated on different datasets. Several performance measures, such as detection delay, sensitivity, and specificity, were calculated and compared with results reported in the literature. The proposed algorithms have the potential to be used in diagnostic and therapeutic applications, as they can be implemented in an implantable medical device to detect a seizure, forecast a seizure, and initiate neurostimulation therapy for seizure prevention or abortion.
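The fuzzy side of such a classifier can be illustrated with Gaussian membership functions and a zero-order Sugeno-style rule base over the three features named above. Every parameter and the aggregation rule below are hypothetical placeholders, not the dissertation's trained system.

```python
import math

def gaussmf(x, mean, sigma):
    """Gaussian membership function, a common choice for ANFIS inputs."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def seizure_score(similarity, sync, interdep):
    """Toy zero-order Sugeno rule base: each feature's 'abnormal'
    membership votes for an impending seizure; means and sigmas are
    illustrative placeholders."""
    mu = [gaussmf(similarity, 0.0, 0.3),  # low similarity index is abnormal
          gaussmf(sync, 1.0, 0.3),        # high phase synchronization
          gaussmf(interdep, 1.0, 0.3)]    # high nonlinear interdependence
    return sum(mu) / len(mu)              # average as crude defuzzification

print(seizure_score(0.0, 1.0, 1.0))  # 1.0: all rules fully fire
```

A trained ANFIS would instead tune the membership parameters and rule consequents from labeled EEG features via hybrid gradient/least-squares learning.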