2 research outputs found

    Compressed Segmented Beat Modulation Method using Discrete Cosine Transform

    No full text
    Currently used 24-hour electrocardiogram (ECG) monitors have been shown to miss arrhythmias that occur infrequently or outside a standardized ECG test. Hence, online ECG processing and wearable sensing applications have become increasingly popular in recent years as a solution to the continuous, long-term ECG monitoring problem. With the increase in the usage of online platforms and wearable devices, there arises a need for increased storage capacity to store and transmit lengthy ECG recordings, both offline and over the cloud, for continuous monitoring by clinicians. In this work, a discrete cosine transform (DCT) compressed segmented beat modulation method (SBMM) is proposed, and its applicability to ambulatory ECG monitoring is tested using the Massachusetts Institute of Technology-Beth Israel Deaconess Medical Center (MIT-BIH) ECG Compression Test Database, which contains Holter tape normal sinus rhythm ECG recordings. The method is evaluated using the signal-to-noise ratio (SNR) and compression ratio (CR), considering varying levels of signal energy in the reconstructed ECG signal. For denoising, an average SNR of 4.56 dB was achieved, representing an average overall decline of 1.68 dB (37.9%) compared to uncompressed signal processing, while 95% of the signal energy is kept intact and the signal is quantized at 6 bits for storage (CR=2) compared to the original 12 bits, resulting in a 50% reduction in storage size.
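
    The abstract gives no code, but the DCT-compress-then-quantize idea it describes can be sketched as follows. This is a minimal illustration, not the paper's SBMM pipeline: the function names, the energy-ordering heuristic, and the synthetic Gaussian "beat" are assumptions of mine; only the 95% energy retention and 6-bit quantization targets come from the abstract.

    ```python
    import numpy as np
    from scipy.fft import dct, idct

    def compress_beat(beat, energy_keep=0.95, bits=6):
        """Sketch: DCT a beat segment, keep the largest coefficients covering
        `energy_keep` of the signal energy, and quantize them to `bits` bits
        (6 bits vs. the original 12 gives the CR=2 cited in the abstract)."""
        coeffs = dct(beat, norm="ortho")
        order = np.argsort(np.abs(coeffs))[::-1]          # largest first
        frac = np.cumsum(coeffs[order] ** 2) / np.sum(coeffs ** 2)
        k = int(np.searchsorted(frac, energy_keep)) + 1   # coeffs needed
        kept = np.zeros_like(coeffs)
        kept[order[:k]] = coeffs[order[:k]]
        scale = float(np.abs(kept).max()) or 1.0
        levels = 2 ** (bits - 1) - 1                      # signed range
        quantized = np.round(kept / scale * levels)
        return quantized, scale, levels

    def reconstruct(quantized, scale, levels):
        """Dequantize and invert the DCT."""
        return idct(quantized / levels * scale, norm="ortho")

    # Synthetic stand-in for one ECG beat (a crude R-wave-like pulse).
    t = np.linspace(0.0, 1.0, 256)
    beat = np.exp(-((t - 0.5) ** 2) / 0.002)
    q, s, lv = compress_beat(beat)
    rec = reconstruct(q, s, lv)
    snr = 10 * np.log10(np.sum(beat ** 2) / np.sum((beat - rec) ** 2))
    ```

    Keeping 95% of the energy bounds the truncation error at 5%, so the reconstruction SNR stays above roughly 13 dB before quantization noise; the paper's reported figures additionally reflect denoising and real Holter data, which this toy example does not model.
    
    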

    Increasing Accuracy Performance through Optimal Feature Extraction Algorithms

    This research developed models and techniques to improve the three key modules of popular recognition systems: preprocessing, feature extraction, and classification. Improvements were made in four key areas: processing speed, algorithm complexity, storage space, and accuracy. The focus was on the application areas of face, traffic sign, and speaker recognition. In the preprocessing module of facial and traffic sign recognition, improvements were made through the utilization of grayscaling and anisotropic diffusion. In the feature extraction module, improvements were made in two different ways: first, through the use of mixed transforms, and second, through a convolutional neural network (CNN) that best fits specific datasets. The mixed transform system consists of various combinations of the Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT), which have a reliable track record for image feature extraction. In terms of the proposed CNN, a neuroevolution system was used to determine the characteristics and layout of a CNN to best extract image features for particular datasets. In the speaker recognition system, the improvement to the feature extraction module comprised a quantized spectral covariance matrix and a two-dimensional Principal Component Analysis (2DPCA) function. In the classification module, enhancements were made in visual recognition through the use of two neural networks: the multilayer sigmoid network and the convolutional neural network. Results show that the proposed improvements in the three modules led to an increase in accuracy as well as reduced algorithmic complexity, with corresponding reductions in storage space and processing time.
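
    The mixed DWT+DCT transform idea can be illustrated with a short sketch. This is my own minimal rendering of the general technique, not the dissertation's implementation: the one-level Haar step, the 8x8 low-frequency block, and the function names are all assumptions, and a random array stands in for a real grayscale image.

    ```python
    import numpy as np
    from scipy.fft import dctn

    def haar_ll(image):
        """One level of a 2-D Haar DWT, returning only the low-low
        (approximation) subband: pairwise averages down rows, then columns."""
        lo_rows = (image[::2, :] + image[1::2, :]) / np.sqrt(2.0)
        return (lo_rows[:, ::2] + lo_rows[:, 1::2]) / np.sqrt(2.0)

    def mixed_transform_features(image, keep=8):
        """Mixed-transform sketch (DWT then DCT): take the 2-D DCT of the
        Haar approximation subband and keep the top-left `keep` x `keep`
        low-frequency block as the feature vector."""
        coeffs = dctn(haar_ll(image), norm="ortho")
        return coeffs[:keep, :keep].ravel()

    img = np.random.default_rng(0).random((32, 32))  # stand-in grayscale image
    feat = mixed_transform_features(img)             # 64-dimensional features
    ```

    The DWT step discards fine spatial detail while the DCT compacts what remains into a few low-frequency coefficients, which is why such cascades reduce both feature dimensionality and downstream storage and processing cost.
    
    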