
    Performance Analysis of Fetal-Phonocardiogram Signal Denoising Using The Discrete Wavelet Transform

    The need for comprehensive fetal heart rate investigation has driven the development of passive, non-invasive diagnostic instruments beyond the USG or CTG methods. Fetal phonocardiography (f-PCG), which uses the auscultation method, meets these criteria, but its interpretation is frequently disturbed by the presence of noise: maternal heart and body organ sounds, fetal movement noise, and ambient noise from the recording environment all corrupt the f-PCG signal. In this work, the discrete wavelet transform (DWT) is used to remove noise from the f-PCG signal, with SNR as the observed performance parameter. The effect of changes in wavelet type and threshold type on the SNR value was examined. The tests were carried out on f-PCG data taken from physio.net. Initial SNR values ranged from -26.7 dB to -4.4 dB; after applying the DWT procedure to the f-PCG signal, the SNR increased significantly. Based on the test results, the coif1 wavelet with soft thresholding gave the best result, with an SNR of 11.69 dB. Since coif1 outperformed the other mother wavelets used in this work, it is recommended for f-PCG signal analysis in fetal heart rate investigation.
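    A minimal sketch of this kind of DWT denoising pipeline using PyWavelets. The decomposition level and the universal-threshold rule below are assumptions for illustration; the abstract does not state which threshold selection rule the authors used.

```python
# Sketch of DWT-based f-PCG denoising with soft thresholding.
# Assumptions: level-5 decomposition and the universal threshold with a
# MAD noise estimate, which are common defaults, not the authors' stated choices.
import numpy as np
import pywt

def denoise_fpcg(x, wavelet="coif1", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Noise scale estimated from the finest detail band (common heuristic).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))  # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(x)]

def snr_db(clean, estimate):
    # SNR of the estimate relative to a known clean reference.
    noise = clean - estimate
    return 10.0 * np.log10(np.sum(clean**2) / np.sum(noise**2))
```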

    A flexible hardware architecture for 2-D discrete wavelet transform: design and FPGA implementation

    The Discrete Wavelet Transform (DWT) is a powerful signal processing tool that has recently gained widespread acceptance in the field of digital image processing. The multiresolution analysis provided by the DWT addresses the shortcomings of the Fourier Transform and its derivatives. The DWT has proven useful in the area of image compression, where it replaces the Discrete Cosine Transform (DCT) in the new JPEG2000 and MPEG-4 image and video compression standards. The Cohen-Daubechies-Feauveau (CDF) 5/3 and CDF 9/7 DWTs are used for reversible lossless and irreversible lossy compression encoders in the JPEG2000 standard, respectively. The design and implementation of a flexible hardware architecture for the 2-D DWT is presented in this thesis. This architecture can be configured to perform both the forward and inverse DWT for any DWT family, using fixed-point arithmetic and no auxiliary memory. The Lifting Scheme method is used to perform the DWT instead of the less efficient convolution-based methods. The DWT core is modeled using MATLAB and highly parameterized VHDL. The VHDL model is synthesized to a Xilinx FPGA to prove hardware functionality. The CDF 5/3 and CDF 9/7 versions of the DWT are both modeled and used as comparisons throughout this thesis. The DWT core is used in conjunction with a very simple image denoising module to demonstrate the potential of the DWT core to perform image processing techniques. The CDF 5/3 hardware produces identical results to its theoretical MATLAB model. The fixed-point CDF 9/7 deviates very slightly from its floating-point MATLAB model, with a ~59 dB PSNR deviation for nine levels of DWT decomposition. The execution time for performing both DWTs is nearly identical, at ~14 clock cycles per image pixel for one level of DWT decomposition. The hardware area generated for the CDF 5/3 is ~16,000 gates, using only 5% of the Xilinx FPGA hardware area, with a 2.185 MHz maximum clock speed and 24 mW power consumption. The simple wavelet image denoising techniques resulted in cleaned images of up to ~27 dB PSNR.
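    For reference, a software sketch of the lifting steps such a reversible CDF 5/3 core implements. This is a 1-D, even-length-input simplification with mirror-style boundary handling, not the thesis's VHDL architecture; a 2-D core would apply it along rows and then columns.

```python
# One level of the reversible (integer) CDF 5/3 lifting DWT, as in JPEG2000.
# Assumptions: even-length 1-D input and simple mirrored boundaries.
import numpy as np

def cdf53_forward(x):
    """Returns (approx, detail); exactly invertible in integer arithmetic."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    right = np.append(even[1:], even[-1])    # even[i+1], mirrored at the end
    d = odd - ((even + right) >> 1)          # predict step
    left = np.insert(d[:-1], 0, d[0])        # d[i-1], mirrored at the start
    s = even + ((left + d + 2) >> 2)         # update step
    return s, d

def cdf53_inverse(s, d):
    left = np.insert(d[:-1], 0, d[0])
    even = s - ((left + d + 2) >> 2)         # undo update
    right = np.append(even[1:], even[-1])
    odd = d + ((even + right) >> 1)          # undo predict
    x = np.empty(len(s) + len(d), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

    Because each lifting step is undone exactly by its mirror image, the transform is lossless, which is why JPEG2000 uses this filter pair for reversible coding.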

    On-Line Learning and Wavelet-Based Feature Extraction Methodology for Process Monitoring using High-Dimensional Functional Data

    The recent advances in information technology, such as the various automatic data acquisition systems and sensor systems, have created tremendous opportunities for collecting valuable process data. The timely processing of such data for meaningful information remains a challenge. In this research, several data mining methodologies that aid information streaming of high-dimensional functional data are developed. For on-line implementations, two weighting functions for updating support vector regression parameters were developed. The functions use parameters that can easily be set a priori with only slight knowledge of the data involved, and they provide lower and upper bounds for the parameters. The functions are applicable to time series predictions, on-line predictions, and batch predictions. To apply these functions to on-line predictions, a new on-line support vector regression algorithm that uses adaptive weighting parameters is presented. The new algorithm uses a varying rather than a fixed regularization constant and accuracy parameter. The developed algorithm is more robust to the volume of data available for on-line training, as well as to the relative position of the available data in the training sequence. The algorithm improves prediction accuracy by reducing the uncertainty introduced by fixed regression parameters, and by reducing the uncertainty of regression values based on experts' knowledge rather than on the characteristics of the incoming training data. The developed functions and algorithm were applied to feedwater flow rate data and two benchmark time series datasets. The results show that adaptive regression parameters perform better than fixed regression parameters. To reduce the dimension of data with several hundreds or thousands of predictors and to enhance prediction accuracy, a wavelet-based feature extraction procedure, called the step-down thresholding procedure, for identifying and extracting significant features from a single curve was developed; a sketch follows below. The procedure involves transforming the original spectra into wavelet coefficients. It is based on a multiple-hypothesis-testing approach and controls the family-wise error rate in order to guard against selecting insignificant features, without any concern about the amount of noise that may be present in the data. The procedure is therefore applicable to data reduction and/or data denoising. It was compared with six other data-reduction and data-denoising methods from the literature, and was found to consistently perform better than most of the popular methods while performing at the same level as the others. Many real-world data with high-dimensional explanatory variables also have multiple response variables; therefore, selecting the fewest explanatory variables that show high sensitivity to predicting the response variable(s) and low sensitivity to the noise in the data is important for better performance and reduced computational burden. To select the fewest explanatory variables that can predict each of the response variables well, a two-stage wavelet-based feature extraction procedure is proposed. The first stage uses the step-down procedure to extract significant features for each of the curves; representative features are then selected from the extracted features for all curves using a voting selection strategy. Other selection strategies, such as union and intersection, were also described and implemented. The essence of the first stage is to reduce the dimension of the data without any consideration of whether or not the features can predict the response variables accurately. The second stage uses a Bayesian decision theory approach to select those extracted wavelet coefficients that can predict each of the response variables accurately. The two-stage procedure was implemented on near-infrared spectroscopy data and shaft misalignment data. The results show that the second stage further reduces the dimension, and the prediction results are encouraging.
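    One plausible reading of the step-down, FWER-controlled coefficient selection, sketched below. The Holm step-down test and the MAD-based noise estimate are stand-ins chosen for illustration; the thesis's exact test statistics may differ.

```python
# Sketch: FWER-controlled (Holm step-down) selection of wavelet coefficients
# as features for a single curve. Assumptions: Gaussian coefficient noise
# with MAD-estimated scale; wavelet/level choices are illustrative.
import numpy as np
import pywt
from scipy.stats import norm

def stepdown_wavelet_features(curve, wavelet="db4", level=4, alpha=0.05):
    coeffs = pywt.wavedec(curve, wavelet, level=level)
    detail = np.concatenate(coeffs[1:])              # candidate features
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale (MAD)
    pvals = 2.0 * norm.sf(np.abs(detail) / sigma)    # two-sided p-values
    order = np.argsort(pvals)
    m = len(pvals)
    keep = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):               # Holm step-down
        if pvals[idx] > alpha / (m - rank):
            break                                    # stop at first failure
        keep[idx] = True
    return np.where(keep, detail, 0.0), keep         # zero out the rest
```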

    Adaptive wavelet thresholding with robust hybrid features for text-independent speaker identification system

    The robustness of a speaker identification system over an additive noise channel is crucial for real-world applications. In speaker identification (SID) systems, the features extracted from each speech frame are an essential factor in building a reliable identification system. In clean environments the identification system works well; in noisy environments, additive noise degrades it. To overcome the problem of additive noise and achieve high accuracy in the speaker identification system, a feature extraction algorithm based on speech enhancement and combined features is proposed. In this paper, a wavelet thresholding pre-processing stage and feature warping (FW) techniques are used with two combined features, power normalized cepstral coefficients (PNCC) and gammatone frequency cepstral coefficients (GFCC), to improve the identification system's robustness against different types of additive noise. A Universal Background Model Gaussian Mixture Model (UBM-GMM) is used for feature matching between the claimed and actual speakers. The results showed a performance improvement of the proposed feature extraction algorithm over conventional features for most noise types and different SNRs.
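    A sketch of the feature warping step, under one common reading of FW from the speaker recognition literature: each feature dimension's value is rank-mapped to a standard-normal quantile over a sliding window. The 301-frame window is a conventional choice, not a value stated in the abstract.

```python
# Sketch of per-dimension feature warping over a sliding window.
# Assumption: the common 301-frame window from the literature.
import numpy as np
from scipy.stats import norm

def feature_warp(features, window=301):
    """features: (n_frames, n_dims) cepstral features -> warped copy."""
    half = window // 2
    n, _ = features.shape
    warped = np.empty_like(features, dtype=float)
    for t in range(n):
        lo, hi = max(0, t - half), min(n, t + half + 1)
        block = features[lo:hi]
        # Rank of the centre frame within the window, per dimension,
        # mapped to a standard-normal quantile.
        rank = (block < features[t]).sum(axis=0) + 0.5
        warped[t] = norm.ppf(rank / (hi - lo))
    return warped
```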

    On the block thresholding wavelet estimators with censored data

    We consider block thresholding wavelet-based density estimators with randomly right-censored data and investigate their asymptotic convergence rates. Unlike in the complete data case, the empirical wavelet coefficients are constructed through the Kaplan–Meier estimators of the distribution functions in the censored data case. On the basis of a result of Stute [W. Stute, The central limit theorem under random censorship, Ann. Statist. 23 (1995) 422–439] that approximates the Kaplan–Meier integrals as averages of i.i.d. random variables with a certain rate in probability, we can show that these empirical wavelet coefficients can be approximated by averages of i.i.d. random variables with a certain error rate in L2. Therefore we can show that these estimators, based on block thresholding of empirical wavelet coefficients, achieve optimal convergence rates over a large range of Besov function classes B^s_{p,q}, s > 1/p, p ≥ 2, q ≥ 1, and nearly optimal convergence rates when 1 ≤ p < 2. We also show that these estimators achieve optimal convergence rates over a large class of functions that involve many irregularities of a wide variety of types, including chirp and Doppler functions and jump discontinuities. Therefore, in the presence of random censoring, wavelet estimators still provide extensive adaptivity to many irregularities of large function classes. The performance of the estimators is tested via a modest simulation study.
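    For orientation, a sketch of a block thresholding rule of James–Stein type applied to one level of empirical wavelet coefficients. This is the complete-data version with an illustrative constant; the paper's censored-data estimator builds the coefficients from Kaplan–Meier integrals rather than plain averages.

```python
# Sketch of BlockJS-style block thresholding at one resolution level.
# Assumptions: block length ~ log n and lambda = 4.505 (a constant used
# in the block thresholding literature), shown for illustration only.
import numpy as np

def block_threshold(coeffs_level, sigma, lam=4.505):
    d = coeffs_level.astype(float).copy()
    L = max(1, int(np.log(len(d))))            # block length ~ log n
    for start in range(0, len(d), L):
        block = d[start:start + L]
        energy = np.sum(block**2)
        # Shrink the whole block toward zero based on its total energy.
        shrink = max(0.0, 1.0 - lam * L * sigma**2 / energy) if energy > 0 else 0.0
        d[start:start + L] = block * shrink
    return d
```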

    COMPUTER AIDED SYSTEM FOR BREAST CANCER DIAGNOSIS USING CURVELET TRANSFORM

    Breast cancer is a leading cause of death among women worldwide. Early detection is the key to improving breast cancer prognosis. Digital mammography remains one of the most suitable tools for early detection of breast cancer, hence there are strong needs for the development of computer aided diagnosis (CAD) systems that can help radiologists in decision making. The main goal is to increase the diagnostic accuracy rate. In this thesis we developed a computer aided system for the diagnosis and detection of breast cancer using the curvelet transform. The curvelet is a multiscale transform that possesses directionality and anisotropy, and it overcomes some inherent limitations of wavelets in representing edges in images. We started this study by developing a diagnosis system. Five feature extraction methods were developed with curvelet and wavelet coefficients to differentiate between different breast cancer classes, and the results with curvelet and wavelet were compared. The experimental results show a high performance of the proposed methods, with a classification accuracy rate of 97.30%. The thesis then provides an automatic system for breast cancer detection. An automatic thresholding algorithm was used to separate the area composed of the breast and the pectoral muscle from the background of the image. Subsequently, a region growing algorithm was used to locate the pectoral muscle and suppress it from the breast. The work then concentrates on the segmentation of the region of interest (ROI). Two methods are suggested to accomplish the segmentation stage: an adaptive thresholding method and a pattern matching method. Once the ROI has been identified, automatic cropping is performed to extract it from the original mammogram. Subsequently, the suggested feature extraction methods were applied to the segmented ROIs. Finally, the K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) classifiers were used to determine whether a region is abnormal or normal. At this level, the study focuses on two abnormality types: mammographic masses and architectural distortion. Experimental results show that the introduced methods achieve very high detection accuracies. The effectiveness of the proposed methods has been tested with the Mammographic Image Analysis Society (MIAS) dataset. Throughout the thesis, all proposed methods and algorithms have been applied with both curvelet and wavelet for comparison, and statistical tests were also performed. The overall results show that the curvelet transform performs better than wavelet, and the difference is statistically significant.
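    A minimal sketch of one such pipeline: subband-energy features from a transformed mammogram ROI fed to a KNN classifier. The wavelet baseline is shown because curvelet toolboxes are less standardized in Python; the specific features, wavelet, and neighbor count are illustrative assumptions, not the thesis's stated configuration.

```python
# Sketch: subband-energy features from a 2-D wavelet decomposition of an
# ROI, classified with KNN. The curvelet variant would substitute curvelet
# coefficients for the wavelet subbands.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def subband_energy_features(roi, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(roi, wavelet, level=level)
    feats = [np.mean(coeffs[0]**2)]                  # approximation energy
    for (cH, cV, cD) in coeffs[1:]:                  # detail energies
        feats += [np.mean(cH**2), np.mean(cV**2), np.mean(cD**2)]
    return np.array(feats)

# Hypothetical usage on labelled ROIs (rois: list of 2-D arrays, y: labels):
# X = np.stack([subband_energy_features(r) for r in rois])
# clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
```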

    High-Order Sparsity Exploiting Methods with Applications in Imaging and PDEs

    High-order methods are known for their accuracy and computational performance when applied to solving partial differential equations and have widespread use in representing images compactly. Nonetheless, high-order methods have difficulty representing functions containing discontinuities or functions having slow spectral decay in the chosen basis. Certain sensing techniques such as MRI and SAR provide data in terms of Fourier coefficients, and thus prescribe a natural high-order basis. The field of compressed sensing has introduced a set of techniques based on ℓ¹ regularization that promote sparsity and facilitate working with functions having discontinuities. In this dissertation, high-order methods and ℓ¹ regularization are used to address three problems: reconstructing piecewise smooth functions from sparse and noisy Fourier data, recovering edge locations in piecewise smooth functions from sparse and noisy Fourier data, and reducing time-stepping constraints when numerically solving certain time-dependent hyperbolic partial differential equations.
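    A toy instance of the ℓ¹ technique, for concreteness: recovering a sparse vector from subsampled, noisy Fourier measurements via iterative soft thresholding (ISTA). The dissertation's problems regularize edge/jump information of piecewise smooth functions; sparsity directly in x is assumed here only to keep the sketch short.

```python
# ISTA for min_x 0.5*||S F x / sqrt(n) - b||^2 + lam*||x||_1, where S is a
# frequency mask and F/sqrt(n) is the unitary FFT (so step size 1 is safe).
# Assumptions: real-valued sparse x; lam and iteration count are illustrative.
import numpy as np

def ista_fourier(b, mask, lam=0.05, iters=200):
    """b: observed Fourier samples (full-length array, zero off-mask);
    mask: boolean array marking measured frequencies."""
    n = len(mask)
    x = np.zeros(n)
    for _ in range(iters):
        # Gradient of the data-fidelity term via FFT/IFFT.
        r = mask * (np.fft.fft(x) / np.sqrt(n) - b)
        grad = np.real(np.fft.ifft(r) * np.sqrt(n))
        z = x - grad
        x = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)  # soft threshold
    return x
```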

    Motion compensation and very low bit rate video coding

    Recently, many activities of the International Telecommunication Union (ITU) and the International Standard Organization (ISO) have been leading to new standards for very low bit-rate video coding, such as H.263 and MPEG-4, after successful applications of the international standards H.261 and MPEG-1/2 for video coding above 64 kbps. However, at very low bit-rates the classic block-matching-based DCT video coding scheme suffers seriously from blocking artifacts, which degrade the quality of reconstructed video frames considerably. To solve this problem, a new technique in which motion compensation is based on a dense motion field is presented in this dissertation, and four efficient new video coding algorithms based on this technique for very low bit-rates are proposed. (1) After studying model-based video coding algorithms, we propose an optical flow based video coding algorithm with thresholding techniques. A statistical model is established for the distribution of intensity differences between two successive frames, and four thresholds are used to control the bit-rate and the quality of reconstructed frames. It outperforms typical model-based techniques in terms of complexity and quality of reconstructed frames. (2) An efficient algorithm using DCT-coded optical flow is developed. It is found that dense motion fields can be modeled as a first-order auto-regressive process and efficiently compressed with the DCT, hence achieving very low bit-rates and higher visual quality than H.263/TMN5. (3) A region-based discrete wavelet transform video coding algorithm is presented. This algorithm employs a dense motion field, and regions are segmented according to their content significance. The DWT is applied to residual images region by region, and bits are adaptively allocated to regions. It improves the visual quality and PSNR of significant regions while maintaining a low bit-rate. (4) A segmentation-based video coding algorithm for stereo sequences is proposed. A correlation-feedback algorithm with a Kalman filter is utilized to improve the accuracy of the optical flow fields. Three criteria, associated with 3-D information, 2-D connectivity, and motion vector fields, respectively, are defined for object segmentation, and a chain code is utilized to code the shapes of the segmented objects. It can achieve very high compression ratios, up to several thousand.
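    A sketch of dense-motion-field motion compensation using OpenCV's Farneback flow as a stand-in for the dissertation's optical flow estimators: the current frame is predicted by backward-warping the previous one, and the residual is what a transform coder (DCT or DWT, as above) would encode.

```python
# Sketch: dense optical-flow motion compensation between two gray frames.
# Farneback flow and its parameters are illustrative stand-ins, not the
# dissertation's estimators.
import cv2
import numpy as np

def motion_compensate(prev_gray, curr_gray):
    # Flow from current to previous: curr(y, x) ~ prev(y + fy, x + fx).
    flow = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_gray.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    map_x, map_y = xs + flow[..., 0], ys + flow[..., 1]
    predicted = cv2.remap(prev_gray, map_x, map_y, cv2.INTER_LINEAR)
    residual = curr_gray.astype(np.int16) - predicted.astype(np.int16)
    return predicted, residual   # residual is what gets transform-coded
```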