
    Intelligibility Enhancement of Synthetic Speech: A Review

    Current methods of speech enhancement have been developed with an adaptive filtering approach. The adaptive filter typically utilizes the least-mean-square (LMS) algorithm for noise removal, in which the key parameter is the step size: a large step size speeds convergence but also increases the mean square error, and the computational cost grows to an undesirable level as the length of the impulse response increases. This paper provides a detailed review of existing methodologies for intelligibility enhancement of synthetic speech
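The step-size trade-off described in this abstract can be sketched with a minimal LMS noise canceller. This is an illustration of the general technique, not code from any of the reviewed works; all names and signal choices are made up. The parameter mu is the step size: larger values adapt faster but leave more residual error.

```python
import math
import random

def lms_filter(desired, reference, num_taps=4, mu=0.05):
    """Estimate the noise in `desired` from a correlated `reference` signal
    using the LMS algorithm; returns the error e[n] = d[n] - w.x[n],
    which is the cleaned signal once the weights converge."""
    w = [0.0] * num_taps  # filter weights, adapted every sample
    out = []
    for n in range(len(desired)):
        # tap-delay line of the reference input
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(num_taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))        # noise estimate
        e = desired[n] - y                              # error = cleaned sample
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS weight update
        out.append(e)
    return out

# toy usage: a sinusoid corrupted by noise that is also observed at the reference
random.seed(0)
noise = [random.uniform(-1.0, 1.0) for _ in range(2000)]
clean = [math.sin(2 * math.pi * 0.01 * n) for n in range(2000)]
noisy = [c + 0.5 * noise[n] for n, c in enumerate(clean)]
cleaned = lms_filter(noisy, noise, num_taps=1, mu=0.1)
```

With a single tap and the noise itself as reference, the weight converges toward the 0.5 mixing gain, so the steady-state output approximates the clean sinusoid.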

    Filtering electrocardiographic signals using an unbiased and normalized adaptive noise reduction system

    We present a novel unbiased and normalized adaptive noise reduction (UNANR) system to suppress random noise in electrocardiographic (ECG) signals. The system contains procedures for the removal of baseline wander with a two-stage moving-average filter, comb filtering of power-line interference with an infinite impulse response (IIR) comb filter, an additive white noise generator to test the system's performance in terms of signal-to-noise ratio (SNR), and the UNANR model that estimates the noise to be subtracted from the contaminated ECG signals. The UNANR model does not contain a bias unit, and its coefficients are adaptively updated using the steepest-descent algorithm. The corresponding adaptation process is designed to minimize the instantaneous error between the estimated signal power and the desired noise-free signal power. The benchmark MIT-BIH arrhythmia database was used to evaluate the performance of the UNANR system with different levels of input noise. The results of adaptive filtering and a study on the convergence of the UNANR learning rate demonstrate that the adaptive noise-reduction system that includes the UNANR model can effectively eliminate random noise in ambulatory ECG recordings, providing a higher SNR improvement than the same system with the popular least-mean-square (LMS) filter for input SNRs in the range of 5-20 dB over the 48 ambulatory ECG recordings tested. Crown Copyright (C) 2008 Published by Elsevier Ltd on behalf of IPEM. All rights reserved
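One of the preprocessing stages this abstract names, baseline-wander removal with a two-stage moving-average filter, can be sketched as follows. This is an illustrative reconstruction of the general idea, not the authors' implementation; the window length and padding strategy are my own assumptions.

```python
def moving_average(x, window):
    """Centered moving average with edge padding (window assumed odd)."""
    half = window // 2
    padded = [x[0]] * half + list(x) + [x[-1]] * half
    return [sum(padded[i:i + window]) / window for i in range(len(x))]

def remove_baseline(x, window=51):
    """Two cascaded moving averages estimate the slow baseline wander;
    subtracting that estimate acts as a high-pass filter that preserves
    the faster ECG complexes."""
    baseline = moving_average(moving_average(x, window), window)
    return [xi - bi for xi, bi in zip(x, baseline)]

# usage: slow sinusoidal drift (period 1000) added to a faster component (period 10)
import math
slow = [math.sin(2 * math.pi * n / 1000) for n in range(2000)]
fast = [math.sin(2 * math.pi * n / 10) for n in range(2000)]
detrended = remove_baseline([s + f for s, f in zip(slow, fast)], window=51)
```

Away from the edges, the detrended signal closely tracks the fast component while the slow drift is suppressed.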

    Filtering of noise in electrocardiographic signals using an unbiased and normalized adaptive artifact cancellation system

    The electrocardiogram (ECG) is routinely used for the diagnosis of cardiovascular diseases. The removal of artifacts in ambulatory ECG recordings is essential in many biomedical applications. In this paper, we present the design of an unbiased linear filter with normalized weight coefficients in an adaptive artifact cancellation (UNAAC) system. We also develop a new weight coefficient adaptation algorithm that normalizes the filter coefficients, and utilize the steepest-descent algorithm to effectively cancel the artifacts present in ECG signals. The proposed UNAAC system was tested through experiments on the benchmark MIT-BIH database. Empirical results demonstrate that the UNAAC system can effectively eliminate two types of predominant artifacts: baseline wander and muscle-contraction artifact. Furthermore, the proposed UNAAC system achieved significantly higher signal-to-noise and signal-to-error ratios in the enhanced ECG signals, as compared with the normalized least-mean-square (NLMS) filter
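The normalized least-mean-square (NLMS) filter that the UNAAC system is compared against can be sketched in a few lines. This is a toy illustration of the standard NLMS update, not the paper's code; for simplicity the input vector here is a fresh random pair each step rather than a tap-delay line.

```python
import random

def nlms_step(w, x, d, mu=0.5, eps=1e-8):
    """One NLMS update: the LMS step is divided by the instantaneous
    input power, which makes convergence insensitive to input scale."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    e = d - y
    norm = sum(xi * xi for xi in x) + eps  # eps guards against division by zero
    w = [wi + (mu / norm) * e * xi for wi, xi in zip(w, x)]
    return w, e

# usage: identify an unknown 2-tap system h from input/output pairs
random.seed(1)
h = [0.6, -0.3]
w = [0.0, 0.0]
for _ in range(500):
    x = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]
    d = sum(hi * xi for hi, xi in zip(h, x))
    w, e = nlms_step(w, x, d)
```

In this noise-free identification task the weights converge to the true system coefficients.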

    Cancellation of artifacts in ECG signals using a normalized adaptive neural filter

    Denoising electrocardiographic (ECG) signals is an essential procedure prior to their analysis. In this paper, we present a normalized adaptive neural filter (NANF) for cancellation of artifacts in ECG signals. The normalized filter coefficients are updated by the steepest-descent algorithm; the adaptation process is designed to minimize the difference between second-order estimated output values and the desired artifact-free ECG signals. Empirical results with benchmark data show that the adaptive artifact canceller that includes the NANF can effectively remove muscle-contraction artifacts and high-frequency noise in ambulatory ECG recordings, leading to a high signal-to-noise ratio. Moreover, the performance of the NANF in terms of the root-mean-squared error, normalized correlation coefficient, and filtered artifact entropy is significantly better than that of the popular least-mean-square (LMS) filter
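Two of the evaluation metrics this abstract cites, the root-mean-squared error and the normalized correlation coefficient, can be computed as below. The paper does not spell out its definitions, so these are the standard textbook forms, assumed rather than taken from the source.

```python
import math

def rmse(x, y):
    """Root-mean-squared error between a filtered signal and a reference."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

def normalized_correlation(x, y):
    """Normalized correlation coefficient in [-1, 1]; 1 means the two
    signals have identical shape up to a positive scale factor."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den
```

A perfect reconstruction gives an RMSE of zero and a normalized correlation of one against the clean reference.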

    A study on adaptive filtering for noise and echo cancellation.

    The objective of this thesis is to investigate adaptive filtering techniques in the application of noise and echo cancellation. As a relatively new area in Digital Signal Processing (DSP), adaptive filters have gained a lot of popularity in the past several decades because they can deal with time-varying digital systems and do not require a priori knowledge of the statistics of the information to be processed. Adaptive filters have been successfully applied in a great many areas such as communications, speech processing, image processing, and noise/echo cancellation. Since Bernard Widrow and his colleagues introduced adaptive filters in the 1960s, many researchers have been working on noise/echo cancellation using adaptive filters with different algorithms. Among these algorithms, the normalized least mean square (NLMS) algorithm provides an efficient and robust approach, in which the model parameters are obtained on the basis of the mean square error (MSE). The choice of a structure for the adaptive filter also plays an important role in the performance of the algorithm as a whole. For this purpose, two different filter structures have been studied: the finite impulse response (FIR) filter and the infinite impulse response (IIR) filter. The adaptive processes with the two filter structures and the aforementioned algorithm have been implemented and simulated using Matlab. Dept. of Electrical and Computer Engineering. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2005 .J53. Source: Masters Abstracts International, Volume: 44-01, page: 0472. Thesis (M.A.Sc.)--University of Windsor (Canada), 2005
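The two filter structures this thesis studies can be sketched in direct form. The thesis itself used Matlab; the following is an illustrative stand-in in Python, not the thesis code.

```python
def fir_filter(b, x):
    """Direct-form FIR: y[n] = sum_k b[k] * x[n-k].
    The output depends only on past inputs, so the filter is always stable."""
    y = []
    for n in range(len(x)):
        y.append(sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0))
    return y

def iir_filter(b, a, x):
    """Direct-form IIR: a[0]*y[n] = sum_k b[k]*x[n-k] - sum_{k>=1} a[k]*y[n-k].
    Feedback gives long impulse responses cheaply, at the cost of
    potential instability if the poles leave the unit circle."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc / a[0])
    return y
```

For example, a first-order IIR filter with feedback coefficient 0.5 produces the geometrically decaying impulse response 1, 0.5, 0.25, ..., something no finite FIR filter can match exactly.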

    Numerical and statistical time series analysis of fetal heart rate


    Reconstructing Electrocardiogram Leads From a Reduced Lead Set Through the Use of Patient-Specific Transforms and Independent Component Analysis

    In this exploration into electrocardiogram (ECG) lead reconstruction, two algorithms were developed and tested on a public database and in real-time on patients. These algorithms were based on independent component analysis (ICA). ICA was a promising method due to its implications for spatial independence of lead placement and its adaptive nature to changing orientation of the heart in relation to the electrodes. The first algorithm was used to reconstruct missing precordial leads, which has two key applications. The first is correcting precordial lead measurements in a standard 12-lead configuration: if an irregular signal or high level of noise is detected on a precordial lead, the obfuscated signal can be calculated from other nearby leads. The second is the reduction in the number of precordial leads required for accurate measurement, which opens up the surface of the chest above the heart for diagnostic procedures. Using only two precordial leads, the other four were reconstructed with a high degree of accuracy. This research was presented at the 33rd International Conference of the IEEE Engineering in Medicine and Biology Society in 2011 [1]. The second algorithm was developed to construct a full 12-lead clinical ECG from either three differential measurements or three standard leads. By utilizing differential measurements, the ECG could be reconstructed using wireless systems, which lack the common ground necessary for the standard measurement method. Using three leads distributed across the space of the heart, all twelve leads were successfully reconstructed and compared against state-of-the-art algorithms. This work has been accepted for publication in the IEEE Journal of Biomedical and Health Informatics [2]. These algorithms show a proof of concept, one which can be further honed to deal with the issues of sorting independent components and improving the training sequences. This research also revealed the possibility of extracting and monitoring additional physiological information, such as a patient's breathing rate, from currently utilized ECG systems
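The "patient-specific transform" idea, fitting a linear map from a reduced lead set to a missing lead on training data and then applying it to new samples, can be sketched with an ordinary least-squares fit. This is a hypothetical illustration of the transform concept only; the actual work combines such transforms with ICA, which is not shown here.

```python
import math

def fit_two_lead_transform(l1, l2, target):
    """Least-squares fit of target ~= a*l1 + b*l2 by solving the
    2x2 normal equations in closed form."""
    s11 = sum(x * x for x in l1)
    s22 = sum(x * x for x in l2)
    s12 = sum(x * y for x, y in zip(l1, l2))
    t1 = sum(x * y for x, y in zip(l1, target))
    t2 = sum(x * y for x, y in zip(l2, target))
    det = s11 * s22 - s12 * s12  # nonzero when l1, l2 are not collinear
    a = (t1 * s22 - t2 * s12) / det
    b = (t2 * s11 - t1 * s12) / det
    return a, b

def reconstruct(l1, l2, a, b):
    """Apply the learned transform to reconstruct the missing lead."""
    return [a * x + b * y for x, y in zip(l1, l2)]

# usage: a synthetic 'missing lead' that truly is a linear mix of two others
l1 = [math.sin(0.1 * n) for n in range(200)]
l2 = [math.cos(0.07 * n) for n in range(200)]
target = [0.7 * p - 0.2 * q for p, q in zip(l1, l2)]
a, b = fit_two_lead_transform(l1, l2, target)
```

When the target really lies in the span of the measured leads, the fit recovers the mixing coefficients and the reconstruction is exact up to floating-point error.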

    Machine learning for patient-adaptive ectopic beat classification

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 83-85). Physicians require automated techniques to accurately analyze the vast amount of physiological data collected by continuous monitoring devices. In this thesis, we consider one analysis task in particular, the classification of heartbeats from electrocardiographic (ECG) recordings. This problem is made challenging by the inter-patient differences present in ECG morphology and timing characteristics. Supervised classifiers trained on a collection of patients can have unpredictable results when applied to a new patient. To reduce the effect of inter-patient differences, researchers have suggested training patient-adaptive classifiers on labeled data from the test patient. However, patient-adaptive classifiers have not been integrated into practice because they require an impractical amount of patient-specific expert knowledge. We present two approaches based on machine learning for building accurate patient-adaptive beat classifiers that use little or no patient-specific expert knowledge. First, we present a method to transfer and adapt knowledge from a collection of patients to a test patient. This first approach, based on transductive transfer learning, requires no patient-specific labeled data, only labeled data from other patients. Second, we consider the scenario where patient-specific expert knowledge is available, but comes at a high cost. We present a novel algorithm for SVM active learning. By intelligently selecting the training set, we show how one can build highly accurate patient-adaptive classifiers using only a small number of cardiologist-supplied labels. Our results show the gains in performance possible when using patient-adaptive classifiers in place of global classifiers. Furthermore, the effectiveness of our techniques, which use little or no patient-specific expert knowledge, suggests that it is practical to use patient-adaptive techniques in a clinical setting. by Jenna Marleau Wiens. S.M
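The active-learning loop described here, querying the expert only for the example the current classifier is least certain about, can be sketched with a toy 1-D stand-in. This is hypothetical code illustrating uncertainty sampling in general, not the thesis' SVM algorithm; a nearest-class-mean rule plays the role of the SVM, and the "cardiologist" is a threshold oracle.

```python
def train_centroid_classifier(labeled):
    """Toy stand-in for an SVM on 1-D features: nearest-class-mean rule.
    Returns (classifier, decision boundary)."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    mp, mn = sum(pos) / len(pos), sum(neg) / len(neg)
    boundary = (mp + mn) / 2.0
    sign = 1.0 if mp > mn else -1.0
    return (lambda x: 1 if sign * (x - boundary) > 0 else 0), boundary

def active_learning(pool, oracle, seed_labeled, budget):
    """Uncertainty sampling: repeatedly query the label of the unlabeled
    point closest to the current decision boundary, then retrain."""
    labeled = list(seed_labeled)
    unlabeled = list(pool)
    for _ in range(budget):
        _, boundary = train_centroid_classifier(labeled)
        query = min(unlabeled, key=lambda x: abs(x - boundary))
        unlabeled.remove(query)
        labeled.append((query, oracle(query)))  # ask the (simulated) expert
    return train_centroid_classifier(labeled)[0]

# usage: true decision threshold at 2.5, two seed labels plus four queries
def oracle(x):
    return 1 if x > 2.5 else 0

pool = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
clf = active_learning(pool, oracle, [(0.0, 0), (5.0, 1)], budget=4)
```

Because the queried points cluster near the boundary, a handful of labels pins down the threshold well enough to classify the whole pool correctly.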