
    AM–FM signal analysis based on a B-splines version of EMD–ESA

    In this paper, a signal analysis framework for estimating the time-varying amplitude and frequency functions of multicomponent amplitude- and frequency-modulated (AM–FM) signals is introduced. This framework is based on local and non-linear approaches, namely the Energy Separation Algorithm (ESA) and Empirical Mode Decomposition (EMD). The conjunction of the Discrete ESA (DESA) and EMD is called EMD–DESA. A new modified version of EMD is introduced, in which smoothing rather than interpolation is used to construct the upper and lower envelopes of the signal. Since the extracted IMFs are represented in terms of B-spline (BS) expansions, a closed-form expression of the ESA that is robust against noise is used. Instantaneous Frequency (IF) and Instantaneous Amplitude (IA) estimates of a multicomponent AM–FM signal, corrupted with additive white Gaussian noise of varying SNRs, are analyzed and the results compared to ESA, DESA and Hilbert transform-based algorithms. SNR and MSE are used as figures of merit. The regularized BS version of EMD–ESA separates IA and IF components noticeably better than the other methods from low to high SNR. Overall, the obtained results illustrate the effectiveness of the proposed approach, in terms of accuracy and robustness against noise, in tracking the IF and IA features of a multicomponent AM–FM signal.
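    The energy-separation step at the heart of EMD–DESA can be illustrated with the classical discrete Teager–Kaiser energy operator and the DESA-1 estimator. This is a generic NumPy sketch of that textbook algorithm, not the paper's B-spline closed form; function names are illustrative.

```python
import numpy as np

def teager(x):
    # Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def desa1(x):
    # DESA-1 estimates of instantaneous frequency (rad/sample) and amplitude.
    y = np.diff(x)                              # first backward difference
    psi_x = teager(x)
    psi_y = teager(y)
    # average the operator over two successive difference samples
    psi_y_avg = 0.5 * (psi_y[:-1] + psi_y[1:])
    psi_x = psi_x[1:-1]                         # align array lengths
    omega = np.arccos(1.0 - psi_y_avg / (2.0 * psi_x))
    amp = np.sqrt(psi_x / np.sin(omega) ** 2)
    return omega, amp
```

For a pure sinusoid the discrete operator identities are exact, so the estimates recover the true frequency and amplitude; for AM–FM signals they hold approximately under slow modulation.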

    "Body-In-The-Loop": Optimizing Device Parameters Using Measures of Instantaneous Energetic Cost

    This paper demonstrates methods for the online optimization of assistive robotic devices such as powered prostheses, orthoses and exoskeletons. Our algorithms estimate the value of a physiological objective in real-time (with a body “in-the-loop”) and use this information to identify optimal device parameters. To handle sensor data that are noisy and dynamically delayed, we rely on a combination of dynamic estimation and response surface identification. We evaluated three algorithms (Steady-State Cost Mapping, Instantaneous Cost Mapping, and Instantaneous Cost Gradient Search) with eight healthy human subjects. Steady-State Cost Mapping is an established technique that fits a cubic polynomial to averages of steady-state measures at different parameter settings; the optimal parameter value is determined from the polynomial fit. Using a continuous sweep over a range of parameters and taking measurement dynamics into account, Instantaneous Cost Mapping identifies a cubic polynomial more quickly. Instantaneous Cost Gradient Search uses a similar technique to iteratively approach the optimal parameter value using estimates of the local gradient. To evaluate these methods in a simple and repeatable way, we prescribed step frequency via a metronome and optimized this frequency to minimize metabolic energetic cost. This use of step frequency allows a comparison of our results with established techniques and enables others to replicate our methods. Our results show that all three methods achieve similar accuracy in estimating optimal step frequency. For all methods, the average error between the predicted minima and the subjects’ preferred step frequencies was less than 1%, with a standard deviation between 4% and 5%. Using Instantaneous Cost Mapping, we were able to reduce subject walking time from over an hour to less than 10 minutes. While the Instantaneous Cost Gradient Search is not much faster than Steady-State Cost Mapping for a single parameter, it extends favorably to multi-dimensional parameter spaces.
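    The cubic-polynomial step shared by the cost-mapping methods can be sketched as follows: fit a cubic to (parameter, cost) samples and take the stationary point where the second derivative is positive. The function name and synthetic cost curve are hypothetical; only the cubic-fit-then-minimize idea comes from the abstract.

```python
import numpy as np

def cubic_cost_minimum(param, cost):
    # Fit cost(param) with a cubic and return the parameter value at its minimum.
    c3, c2, c1, c0 = np.polyfit(param, cost, 3)
    # Stationary points solve 3*c3*p^2 + 2*c2*p + c1 = 0.
    roots = np.roots([3 * c3, 2 * c2, c1])
    roots = roots[np.isreal(roots)].real
    # Keep roots where the second derivative 6*c3*p + 2*c2 is positive (minima).
    minima = [p for p in roots if 6 * c3 * p + 2 * c2 > 0]
    return min(minima, key=lambda p: np.polyval([c3, c2, c1, c0], p))
```

In practice the cost samples would be averaged steady-state metabolic measures (Steady-State Cost Mapping) or dynamically corrected instantaneous estimates (Instantaneous Cost Mapping).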

    An improved algorithm for respiration signal extraction from electrocardiogram measured by conductive textile electrodes using instantaneous frequency estimation

    In this paper, an improved algorithm for the extraction of the respiration signal from the electrocardiogram (ECG) in home healthcare is proposed. The whole system consists of two-lead electrocardiogram acquisition using conductive textile electrodes located in a bed, baseline fluctuation elimination, R-wave detection, adjustment of sudden changes in R-wave area using a moving average, and optimal lead selection. To solve the problems of previous algorithms for ECG-derived respiration (EDR) signal acquisition, we propose a method for optimal lead selection. An optimal EDR signal among the three EDR signals derived from each lead (and the arctangent of their ratio) is selected by estimating the instantaneous frequency using the Hilbert transform and then choosing the signal with the minimum variation of the instantaneous frequency. The proposed algorithm was tested on 15 male subjects, and we obtained satisfactory respiration signals that showed high correlation (r² > 0.8) with the signal acquired from a chest-belt respiration sensor.
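    The selection criterion described above — minimum variance of the Hilbert-based instantaneous frequency — can be sketched generically. This uses an FFT-based analytic signal in place of a library Hilbert transform; the function names and test signals are illustrative, not the paper's implementation.

```python
import numpy as np

def analytic_signal(x):
    # Analytic signal via the FFT (one-sided spectrum), as the Hilbert
    # transform is usually computed in practice.
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:(N + 1) // 2] = 2.0
    if N % 2 == 0:
        h[N // 2] = 1.0                     # keep the Nyquist bin for even N
    return np.fft.ifft(X * h)

def if_variance(x, fs):
    # Variance of the instantaneous frequency (Hz) of signal x.
    phase = np.unwrap(np.angle(analytic_signal(x)))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)
    return np.var(inst_freq)

def select_edr(candidates, fs):
    # Pick the candidate EDR signal whose instantaneous frequency varies least.
    return min(range(len(candidates)), key=lambda i: if_variance(candidates[i], fs))
```

A clean, nearly periodic respiration-like signal has an almost constant instantaneous frequency, so its variance is far lower than that of a noisy candidate.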

    Probabilistic Modeling Paradigms for Audio Source Separation

    This is the author's final version of the article, first published as E. Vincent, M. G. Jafari, S. A. Abdallah, M. D. Plumbley, M. E. Davies. Probabilistic Modeling Paradigms for Audio Source Separation. In W. Wang (Ed), Machine Audition: Principles, Algorithms and Systems. Chapter 7, pp. 162-185. IGI Global, 2011. ISBN 978-1-61520-919-4. DOI: 10.4018/978-1-61520-919-4.ch007. Most sound scenes result from the superposition of several sources, which can be separately perceived and analyzed by human listeners. Source separation aims to provide machine listeners with similar skills by extracting the sounds of individual sources from a given scene. Existing separation systems operate either by emulating the human auditory system or by inferring the parameters of probabilistic sound models. In this chapter, the authors focus on the latter approach and provide a joint overview of established and recent models, including independent component analysis, local time-frequency models and spectral template-based models. They show that most models are instances of one of two general paradigms: linear modeling or variance modeling. They compare the merits of either paradigm and report objective performance figures. They conclude by discussing promising combinations of probabilistic priors and inference algorithms that could form the basis of future state-of-the-art systems.
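    Under the variance-modeling paradigm, each source's time-frequency coefficient is commonly modeled as zero-mean Gaussian with its own time-frequency-dependent variance, and the minimum mean-square estimate of each source given the mixture is a Wiener filter. A minimal sketch of that standard estimator (the array shapes and variance fields here are placeholders):

```python
import numpy as np

def wiener_separate(mix_tf, variances):
    # mix_tf: complex STFT of the mixture, shape (T, F).
    # variances: list of nonnegative (T, F) variance fields, one per source.
    # Each source estimate is its share v_j / sum_k v_k of the mixture
    # coefficient (the single-channel Wiener filter).
    total = sum(variances)
    return [v / total * mix_tf for v in variances]
```

By construction the source estimates sum back to the mixture, which is the conservative property that makes Wiener filtering attractive for audio.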

    Audio Source Separation Using Sparse Representations

    This is the author's final version of the article, first published as A. Nesbit, M. G. Jafari, E. Vincent and M. D. Plumbley. Audio Source Separation Using Sparse Representations. In W. Wang (Ed), Machine Audition: Principles, Algorithms and Systems. Chapter 10, pp. 246-264. IGI Global, 2011. ISBN 978-1-61520-919-4. DOI: 10.4018/978-1-61520-919-4.ch010. The authors address the problem of audio source separation, namely, the recovery of audio signals from recordings of mixtures of those signals. The sparse component analysis framework is a powerful method for achieving this. Sparse orthogonal transforms, in which only a few transform coefficients differ significantly from zero, are developed; once the signal has been transformed, energy is apportioned from each transform coefficient to each estimated source, and, finally, the signal is reconstructed using the inverse transform. The overriding aim of this chapter is to demonstrate how this framework, as exemplified here by two different decomposition methods which adapt to the signal to represent it sparsely, can be used to solve different problems in different mixing scenarios. To address the instantaneous (neither delays nor echoes) and underdetermined (more sources than mixtures) mixing model, a lapped orthogonal transform is adapted to the signal by selecting a basis from a library of predetermined bases. This method is closely related to the windowing methods used in the MPEG audio coding framework. In considering the anechoic (delays but no echoes) and determined (equal number of sources and mixtures) mixing case, a greedy adaptive transform is used based on orthogonal basis functions that are learned from the observed data, instead of being selected from a predetermined library of bases.
This is found to encode the signal characteristics by introducing a feedback system between the bases and the observed data. Experiments on mixtures of speech and music signals demonstrate that these methods give good signal approximations and separation performance, and indicate promising directions for future research.
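    The coefficient-wise energy apportionment described above can be sketched for the instantaneous underdetermined model, assuming the mixtures have already been mapped into a sparse orthogonal transform domain, the unit-norm mixing matrix is known, and each coefficient is assigned entirely to one source (a winner-take-all simplification of the chapter's methods):

```python
import numpy as np

def separate_by_masking(X, A):
    # X: (2, K) transform-domain coefficients of two mixture channels.
    # A: (2, J) mixing matrix with unit-norm columns, J > 2 sources.
    J = A.shape[1]
    S = np.zeros((J, X.shape[1]))
    # Assign each coefficient to the mixing direction it correlates with most.
    scores = np.abs(A.T @ X)                 # (J, K)
    best = np.argmax(scores, axis=0)
    for j in range(J):
        mask = best == j
        # Least-squares projection onto column a_j (unit norm): s_j = a_j^T x.
        S[j, mask] = A[:, j] @ X[:, mask]
    return S
```

When the sources are disjoint in the transform domain, this recovers each coefficient exactly; real signals overlap partially, which is why the adaptive transforms in the chapter aim to make the representation as sparse as possible.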

    Comparison of techniques for estimating the frequency selectivity of bandlimited channels

    A transmission channel used in applications such as telecommunications can be modeled as a bandpass filter. Measurement of the frequency selectivity of the channel is important to ensure that the information-bearing signal suffers minimal distortion and loss of information. A comparison is made of several methods used for estimating the frequency selectivity of the transmission channel. The methods presented are the correlation method, instantaneous energy and frequency estimation, and the cross Wigner-Ville distribution. The theoretical foundations and assumptions are described for each method. In general, all the methods gave similar performance in terms of the estimated frequency selectivity. Due to their shorter analysis duration, both instantaneous energy and frequency estimation and the cross Wigner-Ville distribution are well suited to estimating the frequency selectivity of time-varying channels.
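    The correlation approach to channel measurement can be illustrated by the textbook cross-spectrum estimate H(f) = Sxy(f) / Sxx(f) from white-noise excitation. This is a generic sketch of that standard technique, not the paper's exact procedure; segment length and names are illustrative.

```python
import numpy as np

def estimate_response(x, y, nfft=256):
    # Estimate the channel frequency response from input x and output y by
    # averaging periodograms over non-overlapping segments:
    #   H(f) = Sxy(f) / Sxx(f)
    nseg = len(x) // nfft
    Sxx = np.zeros(nfft)
    Sxy = np.zeros(nfft, dtype=complex)
    for k in range(nseg):
        X = np.fft.fft(x[k * nfft:(k + 1) * nfft])
        Y = np.fft.fft(y[k * nfft:(k + 1) * nfft])
        Sxx += np.abs(X) ** 2
        Sxy += np.conj(X) * Y
    return Sxy / Sxx
```

The magnitude of the estimated response then exposes the channel's passband and rolloff, from which a frequency-selectivity figure can be read off.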

    Differential fast fixed-point algorithms for underdetermined instantaneous and convolutive partial blind source separation

    This paper concerns underdetermined linear instantaneous and convolutive blind source separation (BSS), i.e., the case when the number of observed mixed signals is lower than the number of sources. We propose partial BSS methods, which separate supposedly nonstationary sources of interest (while keeping residual components for the other, supposedly stationary, "noise" sources). These methods are based on the general differential BSS concept that we introduced previously. In the instantaneous case, the approach proposed in this paper consists of a differential extension of the FastICA method (which does not apply to underdetermined mixtures). In the convolutive case, we extend our recent time-domain fast fixed-point C-FICA algorithm to underdetermined mixtures. Both proposed approaches thus keep the attractive features of the FastICA and C-FICA methods. Our approaches are based on differential sphering processes, followed by the optimization of the differential nonnormalized kurtosis that we introduce in this paper. Experimental tests show that these differential algorithms are much more robust to noise sources than the standard FastICA and C-FICA algorithms.
    Comment: this paper describes our differential FastICA-like algorithms for linear instantaneous and convolutive underdetermined mixtures.
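    The baseline the differential algorithms extend — sphering followed by the kurtosis-based FastICA fixed point with symmetric decorrelation — can be sketched for the determined instantaneous case. This is the standard FastICA, not the differential variant; the paper replaces these statistics with differential ones to handle noise sources.

```python
import numpy as np

def sym_decorrelate(W):
    # Symmetric decorrelation: W <- (W W^T)^(-1/2) W.
    d, E = np.linalg.eigh(W @ W.T)
    return E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ W

def fastica_kurtosis(X, n_iter=200, seed=0):
    # X: (n_mixtures, n_samples) observed instantaneous mixtures.
    X = X - X.mean(axis=1, keepdims=True)
    # Sphering (whitening): decorrelate and normalize the mixtures.
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E / np.sqrt(d)).T @ X
    n = Z.shape[0]
    rng = np.random.default_rng(seed)
    W = np.linalg.qr(rng.standard_normal((n, n)))[0]
    for _ in range(n_iter):
        # Kurtosis-based fixed point: w <- E[z (w^T z)^3] - 3 w.
        Y = W @ Z
        W = (Y ** 3 @ Z.T) / Z.shape[1] - 3.0 * W
        W = sym_decorrelate(W)
    return W @ Z        # estimated sources (up to permutation and scale)
```

The estimated sources are recovered only up to permutation and sign, which is the usual ICA indeterminacy.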

    A comparison of frequency estimation techniques for high-dynamic trajectories

    A comparison is presented of four different estimation techniques applied to the problem of continuously estimating the parameters of a sinusoidal Global Positioning System (GPS) signal, observed in the presence of additive noise, under extremely high-dynamic conditions. Frequency estimates are emphasized, although phase and/or frequency rate are also estimated by some of the algorithms. These parameters are related to the velocity, position, and acceleration of the maneuvering transmitter. Estimation performance at low carrier-to-noise ratios and high dynamics is investigated to determine the useful operating range of an approximate Maximum Likelihood (ML) estimator, an Extended Kalman Filter (EKF), a Cross-Product Automatic Frequency Control (CPAFC) loop, and a digital phase-locked loop (PLL). Numerical simulations are used to evaluate performance while tracking a common trajectory exhibiting high dynamics.
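    The cross-product discriminator at the core of a CPAFC loop has a compact form: the angle of conj(z[n-1])·z[n], computed from the cross and dot products of successive I/Q samples, measures the phase advance per sample and hence the frequency. A generic sketch (the function name and sampling setup are illustrative):

```python
import numpy as np

def cpafc_freq(z, fs):
    # z: complex baseband (I + jQ) samples; returns frequency estimates in Hz.
    # cross = Im(conj(z[n-1]) * z[n]), dot = Re(conj(z[n-1]) * z[n]):
    cross = np.real(z[:-1]) * np.imag(z[1:]) - np.imag(z[:-1]) * np.real(z[1:])
    dot = np.real(z[:-1]) * np.real(z[1:]) + np.imag(z[:-1]) * np.imag(z[1:])
    # arctan2(cross, dot) is the phase advance per sample, in radians.
    return np.arctan2(cross, dot) * fs / (2.0 * np.pi)
```

In a full CPAFC loop these raw discriminator outputs would be low-pass filtered and fed back to a numerically controlled oscillator; here only the open-loop measurement is shown.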