
    Mean square performance evaluation in frequency domain for an improved adaptive feedback cancellation in hearing aids

    We consider an adaptive linear-prediction-based feedback canceller for hearing aids that exploits two noise signals (an external and a shaped one) for bias-free adaptive estimation. In particular, the bias in the estimate of the feedback path is reduced by synthesizing the high-frequency spectrum of the reinforced signal using a shaped noise signal. Moreover, a second shaped (probe) noise signal is used to reduce the closed-loop correlation between the acoustic input and the loudspeaker signal at low frequencies. A power-transfer-function analysis of the system is provided, from which the effect of the system parameters and adaptive algorithms [normalized least mean square (NLMS) and recursive least squares (RLS)] on the rate of convergence, the steady-state behaviour and the stability of the feedback canceller is explicitly found. The derived expressions are verified through computer simulations. It is found that, compared to the feedback canceller without probe noise, the cost of achieving an unbiased estimate of the feedback path with the probe-noise feedback canceller is a higher steady-state misadjustment for the RLS algorithm, and a slower convergence and a higher tracking error for the NLMS algorithm.
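    As a rough illustration of the kind of adaptive update the abstract refers to, the minimal sketch below estimates a feedback path with a plain constant-step NLMS filter driven by a probe-like reference signal. It is not the paper's two-noise scheme (no shaped high-frequency synthesis, no closed-loop gain model, no RLS variant); the function name, filter length, and step size are illustrative assumptions only.

    ```python
    import numpy as np

    def nlms_feedback_canceller(mic, probe, num_taps=64, mu=0.05, eps=1e-8):
        """Constant-step NLMS sketch of a feedback-path estimator.

        mic   : microphone samples (acoustic input plus loudspeaker feedback)
        probe : loudspeaker-side reference samples (here, a probe-noise signal)
        Returns the estimated feedback-path impulse response and the
        feedback-compensated (error) signal.
        """
        w = np.zeros(num_taps)       # feedback-path estimate
        x_buf = np.zeros(num_taps)   # most recent probe samples
        err = np.zeros(len(mic))
        for n in range(len(mic)):
            x_buf = np.roll(x_buf, 1)
            x_buf[0] = probe[n]
            y_hat = w @ x_buf        # predicted feedback component
            e = mic[n] - y_hat       # compensated microphone sample
            err[n] = e
            # NLMS update: step normalized by the probe-signal energy
            w += mu * e * x_buf / (x_buf @ x_buf + eps)
        return w, err
    ```

    In this toy form, a stronger correlation between the probe and the incoming acoustic signal would bias w, which is the effect the paper's probe-noise design is meant to suppress.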

    Error Bounds and Applications for Stochastic Approximation with Non-Decaying Gain

    This work analyzes the stochastic approximation algorithm with non-decaying gains as applied to time-varying problems. The setting is to minimize a sequence of scalar-valued loss functions $f_k(\cdot)$ at sampling times $\tau_k$, or to locate the root of a sequence of vector-valued functions $g_k(\cdot)$ at $\tau_k$, with respect to a parameter $\theta \in \mathbb{R}^p$. The available information is the noise-corrupted observation(s) of either $f_k(\cdot)$ or $g_k(\cdot)$ evaluated at one or two design points only. Given the time-varying stochastic approximation setup, we apply stochastic approximation algorithms with non-decaying gains, so that the recursive estimate, denoted $\hat{\theta}_k$, can maintain its momentum in tracking the time-varying optimum, denoted $\theta_k^*$. Chapter 3 provides a bound for the root-mean-squared error $\sqrt{E(\|\hat{\theta}_k - \theta_k^*\|^2)}$. Overall, the bounds are applicable under a mild assumption on the time-varying drift and a modest restriction on the observation noise and the bias term. After establishing the tracking capability in Chapter 3, we also discuss the concentration behavior of $\hat{\theta}_k$ in Chapter 4. The weak convergence limit of the continuous interpolation of $\hat{\theta}_k$ is shown to follow the trajectory of a non-autonomous ordinary differential equation. Both Chapter 3 and Chapter 4 are probabilistic arguments and may not provide much guidance on gain-tuning strategies useful for a single experiment run. Therefore, Chapter 5 discusses a data-dependent gain-tuning strategy based on estimating the Hessian information and the noise level. Overall, this work answers the questions "what is the estimate for the dynamical system $\theta_k^*$?" and "how much can we trust $\hat{\theta}_k$ as an estimate of $\theta_k^*$?"
    Comment: arXiv admin note: text overlap with arXiv:1906.0953
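    To make the constant-gain tracking idea concrete, here is a minimal sketch in which a non-decaying gain lets the recursive estimate follow a drifting optimum of a toy quadratic loss $f_k(\theta) = \tfrac{1}{2}\|\theta - \theta_k^*\|^2$. The random-walk drift, noise level, gain value, and dimension are all assumptions chosen for illustration, not the thesis's actual setting, algorithms, or bounds.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    p, num_steps = 3, 2000
    theta_star = np.zeros(p)        # drifting optimum theta*_k (assumed random walk)
    theta_hat = np.zeros(p)         # recursive estimate
    a = 0.1                         # constant (non-decaying) gain
    track_err = np.zeros(num_steps)

    for k in range(num_steps):
        # Slow drift of the optimum between sampling times.
        theta_star += 0.01 * rng.standard_normal(p)
        # Noisy gradient of the toy quadratic loss, standing in for the
        # noise-corrupted observation of g_k evaluated at theta_hat.
        grad_obs = (theta_hat - theta_star) + 0.1 * rng.standard_normal(p)
        theta_hat -= a * grad_obs   # constant-gain stochastic approximation step
        track_err[k] = np.linalg.norm(theta_hat - theta_star)

    print("mean tracking error over last 500 steps:", track_err[-500:].mean())
    ```

    With a decaying gain the update would eventually stop responding to the drift; the constant gain trades a nonzero steady-state tracking error for the ability to keep following $\theta_k^*$, which is the trade-off the error bounds in the work quantify.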