5 research outputs found

    Perceptual Echo Control and Delay Estimation


    Time delay estimation algorithms for echo cancellation
    The following case study describes how to eliminate echo in a VoIP network using delay estimation algorithms. Echo becomes more noticeable to users as transmission delays grow. Time delay estimation is therefore an important component of echo cancellation when voice signals are transmitted over packet-switching telecommunication systems. The echo delay problem associated with IP-based transport networks is discussed in the following text. The paper presents a comparative study of time delay estimation algorithms used to estimate the true delay between two speech signals. Experimental results of MATLAB simulations describing the performance of several methods based on cross-correlation, normalized cross-correlation and generalized cross-correlation are also presented in the paper.
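    A minimal sketch of the cross-correlation approach described above (not the paper's MATLAB implementation; the signals, sampling rate and 50-sample delay below are invented for illustration): the delay is read off as the lag at which the cross-correlation of the two signals peaks.

```python
import numpy as np

def estimate_delay_xcorr(x, y, fs=8000):
    """Estimate the delay of y relative to x (in samples and seconds)
    as the lag at which their cross-correlation peaks."""
    corr = np.correlate(y, x, mode="full")      # lags -(len(x)-1) .. len(y)-1
    lag = int(np.argmax(corr)) - (len(x) - 1)   # lag in samples
    return lag, lag / fs                        # samples, seconds

# Invented demo: white noise standing in for speech, delayed by 50 samples,
# with mild additive noise on the echo path.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = np.concatenate([np.zeros(50), x])[:1000] + 0.05 * rng.standard_normal(1000)
lag, delay_s = estimate_delay_xcorr(x, y)       # lag == 50
```

    The generalized cross-correlation methods compared in the paper differ from this plain version by applying a spectral weighting before the peak search.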

    Partial Update Algorithms and Echo Delay Estimation

    No full text
    In this paper, we introduce methods for extracting the echo delay between speech signals using adaptive filtering algorithms. Time delay estimation is an initial step in many speech processing applications. Conventional techniques estimate the time difference of arrival between two signals from the peak of their generalized cross-correlation. To achieve good precision and stability, the input sequences have to be multiplied by an appropriate weighting function. Typically, the weighting functions depend on the signals' power spectra, which are generally unknown and must be estimated in advance. Implementing time delay estimation via the adaptive least-mean-squares algorithm is analogous to applying the Roth generalized cross-correlation weighting function. The parameters estimated with the adaptive filter have a smaller variance, because the approach avoids the need for spectrum estimation. In the following, we discuss proportionate and partial-update adaptive techniques and consider their performance in terms of delay estimation.
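    A minimal normalized-LMS sketch of this idea (the tap count, step size and signals below are invented; the paper's proportionate and partial-update variants modify the coefficient update): the adaptive filter identifies the echo path, and for a path close to a pure delay the index of its dominant coefficient is the delay estimate.

```python
import numpy as np

def estimate_delay_nlms(x, y, n_taps=32, mu=0.5):
    """Adapt a FIR filter w so that w * x approximates y (normalized LMS);
    the index of the dominant coefficient approximates the echo delay
    in samples when the echo path is close to a pure delay."""
    w = np.zeros(n_taps)
    for i in range(n_taps - 1, len(x)):
        u = x[i - n_taps + 1:i + 1][::-1]   # u[j] = x[i - j]
        e = y[i] - w @ u                    # a-priori error
        w += mu * e * u / (u @ u + 1e-8)    # NLMS update
    return int(np.argmax(np.abs(w)))

# Invented demo: far-end signal x, echo y delayed by 12 samples.
rng = np.random.default_rng(1)
x = rng.standard_normal(4000)
y = np.concatenate([np.zeros(12), x])[:len(x)]
delay = estimate_delay_nlms(x, y)           # delay == 12
```

    Unlike the peak search over a full cross-correlation, the adaptive estimate needs no explicit spectrum estimate, which is the variance advantage the abstract refers to.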

    Keratoconus Diagnostic and Treatment Algorithms Based on Machine-Learning Methods

    No full text
    The accurate diagnosis of keratoconus, especially in its early stages of development, allows one to utilise timely and proper treatment strategies for slowing the progression of the disease and to provide visual rehabilitation. Various keratometry indices and classifications for quantifying the severity of keratoconus have been developed. Today, many of them involve the use of the latest methods of computer processing and data analysis. The main purpose of this work was to develop a machine-learning-based algorithm to precisely determine the stage of keratoconus, allowing optimal management of patients with this disease. A multicentre retrospective study was carried out to obtain a database of patients with keratoconus and to apply machine-learning techniques such as principal component analysis and clustering. The created program allows us to distinguish between a normal state; preclinical keratoconus; and stages 1, 2, 3 and 4 of the disease, with an accuracy in terms of the AUC of 0.95 to 1.00 based on keratotopographer readings, relative to the adapted Amsler–Krumeich algorithm. The predicted stage and additional diagnostic criteria were then used to create a standardised keratoconus management algorithm. We also developed a web-based interface for the algorithm, making it possible to use the software in a clinical environment.
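    The principal-component step of such a pipeline can be sketched as follows (the feature matrix and its dimensions are invented stand-ins; the actual study used keratotopographer readings and additionally applied clustering to the projected data):

```python
import numpy as np

# Invented stand-in for a keratometry feature matrix:
# rows = examined eyes, columns = indices such as K-max or pachymetry.
rng = np.random.default_rng(42)
X = rng.standard_normal((100, 6))

# Principal component analysis via SVD of the centred data.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T            # projection onto the first two components
explained = s**2 / np.sum(s**2)   # variance ratio per component
```

    Clustering the low-dimensional `scores` (rather than the raw indices) is what lets such a program separate normal, preclinical and staged keratoconus groups.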